Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

October 29

Particle Designer with CocosSharp

CocosSharp comes with a variety of particle systems. For example, the GoneBananas sample uses the sun particle system (CCParticleSun) to create an animated sun, as well as the explosion particle system (CCParticleExplode) to create an explosion effect whenever the monkey collides with a banana.

GoneBananas

Although they are customizable, sometimes the built-in particle systems aren’t exactly what you want. Game developers often use particle system design tools for creating custom particle systems. One such commercial tool is the excellent Particle Designer from 71Squared.

Using Particle Designer is a great way to create custom effects, and I was pleased to discover it works well with CocosSharp. For example, instead of using CCParticleExplode, let's change GoneBananas to use particle systems designed in Particle Designer to create concentric exploding rings, as shown below:

particle designer

Particle Designer has an export feature that supports a variety of formats. In this case I exported the particle systems as plist files and added them to the Content folders of the CocosSharp iOS and Android projects respectively.

Then, to include them in the code, create a CCParticleSystemQuad for each particle system:

// Particle system definitions exported from Particle Designer
var particles = new[] { "innerring.plist", "outerring.plist" };

foreach (string p in particles) {
    var ps = new CCParticleSystemQuad (p);
    ps.Position = pt; // pt: the point where the explosion should appear
    AddChild (ps);
}

Now when you run the game, the custom particle systems appear as shown below:

rings


Automate UI Testing Using Xamarin.UITest

Before publishing an app, either to an app store or via enterprise deployment, it’s important to test it for functionality. However, due to today’s device fragmentation, testing an app on every device configuration is nearly impossible. Thanks to automated UI testing, though, developers can write automated UI tests for mobile applications and run those tests to check for expected behavior.

Xamarin.UITest is an automated UI testing framework based on Calabash that allows developers to write and execute tests in C#. Moreover, it uses the popular NUnit testing framework to validate these tests.

Writing tests and executing them from Xamarin Studio is easy and can be done from Visual Studio, as well. To get started, just create a new ‘NUnit Library Project’ in Xamarin Studio.

To add test functionality, simply add the 'Xamarin.UITest' NuGet package to the project.

Now, using the TestFixture and Test attributes, it becomes easy to write automated UI tests. For example: when a user enters a credit card number with fewer or more digits than required, the app should display an error message; given a valid credit card number, validation should succeed. The tests below verify each of these cases.

[Test]
public void CreditCardNumber_TooShort_DisplayErrorMessage()
{
  _app.EnterText(EditTextView, new string('9', 15));
  _app.Tap(ValidateButton);
  // Assert
  AppResult[] result = _app.Query(ShortErrorMessage);
  Assert.IsTrue(result.Any(), "The error message is not being displayed.");
}

[Test]
public void CreditCardNumber_TooLong_DisplayErrorMessage()
{
  _app.EnterText(EditTextView, new string('9', 17));
  _app.Tap(ValidateButton);
  // Assert
  AppResult[] result = _app.Query(LongErrorMessage);
  Assert.IsTrue(result.Any(), "The error message is not being displayed.");
}

[Test]
public void CreditCardNumber_CorrectSize_DisplaySuccessScreen()
{
  _app.EnterText(EditTextView, new string('9', 16));
  _app.Tap(ValidateButton);
  _app.WaitForElement(SuccessScreenNavBar, "Valid Credit Card Screen did not appear",
    TimeSpan.FromSeconds(5));
  // Assert - Make sure that the message is on the screen
  AppResult[] results = _app.Query(SuccessMessageLabel);
  Assert.IsTrue(results.Any(), "The success message was not displayed on the screen");
}
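
For reference, here is a minimal sketch of the fixture these tests could live in. The file path, element names, and queries below are assumptions for illustration; ConfigureApp and IApp come straight from Xamarin.UITest:

using System;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

[TestFixture]
public class CreditCardValidationTests
{
  IApp _app;

  // Hypothetical queries matching the tests above; LongErrorMessage,
  // SuccessScreenNavBar and SuccessMessageLabel would be defined the same way.
  readonly Func<AppQuery, AppQuery> EditTextView = c => c.Marked ("creditCardNumberText");
  readonly Func<AppQuery, AppQuery> ValidateButton = c => c.Marked ("validateButton");
  readonly Func<AppQuery, AppQuery> ShortErrorMessage = c => c.Marked ("shortErrorMessage");

  [SetUp]
  public void SetUp ()
  {
    // Assumed .apk path; use ConfigureApp.iOS.AppBundle (...) for an iOS app.
    _app = ConfigureApp.Android.ApkFile ("../../CreditCardValidation.Droid.apk").StartApp ();
  }
}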

Once these steps are completed, the tests can be executed from 'Run Unit Tests' or submitted to Xamarin Test Cloud to run on thousands of different devices.

A step-by-step guide and mini-hack from Evolve 2014 is available here.

For more information about Xamarin.UITest visit: http://developer.xamarin.com/guides/testcloud/uitest/intro-to-uitest/

Enjoy your UI testing!

Discuss this blog post in the Xamarin Forums

Physically Based Shading in Unity 5: A Primer

What is Physically Based Shading? Physically Based Shading (PBS for short) simulates the interactions between materials and light in a way that mimics reality. PBS has only recently become possible in real-time graphics. In situations where lighting and materials need to play together intuitively and realistically, it’s a big win.

The idea behind Physically Based Shading is to create a user friendly way of achieving a consistent, plausible look under different lighting conditions. It models how light behaves in reality, without using multiple ad-hoc models that may or may not work.

To do so, it follows principles of physics, including energy conservation (meaning that objects never reflect more light than they receive), Fresnel reflections (all surfaces become more reflective at grazing angles), and the way surfaces occlude themselves (the so-called geometry term), among others.

Unity 5 includes what we call the Standard Shader, which puts together a full PBS model and makes it easily accessible to Unity users. The Standard Shader is designed with hard surfaces in mind (also known as "architectural" materials), and can deal with most real-world materials like stone, ceramics, brass, silver or rubber. It will even do a decent job with non-hard materials like skin, hair or cloth.
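
To give a taste of what that means in practice, here is a minimal sketch (using the Unity 5 scripting API) that creates a material based on the Standard Shader from script; "_Metallic" and "_Glossiness" are the metallic and smoothness properties the shader exposes:

using UnityEngine;

public class StandardShaderExample : MonoBehaviour
{
  void Start ()
  {
    // Create a material that uses the Standard Shader.
    var mat = new Material (Shader.Find ("Standard"));

    mat.color = Color.red;              // albedo tint
    mat.SetFloat ("_Metallic", 0.1f);   // 0 = dielectric, 1 = metal
    mat.SetFloat ("_Glossiness", 0.8f); // smoothness

    GetComponent<Renderer> ().material = mat;
  }
}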

What about “my” content?

Keep in mind that Physically Based Shading doesn't necessarily mean "realistic", and it doesn't dictate how the game (or the assets) look by imposing limitations. It adapts to different styles and aesthetics, ranging from accurately scanned textures, to traditionally photographed ones, to hand-painted ones.

If you're making a flat-lit, 2D sprite-based game, PBS is not at the top of your need-to-have list. But if you want to play with PBS and get the best out of the Standard Shader, we have a few tips for you!

Let’s dive in!

When thinking about lighting in Unity 5, it is handy to divide concepts into what we call the Context, which is information that comes from Unity itself, and the Content, which is information authored directly by you.

The Context

When lighting an object it is important to understand what the environment around the object looks like. Unity has classically had helpers, like light probes, that can sample the diffuse lighting at a location. In Unity 5 we go much further:

Covering the whole range

HDR information is an important element of PBS. For instance, it helps to have information about environments where the sun can be ten times brighter than the blue sky. Unity 5 has a new native pipeline for HDR formats: you can just import .hdr and .exr images directly.

Adding shine

Reflection probes represent the reflections that exist at a certain location. There is one by default in a scene in Unity 5 (which you can look at in Edit->Scene Render Settings->Default Reflection). That reflection can be custom or depend purely on the sky and have no location.

You can, of course, create your own reflection probes. Just go to GameObject->Light->Create Reflection probe.

You’ll get something that looks like the image to the right:

You can then just drag it to whatever location in the scene and it’ll take care of getting information on what the surroundings look like.

Every reflection probe has an area of influence (that shows as a yellow box around the probe). Objects inside that box will pick their reflection data from the probe.

More per-pixel

Thanks to Unity 5’s dynamic GI, light probes now also contain indirect light bounces, which the Standard shader applies per pixel. Normal maps now look great whether or not light is hitting them directly.

Dynamic GI

Global illumination is an important part of the context needed for PBS. For a comprehensive overview of how it works in Unity 5, there's nothing better than checking our blog post on Dynamic GI.

Color Space

PBS and the Standard Shader work in both Linear and Gamma color space. HDR encoding, the data in reflection probes, and the rest of the content will adapt to the color space you choose. But you should stay in Linear space whenever possible for the most correct (and usually most pleasing) visual results.

The Content

Content is the data that is directly authored by you. The Standard shader does bring a few changes to the traditional Unity material workflow that we hope you will like.

The Material Editor

(as introduced in an earlier blogpost)

The Standard Shader introduces a new material editor. The new editor aims to make working with PBS materials easier than working with non-PBS materials ever was.

The editor is more compact now, with all possible options for the material there from the get-go. No need to choose a different shader to change texture channels, no more "texture unused, please choose another shader" messages, and no more changing shaders to change the blending mode.

You have a number of texture slots, none of which are mandatory: any slot left empty will have its code optimised away, so you don't have to worry about it. Unity will take whatever data you put in the editor and generate the right code to make it run at maximum efficiency.

Tip: You can Ctrl+click on textures for a large preview, which will also let you check the contents of the color and alpha channels separately!

Lighting as you would expect

Of course, this whole PBS talk also ties in with Unity 5's dynamic GI workflow: the GI system is fully aware of how the Standard Shader behaves and takes that into account when lighting a scene.

The combination of PBS and Enlighten GI makes it possible to change the lighting conditions of an entire scene quickly, and get results that make sense!

In this village, you will notice that the last shot has a different lighting setup than the scene that opened this post. Still, objects look solid and deep; everything just falls into place. That's exactly what the physically-based shading magic is all about. Once materials are built with PBS in mind, they become completely independent of the lighting conditions, which actually makes working with PBS a ton less work.

That’s why we love it, and that’s why we think you’ll love it too.

The Viking Village will be invading an Asset Store near you when 5.0 ships!

Next we'll be digging into how exactly material channels are put together, tips and hints on how to author textures, and a lot more! Stay tuned if you are interested in in-depth asset creation details!

New Development Snapshot

I've been busy with other things, but there have been enough fixes to warrant a new snapshot.

Changes:

  • Bug fix. When reading a .class resource from an assembly (to attempt to dynamically define it), read all the bytes.
  • Bug fix. Reading/writing java.nio.file attributes of a non-existing file should throw NoSuchFileException.
  • Bug fix. Fixed bitmap synchronization in java.awt.image.BufferedImage and com.sun.imageio.plugins.jpeg.JPEGImageWriter.
  • Ignore the Oracle Java specific -Xmn and -XX: command line options in ikvm.exe.
  • Enabled pack200 unpacking algorithm.
  • IKVM.Reflection: Bug fix. If custom attribute named argument parsing fails due to missing type, ConstructorArguments should still work.

Binaries available here: ikvmbin-8.0.5415.zip

October 28

The future of Web publishing in Unity – an update

Recently we’ve had a lot of inquiries from game developers who are concerned about the future of gaming on the Web in general and the Unity Web Player in the Google Chrome browser in particular, so we wanted to post something to address those concerns. Here it is!

In the fall of 2013 Google announced their plans to discontinue NPAPI support in the Google Chrome browser by the end of 2014. The NPAPI is the API that makes it possible to run native code in the browser and is what the Unity Web Player is based on.

We are not sure exactly when the NPAPI will be discontinued but expect that Google will stick to their plan. The long and the short of it is that when Google do switch off NPAPI the Unity Web Player will no longer work in Chrome.

Currently we think that the Unity Web Player platform is the most efficient technology for gaming on the Web and we are committed to supporting it for as long as the browser landscape means it makes sense to do so. That means throughout 2015 at the very least. Even if your games cease to run on the Google Chrome browser, they’ll still work on other browsers such as Mozilla Firefox, Microsoft IE and Apple Safari.

We know that many of you have deployed great games using Unity Web Player technology, and that those games put food on your table. So, we’ll be sure to keep you posted about web deployment going forward.

Our development team is committed to actively maintaining the Unity Web Player; indeed, a few weeks ago we added support for a 64-bit Unity Web Player – it now runs on IE 11 64-bit and Chrome 64-bit on Windows. We are working on making the Unity Web Player 64-bit on OS X too, so it can run in Chrome 64 once Google drops support for Chrome 32-bit on OS X.

We are aware that the days of running native code in web browsers are numbered; there are simply too many plugins that do not run well and this represents a security risk. Even though we at Unity have always worked hard to keep our plugin current through the auto-update system, we agree that, in the long term, allowing native code in browsers is too big a responsibility for browser manufacturers to take on.

As a consequence, we are working hard on shipping Unity 5 for WebGL. We believe it is the best and safest long-term solution for running advanced 3D and 2D content in browsers. Publishing to WebGL for Unity 5 will be free of charge and will let you target the web without a plugin.

We are collaborating with browser vendors to improve the performance of games running on WebGL, and current performance is very promising – indeed, in some cases, our WebGL solution runs just as fast as natively executed code.

You can read more about the new Unity WebGL benchmarking suite in this blog post and finally you can try Unity 5 for WebGL – the pre-order beta opened to subscribers and everybody who pre-ordered Unity 5 a couple of days ago. As soon as we release Unity 5, the WebGL tools will be available to free users as well.

Plastic SCM email notifications!

People love notifications! So let’s add some to Plastic SCM.

This is a DIY project; it's not something you'll find built-in with Plastic SCM. I came up with this quick solution to get email notifications when certain Plastic SCM operations are triggered. If you want built-in notifications in Plastic, you can use the uservoice page to vote it up.
Now, you can download the tool from here.

Triggers

The entry point for third party tools is the trigger system. Using triggers you can hook into important operations and customize their behavior.

I’ll take the following ones to start coding a simple notification center:

  • After-checkin
  • After-mkreview
  • After-editreview
  • After-mklabel

With the four triggers above, I will be able to get an email when a checkin is created, a new code review is created or edited and finally when a new label is created.

The Plastic SCM triggers provide extra information both before the operation is run (before triggers) and after it finishes (after triggers). You can review all the details here. Consuming the standard input or reading the trigger environment variables will help you create smarter and more customizable notifications, such as getting notified only when the "README.txt" file has been changed on the release branch by a certain user.

This is a high level diagram explaining how the tool works:

The tool

In order to attach the tool to a Plastic SCM trigger you will need to use the “cm mktrigger” command as follows:

cm mktrigger after-mklabel "mklabelnotifier" "C:\triggers\plasticnotifier.exe aftermklabel C:\triggers\mklabel.txt"

You need to specify the trigger type, a name to easily recognize the new trigger and the tool you want to run when the operation is triggered.

The "plasticnotifier.exe" tool only needs two parameters: the first one is the trigger type and the second one is the configuration file for that trigger type.
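
To give an idea of the shape of such a tool, here is a minimal sketch of the entry point; the environment variable name below is illustrative only, so check the Plastic SCM trigger documentation for the real ones:

using System;
using System.Collections.Generic;

class PlasticNotifier
{
  static int Main (string[] args)
  {
    string triggerType = args [0]; // e.g. "aftermklabel"
    string configFile = args [1];  // e.g. C:\triggers\mklabel.txt

    // After-triggers expose details through environment variables
    // (hypothetical name below) and through standard input.
    string user = Environment.GetEnvironmentVariable ("PLASTIC_USER");

    var input = new List<string> ();
    string line;
    while ((line = Console.In.ReadLine ()) != null)
      input.Add (line);

    // Pick the handler for triggerType, read configFile,
    // build the message and send the emails here.
    return 0;
  }
}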

Configuration files

The source code understands three different file formats, which serve as examples you can use to create additional ones.

Mail list

The most basic configuration file is the list of email recipients the trigger will use when fired. It looks as follows:
developer1@yourCompany.com
developer2@yourCompany.com
admin1@yourCompany.com
This format is valid for the "aftermklabel" trigger. The "AftermklabelTrigger" class will read the entire list and start sending emails.

Translation file

For some triggers you will need a mechanism to translate a Plastic SCM user into an email address. This file is used by the "AfteReviewTrigger" class to translate the code review assignee and obtain the email address to send the message to.

This is what the config file looks like:
dev1;developer1@yourCompany.com
dev2;developer2@yourCompany.com
dev3;developer3@yourCompany.com
Each line has two fields separated by a semicolon; the first one is the Plastic SCM user ID and the second one is the user email. The trigger will create an environment variable with the code review assignee's user ID. The "plastic notifier" will use this file to obtain the email address to send notifications to.
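
Loading this format is straightforward; here is a sketch using System.IO and System.Linq, assuming the "user;email" layout above:

var emailByUser = File.ReadAllLines (configFile)
  .Select (l => l.Split (';'))
  .Where (parts => parts.Length == 2)
  .ToDictionary (parts => parts [0].Trim (), parts => parts [1].Trim ());

// emailByUser ["dev1"] -> "developer1@yourCompany.com"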

Complex file

The last file format has several fields and subfields; take the following content as an example:

%_%message%_%
Changeset {0} created with the "{1}" comment.\r\n
Changeset content:\r\n{2}
%_%subscribers%_%
developer1@yourCompany.com;br:/main/experimental;br:/main
admin1@yourCompany.com;*
This file format is used by the “AfterCheckinTrigger” class and has two parts: the message and the subscribers.

The message is the email body that you can customize.

Three placeholders are available for the message: "{0}", "{1}" and "{2}" will be automatically replaced by the changeset specification, the changeset comment and the files changed. Again, this information is provided by the trigger through environment variables and the standard input.

The subscribers part is similar to the "Mail list" format we explained above, but with extra information. After the user email you can write, separated by semicolons, the branches you want notifications for. Using a star (*) you will get notifications for all the checkin operations done on the server. In the example above, the user "developer1" will only get notifications for the /main and /main/experimental branches, while the user "admin1" will get an email for every single checkin done on the server. A sketch of this check follows below.
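
The branch filter itself boils down to a simple check; in this sketch, fields is one subscriber line split by ';' with the email in the first position:

// Decide whether a subscriber should be notified for a checkin on a branch.
// fields [0] is the email; the remaining fields are branches ("*" = all).
static bool ShouldNotify (string[] fields, string changesetBranch)
{
  for (int i = 1; i < fields.Length; i++)
    if (fields [i] == "*" || fields [i] == changesetBranch)
      return true;
  return false;
}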

Further work

With the tool provided, you can get notifications when:
  • A new label is created.
  • A new code review has been assigned to you.
  • A new checkin in a certain branch has been performed.
But this is just a small preview of what you can get if you keep working on it. Here are some suggestions:
  • Get an email when permissions are changed.
  • Get an email when somebody tries to remove a repository.
  • Get an email when certain files are changed.
The source code is written in C#, but you can grab it and translate it to any other language. You can improve and complete the code, and, if you do, please don't forget to share it!

Happy notifying!



Join Us for a Xamarin Fall European Roadshow

Earlier this year, we embarked on a Xamarin European Road Show where we talked to thousands of developers about creating beautiful, native mobile apps in C# with Visual Studio and Xamarin.

Xamarin Evangelist James Montemagno with attendees at the Xamarin Summer 2014 European Roadshow

Due to popular demand, we are doing it again! During the month of November, we will be in Europe with stops in The Netherlands, France, Denmark, UK, Germany, Italy, and Sweden. Join us to explore the latest Xamarin Platform Previews, Xamarin Test Cloud, and Xamarin Insights.

Don't miss the opportunity to meet the Xamarin team, connect with other developers, show off your apps, and talk all things mobile. Register now for an event near you!

We look forward to seeing you in Europe!

October 27

How the 2d version tree works

This article explains how the 2d version tree works and what exactly it renders.

We realized the 2d-version-tree is one of the less understood features in Plastic, so a detailed explanation is definitely worth it.

Item-level trees vs changeset trees

Plastic displays the evolution of a repository by rendering the Branch Explorer. It is an overall view of what happened at a global level instead of going file by file.

While this is very useful in almost all cases, there are users who miss the old days of individual version trees. Maybe they have a pre-Plastic 4.0 background, maybe they come from good-ol' ClearCase, or maybe they just find it more natural.

Plastic SCM works on a changeset basis: changesets are the core of the system and not individual file histories. The reason is merge tracking: merges are tracked at the changeset level and not the individual file revision level.

We actually changed this when we moved to version 4 a few years ago. Before that, merges were tracked at the individual item (directory/revision) level.

What does this mean? Well, when you had to merge, say, 1200 files (or directories), Plastic 3.0 had to walk 1200 different history trees finding merge links and ancestors. In 4.0 and beyond it only walks one single tree. There's no way for 3.0 to outperform 4.x (and later) in merge speed because the old version simply had to do tons more work. I won't cover all the details, but this radical change didn't only benefit merge performance but also overall system speed and the distributed features.

A simple 2d tree scenario

Let’s go through a very simple branch/merge scenario and let’s follow the history of a single file inside our repository. The following figure shows the Branch Explorer of our repo and where the file “foo.c” was modified.

As you can see, the file was added in changeset 1, later branched and changed in changeset 5, and this change was merged back to "main" in changeset 7.

What does the version tree of the "foo.c" file look like?

Look at the following figure: you probably expect something like the graphic on the right, but this is not how Plastic works. Plastic has actually created only 2 revisions of "foo.c" so far: one in changeset 1 and a second one in changeset 5.

You may wonder what happened during the merge: well, changeset 7 simply includes the revision loaded by changeset 5, because there is no merge conflict and hence no need to create an extra revision for the file. This is what we call a "revision replacement": changeset 7, which is a child of changeset 4, simply replaces the loaded revision of "foo.c" as the result of the merge.

You probably expected something like the graphic on the right of the figure above, and in fact this is how things worked in Plastic 3 and before, but the underlying merge-tracking mechanism changed in 4.0 and beyond. There's no need to create extra revisions of the file for trivial merges, which greatly reduces the amount of operations to be performed.

Think about it: suppose you added 10k files on a branch and later merged them back to main. 3.0 actually created another 10k revisions of the files on main, while 4.x and beyond simply says "hey, load them on the main branch, that's all", saving precious time.

So, how does the 2d-version-tree renders the previous case? Check the following figure:

As you can see the 2d-version-tree decorates the “real” version tree of the file with information from the Branch Explorer (changeset history) so you can better understand what is going on with the file.

The changesets marked with "U" mean the file was unchanged in that changeset, but they are still rendered so you can understand how the file evolved through the repo history. Looking at this diagram you can see that the revision changed on branch1 was the one finally labelled as BL001. Looking at the "raw" tree (or the history of the file) you wouldn't have enough info to understand this.

A slightly more complex 2d-tree scenario

Look now at the following Branch Explorer:

It is slightly more complex than the previous one since it involves 3 branches and a couple of merges. Our file "foo.c" was simply added in changeset "1" and changed in "9". This is what the "real" version tree looks like:

Looking at this tree you'd never understand what actually happened to the file! How did it end up in branch2? Was it ever merged? You can't tell.

Now, let’s look at the 2d-version-tree:

It still shows there are only two revisions of the file, but by rendering the "unchanged changesets" you can now understand how the file evolved and how it ended up being labelled in BL001.

A 2d-tree with a file concurrently changed

The cases so far didn't run into merge conflicts: foo.c wasn't modified in parallel or involved in a real merge.

The following Branch Explorer renders a third scenario where foo.c is finally modified in parallel and merged:

Now foo.c is added in 1 as before but changed both on 4 and 9.

This is how the raw version tree looks like:

Whenever we have a *real* merge we'll be able to render a merge link between two revisions, which greatly helps in understanding the scenario. But still, the graphic above falls short of explaining what actually happens to the file, doesn't it?

You didn’t do a merge from “branch2” to “main” so, why do you have such a merge link?

That’s why the 2d-version-tree solves the scenario as follows:

Conclusion

I hope that reading through the previous cases helps you understand how the 2d-version-tree works and gives you a better idea of why it explains the history the way it does.

Don't hesitate to reach out to us if you have any questions.

Unity 5.0 pre-order beta now available!

We are happy to announce that today Unity 5 pre-order customers and Unity Pro subscribers can download the Unity 5.0 pre-order beta.

Unity 5 brings together an extraordinary collection of new features. Everyone from individuals to enterprise-size teams will be able to create content that looks and sounds significantly sharper and more polished than what was possible in earlier versions of the engine. There are major advancements in performance and scripting, and new platform support.

Beta access is a valuable training opportunity for what will be our largest release to date. This huge leap in quality and power comes with many new interfaces and scripting changes. As always, we aim to balance this out with workflows that are as straightforward and intuitive as possible. Joining the beta offers our most loyal customers a head start in learning these new tools and APIs.

For the Unity 5.0 pre-order beta release our Learn team have created a number of tutorials to get you started with some of the new features of Unity 5:

Shading and Lighting

The Standard Shader
Lighting in Unity 5

Audio

Intro to Audio in Unity 5: Mixers and Groups
Intro to Audio Effect Processing
Send and Receive Audio Effects
Duck Volume Audio Effect
Audio Mixer Snapshots
Exposed Parameters in Unity 5’s Audio Mixer
Upgrading to the New Audio Mixer Pt 1
Upgrading to the New Audio Mixer Pt 2

Animation

State Machine Behaviours
State Machine Hierarchies

You can also peruse the following blog posts and Unite 2014 presentations:

Blog posts
Global Illumination
Frame debugger
Audio Mixer
High-performance physics
New animation features
Physically-based Standard Shader
Future of Scripting (IL2CPP)
Automatic Script Updating for API changes
WebGL Performance

Unite 2014
Mastering physically-based shading in Unity 5
Best practices for physically-based shading
Lighting workflow in Unity 5
WebGL deployment in Unity 5
Physics in Unity 5
Asset Build system in Unity 5
SpeedTree for Unity 5
Audio Mixer
Animation in Unity 5

Our developers are hard at work polishing 5.0 and awaiting your feedback. Please report any encountered bugs in the usual way. More importantly, we look forward to hearing all about your 5.0 beta experience on the forums.

Join Xamarin at TechEd Europe 2014

The Xamarin team is excited to once again be attending TechEd Europe, this year in Barcelona, Spain.

You can find us at the Xamarin booth (#31) throughout the conference from October 28th – October 31st. We welcome you to stop by with your technical questions and to learn more about the awesome announcements made at Xamarin Evolve 2014, including Xamarin Platform Previews, Xamarin Insights, and updates to Xamarin Test Cloud.

In addition to chatting with us at the booth, we also invite you to check out James Montemagno’s talks on Wednesday, Thursday, and Friday:

  • Wednesday, October 29th, 12-1:15 pm (DEV-B217) Go Mobile with C#, Visual Studio, and Xamarin: Learn how to leverage your existing Microsoft .NET and C# skills to create iOS and Android mobile apps in Visual Studio with Xamarin. You'll get the tools to see how much existing C# code can go mobile to iOS and Android, plus determine the architecture necessary to support maximum code sharing and reuse, as well as guidance and best practices for handling fragmentation across and within each device platform.
  • Thursday, October 30th, 6:30-8:00 pm (Ask the Experts, Table 43) Mobile App Development: James and Principal Program Manager Lead Ryan Salva from Microsoft will be available to answer your questions during this Ask the Experts session.
  • Friday, October 31st, 10:15-11:30 am (DEV-B306) Building Multi-Device Applications with Xamarin and Cordova with Office 365 APIs: Ryan Short, Technical Evangelist at Microsoft, will join James during this session on how to use the Microsoft Office 365 APIs in your mobile apps. During the session, you'll see sample apps running, understand the scenarios where you would use Office 365 APIs in mobile device applications, and learn how to get started with the Office 365 APIs.

We look forward to seeing you in Barcelona!

Can’t make it to TechEd Europe 2014? We’ve got you covered with a Xamarin Fall European Roadshow starting in November. Look for details on the blog tomorrow to register and join us!

October 26

Announcing an MSpec Parallel Test Runner for TeamCity (mspec-teamcity-prunner)

MSpec is absolutely great – no argument about it. I am using it exclusively in a current project to specify and test top to bottom. However, one thing that has been bugging me for a long time is the inability to run tests in parallel on TeamCity. Most Visual Studio tools support it (CodeRush, ReSharper, etc.) – why shouldn't it be possible on the build server? And as test run time kept increasing, I finally got fed up and did something about it.

I am pleased to announce mspec-teamcity-prunner.exe. It is a drop-in replacement for the default console runner (mspec.exe), with the major difference that through the --threads N parameter you can specify the number of assemblies to run in parallel.

Drop-in replacement means that all you need to do is swap the path to mspec.exe with the one to mspec-teamcity-prunner.exe in your TeamCity Build Step.
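
For example, a build step command line might look like this (the path, assembly names and thread count are illustrative):

C:\BuildTools\mspec-teamcity-prunner.exe --threads 4 Specs.Web.dll Specs.Core.dll Specs.Data.dll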

For me this has led to a ~70% reduction in the time it takes to run our MSpec test suite.

Details on the project and installation instructions/NuGet package here:

https://github.com/ivanz/mspec-teamcity-prunner

 

October 24

Xamarin Evolve 2014 Xammy Award Winners

This year we hosted the inaugural Xammy Awards, recognizing top apps on the global stage of Xamarin Evolve. The competition was fierce, with 11 amazing apps in 4 categories. After much deliberation, our panel of judges selected a winner from each category, and our worldwide community voted on a "Developers' Choice" winner; all of the winners were announced on stage at Xamarin Evolve 2014.

Xamarin SVP of Sales and Customer Success, Stephanie Schatz, on stage at Xamarin Evolve 2014 to present the Xammy Awards.

This year’s winners were:

We’d like to congratulate our winners, as well as the other outstanding nominees, once again for the impressive apps that they have built. We invite you to view the videos of this year’s winners and nominees for a closer look at some of the incredible apps being built with Xamarin.

Some Things in iOS 8

iOS 8 added a lot of new functionality and APIs. Along the way, several things have changed. Here are a few items I’ve come across:

Documents and Library

UPDATE: Xamarin.iOS now handles getting the folder path correctly when using Environment.SpecialFolder in iOS 8 as well.

Prior to iOS 8 it was common for Xamarin.iOS applications to access folder paths using the .NET System.Environment class, which on iOS provided a familiar abstraction around native system folders. For example, you could get to the documents folder like this:

var docs = Environment.GetFolderPath (
  Environment.SpecialFolder.MyDocuments);

However, in iOS 8 the location of some folders, namely the Documents and Library folders, has changed such that they are no longer within the app’s bundle.

Apple describes the changes in Technical Note TN2406.

The proper way of determining the location of these folders is to use the NSFileManager. For example, get the location of the Documents folder as follows:

var docs = NSFileManager.DefaultManager.GetUrls (
  NSSearchPathDirectory.DocumentDirectory, 
  NSSearchPathDomain.User) [0];

Location Manager

To use location in iOS you go through the CLLocationManager class. Before iOS 8 the first time an app attempted to start location services the user was presented with a dialog asking to turn location services on. You could set a purpose string directly on the location manager to tell the user why you need location in this dialog.

In iOS 8 you now have to call either RequestWhenInUseAuthorization or RequestAlwaysAuthorization on the location manager. Additionally you need to add either the concisely named NSLocationWhenInUseUsageDescription or NSLocationAlwaysUsageDescription to your Info.plist. Thanks to my buddy James for tracking these down.
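
Putting it together, a minimal sketch in Xamarin.iOS might look like this (remember the matching Info.plist key; the version check keeps iOS 7 working):

var locationManager = new CLLocationManager ();

// iOS 8 only: request authorization explicitly.
// NSLocationWhenInUseUsageDescription must also be in Info.plist.
if (UIDevice.CurrentDevice.CheckSystemVersion (8, 0)) {
  locationManager.RequestWhenInUseAuthorization ();
}

locationManager.LocationsUpdated += (sender, e) => {
  var location = e.Locations [e.Locations.Length - 1];
  Console.WriteLine (location.Coordinate.Latitude);
};
locationManager.StartUpdatingLocation ();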

AVSpeechSynthesizer


The AVSpeechSynthesizer was added in iOS 7, allowing apps to deliver text to speech functionality with just a few lines of code, like this:

var speechSynthesizer = new AVSpeechSynthesizer ();

var speechUtterance = new AVSpeechUtterance (text) {
  Rate = AVSpeechUtterance.MaximumSpeechRate / 4,
  Voice = AVSpeechSynthesisVoice.FromLanguage ("en-US"),
  Volume = 1.0f
};

speechSynthesizer.SpeakUtterance (speechUtterance);

The above code worked on either the simulator or a device prior to iOS 8. However, when run on an iOS 8 simulator, you are now greeted with the following error message:

Speech initialization error: 2147483665

However it does appear to work on a device. There is an open bug here: http://openradar.appspot.com/17299966

Thanks to René Ruppert for discovering this. Incidentally, René has a blog post on a few other iOS 8 issues worth checking out: http://krumelur.me/2014/09/23/my-ios8-adventure-as-a-xamarin-developer/

Input Accessory Views

Before iOS 8 you could set the InputAccessoryView on a UITextField from a view contained in another controller.

aTextField.InputAccessoryView = aViewController.SomeView;

While this worked before iOS 8, it did not guarantee the view controller hierarchy would be set up properly. A better approach, even before iOS 8, is to set the InputAccessoryView directly to an instance of a UIView subclass, not a view contained in another UIViewController. Practically speaking, people took the view controller approach because it let them set things up via a xib. Therefore, to handle the view controller case, iOS 8 introduced the InputAccessoryViewController property on UIResponder. It's still easier to just use a UIView subclass imho, but if you need to use a UIViewController, assign it to InputAccessoryViewController.
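
For example, a sketch of the plain UIView approach (the subclass here is hypothetical):

class KeyboardToolbar : UIView
{
  public KeyboardToolbar () : base (new CGRect (0, 0, 320, 44))
  {
    BackgroundColor = UIColor.LightGray;
    // add buttons, etc.
  }
}

// ...
aTextField.InputAccessoryView = new KeyboardToolbar ();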

Action Sheets

Apple states in their documentation that you should not add a view to a UIActionSheet’s view hierarchy. Before iOS 8 adding subviews to a UIActionSheet would actually work, although it was never the intention (nor should it be subclassed). Code that took this approach should have presented a view controller.

In iOS 8 subclassing UIActionSheet or adding subviews to it will no longer work. Additionally UIActionSheet itself has been deprecated. Instead, you should use a UIAlertController in iOS 8 (UIAlertController should also be used in iOS 8 in place of the deprecated UIAlertView) as I discussed in my iOS 8 webinar.
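
For example, a minimal action-sheet replacement with UIAlertController, assuming this code runs inside a UIViewController (the actions are illustrative):

var alert = UIAlertController.Create ("Options", null, UIAlertControllerStyle.ActionSheet);
alert.AddAction (UIAlertAction.Create ("Delete", UIAlertActionStyle.Destructive,
  action => { /* handle delete */ }));
alert.AddAction (UIAlertAction.Create ("Cancel", UIAlertActionStyle.Cancel, null));
PresentViewController (alert, true, null);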


Mono for Unreal Engine

Earlier this year, both Epic Games and Crytek made their Unreal Engine and CryEngine available under an affordable subscription model. These are both very sophisticated game engines that power some high-end and popular games.

We had previously helped Unity bring Mono as the scripting language used in their engine, and now we had a chance to do this all over again.

Today I am happy to introduce Mono for Unreal Engine.

This is a project that allows Unreal Engine users to build their game code in C# or F#.

Take a look at this video for a quick overview of what we did:

This is a taste of what you get out of the box:

  • Create game projects purely in C#
  • Add C# to an existing project that uses C++ or Blueprints.
  • Access any API surfaced by Blueprint to C++, and easily surface C# classes to Blueprint.
  • Quick iteration: we fully support Unreal Engine's hot reloading, with the added twist that we support it from C#. This means that you hit "Build" in your IDE and the code is automatically reloaded into the editor (with live updates!)
  • Complete support for the .NET 4.5/Mobile Profile API. This means all the APIs you love are available for you to use.
  • Async-based programming: we have added special game schedulers that allow you to use C# async naturally in any of your game logic. Beautiful and transparent.
  • Comprehensive API coverage of the Unreal Engine Blueprint API.

This is not a supported product by Xamarin. It is currently delivered as a source code package with patches that must be applied to a precise version of Unreal Engine before you can use it. If you want to use higher versions, or lower versions, you will likely need to adjust the patches on your own.

We have set up a mailing list that you can use to join the conversation about this project.

Visit the site for Mono for Unreal Engine to learn more.

(I no longer have time to manage comments on the blog, please use the mailing list to discuss).

October 22

Leading Unity into the Future

Hello everyone! As you all know, we’ve been making some big moves at Unity lately. We’ve partnered with some amazing companies and begun putting a lot more energy and resources into developing technologies that will help you be more efficient when you create games and then help you connect your games with an awesome audience when you’re ready for it.

This is a lot to take on as a company and building it right is going to take a huge effort. We keep our eyes out for the best talent and help wherever we feel we can use it at every level. That’s why today, I’m pleased to welcome John Riccitiello onto the Unity team as our new CEO. Sure, that sounds odd, as it also means I’m stepping down from the role, but this is an amazing win for Unity and the community.

He’s the right person to help guide the company to the mission that we set out for ourselves over a decade ago: democratize game development!

Many of you are likely familiar with John. He’s been in the games industry for a long time, both as COO and later CEO of Electronic Arts. He’s also helped fund and guide some notable startups like Oculus and Syntertainment among many others, and is a heartfelt believer in the indie scene and its importance to the overall well-being of the industry.

I’ve had the pleasure of working with John closely over the last year after I convinced him to join our board. It’s been a delight to see his passion for Unity and what we’re doing here grow as he got to know the Unity community better and better.

So, what does this mean for me? It means I get to go back to doing what I love most about working at Unity: strategy, and connecting with developers. I will be heavily involved with the company's direction, and will focus my efforts on finding the best ways to serve all of you amazing developers, all while working with some insanely talented people here at Unity.

What does this mean for Unity? Not too much, since John completely agrees with our vision and our strategy. If anything it means that we’ll be more focused than ever about making sure everyone has access to the best technology and services. We want all of you to have the best tools and opportunities, and we’re going to do everything we can to make that happen across the board.

So, welcome John to the Unity family! Great things are ahead!

Much love,

David

Directory Notifications to find changes

Pending Changes is now faster than ever because it no longer needs to traverse the workspace. We have implemented a new mechanism based on Windows Directory Notifications to detect workspace changes.

It is available only on Windows but we’ll eventually implement it for Linux and Mac (based on their corresponding notification mechanisms).

What does it mean for you? Well, as soon as you install 5.0.44.608 or 5.4.15.604 (or higher) Pending Changes will be faster. You’ll clearly notice the speed up with really large workspaces (in number of files) and with slow disks. The slower the disk is, the clearer the speed up will be.

How does Pending Changes work (without directory notifications)?

Whenever you click "refresh" on Pending Changes, Plastic triggers a search to find the files that have been modified in your workspace.

The diagram below describes the process in detail:

  • The process checks the “pending changes options” first: if only checkouts are requested, then there’s nothing to look for, just print the list. That’s why working with checkouts makes sense for huge workspaces (>400k files).
  • If the options to find changed files on disk are set, the directory walk will start.
  • For each directory, starting at the root of the workspace, Plastic will try to find changed files. It compares the timestamp on disk with the timestamp stored in the wktree file: the plastic.wktree file (inside .plastic) stores the metadata of each file, so it knows "how it was" after the last update or checkin. If the timestamp and size don't match, the file was changed. If the timestamp doesn't match but the size does, Plastic hashes the file; this is slower, but it makes sure the file is different. Alternatively, there's an option to force Plastic to always find changes based on file contents (ignoring the timestamp), which is definitely slower but required in some scenarios. (See the sketch right after this list.)
  • At the end of the disk walk, Plastic has a list of all the modified files on your workspace.
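
In rough C#, the per-file decision from the third bullet looks like this; WkTreeEntry and Hash are illustrative names for the metadata stored in plastic.wktree and the content hash:

static bool IsChanged (FileInfo onDisk, WkTreeEntry stored, bool forceContentCheck)
{
  if (forceContentCheck)
    return Hash (onDisk) != stored.Hash;     // always compare contents

  if (onDisk.LastWriteTimeUtc == stored.Timestamp)
    return false;                            // timestamp matches: assume unchanged

  if (onDisk.Length != stored.Size)
    return true;                             // timestamp and size differ: changed

  return Hash (onDisk) != stored.Hash;       // same size: hash to be sure
}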

The diagram doesn’t show the last step (if the option is set): find “moved and renamed files” by matching potential added and deleted files.

What I want to explain with the diagram is that there is at least one IO operation per directory. If you keep pressing "refresh" on the Pending Changes view, chances are the list will be filled quickly: after the first traversal the workspace will be loaded in the file system cache, so the next reads will be blazing fast. But if your disk is not very fast, or your computer is performing a lot of IO, or using a lot of RAM, chances are that your workspace won't be entirely loaded in the file system cache, and then walking it will take longer.

You probably noticed it when after some coding you go back to Plastic, click refresh, and it takes longer than usual. This is exactly what we wanted to improve with this feature.

How does Pending Changes work with Directory Notifications?

It is rather simple: we use Windows directory notifications to listen to events on the workspace directory. Each time a file is written, deleted, added, moved or renamed inside the workspace, Plastic gets a notification.

So, while we perform an initial directory traversal the first time the Pending Changes view loads, no other full directory walk will be needed later, greatly speeding up the operation.

What we do is the following: after the first traversal we keep a tree with the metadata of what is on disk, and we invalidate parts of it (on a directory basis) each time a change happens inside it. This way Pending Changes only has to reload parts of the tree instead of walking the workspace entirely. It saves precious time while still being a robust solution.

One of the issues with Directory Notifications on Windows is that they can't really notify file or directory moves, so you have to match pairs of added/deleted events. Instead of trying to pair the notifications, we just invalidate parts of the tree and let the regular Pending Changes code do the rest.

So, there's still room for improvement, but our initial tests showed that the extra complexity of doing more precise event tracking didn't pay off compared to just invalidating parts of the tree.
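
On .NET, Windows directory notifications surface through FileSystemWatcher; here is a sketch of the invalidation approach described above, where cachedTree stands in for the in-memory metadata tree:

var watcher = new FileSystemWatcher (workspaceRoot) {
  IncludeSubdirectories = true,
  NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName |
                 NotifyFilters.LastWrite | NotifyFilters.Size
};

// Any event just invalidates the containing directory in the cached tree;
// the next refresh reloads only the invalidated parts.
FileSystemEventHandler invalidate = (s, e) =>
  cachedTree.InvalidateDirectory (Path.GetDirectoryName (e.FullPath));

watcher.Changed += invalidate;
watcher.Created += invalidate;
watcher.Deleted += invalidate;
watcher.Renamed += (s, e) => {
  // A rename/move arrives with both paths; invalidate both parents.
  cachedTree.InvalidateDirectory (Path.GetDirectoryName (e.OldFullPath));
  cachedTree.InvalidateDirectory (Path.GetDirectoryName (e.FullPath));
};
watcher.EnableRaisingEvents = true;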

Availability

Chances are you are already using it :-)

If you’re using 5.0.44.608 or higher or 5.4.15.604 or higher, you’re already enjoying the Directory Notifications powered Pending Changes view.

October 21

Xamarin Evolve 2014 Session Recordings Now Available

At Xamarin Evolve 2014, we were excited to be joined by industry experts and Xamarin engineers delivering sessions on diverse mobile developer topics ranging from the latest in native mobile technologies, to enterprise development, to gaming and more. We are now pleased to make these great Xamarin Evolve 2014 conference sessions available on evolve.xamarin.com for your viewing pleasure.

Xamarin CTO Miguel de Icaza gives a session at Xamarin Evolve 2014

Head over to the Xamarin Evolve 2014 site now to kick back and enjoy the topics you find most interesting. With so many sessions, there’s plenty to choose from and something for everybody.

Enjoy!

App Spotlight: OBD Fusion Car Diagnostics Video

Evolve 2014 was an eye-opening event. It was an amazing opportunity to see some of the innovative mobile apps people are creating with the Xamarin platform. We’ll be featuring several of these over the coming weeks. Today we’re highlighting an automotive diagnostic app called OBD Fusion.

Matt O'Connor from OCTech, creator of OBD Fusion, shared his app, which wirelessly reads onboard diagnostics from a vehicle. The app enables car enthusiasts to create their own dashboards and access much of the same diagnostic information a master mechanic would use with specialized automotive equipment to diagnose and tune an automobile. With OBD Fusion, users can calculate fuel economy, graph data, and also test emissions in real-time.

To really appreciate how this works, we got a car and took a drive around Atlanta. Watch it in full-screen to see it in action.

What really impresses me is how polished OBD Fusion is and how cleanly it presents useful information to the user. With sensor data coming in at up to 100 times a second, the app works flawlessly, utilizing multi-threading to process a large amount of data and update the dashboards and graphs in real-time – something only a native app can effectively do.

Matt originally created his app in C# as a desktop application. Utilizing Xamarin, he was able to convert it to iOS and Android with over 90% code-reuse, and by using app templates he’s created native smartphone and tablet user experiences. In addition to the automobile’s sensor data, he’s also integrated sensors on the mobile device, including the GPS and accelerometer to measure position and vehicle acceleration.

Congratulations to Matt for creating an outstanding app!

Learn More

OBD Fusion is available in the iTunes and Google Play app stores.

To get started developing with the Xamarin platform, check out our developer documentation, or get live online training with Xamarin University.

October 20

Xamarin.Forms Book Preview Edition Available Now

The Petzold engine has been churning through the Xamarin.Forms API, and we’re pleased to announce that the first preview edition of Creating Mobile Apps with Xamarin.Forms is now available. It was distributed in print to all Xamarin Evolve 2014 attendees last week and is now available as a free download in a number of formats:

The six chapters available in the preview edition cover the history of Xamarin.Forms, solution structure, the basics of the view system and building your first Xamarin.Forms app. The extensive samples are also available for download from GitHub.

Charles also presented two talks at Evolve: XAML for Xamarin.Forms and Xamarin.Forms is Cooler Than You Think, both of which will be available shortly as videos of the sessions are published at evolve.xamarin.com.

Work continues on both Xamarin.Forms and the book!

Traktor Pro .tsi file format reverse-engineered – docs on GitHub

I've reverse-engineered the .tsi binary file format specification, which Traktor Pro by Native Instruments uses to store its Controller Mappings. I am hoping this should open the gates to new power tools for MIDI controller mapping (Traktor's own being rather sub-optimal).

The project, documentation and instructions are here: https://github.com/ivanz/TraktorMappingFileFormat/wiki

I can't sing enough praises of 010 Editor by SweetScape, which enabled me to write scripts and define data structures incrementally on top of a binary blob.

October 16

The Wait Is Over: MimeKit and MailKit Reach 1.0

After about a year in the making for MimeKit and nearly 8 months for MailKit, they've finally reached 1.0 status.

I started really working on MimeKit about a year ago wanting to give the .NET community a top-notch MIME parser that could handle anything the real world could throw at it. I wanted it to run on any platform that can run .NET (including mobile) and do it with remarkable speed and grace. I wanted to make it such that re-serializing the message would be a byte-for-byte copy of the original so that no data would ever be lost. This was also very important for my last goal, which was to support S/MIME and PGP out of the box.

All of these goals for MimeKit have been reached (partly thanks to the BouncyCastle project for the crypto support).

At the start of December last year, I began working on MailKit to aid in the adoption of MimeKit. It became clear that without a way to inter-operate with the various types of mail servers, .NET developers would be unlikely to adopt it.

I started off implementing an SmtpClient with support for SASL authentication, STARTTLS, and PIPELINING.
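
To give a flavor of the API, here is a minimal SMTP send using MimeKit and MailKit (the server, credentials and addresses are placeholders):

using MailKit.Net.Smtp;
using MimeKit;

var message = new MimeMessage ();
message.From.Add (new MailboxAddress ("Alice", "alice@example.com"));
message.To.Add (new MailboxAddress ("Bob", "bob@example.com"));
message.Subject = "Hello from MailKit";
message.Body = new TextPart ("plain") { Text = "It's finally 1.0!" };

using (var client = new SmtpClient ()) {
  client.Connect ("smtp.example.com", 587, false);
  client.Authenticate ("alice", "password");
  client.Send (message);
  client.Disconnect (true);
}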

Soon after, I began working on a Pop3Client that was designed such that I could use MimeKit to parse messages on the fly, directly from the socket, without needing to read the message data line-by-line looking for a ".\r\n" sequence, concatenating the lines into a massive memory buffer before I could start to parse the message. This fact, combined with the fact that MimeKit's message parser is orders of magnitude faster than any other .NET parser I could find, makes MailKit the fastest POP3 library the world has ever seen.

After a month or so of avoiding the inevitable, I finally began working on an ImapClient which took me roughly two weeks to produce the initial prototype (compared to a single weekend for each of the other protocols). After many months of implementing dozens of the more widely used IMAP4 extensions (including the GMail extensions) and tweaking the APIs (along with bug fixing) thanks to feedback from some of the early adopters, I believe that it is finally complete enough to call 1.0.

In July, at the request of someone involved with a number of the IETF email-related specifications, I also implemented support for the new Internationalized Email standards, making MimeKit and MailKit the first - and only - .NET email libraries to support these standards.

If you want to do anything at all related to email in .NET, take a look at MimeKit and MailKit. I guarantee that you will not be disappointed.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers