Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

July 2

IL2CPP Internals: P/Invoke Wrappers

This is the sixth post in the IL2CPP Internals series. In this post, we will explore how il2cpp.exe generates wrapper methods and types used for interop between managed and native code. Specifically, we will look at the difference between blittable and non-blittable types, understand string and array marshaling, and learn about the cost of marshaling.

I’ve written a good bit of managed to native interop code in my days, but getting p/invoke declarations right in C# is still difficult, to say the least. Understanding what the runtime is doing to marshal my objects is even more of a mystery. Since IL2CPP does most of its marshaling in generated C++ code, we can see (and even debug!) its behavior, providing much better insight for troubleshooting and performance analysis.

This post does not aim to provide general information about marshaling and native interop. That is a wide topic, too large for one post. The Unity documentation discusses how native plugins interact with Unity. Both Mono and Microsoft provide plenty of excellent information about p/invoke in general.

As with all of the posts in this series, we will be exploring code that is subject to change and, in fact, is likely to change in a newer version of Unity. However, the concepts should remain the same. Please take everything discussed in this series as implementation details. We like to expose and discuss details like this when it is possible though!

The setup

For this post, I’m using Unity 5.0.2p4 on OSX. I’ll build for the iOS platform, using an “Architecture” value of “Universal”. I’ve built my native code for this example in Xcode 6.3.2 as a static library for both ARMv7 and ARM64.

The native code looks like this:

#include <cstring>
#include <cmath>

extern "C" {
int Increment(int i) {
  return i + 1;
}

bool StringsMatch(const char* l, const char* r) {
  return strcmp(l, r) == 0;
}

struct Vector {
  float x;
  float y;
  float z;
};

float ComputeLength(Vector v) {
  return sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}

void SetX(Vector* v, float value) {
  v->x = value;
}

struct Boss {
  char* name;
  int health;
};

bool IsBossDead(Boss b) {
  return b.health == 0;
}

int SumArrayElements(int* elements, int size) {
  int sum = 0;
  for (int i = 0; i < size; ++i) {
    sum += elements[i];
  }
  return sum;
}

int SumBossHealth(Boss* bosses, int size) {
  int sum = 0;
  for (int i = 0; i < size; ++i) {
    sum += bosses[i].health;
  }
  return sum;
}

}

The scripting code in Unity is again in the HelloWorld.cs file. It looks like this:

void Start () {
  Debug.Log (string.Format ("Using a blittable argument: {0}", Increment (42)));
  Debug.Log (string.Format ("Marshaling strings: {0}", StringsMatch ("Hello", "Goodbye")));

  var vector = new Vector (1.0f, 2.0f, 3.0f);
  Debug.Log (string.Format ("Marshaling a blittable struct: {0}", ComputeLength (vector)));
  SetX (ref vector, 42.0f);
  Debug.Log (string.Format ("Marshaling a blittable struct by reference: {0}", vector.x));

  Debug.Log (string.Format ("Marshaling a non-blittable struct: {0}", IsBossDead (new Boss("Final Boss", 100))));

  int[] values = {1, 2, 3, 4};
  Debug.Log(string.Format("Marshaling an array: {0}", SumArrayElements(values, values.Length)));
  Boss[] bosses = {new Boss("First Boss", 25), new Boss("Second Boss", 45)};
  Debug.Log(string.Format("Marshaling an array by reference: {0}", SumBossHealth(bosses, bosses.Length)));
}

Each of the method calls in this code is made into the native code shown above. We will look at the managed method declaration for each method as we see it later in the post.

Why do we need marshaling?

Since IL2CPP is already generating C++ code, why do we need marshaling from C# to C++ code at all? Although the generated C++ code is native code, the representation of types in C# differs from C++ in a number of cases, so the IL2CPP runtime must be able to convert back and forth from representations on both sides. The il2cpp.exe utility does this both for types and methods.

In managed code, all types can be categorized as either blittable or non-blittable. Blittable types have the same representation in managed and native code (e.g. byte, int, float). Non-blittable types have a different representation in managed and native code (e.g. bool, string, array types). As such, blittable types can be passed to native code directly, but non-blittable types require some conversion before they can be passed to native code. Often this conversion involves new memory allocation.
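
For illustration, here are two hypothetical struct declarations (they are not part of the example project, and the attribute choices are assumptions on my part, not something the post prescribes):

using System.Runtime.InteropServices;

// Blittable: every field has the same representation in managed and native
// code, so instances can be passed to native code without conversion.
struct Point
{
  public int X;
  public int Y;
}

// Non-blittable: bool and string have no single native representation, so
// marshaling directives are needed to pick the native form.
[StructLayout(LayoutKind.Sequential)]
struct Options
{
  [MarshalAs(UnmanagedType.I1)]
  public bool Enabled;

  [MarshalAs(UnmanagedType.LPStr)]
  public string Name;
}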

In order to tell the managed code compiler that a given method is implemented in native code, the extern keyword is used in C#. This keyword, along with a DllImport attribute, allows the managed code runtime to find the native method definition and call it. The il2cpp.exe utility generates a wrapper C++ method for each extern method. This wrapper performs a few important tasks:

  • It defines a typedef for the native method which is used to invoke the method via a function pointer.
  • It resolves the native method by name, getting a function pointer to that method.
  • It converts the arguments from their managed representation to their native representation (if necessary).
  • It calls the native method.
  • It converts the return value of the method from its native representation to its managed representation (if necessary).
  • It converts any out or ref arguments from their native representation to their managed representation (if necessary).

We’ll take a look at the generated wrapper methods for some extern method declarations next.

Marshaling a blittable type

The simplest kind of extern wrapper only deals with blittable types.

[DllImport("__Internal")]
private extern static int Increment(int value);

In the Bulk_Assembly-CSharp_0.cpp file, search for the string “HelloWorld_Increment_m3”. The wrapper function for the Increment method looks like this:

extern "C" {int32_t DEFAULT_CALL Increment(int32_t);}
extern "C" int32_t HelloWorld_Increment_m3 (Object_t * __this /* static, unused */, int32_t ___value, const MethodInfo* method)
{
  typedef int32_t (DEFAULT_CALL *PInvokeFunc) (int32_t);
  static PInvokeFunc _il2cpp_pinvoke_func;
  if (!_il2cpp_pinvoke_func)
  {
    _il2cpp_pinvoke_func = (PInvokeFunc)Increment;
    if (_il2cpp_pinvoke_func == NULL)
    {
      il2cpp_codegen_raise_exception(il2cpp_codegen_get_not_supported_exception("Unable to find method for p/invoke: 'Increment'"));
    }
  }

  int32_t _return_value = _il2cpp_pinvoke_func(___value);

  return _return_value;
}

First, note the typedef for the native function signature:

typedef int32_t (DEFAULT_CALL *PInvokeFunc) (int32_t);

Something similar will show up in each of the wrapper functions. This native function accepts a single int32_t and returns an int32_t.

Next, the wrapper finds the proper function pointer and stores it in a static variable:

_il2cpp_pinvoke_func = (PInvokeFunc)Increment;

Here the Increment function actually comes from an extern statement (in the C++ code):

extern "C" {int32_t DEFAULT_CALL Increment(int32_t);}

On iOS, native methods are statically linked into a single binary (indicated by the “__Internal” string in the DllImport attribute), so the IL2CPP runtime does nothing to look up the function pointer. Instead, this extern statement informs the linker to find the proper function at link time. On other platforms, the IL2CPP runtime may perform a lookup (if necessary) using a platform-specific API method to obtain this function pointer.

Practically, this means that on iOS, an incorrect p/invoke signature in managed code will show up as a linker error in the generated code, not as an error at runtime. So all p/invoke signatures need to be correct, even if they are not used at runtime.

Finally, the native method is called via the function pointer, and the return value is returned. Notice that the argument is passed to the native function by value, so any changes to its value in the native code will not be available in the managed code, as we would expect.

Marshaling a non-blittable type

Things get a little more exciting with a non-blittable type, like string. Recall from an earlier post that strings in IL2CPP are represented as an array of two-byte characters encoded via UTF-16, prefixed by a 4-byte length value. This representation does not match either the char* or wchar_t* representations of strings in C on iOS, so we have to do some conversion. If we look at the StringsMatch method (HelloWorld_StringsMatch_m4 in the generated code):

[DllImport("__Internal")]
[return: MarshalAs(UnmanagedType.U1)]
private extern static bool StringsMatch([MarshalAs(UnmanagedType.LPStr)]string l, [MarshalAs(UnmanagedType.LPStr)]string r);

we can see that each string argument will be converted to a char* (due to the UnmanagedType.LPStr directive).

typedef uint8_t (DEFAULT_CALL *PInvokeFunc) (char*, char*);

The conversion looks like this (for the first argument):

char* ____l_marshaled = { 0 };
____l_marshaled = il2cpp_codegen_marshal_string(___l);

A new char buffer of the proper length is allocated, and the contents of the string are copied into the new buffer. Of course, after the native method is called we need to clean up those allocated buffers:

il2cpp_codegen_marshal_free(____l_marshaled);
____l_marshaled = NULL;

So marshaling a non-blittable type like string can be costly.
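
If a string-heavy call like this ends up on a hot path, one general .NET interop technique (not something IL2CPP applies for you) is to perform the conversion once and pass raw pointers on each call. The pointer-taking extern below is hypothetical:

using System;
using System.Runtime.InteropServices;

class CachedStringInterop
{
  // Hypothetical variant of StringsMatch that takes already-converted
  // pointers, so no allocation and copy happens per call.
  [DllImport("__Internal", EntryPoint = "StringsMatch")]
  [return: MarshalAs(UnmanagedType.U1)]
  static extern bool StringsMatchPtr(IntPtr l, IntPtr r);

  public static void Example()
  {
    // Convert each string once, reuse it across many calls, free it at the end.
    IntPtr hello = Marshal.StringToHGlobalAnsi("Hello");
    IntPtr goodbye = Marshal.StringToHGlobalAnsi("Goodbye");
    try
    {
      for (int i = 0; i < 1000; i++)
        StringsMatchPtr(hello, goodbye);
    }
    finally
    {
      Marshal.FreeHGlobal(hello);
      Marshal.FreeHGlobal(goodbye);
    }
  }
}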

Marshaling a user-defined type

Simple types like int and string are nice, but what about a more complex, user-defined type? Suppose we want to marshal the Vector structure above, which contains three float values. It turns out that a user-defined type is blittable if and only if all of its fields are blittable. So we can call ComputeLength (HelloWorld_ComputeLength_m5 in the generated code) without any need to convert the argument:

typedef float (DEFAULT_CALL *PInvokeFunc) (Vector_t1 );

// I’ve omitted the function pointer code.

float _return_value = _il2cpp_pinvoke_func(___v);
return _return_value;

Notice that the argument is passed by value, just as it was for the initial example when the argument type was int. If we want to modify the instance of Vector and see those changes in managed code, we need to pass it by reference, as in the SetX method (HelloWorld_SetX_m6):

typedef float (DEFAULT_CALL *PInvokeFunc) (Vector_t1 *, float);

Vector_t1 * ____v_marshaled = { 0 };
Vector_t1  ____v_marshaled_dereferenced = { 0 };
____v_marshaled_dereferenced = *___v;
____v_marshaled = &____v_marshaled_dereferenced;

float _return_value = _il2cpp_pinvoke_func(____v_marshaled, ___value);

Vector_t1  ____v_result_dereferenced = { 0 };
Vector_t1 * ____v_result = &____v_result_dereferenced;
*____v_result = *____v_marshaled;
*___v = *____v_result;

return _return_value;

Here the Vector argument is passed as a pointer to native code. The generated code goes through a bit of a rigmarole, but it is basically creating a local variable of the same type, copying the value of the argument to the local, then calling the native method with a pointer to that local variable. After the native function returns, the value in the local variable is copied back into the argument, and that value is then available in the managed code.
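
For reference, the managed declaration for SetX is not shown in the post, but based on the call site SetX (ref vector, 42.0f) and the generated wrapper (whose typedef returns float), it presumably looks something like this:

[DllImport("__Internal")]
private extern static float SetX(ref Vector v, float value);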

Marshaling a non-blittable user-defined type

A non-blittable user-defined type, like the Boss type defined above, can also be marshaled, but with a little more work. Each field of this type must be marshaled to its native representation. Also, the generated C++ code needs a representation of the managed type that matches the representation in the native code.

Let’s take a look at the IsBossDead extern declaration:

[DllImport("__Internal")]
[return: MarshalAs(UnmanagedType.U1)]
private extern static bool IsBossDead(Boss b);

The wrapper for this method is named HelloWorld_IsBossDead_m7:

extern "C" bool HelloWorld_IsBossDead_m7 (Object_t * __this /* static, unused */, Boss_t2  ___b, const MethodInfo* method)
{
  typedef uint8_t (DEFAULT_CALL *PInvokeFunc) (Boss_t2_marshaled);

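  // (I've omitted the function pointer code, as above.)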
  Boss_t2_marshaled ____b_marshaled = { 0 };
  Boss_t2_marshal(___b, ____b_marshaled);
  uint8_t _return_value = _il2cpp_pinvoke_func(____b_marshaled);
  Boss_t2_marshal_cleanup(____b_marshaled);

  return _return_value;
}

The argument is passed to the wrapper function as type Boss_t2, which is the generated type for the Boss struct. Notice that it is passed to the native function with a different type: Boss_t2_marshaled. If we jump to the definition of this type, we can see that it matches the definition of the Boss struct in our C++ static library code:

struct Boss_t2_marshaled
{
  char* ___name_0;
  int32_t ___health_1;
};

We again used the UnmanagedType.LPStr directive in C# to indicate that the string field should be marshaled as a char*. If you find yourself debugging a problem with a non-blittable user-defined type, it is very helpful to look at this _marshaled struct in the generated code. If the field layout does not match the native side, then a marshaling directive in managed code might be incorrect.
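
The managed Boss definition is not shown in the post either, but to produce this layout it presumably looks something like this (a reconstruction, including the constructor used in HelloWorld.cs):

using System.Runtime.InteropServices;

struct Boss
{
  [MarshalAs(UnmanagedType.LPStr)]
  public string name;
  public int health;

  public Boss(string name, int health)
  {
    this.name = name;
    this.health = health;
  }
}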

The Boss_t2_marshal function is a generated function which marshals each field, and the Boss_t2_marshal_cleanup frees any memory allocated during that marshaling process.

Marshaling an array

Finally, we will explore how arrays of blittable and non-blittable types are marshaled. The SumArrayElements method is passed an array of integers:

[DllImport("__Internal")]
private extern static int SumArrayElements(int[] elements, int size);

This array is marshaled, but since the element type of the array (int) is blittable, the cost to marshal it is very small:

int32_t* ____elements_marshaled = { 0 };
____elements_marshaled = il2cpp_codegen_marshal_array<int32_t>((Il2CppCodeGenArray*)___elements);

The il2cpp_codegen_marshal_array function simply returns a pointer to the existing managed array memory, that’s it!

However, marshaling an array of non-blittable types is much more expensive. The SumBossHealth method passes an array of Boss instances:

[DllImport("__Internal")]
private extern static int SumBossHealth(Boss[] bosses, int size);

It’s wrapper has to allocate a new array, then marshal each element individually:

Boss_t2_marshaled* ____bosses_marshaled = { 0 };
size_t ____bosses_Length = 0;
if (___bosses != NULL)
{
  ____bosses_Length = ((Il2CppCodeGenArray*)___bosses)->max_length;
  ____bosses_marshaled = il2cpp_codegen_marshal_allocate_array<Boss_t2_marshaled>(____bosses_Length);
}

for (int i = 0; i < ____bosses_Length; i++)
{
  Boss_t2  const& item = *reinterpret_cast<Boss_t2 *>(SZArrayLdElema((Il2CppCodeGenArray*)___bosses, i));
  Boss_t2_marshal(item, (____bosses_marshaled)[i]);
}

Of course all of these allocations are cleaned up after the native method call is completed as well.

Conclusion

The IL2CPP scripting backend supports the same marshaling behaviors as the Mono scripting backend. Because IL2CPP produces generated wrappers for extern methods and types, it is possible to see the cost of managed to native interop calls. For blittable types, this cost is often not too bad, but non-blittable types can quickly make interop very expensive. As usual, we’ve just scratched the surface of marshaling in this post. Please explore the generated code more to see how marshaling is done for return values and out parameters, native function pointers and managed delegates, and user-defined reference types.

Next time we will explore how IL2CPP integrates with the garbage collector.

Xamarin Test Cloud Now Available to All Xamarin Developers

We started Xamarin because we want to help developers build apps they can be proud of and provide you with the tools you need to ensure that your apps do what they were designed to do.

A user’s perception of you or your business is greatly impacted by your mobile app. A crash, a hang, or broken functionality will cause low app ratings or even user abandonment. Most developers aren’t doing systematic testing of their mobile apps because current tools and services are too hard to use.

That’s why we’re happy to announce that as of today, all Xamarin Platform subscriptions include 60 Xamarin Test Cloud device minutes per month. Every Xamarin developer can immediately take advantage of this new benefit to start automating UI testing for mobile apps written on any platform on our industry-leading catalog of over 1,600 real iOS and Android smartphones and tablets.

How it Works

Starting a new app in Xamarin Studio creates a C# Xamarin.UITest project with a basic test to make sure your app loads and give you a starting point for writing additional tests. You can upload your tests directly from Xamarin Studio and Visual Studio, incorporating mobile testing directly into your development processes to quickly verify apps work on a variety of hardware.
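
The generated fixture looks along these lines (a representative sketch of the default template, not its exact contents):

using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class Tests
{
  IApp app;

  [SetUp]
  public void BeforeEachTest()
  {
    // Launch the app on the device or emulator before each test.
    app = ConfigureApp.Android.StartApp();
  }

  [Test]
  public void AppLaunches()
  {
    // Capture a screenshot so the step shows up in the Test Cloud report.
    app.Screenshot("First screen.");
  }
}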

Executing a test is now as easy as a build or debug operation in Xamarin Studio or Visual Studio, and your Xamarin.UITest project can reside in the same solution as your Xamarin app, making it easy to keep your app code and tests in sync.

Upload C# Xamarin.UITest mobile tests directly from Xamarin Studio and Visual Studio

You specify the devices you want to test when initiating a test run, either by market share data we maintain for multiple geographies, or by the extensive filtering available for device type, manufacturer, OS version, processor, and other factors.

Select which devices to test in Xamarin Test Cloud

Xamarin Test Cloud makes it easy to quickly find visual inconsistencies by comparing results across dozens of devices at a time after your test has executed, displaying screenshots synchronized to the same test steps. The service also offers video recordings of tests, making it easier to review the overall user experience on a variety of devices.

Xamarin Test Cloud test results

We’ve been using Xamarin Test Cloud internally on our sample apps to make sure they run properly and look great. One of these test runs uncovered that the screen rendered correctly on an iPhone 6, but had unexpected whitespace on an iPad. Bugs like these wouldn’t have been found without Xamarin Test Cloud, which enabled us to quickly diagnose the issue and correct the code.

What’s a Device Minute?

Device minutes are only consumed when the test runs on an actual device, regardless of whether you’re running tests serially on one device, or parallelized for faster results. If you need more minutes, you can sign up for one of our Xamarin Test Cloud plans.

Get Started

To start using your Xamarin Test Cloud device time today, visit testcloud.xamarin.com. Our online documentation also has great guides on how to write mobile tests in C# to test any app, whether or not it’s built with Xamarin.

We’ll be doing a live webinar on Wednesday, July 8 at 8:30 am PT covering different mobile testing methodologies and providing an in-depth overview of how Xamarin Test Cloud works. We’ll also discuss the exciting plans in our product roadmap. Click here to register.

Keep building great apps!

The post Xamarin Test Cloud Now Available to All Xamarin Developers appeared first on Xamarin Blog.

July 1

iOS 9 Preview Now Available

We’re excited to announce that we have just published our first iOS 9 API preview release in Xamarin.iOS. This preview includes support for an array of exciting new features that you can start developing with today, including CoreSpotlight, Model I/O, and ReplayKit as well as improvements to Contacts APIs.

Installing the iOS 9 Preview

You can download the iOS 9 Preview for Xamarin Studio directly from the Xamarin Developer site. Upcoming previews will add even more support for iOS 9, including support for Visual Studio.

Important: Service Release for iOS 9 Compatibility

In addition to our iOS 9 preview release, we have also published an iOS release to our update channels addressing two issues that cause many Xamarin apps to crash on startup on Apple’s OS preview for iOS 9.

At WWDC last month, Apple announced that a public preview of iOS 9 will be made available to iOS users this July. To ensure your published apps run smoothly on Apple’s iOS 9 public preview this month, we recommend that Xamarin.iOS developers download the latest release from our Stable updater channel and rebuild and resubmit their apps to the App Store using Xcode 6. This will enable your apps to run on the iOS 9 OS preview and ensure your apps are ready for the public release of iOS 9 this fall.

You can read more about these updates in our iOS 9 Compatibility Guide.

The post iOS 9 Preview Now Available appeared first on Xamarin Blog.

The State of Unity on Linux

Hello lovely people!

Last week at Unite Europe, the Unity roadmap was made public, and it included a highly-voted feature on our feedback site: a Linux port of the Unity editor.  This past weekend I wrote a post on my personal blog about my own thoughts about our experience porting the Unity editor to Linux.  It turned out to be a pretty popular post, and it was amazing to see so many positive comments and reaction from our community, so we thought it would be nice to do something a bit more ‘official’ on the company blog and explain what you’ll be able to expect from our Linux port.

Unity was originally written for Mac OS X, and the Windows port came along in 2009 with the release of Unity 2.5.  Porting Unity from Mac to Windows was already a lot of work, and as you can imagine, Unity has grown considerably in size and complexity since 2009.  So porting to a third platform has been a lot of (very fun) work and taken a lot of time.

There are some of us who have been working on the Linux port of the editor since the beginning (which started in 2011 at an early ‘Ninja Camp’, according to our version control history), but several different people at Unity have helped work on one aspect or another along the way (lately it has been Levi spending the most time on the project, with myself and others helping whenever/however possible, so buy him a beer if you see him).  Like I mentioned in my personal blog post, a lot of focus during this time has been on dealing with case-sensitivity issues (NTFS is case-insensitive, as is HFS+ by default; Unity doesn’t work on a case-sensitive file system — sorry about that!) and native window management / input handling.  But we’re getting there!

What We Expect it Will Do

  • Run on 64-bit Linux (just like with our player, the ‘official’ support will be for Ubuntu due to its market share, and just like with our player, it should run on most modern Linux distributions); the earliest version of Ubuntu supported will be 12.04 (which is what our build/test farm is running).
  • Export to all of the same platforms as the Mac OS X editor (except for iOS; maybe someday we’ll enable exporting to iOS the same way we do from the Windows editor, but not initially)
  • Import all asset types not dependent on non-portable 3rd-party middleware
  • Support global illumination, occlusion culling, and all other systems reliant on portable 3rd-party middleware

Limitations

  • It will require modern, vendor-provided graphics drivers
  • Some of the model importers that rely on external applications (e.g., 3ds Max and SketchUp) won’t work; the workaround is to export to FBX instead

The Plan Right Now: An Experimental Build

The Linux port of Unity currently lives in an internally ‘forked’ repo.  Our plan is currently to prepare an early experimental build for you from this fork (that is kept more or less in sync with Unity’s mainline development branch) that you will be able to try out.  Based on how that experiment goes, we’ll figure out if it’s something we can sustain as an official port alongside our Mac and Windows editors (the Linux runtime support was also released as a preview initially, due to concerns about support and the fragmentation of Linux distributions, and the support burden turned out to be very low, despite a very significant percentage of Linux games on Steam being made with Unity, so I’m hopeful; we’ll have to see how it goes).

It’s been a really long time and I couldn’t be more excited.  Levi, myself, and all of the other people who have helped with the Linux port over the years (the list is pretty long!) can’t wait to get it into your hands.

P.S. Here are some more teaser screenshots:

(Screenshots: the Unity Editor on Linux, showing the Blacksmith and Bridge demo scenes)

P.P.S. We’re really interested in hearing how you will use the Linux Editor — what platforms you will be exporting to, whether you’re interested specifically in doing regular development on Linux or mostly interested in automated build pipelines, etc.

Much love from Unity,

Na’Tosha (@natosha_bard)

Towards Semantic Version Control

The new release we’re announcing today, BL677, includes a feature that pretty much explains what our vision for the future is: semantic version control.

It may sound like big words, but it is a pretty simple concept: the version control system “understands” the code. So when you diff C#, Java, C, or VB.NET code (and hopefully all languages in the near future), it knows how to handle it.

Build Time Code Generation in MSBuild

Build-time code generation is a really powerful way to automate repetitive parts of your code. It can save time, reduce frustration, and eliminate a source of copy/paste bugs.

This is something I'm familiar with due to my past work on MonoDevelop's tooling for ASP.NET, T4 and Moonlight, and designing and/or implementing similar systems for Xamarin.iOS and Xamarin.Android. However, I haven't seen any good documentation on it, so I decided to write an article to outline the basics.

This isn't just something for custom project types, it's also something that you can include in NuGets, since they can include MSBuild logic.

Background

The basic idea is to generate C# code from other files in the project, and include it in the build. This can be to generate helpers, for example CodeBehind for views (ASPX, XAML), or to process simple DSLs (T4), or any other purpose you can imagine.

MSBuild makes this pretty easy. You can simply hook a custom target before the Compile target, and have it emit a Compile item based on whatever input items you want. For the purposes of this guide I'm going to assume you're comfortable with enough MSBuild to understand that - if you're not, the MSDN docs are pretty good for the basics.

The challenge is to include the generated C# in code completion, and update it automatically.

An IDE plugin can do this fairly easily - see for example the Generator mechanism used by T4, and the *.designer.cs file generated by the old Windows Forms and ASP.NET designers. However, doing it this way has several downsides, for example you have to check their output into source control, and they won't update if you edit files outside the IDE. Build-time generation, as used for XAML, is a better option in most cases.

This article describes how to implement the same model used by WPF/Silverlight/Xamarin.Forms XAML.

Generating the Code

First, you need a build target that updates the generated files, emits them into the intermediate output directory, and injects them into the Compile ItemGroup. For the purposes of this article I'll call it UpdateGeneratedFiles and assume that it's processing ResourceFile items and emitting a file called GeneratedFile.g.cs. In a real implementation, you should use unique names that won't conflict with other targets, items, and files.

For example:

<Target Name="UpdateGeneratedFiles"
  DependsOnTargets="_UpdateGeneratedFiles"
  Condition=="'@(ResourceFile)' != ''"
>
  <ItemGroup>
    <Compile Include="$(IntermediateOutputDir)GeneratedFile.g.cs" />
    <FileWrites Include="$(IntermediateOutputDir)GeneratedFile.g.cs" />
  </ItemGroup>
</Target>
<Target Name="_UpdateGeneratedFiles"
  Inputs="$(MSBuildProjectFile);@(ResourceFile)"
  Outputs="$(IntermediateOutputDir)GeneratedFile.g.cs"
>
  <FileGenerationTask
      Inputs="@(ResourceFile)"
      Output="$(IntermediateOutputDir)GeneratedFile.g.cs"
  >
</Target>

A quick breakdown:

The UpdateGeneratedFiles target runs if you have any ResourceFile items. It injects the generated file into the build as a Compile item, and also injects a FileWrites item so the file is recorded for incremental clean. It depends on the 'real' generation target, _UpdateGeneratedFiles, so that the file is generated before the UpdateGeneratedFiles target runs.

The _UpdateGeneratedFiles target has Inputs and Outputs set, so that it is incremental. The target will be skipped if the output file exists and is newer than all of the input files - the project file and the resource files.

The project file is included in the inputs list because its write time will change if the list of resource files changes.

The _UpdateGeneratedFiles target simply runs a task that generates the output file from the input files.
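
Such a task can be an ordinary MSBuild task. Here's a minimal sketch of what a task like the FileGenerationTask referenced above might look like (the Inputs and Output parameters match the target; the generated content is purely illustrative):

using System.IO;
using System.Text;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class FileGenerationTask : Task
{
    [Required]
    public ITaskItem[] Inputs { get; set; }

    [Required]
    public string Output { get; set; }

    public override bool Execute ()
    {
        // Emit one constant per input resource file into a static class.
        var sb = new StringBuilder ();
        sb.AppendLine ("// <auto-generated />");
        sb.AppendLine ("static class Resources");
        sb.AppendLine ("{");
        foreach (var item in Inputs) {
            var name = Path.GetFileNameWithoutExtension (item.ItemSpec);
            sb.AppendLine (string.Format ("    public const string {0} = \"{1}\";", name, item.ItemSpec));
        }
        sb.AppendLine ("}");
        File.WriteAllText (Output, sb.ToString ());
        return true;
    }
}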

Note that the generated file has the suffix .g.cs. This is the convention for build-time generated files. The .designer.cs suffix is used for user-visible files generated at design-time by the designer.

Hooking into the Build

The UpdateGeneratedFiles target is added to the dependencies of the CoreCompile target by prepending it to the CoreCompileDependsOn property.

<PropertyGroup>
  <CoreCompileDependsOn>UpdateGeneratedFiles;$(CoreCompileDependsOn)</CoreCompileDependsOn>
</PropertyGroup>

This means that whenever the project is compiled, the generated file is created or updated if necessary, and the Compile item is injected before the compiler is called, so it is passed to the compiler - though it never exists in the project file itself.

Live Update on Project Change

So how do the types from the generated file show up in code completion before the project has been compiled? This takes advantage of the way that Visual Studio initializes its in-process compiler that's used for code completion.

When the project is loaded in Visual Studio, or when the project file is changed, Visual Studio runs the CoreCompile target. It intercepts the call to the compiler via a host hook in the MSBuild Csc task and uses the file list and arguments to initialize the in-process compiler.

Because UpdateGeneratedFiles is a dependency of CoreCompile, this means that the generated file is updated before the code completion system is initialized, and the injected file is passed to the code completion system.

Note that the UpdateGeneratedFiles target has to be fast, or it will add latency to code completion availability when first loading the project or after cleaning it.

Live Update on File Change

So, the generated code is updated whenever the project changes. But what happens when the contents of the ResourceFile files that it depends on change?

This is handled via Generator metadata on each of the ResourceFile files:

<ItemGroup>
  <ResourceFile Include="Foo.png">
    <Generator>MSBuild:UpdateGeneratedFiles</Generator>
  </ResourceFile>
</ItemGroup>

This takes advantage of another Visual Studio feature. Whenever the file is saved, VS runs the UpdateGeneratedFiles target. The code completion system detects the change to the generated file and reparses it.

This metadata has to be applied to the items by the IDE (or the user). It may be possible for the build targets to apply it automatically using an ItemDefinitionGroup but I haven't tested whether VS respects this for Generator metadata.

Xamarin Studio/MonoDevelop

But we have another problem. What about Xamarin Studio/MonoDevelop?

Although Xamarin Studio respects Generator metadata, it doesn't have an in-process compiler. It doesn't run CoreCompile, nor does it intercept the Csc file list, so its code completion system won't see the generated file at all.

The solution - for now - is to add explicit support in a Xamarin Studio addin to run the UpdateGeneratedFiles target on project load and when the resource files change, parse the generated file and inject it into the type system directly.

Migration

Migrating automatically from a designer-generation system to a build-generation system has a few implications.

You either have to force migration of the project to the new system via an IDE, or handle the old system and make the migration optional - e.g. toggled by the presence of the old files. You have to update the project templates and samples, and you have to build a migration system that removes the designer files from the project and adds Generator metadata to existing files.

June 30

Summer Fun with Xamarin Events in July

It’s July already! The year is half over, so there won’t be a better time to get out and meet fellow Xamarin developers in your area to learn about crafting beautiful, cross-platform native mobile apps in C# at one of the local conferences, workshops, or user group events happening around the world!


Here are a few events happening this month:

Geek-a-Palooza!

  • Sant Julià de Lòria, Andorra: July 4th
  • Hands on Lab: Xamarin & Push Notifications

Mobile & Cloud Hack Day

  • Hauer, Boqueirão Curitiba, Brazil: July 4th
  • A FREE event on how to create cross-platform apps for Android, iOS, and Windows using C#, Xamarin, and Visual Studio

Seattle Mobile .NET Developers

  • Seattle, WA: July 7th
  • Introduction to Xamarin.Forms: iOS, Android, and Windows in C# & XAML with Xamarin Evangelist James Montemagno

Montreal Mobile Developers

  • Montreal, Canada: July 8th
  • Azure Mobile Services & Mobile Apps with Xamarin

XLSOFT Japan

  • Tokyo, Japan: July 8th
  • Windows Phone / iOS / Android Cross-Platform App Development Using Xamarin.Forms

Birmingham Xamarin Mobile Cross-Platform User Group

  • Birmingham, UK: July 8th
  • Developing for iBeacons with Xamarin

Introductory Training Session to Xamarin

  • Hanover, Germany: July 13th
  • Xamarin Workshops by H&D International

DC Mobile .NET Developers Group

  • Washington, DC: July 14th
  • NEW GROUP! Getting Started with Xamarin.Forms by Xamarin MVP, Ed Snider

Sydney Mobile .NET Developers

New York Mobile .NET Developers

  • New York, NY: July 28th
  • Building Native Cross-Platform Apps in C# with Xamarin by Xamarin MVP, Greg Shackles

Even more Xamarin events, meetups, and presentations are happening this month! Check out the Xamarin Events Forum to find an event near you if you don’t see an event in your area in the list above.

Interested in getting a developer group started? We’re here to help! Here’s a tips and tricks guide on starting a developer group, an introduction to Xamarin slide deck, and of course our community sponsorship program to get you started. Also, we love to hear from you, so feel free to send us an email or tweet @XamarinEvents to help spread the word and continue to grow the Xamarin community.

The post Summer Fun with Xamarin Events in July appeared first on Xamarin Blog.

New version of Unity Answers unveiled today

Update: We’ve hit some road bumps and have a delay in deploying the new theme with the features listed in this post. The site is currently live so you can still access it while we work on getting the theme set up.

We have been working on improving Unity Answers with the goal of making it easier to uncover authentic questions that need to be answered. We also want to cut down the time it takes to get an answer, regardless of the number of posts published daily. Ultimately, we want to make it easier for you to find existing posts that provide solutions to similar issues that you’re experiencing.

A user guide will be provided once the new site is deployed describing the new features and how to navigate the site.

Help Room

We have created a new section called the Help Room, where any user regardless of reputation (amount of karma points) can post questions directly without having to wait for moderator approval. The Help Room will also contain posts where users are asking for more general help with scripting. Moderators will be able to move questions from the default Questions/Home section to the Help Room when needed. If you want to post to the default Questions/Home section, a reputation of 15 KP or more is needed, otherwise your post will have to go through the moderation queue.

Other features that are being introduced with this version:

  • When you start typing a Question, a list with suggested existing threads (+ amount of answers) will appear that may already hold the answer you are looking for
  • Autosave while typing questions, answers and comments
  • A new redactor text editor to make it easier to clearly see how to format code and insert screenshots
  • Reward other users with your own karma points for contributing with good answers and asking good questions
  • See how many followers a post has and tag a user who may be able to answer it
  • Follow content and manage them from your user profile
  • User profile and moderation tools will be accessible from the top right corner

Moderators! You will also get new features:

A space for Moderators has been created to keep track of mod or site-related questions rather than using [META] threads. You will be able to move questions from the default Questions/Home section to either the Help Room or Moderators space.

You will also be able to redirect posts. If you want to redirect a post to another, simply choose that option from the drop-down menu and search for the post which it should redirect to. This way, if you for example find a duplicate post, just redirect it to a more suitable post.

We hope you will enjoy the new site, and if you have any questions, make sure to post them in the forum thread created for this so we can help you out.

June 29

What’s New in Google Play services

There are a plethora of amazing APIs built right into the core operating system to take advantage of when developing for Android, and Google Play services (GPS) allows for the addition of even more unique experiences.

What is Google Play services?

GPS is a continuously updated library from Google that enables adding new features to Android apps without waiting for a new operating system release. One of the best-known features of GPS is Google Maps, which allows developers to add rich, interactive maps to their apps. Since GPS is a separate library that’s updated regularly, there are always new APIs to explore. The most recent release, 7.5, has tons of great new features.

Getting Started with Google Play services

In previous releases of GPS, everything was bundled into one huge NuGet and Component Library to be added to an app. However, this has now changed and each API has been broken into its own unique NuGet package, so you can pick and choose which features to add to an app without increasing app size or worrying about linking. To get started with GPS, simply open the NuGet Package Manager and search for “Xamarin Google Play services”. A list of new packages available for Android apps will be displayed, and you can choose to install the “All” package or select only the ones you want.

To learn more about the changes to the GPS packages and APIs, Xamarin Software Engineer Jon Dick’s blog post on the topic is a great place to start.

Once you have GPS installed, you can take advantage of tons of new APIs for your Android app, including some that I’m particularly excited about, outlined below.

Google Fit

Developers of fitness apps will be excited to see that Google Fit has been updated with a brand new Recording and History API that enables gathering estimated distance traveled and calories burned in a simple API. This is in addition to the other APIs already available to discover sensors, collect activity data, and track a user’s fitness.

Android Wear Maps

Until now, there wasn’t a good way to show users their current location on a map on their Android Wear devices. The latest release, however, brings the entire Maps API to Android Wear, including support for interactive maps and non-interactive maps in Lite mode.

Google Cloud Messaging Updates

One of my favorite features of GPS has to be Google Cloud Messaging (GCM) for sending push notifications to Android devices, and there have been several updates to GCM in Google Play services 7.5. The new Instance ID tokens enable a single identity for your app across its entire lifetime instead of having a unique registration ID for each device. This simplifies the process of sending push notifications to all of the devices on which an app is installed.

So Much More

These aren’t the only additions to GPS with this release. Several new APIs have been added, including App Invites, Smart Lock for Passwords, and updates to Google Cast. The full list can be found in the Google Play services documentation.

The post What’s New in Google Play services appeared first on Xamarin Blog.

June 27

Reader Q&A – PDFs in iOS

I got a question from a reader last night who was looking at some code from one of my Xamarin seminars.

Ryan asked about how to extract the content from a pdf file, draw on it, and email it in iOS.

One way to do this is using Core Graphics, as shown in the following snippet:
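
The snippet itself was embedded on the original post; in outline, the Core Graphics approach renders a PDF page into an image context, draws on top of it, and attaches the result to an email. A rough sketch of that approach (my reconstruction, not the original snippet):

using CoreGraphics;
using UIKit;

static class PdfAnnotator
{
  public static UIImage RenderAnnotatedPage (string pdfPath)
  {
    using (var document = CGPDFDocument.FromFile (pdfPath)) {
      CGPDFPage page = document.GetPage (1);
      CGRect pageRect = page.GetBoxRect (CGPDFBox.Media);

      UIGraphics.BeginImageContext (pageRect.Size);
      CGContext context = UIGraphics.GetCurrentContext ();

      // PDF coordinates are flipped relative to UIKit's, so flip the
      // context before drawing the page content.
      context.TranslateCTM (0, pageRect.Height);
      context.ScaleCTM (1, -1);
      context.DrawPDFPage (page);

      // Flip back and draw annotations over the rendered page.
      context.ScaleCTM (1, -1);
      context.TranslateCTM (0, -pageRect.Height);
      UIColor.Red.SetStroke ();
      context.StrokeRect (new CGRect (20, 20, 100, 50));

      UIImage result = UIGraphics.GetImageFromCurrentImageContext ();
      UIGraphics.EndImageContext ();
      return result;
    }
  }
}

The resulting UIImage can then be attached to an MFMailComposeViewController, e.g. via AddAttachmentData (image.AsPNG (), "image/png", "annotated.png").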

If you have a question feel free to contact me through my blog. I get lots of questions like this, but I do my best to respond to them all.


June 26

Build and Debug C++ Libraries in Xamarin.Android Apps with Visual Studio 2015

Today, the Microsoft Hyperlapse team shared the story of how they developed their app with C++ and Xamarin. Microsoft Hyperlapse Mobile turns any long video into a short, optimized version that you can easily share with everyone. It can transform a bumpy bike ride video into a smooth and steady time-lapse, like this one from Nat Friedman that was shot using GoPro and processed with Microsoft Hyperlapse Pro.

The core algorithmic portions of Hyperlapse are written in Visual C++ Cross-Platform Mobile and the majority of the app business logic was retained in a .NET portable class library. Using Xamarin, the Hyperlapse team was able to leverage the core C++ code and app logic, while providing C#-based native UIs so users of the app feel at home on each platform. Leveraging C++ code in your Xamarin app is easy, as outlined in the below explanation on implementing C++ in your Xamarin.Android apps.

Using Native Libraries

Xamarin already supports the use of pre-compiled native libraries via the standard PInvoke mechanism. To deploy a native library with a Xamarin.Android project, add the binary to the project and set its Build Action to AndroidNativeLibrary. You can read Using Native Libraries for more details. This approach is best if you have pre-compiled native libraries that support any or all architectures (armeabi, armeabi-v7a, and x86). The Mono San Angeles sample port explains how the libsanangeles.so dynamic lib and its native methods are accessed in Xamarin.Android.

In this approach, dynamic libraries are typically developed in another IDE, and that code is not accessible for debugging. This imposes difficulty on developers, as it becomes necessary to context switch between code bases for debugging and fixing issues. With Visual Studio 2015, this is no longer the case. Through our collaboration with the Visual C++ Cross-Platform Mobile team at Microsoft, Xamarin developers in Visual Studio now have the power to write, compile, debug, and reference C/C++ projects in Xamarin.Android from within their favorite IDE.

Using Visual C++ Cross-Platform Mobile

As stated above, Visual Studio 2015 supports the development of C/C++ projects that can be targeted to Android, iOS, and Windows platforms. Be sure to select Xamarin and Visual C++ for Cross-Platform Mobile Development during installation.

Visual C++ for Cross Platform Mobile Development

For this post, we’re using the same San Angeles port sample referenced earlier in the Using Native Libraries section. However, its original C++ code is ported to a Dynamic Shared Library (Android) project in Visual Studio. When creating a new project, the Dynamic Shared Library template can be found under Visual C++ → Cross-Platform.

Mono San Angeles Demo

San Angeles is an OpenGL ES port of a small, self-running demonstration called “San Angeles Observation.” This demo features a scenic run-through of a futuristic city with different buildings and items. The original version was made for desktop with OpenGL, and the current version is one of Google’s NDK samples optimized for Android. The source code is available here, ported to Visual Studio.

Now that the Dynamic Shared Library that contains the source code has been directly referenced from the Xamarin.Android project, it works as smoothly as any other supported project reference.

Visual Studio 2015 VC++ Cross-Platform Mobile

To interop with native libraries in your Xamarin.Android project, all you need to do is create a DllImport function declaration for the existing code to invoke, and the runtime will handle the rest. Set the EntryPoint to specify the exact function to be called in the native code.

[DllImport ("sanangeles", EntryPoint = "Java_com_example_SanAngeles_DemoGLSurfaceView_nativePause")]
static extern void nativePause (IntPtr jnienv);

Now, to call the native function, simply call the defined method.

public override bool OnTouchEvent (MotionEvent evt)
{
	if (evt.Action == MotionEventActions.Down)
		nativePause (IntPtr.Zero);
	return true;
}

Refer to Interop with Native Libraries to learn more about interoperating with native methods.

One More Thing…

Now that you have access to the native source code, it’s possible to debug the C/C++ code inside Visual Studio. To debug your C/C++ files, choose to use the Microsoft debugger engine under the Android Options of Project properties.

VC++ Native Debugging options

Enable a breakpoint inside your C++ project, hit F5, and watch the magic happen!

Learn More

Refer to the VC++ team’s blog post at MSDN for a step-by-step guide to building native Android apps in Visual Studio 2015. The source code for the Mono San Angeles port explained in this post is available for download in our samples.

Discuss this post in the Xamarin Forums.

The post Build and Debug C++ Libraries in Xamarin.Android Apps with Visual Studio 2015 appeared first on Xamarin Blog.

Leveraging Unity Cloud Build for testing large projects

This is a story about how we are using Unity Cloud Build internally and how it can make life easier for you, our users, as well. Read on to learn how we used to deal with large project testing and which awesome new possibilities are available now!

Once upon a time

During development of the massive Unity 5 release, and the extensive changes it entailed, we frequently ran into issues with importing and building projects. For Unity 5 we wanted a cleaner API, which in some cases meant we had to break backwards compatibility. Often, when importing an older project in Unity 5, we had to fix scripts manually. We were also hitting major bugs and regressions in graphics, physics and performance related areas.

Our testers do a very good job at making sure that projects import, build and run properly in every new build on all our supported platforms, but since we are constrained by time we usually used small projects (like AngryBots or Nightmares) to run these tests. These projects don’t cover many of Unity’s features – any of which might break in a new version – and they are nowhere close to the size and complexity of some of the projects developed by our users.

We were fortunate enough to have a few major studios share the full project folders for some of their completed games with us (for example, Republique) and we started manually importing and building these games on every new build during the beta and release candidate phases of development. We found and fixed many issues before any of our users would have been affected by them, but it was a tedious and time-consuming task.

This is how the testing process worked back then:

  1. Install the new Unity build on Windows and OSX.
  2. Open a large project and reimport it. Wait a long time and check back from time to time to see if it has finished (often the Script Updater dialog would require a confirmation prompt which meant even more waiting).
  3. Fix any scripts or other issues and build the game for Mac and Windows Standalone.
  4. Switch platform to Android. Wait a long time again and check to see when it is finished.
  5. Run the game on Android.
  6. Switch platform to iOS. Wait again.
  7. Run the game on iOS.
  8. Repeat steps 2-7 for all the other large projects we had (7 in total).

This was usually done by one person and it could take a few days.

The glorious present

We quickly started discussing how we could automate some of this. While we were busy figuring that out, in another part of the company work was underway for the official release of what is now Unity Cloud Build. Cloud Build seemed like a perfect fit for automating testing of these large projects.

Fast forward to the present day: we have released Unity 5.1, Cloud Build has been out there for a while, and testing large projects goes through an entirely different process (described below with pictures for your viewing pleasure):

1. Add project to Cloud Build

2. Press build for all supported platforms (currently Webplayer, Android, and iOS, with standalones and WebGL in the pipeline)

3. Receive an e-mail notification from Cloud Build when everything is finished

4. Share the build link with any/all testers, open the link in the browser, install the app, run and profit!

This saves us a tremendous amount of time, since all it takes to make the builds is a few clicks, and all builds for all projects are done in parallel. As soon as the builds are ready we receive the notification e-mail, and testing can begin on all platforms. If anything fails during importing or building we also get a notification and we can act on it immediately.

Unity Cloud Build can also be configured to automatically poll a repository for changes and rebuild the projects automatically. It can rebuild projects on any supported Unity version.

A brighter future

Since we want to be able to scale up to testing more projects, the limiting factor now is running the builds on devices. The more projects we have in Unity Cloud Build, the more builds we have to install and run on devices.

The biggest problem with testing on mobile is device fragmentation (especially on Android) and we can only test on a few of the most popular models. We would like to know how these builds run on most devices (including some of the more esoteric ones). To that end, we are currently investigating services like TestDroid and AppThwack.

These services give us access to hundreds of devices and we can run our projects on any number of them. They offer a REST API that we can use to feed builds from Unity Cloud Build directly into them. What we get in return is performance data (CPU, Memory, Threading), screenshots of the game while running on the device, the ability to run custom testing scripts, get device logs and more.

By feeding all this data back into our own data warehouse, we can keep track of metrics across Unity versions, projects and devices and quickly pinpoint performance, rendering, and input issues.

Example test run results from our Doll Demo project on TestDroid

Unity Cloud Build and you

Unity Cloud Build is the solution of choice for us when it comes to removing all the time-consuming tasks and bottlenecks involved in importing projects to newer Unity builds, switching platforms and building projects. But we built Unity Cloud Build first and foremost with you, our users, in mind. If you are just getting started on a new Unity project, sign up for the free Cloud Build option and see how easy it is to have us do the heavy lifting for you and share the final build results with your entire team. If you are a veteran Unity user working on one or multiple projects, you will find something suitable for you in one of our other licensing options.

So, what are you waiting for? Give Unity Cloud Build a try! It might just be the best thing that happened to you since we introduced the Cancel Build button!

June 25

MethodHandle Performance

Last time I mentioned that with the integration of OpenJDK 8u45 MethodHandle performance went from awful to unusable. That was pretty literal as the JSR 292 test cases that I regularly run went from taking about 8 minutes to more than 30 minutes (when my patience ran out).

Using sophisticated profiling techniques (pressing Ctrl-Break a few times) I determined that a big part of the problem was MethodHandle.asType(). So I wrote a microbenchmark:

                        IKVM 8.0.5449.1   IKVM 8.1.5638
asType.permutations(1)             2108            9039
asType.permutations(2)             2476           17269

The numbers are times in milliseconds. Clearly not a good trend. I did not investigate deeply what changed in OpenJDK, but after looking at the 8u45 code it was clear that too many intermediate MethodHandles were being created. So I rewrote asType to create a single LambdaForm to do all the work at once. This improved the performance a bit, but the disturbing increase in time for the second iteration was still there. Once again I decided not to investigate the root cause of this, but simply to assume that it was because of anonymous type creation (the CLR has no anonymous types and creating a type is relatively expensive).

Avoiding anonymous type creation turned out to be easy (well, the high level design was easy, the actual implementation took a lot more time). I just had to replace the LambdaForm compiler. There is a single method that represents the exact point where I can come in and change the implementation:

static MemberName generateCustomizedCode(LambdaForm form, MethodType invokerType) { ... }

In OpenJDK this method compiles the LambdaForm into a static method in an anonymous class and returns a MemberName that points to the static method. All I had to do was replace this method with my own implementation that directly generates a .NET DynamicMethod. As I said before, the idea was simple, actually getting the implementation correct took a couple of weeks (part time).
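
In CLR terms, the replacement amounts to emitting IL into a System.Reflection.Emit.DynamicMethod instead of loading a fresh anonymous class. A heavily simplified sketch of that mechanism (illustrative only, not IKVM's actual compiler):

using System;
using System.Reflection.Emit;

static class LambdaFormCompilerSketch
{
  public static Func<int, int> CompileIncrement ()
  {
    // A DynamicMethod is a standalone JIT-compilable method: no new type
    // (and no anonymous class) is created, which is what makes this cheaper
    // on the CLR than OpenJDK's class-per-LambdaForm approach.
    var dm = new DynamicMethod ("increment", typeof(int), new[] { typeof(int) });
    ILGenerator il = dm.GetILGenerator ();
    il.Emit (OpCodes.Ldarg_0);
    il.Emit (OpCodes.Ldc_I4_1);
    il.Emit (OpCodes.Add);
    il.Emit (OpCodes.Ret);
    return (Func<int, int>)dm.CreateDelegate (typeof(Func<int, int>));
  }
}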

With both these optimizations in place, MethodHandle performance is back to awful (actually, it is less awful than it was before):

                        IKVM 8.0.5449.1   IKVM 8.1.5638   IKVM 8.1.5653
asType.permutations(1)             2108            9039             314
asType.permutations(2)             2476           17269             210

The running time of the JSR 292 test cases went down to less than 7 minutes. So I was satisfied. There are many more opportunities to improve the MethodHandle performance on IKVM, but so far no IKVM user has complained about it, so it is not a priority. Note that Java 8 lambdas are not implemented using MethodHandles on IKVM.

Changes:

  • Fixed performance bug. Base type of java.lang.Object was not cached.
  • Untangled TypeWrapper.Finish() from member linking to improve Finish performance for already compiled types.
  • Improved MethodHandle.asType() performance by directly creating a single LambdaForm to do the conversion, instead of creating various intermediate forms (and MethodHandles).
  • Make non-public final methods defined in map.xml that don't override anything automatically non-virtual.
  • Optimized LambdaForm compiler.
  • IKVM.Reflection: Added Type.__GetGenericParameterConstraintCustomModifiers() API.

Binaries available here: ikvmbin-8.1.5653.zip

June 24

Unite Europe 2015 Keynote Wrap up

It’s been a while since we held a Unite in Europe, so we were thrilled to be hosting another event in beautiful Amsterdam. Against the backdrop of the impressive Westergasfabriek, the event has brought together over 1000 attendees from the gaming community, hailing from many different countries, to share ideas, learn best practices, and help each other make ridiculously cool games.

The show kicked off today with the keynote when John Riccitiello took the stage to outline Unity’s three key guiding principles: 1) democratize game development; 2) solve hard problems; and 3) help developers succeed. All of the decisions we’re making and directions we’re heading are based on these three ideas. That means hiring more talent so that we can take more of the ugly work away from you and let you focus on making games, and building new tools, services and initiatives to help you guys find success.

Following an inspiring message from our good friend Rob Pardo, Jussi Laakkonen took the stage to walk us through how Unity Ads is helping developers make money and increase engagement using Seriously Games’ Best Fiends as a successful example of ads done right. Jussi closed his portion of the keynote with the announcement that Unity Ads would be integrated into the Unity engine with the release of 5.2 this fall.

Unity Ads3

Unity Analytics’ John Cheng then took the stage to demonstrate the upcoming live dashboards displaying the massive amounts of data streaming into our analytics system, which can be used to understand the greater market and make smart decisions. Also demonstrated were tools used to understand players and make positive adjustments to games. Heat maps allow users to overlay an in-editor visualization of player activity, while the Funnel Analyzer can show engagement levels and help iron out level design. Using the awesome mobile physics puzzler Ultraflow as an example, John showed how Ultrateam were able to identify and adjust a level to increase player engagement and grow their overall user base.

Screen Shot 2015-06-23 at 16.32.19

Best of all, getting analytics hooked up in your game is easy and was made available in 5.1. As demonstrated on stage, it’s as simple as pasting your Cloud Project ID into the editor. You can learn more about Unity Analytics at http://unity3d.com/services/analytics.

As the presentation shifted to technical aspects of the engine and editor, Joachim Ante took the stage to dive a bit further into the Blacksmith demo. Following the demonstration was the news that many of the assets from The Blacksmith are now available on the Asset Store. Additionally, The Blacksmith runtime demo is now available for download so that you can check it out on your own computer. For more information on The Blacksmith project, visit the site.

Joachim handed the mic over to Lucas Meijer, who continued the technical portion of the show with a rundown of the recent changes in Unity 5.1, showing off the new Unity networking solution and discussing features to make VR and AR development within Unity incredible for years to come.

Finally, Lucas also had the pleasure of announcing an exciting bit of news. Unity now has a Public Roadmap! We know it’s something that you’ve wanted for a while and we’re all very happy that you now have it. For more information on that roadmap, we suggest you wander over to our blog post all about it.

roadmap-blog

Lucas then passed the baton to Thomas Peterson, who discussed the challenges of testing and QA when creating a complex piece of software like Unity. Through our sustained engineering program and impressive suite of automation tools, we’ve taken great steps in a very positive direction. One of the things we’re very proud of is that companies like Intel, ARM, Qualcomm, Sony, and Oculus are all using these tools to ensure you’re all getting the best experience possible on their hardware.

John came back on stage to introduce our awesome guest speaker, Mariina Hallikainen, CEO and co-founder of Colossal Order, a studio that took on a giant of PC gaming and found great success doing it. Mariina was nice enough to share valuable information about their journey to create the game.

Colossal Order is a company that embodies the spirit of democratization. Their team of 9 built a game, Cities Skylines, that directly challenged one of the most venerable franchises in PC gaming. It’s fairly fitting then that David Helgason took the stage to thank Mariina and take a nostalgic look back at the first 10 years since Unity 1 launched at WWDC in 2005 by introducing a special 10 year anniversary highlight reel showcasing games from the past, present and future.

After a short message from Unity’s Andy Touch detailing some housekeeping issues for show attendees, the keynote ended and sessions began. As you may already know, Unite isn’t one singular event, and we’ve already held successful shows in Tokyo, Bangkok, Beijing, Seoul, and Taipei – and we’re not done yet! Our marquee conference, Unite Boston is taking place in September along with the Unity Awards and finally, Unite Melbourne (date TBD). Hopefully we’ll see you there!

Photo gallery from Unite Europe 2015, including Unity Analytics and Unity Networking in action, the CEO of Colossal Order on Cities: Skylines, our principles, and ten years since Unity 1.0.

Unity Roadmap

At Unite Europe 2015, we unveiled our public roadmap. We realize that our users have been wanting more information for some time. To address this, we carefully considered the best format for presenting this information, then assembled it in a way we hope is most useful to all of you.

Background

For the past 10 years, Unity developed in an organic fashion, with feature work usually trumping schedules or deadlines. Without some sense of regularity, a roadmap becomes a distant promise that is hard to subscribe to. Since shipping 5.0 and now 5.1, we are comfortable committing to a more regular rhythm of quarterly releases. With this commitment, a roadmap schedule becomes a useful tool for everyone involved.

Goals

The roadmap is a tool that lets Unity users reasonably predict what feature set they can work with or commit to when starting a project in the near term. With that in mind, our goal is to lay out the anticipated and probable work arriving in the next 9 months.

We are aware our community is interested in what important things are currently being worked on, and sometimes those timelines stretch beyond 9 months. Additionally, we are still a finite number of often specialized engineers, and prioritization does take place. Also, taking any feature, multiplying it by 22 platforms, and adding the complexity of making it easy to use for a broad span of users simply takes time.

In any case, we’ve organized the roadmap into five groups. First, we display the next three upcoming releases with specified dates. For the very next release, we also color-code each work item to show our confidence that it will make the final cut and ship. After trying something out in alpha and early beta, there is always a chance that the feature just really isn’t ready for prime time. In that case, it could be delayed a version, or even kicked back to the drawing board. Everything, of course, is case-by-case.

Items under “Development” are in progress, with a clear plan and dedicated engineering effort to move them towards release. However, the work may complete beyond the 9 months of listed releases, or other externalities may prevent us from specifying a particular release. As an example of such externalities, we list WebGL 2.0 under “Development” since the technology is still evolving and we depend on browser support being available to the general public.

Finally, we are left with “Research”, which contains all the prototypes, design-phase work, and other items that are getting actual time but are not yet in any condition to be called earnest development with a solid plan being worked against.

We will aim for a weekly update to the roadmap contents, and will look to further refine the presentation.

If you would like to up-vote any listed feature or add a feature for consideration, please head over to feedback.unity3d.com.

Let us know what you think!

The Blacksmith Releases: Executable, Assets, Tools and Shaders

Hey. We’ve promised to release the assets and our custom project-specific tools and shaders from The Blacksmith realtime short film. Well, here they are.

First of all, you are welcome to download the executable of the demo.

The assets from the project, along with the custom tech, come in two main packages for your convenience: ‘The Blacksmith – Characters’ and ‘The Blacksmith – Environments’.

Below follows more information about what you can expect from each package, as well as some explanations about specific choices that we made. We’ve tried to not just release something of documentary value, but also make it more usable for you, in case you want to do something with it yourselves.

And let me answer straight away a very popular question you’ve been asking on previous occasions when we’ve released demo materials: yes, you can use all of this in whatever way you like, including in commercial projects. The standard Asset Store license applies. It would be our pleasure if you find the releases helpful on your own way to achieving success.

The Executable

We’ve added a simple interface for controlling playback of the demo:

  • Interacting with the slider allows you to scrub back and forth
  • Clicking the play/pause button, or anywhere else outside the UI, toggles playback between playing and paused
  • Slight look-around is possible by pausing the playback and moving the mouse pointer
  • A ‘mute’ button toggles the demo’s audio

You have the option to choose among four quality settings presets:

  • Low – Recommended for machines which are not able to run the demo in higher settings
  • Medium – Recommended for high-end laptops and less powerful desktop PCs. It runs at 30 FPS in 720p on a Laptop (Quad Core i7 2.5GHz with GeForce GT 750M)
  • High – Recommended for most desktop PCs. It runs at 30 FPS in 1080p on a desktop PC (Core i7 4770 with a GeForce GTX 760)
  • Higher – If you have a card that is better than a GTX 760

It could take more than 30s to load, depending on your platform.

‘The Blacksmith – Characters’ Package

This project contains:

  • The Blacksmith character
  • The Challenger character
  • Hair Shader
  • Wrinkle maps
  • Unique character shadows
  • Plane reflections

Download ‘The Blacksmith – Characters’ package from here.

Blacksmith and Challenger

We have placed each character in a separate scene.

We have re-skinned the Challenger character so now it is a little better prepared for a more universal usage than the specific needs of our film. You are welcome to drop him in another environment and experiment. You still need to do some work if you want him to animate nicely. We have included sample animations ‘idle’ and ‘walk’.

Characters_shot02

You will also find the main character, Blacksmith, in this package. We haven’t done any re-skinning on him: he is more complex than Challenger and would have taken us more time. We are including the original 3D character asset and you are free to use it in any way you like.

Characters_shot01

We are including the original, full-size 4K textures of both characters. In our project, we use a smaller version – 2K or less – of some of the textures.

The characters’ models and textures were created by Jonas Thornqvist and Sergey Samuilov.

Hair rendering

To achieve those anisotropic highlights that are so characteristic in hair, we decided to create a separate hair shader. As a complement to this, we also added a rendering component that calculated ambient occlusion for the hair, and set up a multi-pass render approach to avoid sorting errors between overlapping, translucent hair polygons.

The hair package and an accompanying example scene can be downloaded from the Asset Store. Don’t forget to check out the readme for more details about how it works, and how to configure it for your own projects.

Wrinkle maps

To add more life to the Challenger’s expressions in The Blacksmith, we added a rendering component for blending ‘wrinkle maps’ based on the animated influence weights of the Challenger’s facial blendshapes. The rendering component blends normal and occlusion maps in an off-screen pre-pass, and then feeds these to the standard shader as direct replacements for the normal and occlusion maps assigned in the material.
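
As a rough illustration, driving such a blend from script might look like the sketch below. The component, shader property name, and single-weight setup are assumptions for illustration; the real component blends normal and occlusion maps in an off-screen pre-pass rather than setting a material float.

using UnityEngine;

// Hypothetical sketch: feed a facial blendshape weight to a wrinkle-blend shader property.
public class WrinkleMapDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    public Material faceMaterial;
    public int browRaiseBlendShapeIndex;

    void LateUpdate()
    {
        // Blendshape weights are 0..100 in Unity; normalize to 0..1 for the shader.
        float w = face.GetBlendShapeWeight(browRaiseBlendShapeIndex) / 100f;
        faceMaterial.SetFloat("_WrinkleBlend", w); // hypothetical shader property
    }
}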

It is available as a separate package here, and a dedicated blog post about it can be found here.

Unique character shadows

We wanted to make sure our characters always had soft, high-resolution shadows in close-up shots. We also needed to make sure we had enough shadow resolution to cover the rest of the world. To achieve this, we added a method of setting up a unique shadow map for a group of objects.

Unique shadows can also be grabbed as a separate package from the Asset Store, and more details are available in the dedicated blog post.

Plane reflections

Plane reflections in The Blacksmith are, in essence, the kind of planar reflections one would normally render for reflective water surfaces. The twist is that once rendered, we convolve the reflected image into each mip-level of the target reflection texture. During this convolution, we use the depth information of the reflection to force sharper contacts for pixels close to the reflection plane. The goal of this contact sharpening is to simulate the effect ray tracing would have in non-perfect reflections. The result of this convolution is a reflection texture suitable as a drop-in replacement for reflection probe cubemaps, with the material’s roughness still dictating which of the different mip-levels are sampled for reflection. We use a modified Standard Shader that, based on a shader keyword toggle, samples reflections from this dynamic reflection texture instead of reflection probe cubemaps.

‘The Blacksmith – Environments’ Package

This project contains:

  • The Smithy
  • The Exterior
  • Atmospheric Scattering
  • PaintJob tool (paint vegetation on any surface)
  • Vegetation system
  • MaskyMix Shader
  • Modified Standard Shader
  • Tonemapping

You can toggle between FPS and animated camera (press C) and between several light presets (press V).

Download ‘The Blacksmith – Environments’ package from here. Be warned, it is quite large.

The Smithy

Looking like this:

Game view of Smithy interior in ‘The Blacksmith – Environments’ package

The Exterior

We decided to re-arrange the original exterior from the movie in order to make it more relevant to game production, which we hope makes it of more use to you. We haven’t gone too far with the polishing; if you spend some additional time on it, it could be a good basis for some action of your own. Here is how it looks as we’ve provided it to you:

pck_environments05

The difference is that, for our film, we had arranged the environment according to the cameras. If you are still curious, here is a screenshot of how the environment of the original project looks in Scene view:

Scene view of the original Blacksmith project

There were assets we didn’t use in the rearranged scene, but we still wanted you to have them. You will find them in the project.

Almost all of the gorgeous assets in this package were created by Plamen ‘Paco’ Tamnev. He also built the exterior environment scene.

Atmospheric Scattering

Our custom Atmospheric Scattering solution comes in this project. Find more details about it in this dedicated blogpost. For your convenience, we have also uploaded the Atmospheric Scattering as a separate package on the Asset Store. This way you don’t have to extract it yourself from this rather big project. Get it here.

PaintJob tool

This tool allows the artist to paint vegetation projected onto any geometry, not just Unity Terrains. It was a way for us to explore how we could make the most of the built-in Unity terrain tools, while also fulfilling one of the requirements we had for the project.

You are welcome to extract it from ‘The Blacksmith – Environments’ and use it in your own projects.

Vegetation system

There were a couple of things that we wanted to do with vegetation in The Blacksmith: we wanted it to be soft; we wanted custom shading on it; we wanted it to support dynamic GI; we wanted it to blend with whatever it considered to be ground; and we wanted it to work without being forced to use Unity terrains. PaintJob already took care of the latter, but to solve the rest we needed to do a little bit of a custom setup. We decided to build a component that would capture the PaintJob data – as well as any other hand-placed vegetation marked for capturing – and generate a number of baked vegetation meshes for which we could retain full control over rendering. Among other things, this allowed us to apply any kind of custom sorting we wanted, or reproject light probe data into dynamic textures.

MaskyMix Shader

MaskyMix is an extended Standard Shader that mixes in an additional set of detail textures based on certain masking criteria – hence the name MaskyMix. The masking is primarily based on the angle between a material-specified world-space direction and the per-pixel normal sampled from the base normal map. The mask is also modified by a tiled masking texture, as well as the vertex-color alpha channel of the mesh, if present. Depending on the masking thresholds specified in the material, the additional detail layer is mixed in based on this final mask. If the mesh provides a vertex alpha channel for masking, the vertex color can optionally be used for tinting the detail layer albedo.
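
As a rough CPU-side illustration of that masking logic (the real implementation lives in the shader; the parameter names and the soft-threshold formulation here are assumptions):

using UnityEngine;

// Sketch of the MaskyMix masking idea, expressed as plain C# for readability.
static class MaskyMixSketch
{
    // direction:   material-specified world-space direction (e.g. "up" for moss/snow)
    // normal:      per-pixel world-space normal sampled from the base normal map
    // maskTex:     value sampled from the tiled masking texture (0..1)
    // vertexAlpha: vertex-color alpha, 1 when the mesh provides none
    public static float DetailMask(Vector3 direction, Vector3 normal,
                                   float maskTex, float vertexAlpha,
                                   float threshold, float softness)
    {
        float facing = Vector3.Dot(normal.normalized, direction.normalized); // -1..1
        float angular = Mathf.Clamp01((facing - threshold) / softness);      // soft cutoff
        return angular * maskTex * vertexAlpha;                              // final blend weight
    }
}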

Modified Standard Shader

The Blacksmith used quite a few tiny Standard Shader modifications to tweak things to our liking, or add small shader features that we wanted. Not all of these make sense outside of the main project, but some of them are included in the surface-shader based Standard Shader used in this package. These modifications are typically things like: optionally sampling per-pixel smoothness from the albedo alpha channel instead of a dedicated texture, or being able to control the culling mode of any material, or having additional control over the color and intensity of bounced global illumination.

Tonemapping

We’ve explained in an earlier blogpost how we used Tonemapping and applied Color Grading for The Blacksmith short film. Since the short film was shown, the Tonemapping has been taken up by a Unity engineering team and is now being developed into a proper Unity feature.

The HDR sky textures in the package are from NoEmotionHDRs (Peter Sanitra) / CC BY-ND 4.0. Used without modification.

That’s all from us for now. Having delivered everything we promised, we’re ready to go off to new adventures.

If you do something with our assets, we are very curious to know about it. It would be nice if you posted it as a comment here or dropped us a line at demos@unity3d.com. And if it is something which others could use, please consider sharing back to the community.

Have fun!

Edit: Here are the links to the blogposts where we explain specific systems in more detail:

Wrinkle Maps in The Blacksmith

Unique Character Shadows in The Blacksmith

Atmospheric Scattering in The Blacksmith

June 23

Experiment on Roslyn C# compiler: Translatable Strings

Basically anyone can use resources, gettext, or managed-commons-core to translate (localize) strings in their C# code, and it can even be kind of terse, like this sample using managed-commons-core:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;
using static Commons.Translation.TranslationService;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return _("First mock command"); } }

        public virtual string Name { get { return "alpha"; } }

        // Writes the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine(TranslateAndFormat("Command {0} executed!", Name));
        }
    }
}
Then C# 6.0 arrives with its fantastic new feature, interpolated strings, and now that last method can't be rewritten to use the new feature, because:
public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
{
    WriteLine(_($"Command {Name} executed!"));
}
would in truth first format the string and only then try to look up a translation of the formatted result, which is exactly the wrong order of operations...
This experiment proposes, for C# 7, a new syntax for translatable strings that would turn that snippet into:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return $_"First mock command"; } }

        public virtual string Name { get { return "alpha"; } }

        // Writes the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine($_"Command {Name} executed!");
        }
    }
}
Interpolated strings can return an IFormattable, and thus one can do some localization (number formatting, for instance) but not true translation, so this feature is interesting beyond the small gain in code brevity.
But the killer feature that adding this to the compiler would enable is having the extraction of translatable texts done by the compiler, as it already does for XML documentation, when the right command-line parameter is specified.
$_"Command {Name} executed!" would be extracted as "Command {0} executed!", automagically.
All is well, but some may ask how this approach, which looks a lot like the way gettext does things, would work when extracting to a .resx file, where keys can't be arbitrary strings. For this scenario the compiler would generate SHA1 hashes as keys and insert the hashing when calling the TranslationService behind the scenes. TranslationService is a pluggable infrastructure whose 'translators' can source their translations from resources, .mo files, hard-coded dictionaries, whatever...
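
To make that concrete, here is a minimal sketch of how such key hashing could work (the T_ prefix and hex encoding are illustrative assumptions, not part of the proposal):

using System;
using System.Security.Cryptography;
using System.Text;

static class TranslationKeys
{
    // Derive a .resx-safe key from an arbitrary translatable format string.
    public static string KeyFor(string formatString)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(formatString));
            // Hex-encode and prefix so the key is a valid resource identifier.
            return "T_" + BitConverter.ToString(hash).Replace("-", "");
        }
    }
}

// TranslationKeys.KeyFor("Command {0} executed!") yields a stable key like "T_5A1D..."
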
My experimentation will use managed-commons-core, of which I'm the core developer/maintainer, as the backend, but if real merit is found in this discussion, surely the runtime team will have to come forward and implement something like it, or just borrow the logic, which is MIT-licensed, from my implementation there.
  1. Code
  2. Issue

Android M Preview Now Available

Today, we’re excited to announce the preview release of Xamarin.Android featuring support for Android M’s developer preview. Android M is an exciting release that introduces several new features for Android developers, including new app Permissions, Fingerprint Authorization, enhanced Notifications, Voice Interactions, and Direct Sharing.

Android M

Installing the Android M Preview

  • Starting with Android Lollipop, Java JDK 1.7 is required to properly compile apps. You can download one for your system from Oracle’s website.
  • Update your Android SDK Tools to 24.3.3 from the Android SDK Manager
  • Install the latest Android SDK Platform-tools and Build-tools

mnc-preview-tools

  • Download the Android M (API 22, MNC preview) SDKs

mnc-preview-packages

Getting Started

With this preview installed, you should have the new APIs available to use in your Xamarin.Android apps. Check out our release notes, download links, and more details on our Introduction to Android M documentation to guide you through setup and the new APIs.

The post Android M Preview Now Available appeared first on Xamarin Blog.

Live Webinar: Why Mobile Quality Matters

This live webinar will cover how to create a successful and sustainable mobile development strategy that enables quick delivery of great apps that work across today’s diverse device landscape. Key to this process is a mobile quality solution like Xamarin Test Cloud that identifies bugs, visually compares results from hundreds of devices at once, increases delivery agility and velocity by eliminating manual testing, and integrates with your Continuous Integration (CI) systems.

The webinar will cover the different approaches and advantages to mobile testing, from manual testing, device remoting, and simulators to fully automated testing with real devices, as well as how to create quality consumer-grade mobile apps.

Wednesday, July 08
8:30 AM – 9:30 AM PT

Register

TestCloudReport
All of the registrants for the webinar will receive a copy of the recording, so please feel free to register even if you won’t be able to attend the day of the event.

The post Live Webinar: Why Mobile Quality Matters appeared first on Xamarin Blog.

Design-Driven In App Purchases: Creating Sustainable Monetization

We are in an era on mobile where Freemium has won, but there are many out there who question whether this is a good or bad thing for the player. Indeed, are the current approaches to Free2Play design sustainable, and are some of them even ethical?

Over the last 4 years, the reported ‘typical’ paying player appears to have dropped from 3-5% of total downloads* to a mere 1-2%. This isn’t a smoking gun and there is a lot of conflicting evidence, but when you consider the improvements in data analysis to aid retention and the hugely increased marketing spend from games at the top, I believe it’s worth taking another look at how we can develop a more sustainable approach to game monetization.

Let’s agree on three principles before we start.

  1. The games business is a leaky bucket

We will always lose players! Games are consumable entertainment and players will inevitably churn. This means we have two options – add more people faster than we lose them or plug as many leaks as we can.

  2. Retention has a huge impact

    Look at the results of our Unity Ads Survey with EEDAR:

Retention-Graph

Online Survey Conducted in 2014 with 3,000 paying players

  3. Buying (even downloading) is a risk

buying-is-risk

In The Journal of Marketing, James W Taylor wrote about the four forces which prevent people from making any purchase, be it a game or a pair of shoes. We need to know what we are getting, what we are missing out on, and what others will think of our decisions, and we have to deal with other things in our lives.

In short, if we are going to make better, more sustainable In App Purchase (IAP) design, then first we have to keep more players for longer and create the conditions where they feel safe to buy things in our game.

The current IAP models typically use:

  • Unlimited Content – Capped by limited energy (such as Candy Crush Saga)
  • Exponential Cost Escalation – Building a bigger base requires bigger stores (Clash of Clans)
  • Time-Limited Events – Special limited editions and timed events (Puzzles & Dragons)
  • Casino Mechanics – Not part of this discussion as it relies on different psychology

These kinds of purchases strongly focus on converting the player to spending, rather than on delivering an expectation of value. We don’t get to retain users if we treat them as disposable, like visitors to a carnival midway (or fairground if you’re British). If we rig the games too far, then people will lose the joy and simply stop coming back.

Ongoing spending presents users with different short-term vs. long-term risks. Most players have a budget they are comfortable spending regularly. In the heat of a game they might exceed it, but this creates Buyer’s Remorse unless they feel they can choose to limit this spend in the future. We have to consider the short- and long-term risk profiles of the game as well as the context for players, including:

  • Escalating costs – The perception of ever-escalating costs will impact player demand. This isn’t the same as price sensitivity, but never-ending upward pressure creates payment fatigue.
  • Never-ending spend – The perception that the game will always be asking me for more money creates payment fatigue, but that is different from the desire to spend money of my own choosing. Always have more for me to acquire on my own initiative; don’t make my basic retention depend on it.
  • Comparative progress – Seeing others perform better than me can create playing fatigue. If someone else’s spend alone makes it appear impractical for me to compete, I will abandon the game – claiming that it’s pay-to-win.
  • Substitute games – We can’t ignore that an average of 362 mobile games were released every day this February alone. There are always substitute games, and they are all free too.

Buyer’s Remorse is a real thing. We build up a great deal of anticipation and often get caught up in the heat of the moment when we make a purchase (or download).  But after our purchase is when we are at our most vulnerable and we will (at some point) cool down and review our purchase decision. The role of a designer is to keep that player playing. More than that, as a designer of IAP we have to keep players wanting to not just continue playing, but paying. That requires us to sustain their attention, interest, and desire over time!

Just like every game mechanic has to engage and entertain a player, our game purchases have to ‘supercharge’ a player’s sense of delight and drive repeat engagement.

  • Unfinished Business: Games like Kim Kardashian Hollywood do an amazing job with the narrative progression and the format of what are essentially ‘Cookie Clicker’ tasks and still create a sense of unfinished business. The gameplay may be limited but the engagement is very real – this leaves the player always wanting more. That engagement directly helps overcome the issues from any opportunity cost there may be.
  • Continued Relevance: Games like VEGA Conflict show items which players will be able to unlock later in the game. The items’ associated stats similarly go a long way to show the continued relevance of playing, as well as how what the players just unlocked fits into the game. Often this is about putting the monetization in the context loop, rather than in with the core game mechanics.
  • Social Capital: It’s also important not to ignore both the social consequences and the value that players put on the ability to personalize their experience, as long as others are able to observe their decisions. This was key to most of the revenue in the now shut-down Playstation®Home experience, with examples like the ‘Gold Suit’ offering its wearers social capital. However, people often misunderstand this phenomenon – customization has to be authentic, as it’s about a real person’s response to your experience.
  • Inertia: It’s also easy to underestimate how important it is to keep your players playing – even if they are freeloaders! The fact that a player deliberately chose to play your game is hugely valuable – it’s a massive compliment to you and your team and you should respect that. This is the key to you being able to generate revenue in the first place, and their ongoing commitment will be hard to win. That’s why your initial on-boarding process is so vital. Acknowledge that every player has a lifecycle and be aware of how their needs will change as they move from Discovering to Learning then Engaging. Building longevity takes an understanding of the community as well as how your game’s rhythm of play fits into your players’ lifestyles.

We should not consider someone who pays once to be a customer. They may have purchased, but unless they do it again there is work to do, not only to create a scalable business, but also one which delivers what our players actually want!

According to Park & Lee, players are buying because they have an expectation of value, not just because they are happy with the game. They are demonstrating a desire to get more out of our game and we have to sustain that if we are to encourage them to keep spending. You can’t sustain this desire if your IAP doesn’t deliver both logical and emotional value. If we respect our players, we will earn a longer Lifetime Value (LTV), but unfortunately no matter how good our game is there will always be a diminishing return.

That’s why we have to take a design view to the kinds of goods we offer players.  I like to break these down into four categories:

  • Sustenance – Goods we require to continue playing
  • Shortcuts – Goods which speed up the actions we are performing
  • Socialisation – Goods which are primarily about social capital
  • Strategy – Goods which open new playing options

These goods can come in various forms:

  • Consumable – a one-time use item
  • Capacity – something which enhances growth/play
  • Permanent – a permanent upgrade or unlock item
  • Generators – an increase in the supply of a consumable

Looking at your game, you will be able to identify a point in the game mechanic or the context loops of play (perhaps even the metagame) where any of these items would benefit the players. However, the problem most developers fall into is forgetting to make their goods scalable.

It’s something which, in my opinion, was the downfall of the free2play version of Dungeon Keeper.

Scale matters!

Some methods we can use to help scale goods include:

  • Bundles – Whether it’s a BOGO or a pack of 10, selling more than one consumable in a single transaction not only makes the offer more attractive to the player, it also means that they may have some left over. And that means they’ll need to come back to use them.
  • Ratchet Mechanisms – It can be scaling how many recharge crystals you need to continue your run after dying multiple times, as in Blades of Brim by SYBO, or the classic mechanic where to upgrade your HQ you first have to upgrade your Gold and Mana Stores (which of course takes an escalating amount of time and resources to complete). I’m falling out of love with this system, to be honest, but it’s still valid when spread amongst a large number of assets, such as the different heroes in Marvel Future Fight. This method also includes multi-part items such as the Blueprints in the Force Collection.
  • Scissor-Paper-Stone – This remains my favorite approach to scale, and I think the most consumer-friendly: add a touch of dilemma to the purchase. Do you buy the Blue Sword or the Red one? Blue is better on Green, but vulnerable to Red attacks… Do it well and you’ll turn purchase decisions into a positive part of the reason to play. Look at games like Hearthstone or DOTA, where players have no problems with spending money. A dilemma doesn’t have to be profound; it can be as simple as the mental switch between collecting gems and avoiding obstacles in Lets Go Rocket from Cobra Mobile.
  • Customization – The more creativity you allow your players, the more engaged they will be with their characters emotionally and the better impact your purchases will have on ongoing retention. However, this has to be authentic. You can’t fake Geek Cool.

There are other things you can consider too, such as how rare an item might be, what function that item delivers, why that’s special, and how it improves the gameplay. But also ask why an item will be something a player aspires to get and how you can make it more personal.

IAP must be part of the game design experience. We have to create a sense of anticipation and delight if we are to attract players’ interest and create the desire to act and purchase from us.  We are now retailers inside our game and as such have to think in a similar way. Why not consider some of the following techniques?

  • Help from a friend: Games like Criminal Case actively use Facebook connections to offer gifts to their friends of freely available consumable items like energy. Learning from Puzzles & Dragons as well as Marvel Future Fight, we can connect with other players who are online at the same time as us and make tentative allies. These can be a great excuse to see what impact a power or new character might have on our game, and make it easier to get past a troublesome boss.
  • Free use of an item: Sometimes we have to show people what they are missing out on; unless you have used a better car/gun/etc, how will you know how much more fun it is that the one you already have? Sometimes this temporary use can be a reward or part of a daily challenge, but it can also be highly effective to use ‘Opt-In’ Video Ads to offer such experiences. These put a commercial value to the free item, something the player often appreciates more as a result.
  • Predictable uncertainty: Knowing you will get something, but not knowing what, is a great tool. This is often used crudely by throwing a roulette wheel into the game. However, its use in Crossy Road is more interesting: I regularly get a random creature from the coins I earn through play or from watching opt-in videos. These creatures are all delightful in some way, and each time I get one the others become more interesting. There were some which I just had to get my hands on straight away – as a result I was willing to spend real money to get the ones I wanted, Emo Goose and Frankenstein.
  • Limited offer: Whether it’s limited by time or event, it can be really effective to make players authentic offers that are plausible in the game’s context. Fake scarcity will only add to playing fatigue.

Finally, the point of making sustainable IAP is to look at the sale as the beginning, not the end. If we are to really achieve that, then we have to recognize that each purchase we initiate creates its own sense of buyer’s remorse and builds playing and paying fatigue – leading to churn. We have to constantly fight this inevitable loss by building post-purchase utility. That means making the user feel special every time they make a purchase, similar to the unboxing experience of an Apple product. Identify and allow players to show off ‘landmark items’ which genuinely expand the scale of play, but then don’t forget to show them what their money has bought. All this also has to take into account how each purchase affects the gameplay of others; we can’t afford to increase the engagement of one player at the cost of dozens of others.

Show me as a player that you respect my decision to invest in your game and give me a reason to do it again!

And for those of you who want to see our recent webinar on this same topic, you can view it here:

June 22

Extending Xamarin.Forms Controls with Custom Renderers

Xamarin.Forms allows you to build native UIs from a single, shared codebase with over 40 pages, layouts, and mix-and-match controls. One of the most powerful aspects of Xamarin.Forms is that you not only have access to the controls that we surface, but you have 100% access to all of the native APIs on each platform as well.

Moments Sign Up Page

Accessing Native APIs in Shared Code

Accessing native APIs using Xamarin.Forms is done in one of three ways: the dependency service, custom renderers, or plugins for Xamarin. The dependency service in Xamarin.Forms allows shared code to easily resolve interfaces to platform-specific implementations, so you can easily access platform-specific features like text-to-speech, geo-location, and battery information from your PCL or Shared Project.
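
As a minimal sketch of the dependency service pattern (the ITextToSpeech interface and the TextToSpeech_iOS class are illustrative names, not from Moments):

// Shared code (PCL or Shared Project): the interface your UI codes against.
public interface ITextToSpeech
{
    void Speak (string text);
}

// Resolving it from shared code:
//   Xamarin.Forms.DependencyService.Get<ITextToSpeech> ().Speak ("Hello!");

// iOS project: implement the interface and register it with Xamarin.Forms.
[assembly: Xamarin.Forms.Dependency (typeof (Moments.iOS.TextToSpeech_iOS))]
namespace Moments.iOS
{
    public class TextToSpeech_iOS : ITextToSpeech
    {
        public void Speak (string text)
        {
            // AVSpeechSynthesizer is the native iOS text-to-speech API.
            new AVFoundation.AVSpeechSynthesizer ().SpeakUtterance (new AVFoundation.AVSpeechUtterance (text));
        }
    }
}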

User interfaces built with Xamarin.Forms look and feel native because they are native — they’re rendered using the native controls for each platform. For example, if you use an Entry control in Xamarin.Forms, on iOS this will be rendered as a UITextField, on Android as an EditText, and on Windows Phone as a TextBox. Developers can easily tap into these native renderings of Xamarin.Forms controls by using custom renderers, which can be used to do everything from making small tweaks to an existing control to building entire pages.

Implementing a Custom Renderer

In Moments, a Snapchat clone built with Xamarin.Forms and Microsoft Azure, I made extensive use of the built-in Xamarin.Forms controls. However, there were a few places I wanted a finer level of customization. Let’s look at how easy it is to tweak the Entry control’s placeholder font and color using custom renderers in the sign up page featured above. Each custom renderer has two main parts: a subclass of the control being extended and platform-specific implementations that are used to customize a control’s appearance.

Subclassing Existing Controls

In the Shared Project or PCL where your Xamarin.Forms user interface logic resides, create a subclass of the control that you wish to extend, as seen below:

public class MomentsEntry : Entry
{
    public MomentsEntry ()
    {
        TextColor = Color.White;
    }
}

Platform-Specific Implementation

To customize the control’s appearance, we must create custom renderers on each platform for which we wish to customize the given control. Each Xamarin.Forms control has a renderer class that can be subclassed on each platform, such as EntryRenderer.

public class MomentsEntryRenderer : EntryRenderer
{
}

Next, override the OnElementChanged method. This method is where all the customization for the control takes place. All customization is done using the Control property, which is just an instance of the native mapping for the particular renderer subclassed. For example, on iOS, this would be a UITextField and on Android an EditText. Altering the placeholder color and font was as easy as altering a few properties, as seen below in both the iOS and Android implementations:

iOS

public class MomentsEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged (ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged (e);
        if (Control != null)
        {
            Control.BackgroundColor = UIColor.FromRGB (119, 171, 233);
            Control.BorderStyle = UITextBorderStyle.None;
            Control.Font = UIFont.FromName ("HelveticaNeue-Thin", 20);
            Control.SetValueForKeyPath (UIColor.White, new NSString ("_placeholderLabel.textColor"));
            Control.Layer.SublayerTransform = CATransform3D.MakeTranslation (10, 0, 0);
        }
    }
}

Android

public class MomentsEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged (ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged (e);
        if (Control != null)
        {
            Control.SetHintTextColor (ColorStateList.ValueOf (global::Android.Graphics.Color.White));
            Control.SetBackgroundDrawable (null);
        }
    }
}

Finally, to enable Xamarin.Forms to properly find and render your custom control, you must add the [assembly: ExportRenderer] attribute at the top of the renderer’s file. The first parameter references the Xamarin.Forms control whose renderer you wish to replace, while the second parameter references the platform-specific renderer for the custom control:

[assembly: ExportRenderer (typeof (Moments.MomentsEntry), typeof (Moments.iOS.MomentsEntryRenderer))]
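
The Android renderer needs a corresponding registration in its own project; assuming the renderer lives in a Moments.Droid namespace (an illustrative name), it would look something like this:

[assembly: ExportRenderer (typeof (Moments.MomentsEntry), typeof (Moments.Droid.MomentsEntryRenderer))]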

The Results

Before and after custom renderer

Wrapping It All Up

Custom renderers in Xamarin.Forms make it easy to extend existing Xamarin.Forms controls, create your own controls, or even build entire pages using native APIs. To get started, be sure to check out the docs or watch this video from Xamarin’s Mark Smith, and you’ll be up and running with custom renderers in no time!

The post Extending Xamarin.Forms Controls with Custom Renderers appeared first on Xamarin Blog.

Making of The Blacksmith: Animation, Camera effects, Audio/Video

In this blog post we’ll be sharing our animation pipeline, and our approach to the post effects and audio and video output in the short film The Blacksmith. You can also check out our previous ‘Making Of’ post about scene setup, shading, and lighting, as well as our dedicated page for the project, where all the articles in this series are collected.

Animation pipeline

Starting With a Previz

Once we knew the direction we were moving in terms of art style and themes for The Blacksmith, we were eager to see our short film in action, so we set off to build an early version of it, which we could use as a starting point.

We made early mesh proxies of the characters, supplied them with temporary rigs with very rough skinning, and quickly blocked shots out in 3D, so that we could work on a previz.

This approach provided us with a sense of timing and allowed us to iterate on the action, camera angles and editing, so we were able to get an idea about how our film was going to flow, even very early in the process. From there on, it was the perfect way to communicate the goals to the external contractors who helped us on the project, and to guide the production process.

The cameras and rough animation from the previz were used as a placeholder going forward.

Motion Capture

For the performance of the characters, we worked with Swedish stunt- and mocap-specialists Stuntgruppen, who were not only able to carry out the motion capture performance, but also consulted us on the action our characters would engage in. We held some rehearsals with them before moving on to the mocap.

We had a very smooth shooting session at the venue of Imagination Studios in Uppsala. The previz was instrumental at the motion capture shoot as it served as a reference for everyone involved. What made things even easier was that we were able to see the captured performance retargeted directly onto our characters in real time during the shoot.

The Pipeline From End-to-End

Our animation pipeline was structured as follows:

  • Rigs were prepared in Maya with HIK and additional joints for extra controls.
  • Rigs were exported via fbx to Motionbuilder.
  • Mocap was applied and then remapped to rigs in Maya.
  • Facial animation was applied.
  • Simulation of hair and cloth was implemented on extra joint chains.
  • Additional props were animated over this, and some fine tuning was done before export of fbx to Unity.
  • Rigs were all imported under the ‘Generic’ rig type, with takes added to these via joint name mapping in Unity’s timeline/sequencer prototype tool.
  • Cameras were animated mainly in Motionbuilder; imported takes were then attached to the render camera in Unity, via takes on the sequencer timeline.

Setting Up In Unity

Once in Unity, we had to set up our rigs before we could move forward. The ‘Generic Rig’ setting was used for the two master rig files. We then relied upon auto-mapping of joints referencing the two master rig files for scene optimization and organization.
Next we used the Scene Manager to shift from one shot setup to the next, but each shot always referenced one of the two master rigs. For this, joint names had to match 1:1 for retargeting to work effectively. Shots were then sourced from many fbx files and were set up along the timeline, pointing to the Master Characters in the scene. The timeline then generally consisted of these playable items: Character Track, Camera Track, Prop Track, Event Track, Switcher/Shot Manager Track.

Approaching Cloth, Hair and Additional joints

It was also important to us to ensure we had some nice movement on our characters’ clothes and hair. For that reason we ended up placing additional joints into the skirts of both characters, and the hair of our Blacksmith lead. The setup allowed the simulation to be baked back onto the joints that were skinned into the rigs, so it was effectively pseudo-cloth with a lot of extra baked bones. This was an effective approach for us, as we had a really high target of 256 bones and four bone weights per character.

ChalControlJointsChalControlXtraChalControlAll

The joints visible in the face drive the eyes for ‘look at’ control, and the rest around the jaw actually bind the beard to the face vertices using this riveting method in Maya.

This was done purely because we needed to keep the beard and hair as separate elements in Unity for the custom hair shader to work. The facial animation is blendshape driven, and the custom wrinkle map tech also relied on the blendshape values. That meant we had to use the rivet approach and bake the joints to follow the blendshapes on export.

Assembling the Mocap Scene in-house

Post-shoot we reconstructed the mocap in Motionbuilder using ‘Story’ mode.

At this point the Demo team’s animator took the lead with content, refining the cameras around the newly produced near-final performances. Being the most intimate with the intention of the scenes, it made sense for the animator to edit the sequence back together into master scene files. Additionally, any further ideas that had come up during the motion capture shoot could be incorporated at this stage.

The overall movie was divided into only a few master scene/fbx files, namely ‘Smithy to Vista’, ‘Vista to Challenger’, ‘Challenger and Blacksmith Battle’ and ‘Well Ending’. This grouping of shots simply made the file handling much easier. Each of the fbx files contained all the content needed to finalize the performances, and once completed they were exported in full to Unity. After that, the take/frame ranges were split inside the Unity animation importer.

Next, all the content from the previz was replaced with the motion capture. This allowed ‘nearly final’ animation to be implemented in Unity. At this stage it was important to get it in quickly, thus allowing art, lighting and FX to forge ahead with their own final pass on content.

FixingCapture

After this was done, the performances were refined and polished, but the overall action and camera work didn’t noticeably change. Once the body work was finalized and locked off, the animator took all the content back to Maya for the final pass of facial animation, cloth and hair simulation, as well as the addition of props and other controls.

Final Exports to Unity

At this stage in the production the time had come to overwrite all placeholder files with final animation assets. The sequence structure and scene in Unity stayed pretty well intact, and it was mostly just a matter of overwriting fbx files and doing some small editing in the sequencer.

The following video provides an opportunity to compare and contrast a brief cross-section of the stages of production, from storyboard to the final version.

Camera effects

Since we wanted to achieve a cinematic look throughout this project, we ended up using some typical film post effects in every shot. We used several of Unity’s default effects: Depth of Field, Noise and Grain, Vignette and Bloom.

PostEffects

We developed a custom motion blur because we wanted to experiment with the effect that camera properties such as frame rate and shutter speed had on the final image. We also wanted to make sure we could accurately capture the velocities of animated meshes; something that involved tracking the delta movement of each skeletal bone in addition to the camera and object movements in the world.
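
As a rough sketch of the bone-tracking part (the component and field names are assumptions; the demo’s actual motion blur feeds such deltas into a velocity buffer for the blur shader):

using UnityEngine;

// Hypothetical sketch: track per-bone world-space velocities each frame.
public class BoneVelocityTracker : MonoBehaviour
{
    Transform[] bones;
    Vector3[] previousPositions;
    public Vector3[] Velocities { get; private set; }

    void Start()
    {
        bones = GetComponentsInChildren<Transform>();
        previousPositions = new Vector3[bones.Length];
        Velocities = new Vector3[bones.Length];
        for (int i = 0; i < bones.Length; i++)
            previousPositions[i] = bones[i].position;
    }

    void LateUpdate()
    {
        // Delta movement of each bone since the last frame, as a velocity.
        for (int i = 0; i < bones.Length; i++)
        {
            Velocities[i] = (bones[i].position - previousPositions[i]) / Time.deltaTime;
            previousPositions[i] = bones[i].position;
        }
    }
}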

To give a good sense of depth and scope in our large, scenic shots, we strove for a believable aerial perspective. We decided to develop a custom atmospheric scattering component as a drop-in replacement for Unity’s built-in fog modes. After some initial experimentation based on various research papers, we opted for extensive artistic control rather than correctly modeling the physics of atmospheric scattering.

Here is an example of how our camera settings look in a randomly selected shot from the project:

atmospheric_scattering_UI

atmospherics_02

To further our goal of achieving a cinematic look, we wanted to simulate a classical film-out process, i.e. printing to motion picture print film. We imported screenshots from Unity into DaVinci Resolve – an external application used for professional video production – where the color grading and tone mapping took place. There we produced a lookup table, which was imported and converted into a custom Unity image effect. At runtime, this image effect converted linear data to a logarithmic distribution, which was then remapped through the lookup table. This process unified tone mapping and grading into a single operation.

FujiD55

Audio

The audio composition was a single track, which was imported into Unity and aligned to the Sequencer timeline. The music composer was given an offline render – a ‘preview’ – of the movie to use as a reference when composing and synchronising the track. In order to achieve the desired mood of the piece, he used the song ‘Ilmarinen’s Lament’, which we licensed from American-Swedish indie musician Theo Hakola, and enhanced it with additional elements he composed and recorded himself.

Video output

We wanted to produce this short film in its entirety inside of Unity, without the need for any external post-processing or editing to finalize it. To achieve this, we put together a small tool that would render each frame of the movie with a fixed timestep. Each of these frames would then be piped in memory to an H.264 encoder, multiplexed with the audio stream, and written to an .mp4 on disk. The output from this process was uploaded straight to YouTube as the final movie.
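
The actual tool piped frames in memory to an H.264 encoder; as a rough approximation of just the fixed-timestep capture part, using only public Unity APIs (writing PNGs to disk instead of encoding):

using System.IO;
using UnityEngine;

// Hypothetical sketch: render at a fixed timestep and save each frame to disk.
public class FrameCapture : MonoBehaviour
{
    public int frameRate = 30;

    void Start()
    {
        // Advance game time by exactly 1/frameRate per rendered frame,
        // regardless of how long each frame actually takes to render.
        Time.captureFramerate = frameRate;
        Directory.CreateDirectory("Frames");
    }

    void Update()
    {
        Application.CaptureScreenshot(string.Format("Frames/frame{0:D5}.png", Time.frameCount));
    }
}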

And that’s it for this blog post. Stay tuned for more, and in the meantime make sure to check in at our special page for The Blacksmith, if you haven’t done so already. There you’ll find all the information we have already published about how we brought our short film to life.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers