Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

May 23

dupefinder - Removing duplicate files on different machines

Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren’t. You want to get the ones that aren’t: those are the ones you want to copy before tossing the old machine out.

That was the problem I was faced with. Not willing to do the tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I’ve made it open-source.

Introducing dupefinder

Here’s how it works:

  1. Use dupefinder to generate a catalog of all files on your new machine.
  2. Transfer this catalog to the old machine.
  3. Use dupefinder to detect and delete any known duplicates.
  4. Anything that remains on the old machine is unique and needs to be transferred to the new machine.

You can get it in two ways: there are pre-built binaries on GitHub, or you can use go get:

go get github.com/rubenv/dupefinder/...

Usage should be pretty self-explanatory:

Usage: dupefinder -generate filename folder...
    Generates a catalog file at filename based on one or more folders

Usage: dupefinder -detect [-dryrun / -rm] filename folder...
    Detects duplicates using a catalog file on one or more folders

  -detect=false: Detect duplicate files using a catalog
  -dryrun=false: Print what would be deleted
  -generate=false: Generate a catalog file
  -rm=false: Delete detected duplicates (at your own risk!)
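
For example, a typical session might look like this (the catalog name and folder paths are illustrative):

dupefinder -generate catalog.dat ~/Documents ~/Pictures
dupefinder -detect -dryrun catalog.dat ~/old-files
dupefinder -detect -rm catalog.dat ~/old-files

The first command runs on the new machine; the other two run on the old one, first previewing and then actually deleting the known duplicates.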

Full source code on GitHub

Technical details

Dupefinder was written in Go, which is my default choice of language nowadays for this kind of tool.

There’s no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight threads (goroutines) and message passing (channels) makes it possible to write clean and simple code that is extremely fast.

Internally, dupefinder looks like this:

Each of these boxes is a goroutine. There is one hashing routine per CPU core. The arrows indicate channels.

The beauty of this design is that it’s simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there’s one small task that takes care of processing the results.
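
To make the pipeline shape concrete for .NET readers, here is a rough C# analogue of the same design, with BlockingCollection<T> standing in for channels and tasks standing in for goroutines. This is an illustration only, not dupefinder’s actual Go source, and the catalog line format is an assumption:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading.Tasks;

static class Pipeline
{
    static void Main(string[] args)
    {
        var paths = new BlockingCollection<string>(boundedCapacity: 128);
        var results = new BlockingCollection<string>(boundedCapacity: 128);

        // The crawler walks the folder and feeds file paths to the hashers.
        var crawler = Task.Run(() =>
        {
            foreach (var file in Directory.EnumerateFiles(args[0], "*", SearchOption.AllDirectories))
                paths.Add(file);
            paths.CompleteAdding();
        });

        // One hasher per core: read a file, hash it, emit a catalog line.
        var hashers = Enumerable.Range(0, Environment.ProcessorCount).Select(_ => Task.Run(() =>
        {
            using (var sha = SHA256.Create())
                foreach (var file in paths.GetConsumingEnumerable())
                    results.Add(BitConverter.ToString(sha.ComputeHash(File.ReadAllBytes(file))) + "  " + file);
        })).ToArray();

        // One collector processes the results; here it just writes the catalog.
        var collector = Task.Run(() =>
        {
            foreach (var line in results.GetConsumingEnumerable())
                Console.WriteLine(line);
        });

        crawler.Wait();
        Task.WaitAll(hashers);
        results.CompleteAdding();
        collector.Wait();
    }
}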

The end-result?

A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.

Any language can be used to achieve this design, but Go makes it simple to quickly write it in a correct and (dare I say it?) beautiful way.

And let’s not forget the simple fact that this trivially compiles to a native binary on pretty much any operating system that exists. Highly performant cross-platform code with no headaches, in no time.

The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that’s a good thing. It takes some time to wrap your head around the language, but it’s a truly refreshing experience once you do. If you haven’t done so, I highly recommend playing around with Go.


May 22

Case Study: Development Time Slashed by 50% for Leading Transport Company

MRW is Spain’s leading national and international express transport company. Powered by 10,000 people linked to the brand in over 1,300 franchises and 64 logistical platforms in Spain, Portugal, Andorra, Gibraltar, and Venezuela, MRW handles an average of 40 million parcel deliveries per year and ships to more than 200 countries and over 10,000 online stores.
 
A mission critical element of the company’s success is the MRWMobile app that supports 2,500 concurrent users in the field by helping them with process optimization, including delivery coordination. MRWMobile was developed by the company’s Portugal-based partner Moving2u, and after the successful creation of MRWMobile 3 for Windows, MRW wanted to expand to Android.
 
The app is used in the field for a range of functions, including proof of picking up deliveries in real time, receiving new work orders, and rescheduling order pick-ups and deliveries, all while using secure communications and local data encryption. To support these functions, the app needs a range of capabilities, including offline work, local storage, push sync, multi-threading, barcode scanning, photos, and signature capture. The app also incorporates geolocation, multilingual support, multiple user profiles, mobile payment, printing, document scanning, and internal communications with messages and tasks.
 
The magnitude of the requirements, coupled with budget constraints and conflicting project roadblocks, created time-to-market challenges. “Without Xamarin, it would have taken at least twice as long to have the full feature set of the app built and tested,” says Alberto Silva, R&D Manager at Moving2u.
 
“Xamarin is the right approach for any serious Android, iOS, or mobile cross-platform app development,” Alberto adds. “Even if you don’t plan to go cross-platform, the productivity of Xamarin in producing an app for a single platform in C# is unmatched.”
 

View the Case Study
 

The post Case Study: Development Time Slashed by 50% for Leading Transport Company appeared first on Xamarin Blog.

May 21

RSVP for Xamarin’s WWDC 2015 Party

Join the Xamarin team for a party celebrating WWDC at Roe Restaurant on Tuesday, June 9th, from 6:00 – 9:00pm. Just two blocks from Moscone you’ll find great conversation with your fellow mobile developers, drinks, and appetizers. We’d love for you to join us to talk about your apps and projects and the latest news from Apple.

WWDC 2015 Logo

When: Tuesday, June 9th, 6pm-9pm
Where: Roe Restaurant, 651 Howard St, San Francisco, CA, 94105

RSVP

Even if you’re not attending WWDC, all of our Bay Area friends are welcome!

You can make the most of your time in town for WWDC week by scheduling dedicated time with a member of our team.

We hope to see you there!

The post RSVP for Xamarin’s WWDC 2015 Party appeared first on Xamarin Blog.

Xsolla Unity SDK – a customizable in-game store for desktop, web and mobile

We’re excited to announce a new service partner on the Asset Store: Xsolla!

For almost a decade, Xsolla has been providing payment services to some of the biggest names in the game industry: Valve, Twitch, Ubisoft and Kongregate, to name only a few. With the introduction of the new Unity Asset Store product, the Xsolla solution is now available to Unity developers everywhere!

Global payment solution, monitoring and marketing in one

The Xsolla Unity SDK is transparent and customizable; it supports more than 700 payment options from all over the world and comes with a sophisticated monitoring system and marketing tools. Plus, all transactions are protected by a robust anti-fraud solution.

1,2,3 Go! You’re up and running across your platforms

Thanks to straightforward documentation and extensive support, you can integrate the Xsolla solution in a matter of hours. Use it across Web, desktop and mobile!

“The Xsolla Plugin allows you to seamlessly integrate a fully functional virtual store right in your game. It’s super easy. With Unity 5, developers can build bigger and more advanced online products. It’s a great time to explore the multiplatform possibilities of Unity and expand products beyond smartphones and tablets to PCs and the browser-based market. You can get a high quality reliable in-app store running in a browser window with Unity 5 and the Xsolla Plugin. It’s never been easier.”

Alexander Agapitov, CEO & Founder, Xsolla

The Xsolla Plugin:

  • A reliable solution from a trusted provider
  • Complete in-game store management toolset to manage items, virtual currency, and subscription billing
  • Integrated tools for promotion of your in-game products
  • Advanced reporting and analytics
  • 24/7 anti-fraud protection and customer support
  • Automatic localization of UI, payment methods and currencies
  • Full multiplatform support across desktop, web and mobile

May 20

Get Started with HomeKit

Next month sees the launch of several highly anticipated HomeKit accessories, which debuted earlier this year at CES. With HomeKit-enabled accessories finally coming to market, it’s time to create iOS apps that utilize Apple’s home automation APIs.

HomeKit is designed to bring an end to individual apps for smart home accessories. Gone are the days of switching between apps to set up the perfect movie night scene; instead, you’ll have one app which communicates with all of your accessories using HomeKit.

HomeKit has a couple of core concepts that form the basis for the entire API. The general idea of HomeKit is that you interact with a Home, which has Rooms, which have Accessories, which have states. In order to get started, you’ll need to create a Home Manager. The Home Manager is your entry point to HomeKit: it keeps a common database of Accessories and allows you to manage your Home(s). It also notifies you of changes, which makes it easy to deal with changes to the HomeKit configuration from other HomeKit-enabled apps.

General Setup Tips

If you’re looking to test these APIs, it’s worth noting that you’ll need access to a physical iOS device running iOS 8 at a minimum. HomeKit doesn’t currently work within the iOS Simulator, and the exception thrown doesn’t hint at this. Because you’re running on a device, you’ll need to make sure you’ve set the entitlements for the project to allow for HomeKit. You’ll probably also want to grab a copy of Apple’s Hardware IO Tools for Xcode, which let you simulate HomeKit-enabled accessories for testing your app. You can fetch this from the Apple Developer Center if you’re an existing member.
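
In a Xamarin.iOS project, setting the entitlement means adding the HomeKit key to Entitlements.plist. The raw plist entry looks like this (key name as documented by Apple):

<key>com.apple.developer.homekit</key>
<true/>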

Creating a Home

To create a Home, we must first create an instance of the Home Manager.

var homeManager = new HomeKit.HMHomeManager();

Once we’ve done this, we can go ahead and add a Home to the homeManager object.

homeManager.AddHome("Guildford", (HomeKit.HMHome home, NSError error) =>
{
    if (error != null)
    {
        // Adding the home failed. Check the error object for why!           
    }
    else
    {
        // Successfully added home!
    }
});

All Homes within your HomeKit configuration must have a unique name, so if we have two homes in the same city, it might be worth finding another naming convention. Homes must be uniquely named because we will be able to interact with them using Siri once Apple fully enables HomeKit (hopefully later this year). For example, we’ll be able to say, “Hey Siri, turn my lights off in Guildford,” and like magic, all of the lights in your Home in Guildford will be switched off.

Once you’ve added a Home, the DidUpdateHomes event will be raised. This allows other apps to ensure they’ve processed any new Homes that have been added to the database. We can subscribe to the event with the following API.

homeManager.DidUpdateHomes += (sender, args) =>
{
    foreach (var home in homeManager.Homes)
    {
        var alert = new UIAlertView("Home...", home.Name, null, "OK");
        alert.Show(); // without Show(), the alert is created but never displayed
    }
};

Creating a Room

A Home also contains Rooms, each of which has a list of Accessories that are unique to that particular Room. Much like the Home, a Room can notify you about any changes and must also be uniquely named. This again allows you to interact with the Room using Siri. The API for creating a Room is almost identical to creating a Home.

home.AddRoom("Kitchen", (HMRoom room, NSError error) =>
{     
	if (error != null)     
	{         
	    //unable to add room. Check error for why     
	}     
	else     
	{         
	    //Success     
	}
});

Accessories

Accessories are where HomeKit starts to become a little more interesting. Accessories correspond to physical devices and must be assigned to a Room. They have a device state which allows you to query them. For example, you can query the intensity of a light fixture or the temperature of a thermostat. As you’ve probably already guessed, Accessories must be uniquely named, but this time only within the Home where they reside. Accessories will also notify you of changes to the state so you don’t have to constantly query them to ensure your app is up to date; one common event that you can be notified of is when the device is reachable.

accessory.DidUpdateReachability += (o, eventArgs) =>
{
    if (accessory.Reachable)
    {
        // We can communicate with the accessory.
    }
    else
    {
        // The accessory is out of range, turned off, etc.
    }
};

A few of the more interesting aspects of Accessories are Services and Characteristics. A Service represents a specific piece of device functionality. For instance, Apple gives the example that a garage door accessory may have a light and a switch Service. Users wouldn’t ever create Services or Characteristics as these are supplied by the accessory manufacturer, but it’s your job as a developer to make sure they can interact with the Services.
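
While you won’t create Services or Characteristics yourself, your app will enumerate and present them. Here is a minimal sketch of walking an accessory’s services; the property names follow the Xamarin.iOS HomeKit binding and should be treated as assumptions:

foreach (var service in accessory.Services)
{
    Console.WriteLine("Service: {0} ({1})", service.Name, service.ServiceType);
    foreach (var characteristic in service.Characteristics)
    {
        // Each characteristic models a single readable and/or writable
        // property of the accessory, such as brightness or target temperature.
        Console.WriteLine("  Characteristic: {0}", characteristic.CharacteristicType);
    }
}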

Action Sets and Triggers

Actions are by far my favorite feature of HomeKit. Actions and triggers allow you to control multiple Accessories at once. For example, when I go to bed I like to turn the lights off and turn my fan on. I can program this action with HomeKit to set the state of the Accessories and then use triggers to call the action. I personally have an iBeacon stuck to the underside of my nightstand which could detect my proximity and then call my action set for sleeping. As with almost every aspect of HomeKit, each action set has a unique name within the Home that can be recognized by Siri.
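
Creating an action set follows the same naming-plus-callback pattern as Homes and Rooms. A minimal sketch, assuming HMHome exposes AddActionSet with the same callback shape as AddRoom above:

home.AddActionSet("Movie Night", (HMActionSet actionSet, NSError error) =>
{
    if (error != null)
    {
        // Unable to add the action set. Check the error object for why!
    }
    else
    {
        // Success! Actions that set accessory states can now be added
        // to the set, and a trigger can execute it later.
    }
});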

Conclusion

I’m extremely excited about the prospect of HomeKit evolving into the go-to solution for home automation. With HomeKit-enabled accessories finally coming to market, there’s never been a better time to create an iOS app that utilizes Apple’s home automation APIs.

To start integrating HomeKit into your apps today, check out our HomeKitIntro sample, which will give you everything you need to build amazing home automation apps with HomeKit.

The post Get Started with HomeKit appeared first on Xamarin Blog.

IL2CPP Internals – Debugging tips for generated code

This is the third blog post in the IL2CPP Internals series. In this post, we will explore some tips which make debugging C++ code generated by IL2CPP a little bit easier. We will see how to set breakpoints, view the content of strings and user defined types and determine where exceptions occur.

As we get into this, consider that we are debugging generated C++ code created from .NET IL code. So debugging it will likely not be the most pleasant experience. However, with a few of these tips, it is possible to gain meaningful insight into how the code for a Unity project executes on the actual target device (we’ll talk a little bit about debugging managed code at the end of the post).

Also, be prepared for the generated code in your project to differ from this code. With each new version of Unity, we are looking for ways to make the generated code better, faster and smaller.

The setup

For this post, I’m using Unity 5.0.1p3 on OS X. I’ll use the same example project as in the post about generated code, but this time I’ll build for the iOS target using the IL2CPP scripting backend. As I did in the previous post, I’ll build with the “Development Player” option selected, so that il2cpp.exe will generate C++ code with type and method names based on the names in the IL code.

After Unity is finished generating the Xcode project, I can open it in Xcode (I have version 6.3.1, but any recent version should work), choose my target device (an iPad Mini 3, but any iOS device should work) and build the project in Xcode.

Setting breakpoints

Before running the project, I’ll first set a breakpoint at the top of the Start method in the HelloWorld class. As we saw in the previous post, the name of this method in the generated C++ code is HelloWorld_Start_m3. We can use Cmd+Shift+O and start typing the name of this method to find it in Xcode, then set a breakpoint in it.

We can also choose Debug > Breakpoints > Create Symbolic Breakpoint in Xcode, and set it to break at this method.
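
The same breakpoint can also be set by name at the lldb prompt, using standard lldb syntax:

(lldb) breakpoint set -n HelloWorld_Start_m3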

Now when I run the Xcode project, I immediately see it break at the start of the method.

We can set breakpoints on other methods in the generated code like this if we know the name of the method. We can also set breakpoints in Xcode at a specific line in one of the generated code files. In fact, all of the generated files are part of the Xcode project. You will find them in the Project Navigator in the Classes/Native directory.

Viewing strings

There are two ways to view the representation of an IL2CPP string in Xcode. We can view the memory of a string directly, or we can call one of the string utilities in libil2cpp to convert the string to a std::string, which Xcode can display. Let’s look at the value of the string named _stringLiteral1 (spoiler alert: its contents are “Hello, IL2CPP!”).

With Ctags built for the generated code (or using Cmd+Ctrl+J in Xcode), we can jump to the definition of _stringLiteral1 and see that its type is Il2CppString_14:

struct Il2CppString_14
{
  Il2CppDataSegmentString header;
  int32_t length;
  uint16_t chars[15];
};

In fact, all strings in IL2CPP are represented like this. You can find the definition of Il2CppString in the object-internals.h header file. These strings include the standard header part of any managed type in IL2CPP, Il2CppObject (which is accessed via the Il2CppDataSegmentString typedef), followed by a four-byte length, then an array of two-byte characters. Strings defined at compile time, like _stringLiteral1, end up with a fixed-length chars array, whereas strings created at runtime have an allocated array. The characters in the string are encoded as UTF-16.
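
Since the struct definition gives us the field layout, a quick sanity check is to print the length field directly at the lldb prompt, which should report 14 for this string:

(lldb) p _stringLiteral1.length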

If we add _stringLiteral1 to the watch window in Xcode, we can select the View Memory of “_stringLiteral1” option to see the layout of the string in memory.

Then in the memory viewer, we can inspect the raw bytes. The header member of the string is 16 bytes, so after we skip past that, we can see that the four bytes for the size have a value of 0x000E (14). The next two bytes after the length are the first character of the string, 0x0048 (‘H’). Since each character is two bytes wide, but in this string all of the characters fit in only one byte, Xcode displays them on the right with dots in between each character. Still, the content of the string is clearly visible. This method of viewing strings does work, but it is a bit difficult for more complex strings.

We can also view the content of a string from the lldb prompt in Xcode. The utils/StringUtils.h header gives us the interface for some string utilities in libil2cpp that we can use. Specifically, let’s call the Utf16ToUtf8 method from the lldb prompt. Its interface looks like this:

static std::string Utf16ToUtf8 (const uint16_t* utf16String);

We can pass the chars member of the C++ structure to this method, and it will return a UTF-8 encoded std::string. Then, at the lldb prompt, if we use the p command, we can print the content of the string.

(lldb) p il2cpp::utils::StringUtils::Utf16ToUtf8(_stringLiteral1.chars)
(std::__1::string) $1 = "Hello, IL2CPP!"

Viewing user defined types

We can also view the contents of a user defined type. In the simple script code in this project, we have created a C# type named Important with a field named InstanceIdentifier. If I set a breakpoint just after we create the second instance of the Important type in the script, I can see that the generated code has set InstanceIdentifier to a value of 1, as expected.

So viewing the contents of user defined types in generated code is done the same way as you normally would in C++ code in Xcode.

Breaking on exceptions in generated code

Often I find myself debugging generated code to try to track down the cause of a bug. In many cases these bugs are manifested as managed exceptions. As we discussed in the last post, IL2CPP uses C++ exceptions to implement managed exceptions, so we can break when a managed exception occurs in Xcode in a few ways.

The easiest way to break when a managed exception is thrown is to set a breakpoint on the il2cpp_codegen_raise_exception function, which is used by il2cpp.exe any place where a managed exception is explicitly thrown.
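
You can set this breakpoint with the symbolic breakpoint dialog as before, or by name at the lldb prompt using standard lldb syntax:

(lldb) breakpoint set -n il2cpp_codegen_raise_exception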

If I then let the project run, Xcode will break when the code in Start throws an InvalidOperationException exception. This is a place where viewing string content can be very useful. If I dig into the members of the ex argument, I can see that it has a ___message_2 member, which is a string representing the message of the exception.

With a little bit of fiddling, we can print the value of this string and see what the problem is:

(lldb) p il2cpp::utils::StringUtils::Utf16ToUtf8(&ex->___message_2->___start_char_1)
(std::__1::string) $88 = "Don't panic"

Note that the string here has the same layout as above, but the names of the generated fields are slightly different. The chars field is named ___start_char_1 and its type is uint16_t, not uint16_t[]. It is still the first character of an array though, so we can pass its address to the conversion function, and we find that the message in this exception is rather comforting.

But not all managed exceptions are explicitly thrown by generated code. The libil2cpp runtime code will throw managed exceptions in some cases, and it does not call il2cpp_codegen_raise_exception to do so. How can we catch these exceptions?

If we use Debug > Breakpoints > Create Exception Breakpoint in Xcode, then edit the breakpoint, we can choose C++ exceptions and break when an exception of type Il2CppExceptionWrapper is thrown. Since this C++ type is used to wrap all managed exceptions, it will allow us to catch all managed exceptions.

Let’s prove this works by adding the following two lines of code to the top of the Start method in our script:

Important boom = null;
Debug.Log(boom.InstanceIdentifier);

The second line here will cause a NullReferenceException to be thrown. If we run this code in Xcode with the exception breakpoint set, we’ll see that Xcode will indeed break when the exception is thrown. However, the breakpoint is in code in libil2cpp, so all we see is assembly code. If we take a look at the call stack, we can see that we need to move up a few frames to the NullCheck method, which is injected by il2cpp.exe into the generated code.

From there, we can move back up one more frame, and see that our instance of the Important type does indeed have a value of NULL.

Conclusion

After discussing a few tips for debugging generated code, I hope that you have a better understanding about how to track down possible problems using the C++ code generated by IL2CPP. I encourage you to investigate the layout of other types used by IL2CPP to learn more about how to debug the generated code.

Where is the IL2CPP managed code debugger though? Shouldn’t we be able to debug managed code running via the IL2CPP scripting backend on a device? In fact, this is possible. We have an internal, alpha-quality managed code debugger for IL2CPP now. It’s not ready for release yet, but it is on our roadmap, so stay tuned.

The next post in this series will investigate the different ways the IL2CPP scripting backend implements various types of method invocations present in managed code. We will look at the runtime cost of each type of method invocation.

May 19

A report from our Unite conferences in Asia

For those that know about Unite conferences, we thought we would share the marathon tour of the Asia Unites done this April. This was a 5-city tour consisting of Tokyo, Seoul, and Beijing, then splitting up between Taipei and Bangkok. From the R&D developer perspective, we get valuable time meeting up with our Asian colleagues, along with great interactions and learnings from our users in Asia. So, here’s a photo gallery of the trip and the conferences as we saw them.

Preparations

Leading up to the beginning of the marathon, a number of us headed to Tokyo to acclimate and prepare for the talks. In addition, the User Experience team made studio visits to gather usability information across our customers in Tokyo.

Unity devs from Copenhagen and evangelists prepping the keynotes at our Tokyo office. I've learned that Japan has the cutest pink cranes with hearts in the windows.

Unite Tokyo

The Unite Tokyo show was a great success, with a keynote featuring Palmer Luckey of Oculus and Ryan Payton of Camouflaj. Along with an Oculus demo and a Republique level featuring Unity-chan, we also announced our 3DS support.

We also got to see our now anime-stylized Dr. Charles Francis. He cuts quite the dashing figure.

  • The offered sticker set in Tokyo
  • Shinobu, Hiroki, David, Palmer, and Ryan
  • Watching the Unity videos
  • Unity-chan was added to a Republique level for the keynote
  • Various Unite Tokyo decor
  • Unity-chan!
  • The crowds waiting to get into talks
  • Hiroki and Alex giving the roadmap session, emceeing the questions
  • Rene and Kim answering questions post talk
  • Celebration of the 3DS announcement and wrap up of Unite

The User Experience team supplemented their session “Getting to Know You!” with card sorting exercises and user interviews.

  • The UX team had a crowd post-talk to be interviewed
  • Card sorting was a very popular UX exercise
  • Card sorting in action
  • The UX team even received some Unity-chan fan art (it’s a whole booklet)

Unite Seoul

Seoul impressed us with an amazing venue and even more attendees (>2000). We had our second pass on our talks, and another crowd of excited and motivated developers. We’ve never seen folk more excited to see David Helgason, and the autograph signing photo below shows it!

  • David drawing some Pro license winner
  • Banner across the lobby
  • Unite Banners lining the hall in Seoul
  • Tim Cooper on the schedule
  • Rene Damm giving a talk regarding performance
  • David Helgason is popular for autographs in Seoul
  • Of course, we had some Korean BBQ
  • The whole Unite Seoul crew

Unite Beijing

In China, Unite Beijing impressed with sheer volume: there were more than 5000 attendees at the keynote. During the keynote, Taiwanese director and new media artist Hsin-Chien Huang showcased his masterpiece “The Inheritance”, performing his project for the first time in Mainland China. It offered an experience of the collision and mix of new media art and Unity.

As for the tech talks, they were packed, as you can see from the picture of the “Fast UI Best Practices” talk below. Similarly, Jesper’s talk on Global Illumination had a rapt audience.

  • The Keynote hall and the backdrop for the keynote. Tim gives you a sense of size.
  • Keynote in progress
  • Elements of “The Inheritance” #MadeWithUnity
  • The booth area
  • Alex and Jesper managed to catch one of the pair of Dr. Charles Francis walking the floor
  • Roadmap talk. Devs got to be up front and center.
  • UI Talk was packed with people standing in aisles
  • Waiting for the team photo
  • Our evangelists Carl and Kelvin lead the charge
  • The full China staff and visiting Unity folk

Unite Taipei

After Beijing, the dev team split up, with half attending Taipei and half attending Bangkok. The Shanghai office, with a number of folk from Taiwan, carried the show through. The vibe from the attendees was great, with lots of advanced discussions and questions.

  • John Goodale giving the keynote
  • Kelvin during the keynote
  • Ryan Payton talking about Republique Remastered
  • Prepping a hot air balloon offering to the gods

Unite Bangkok

The Unity crew from Singapore put together a great first Unite for Bangkok. The training day preceding the event was well attended and a success. Overall, Bangkok drew a younger, more novice crowd compared to all the others, but it really revealed itself to be an emerging area with lots of future potential.

  • Carl Callewaert with a well attended training day
  • Evan Spytma giving our Bangkok keynote
  • Vijay during the 2D talk
  • The Singapore team set up a great starting Unite in Bangkok

Big thanks to all the volunteers, partners, organizers and attendees!

We look forward to seeing you at the next Unite!

Xamarins on Film: New Video Resources

The Xamarin team is popping up everywhere; from conferences and user groups to Xamarin Dev Days, odds are high that you can find a member of our team at an event near you. If, however, we haven’t made it to your neck of the woods, observing a Xamarin on film can be just as fascinating and educational. For your viewing pleasure, and to teach you about a wide variety of mobile C# topics, we present footage from some recent sightings below.

Building Multi-Device Apps with Xamarin and Office 365 APIs

Have you been curious about how to integrate your Xamarin apps with Azure Active Directory and utilize the brand new Office 365 APIs? Look no further than James Montemagno’s session at this year’s Microsoft Build conference on how to integrate all of these services from a shared C# business logic backend.

Cross-Platform App Development with .NET, C#, and Xamarin

Xamarin Developer Evangelist Mike James recently spoke at the Technical Summit in Berlin, providing a complete overview of how to build native cross-platform apps with C# and Xamarin.

Tendulkar Explains

If you’re just getting started, you can learn the basics of Xamarin and mobile app development one step at a time by following along with Xamarin Developer Evangelist Mayur Tendulkar in his new, ongoing series, tendulkar-uvāca (Tendulkar Explains). The first episode, below, covers how to set up your development environment.

Developing Cross-Platform 2D Games in C# with CocosSharp

If you haven’t been following James Montemagno’s appearances on Visual Studio Toolbox, then you’re in for a treat! Officially setting the record for most appearances, his latest visit takes a look at cross-platform 2D games with CocosSharp.

Real-Time Monitoring of Mobile Apps with Xamarin Insights

In his 8th appearance on Visual Studio Toolbox, James joins Robert Green to discuss monitoring your apps in real time with Xamarin Insights.

Live Events

If you’d like to catch a Xamarin talk in person, check out our upcoming events here.

The post Xamarins on Film: New Video Resources appeared first on Xamarin Blog.

May 18

Join Xamarin at Twilio Signal

Join Xamarin at Twilio Signal, a developer conference in San Francisco, CA on May 19-20, 2015 covering communications, composability, iOS and Android, WebRTC, and much more. Key members of the Xamarin team will be available to answer your questions, discuss your apps and projects, and show you what’s new across our products.

Twilio Signal Conference Logo

Xamarin Developer Evangelist James Montemagno will also be presenting C# and Twilio-powered iOS and Android experiences on Wednesday, May 20 at 1:45 pm, covering how to leverage the Twilio Mobile Client native SDK for iOS and Android from C# to create a rich communication experience in your mobile apps.

We’ll be in the Community Hall, so stop by with your questions or just to say hello. If you’re not already registered, limited tickets remain, and you can use promo code “Xamaringuest” for 20% off registration. We look forward to seeing you there!

The post Join Xamarin at Twilio Signal appeared first on Xamarin Blog.

Traveling the world with the Asset Store

Terry Drever is traveling the world, and he funds his here-today-gone-in-a-few-months lifestyle exclusively by selling a portfolio of assets on the Asset Store. It gives him the freedom to work from anywhere with an Internet connection, to make enough money to cover his living costs and travel expenses, and the time to work on game ideas.

When I called Terry to interview him, the first thing he did was fetch a jumper. It was snowing outside, and this was a problem, because Terry had planned and packed for South-East-Asian sunshine. The trip to Sapporo, Japan, where he’s staying in an apartment he found through Airbnb, was something of an impulse decision.

His three month stop off in Japan is part of a tour around Asia that’s already taken in Hong Kong, Mainland China and Thailand. When his visa runs out, he’ll move on. Next stop Korea, and after that… wherever the fancy takes him. Terry, who originally comes from a remote Scottish island, isn’t planning to go home anytime soon.

Terry’s been working in the game industry for seven years, and he has a strong background in and passion for game programming. Two years ago, he started making tools and publishing them on the Asset Store.

“I had a large amount of experience at that point, and I knew what games companies wanted and what they needed.”

What Terry does is fill gaps. He’s always prototyping and trying out game ideas, and he uses Asset Store tools to build them. When the tools available on the Asset Store don’t deliver the functionality he needs to make a game, he makes a tool himself and publishes it as an extension on the Asset Store.

Over the course of the interview, it becomes apparent that Terry is a bit of a perfectionist. He’s worked on a number of game prototypes but hasn’t published them because they’re just not quite good enough.

Often, it’s the artwork that’s a problem. Though he has a stable prototype with game mechanics he’s happy with, the look and feel of the game often don’t meet his expectations.

Currently, Terry is talking to a number of Asset Store publishers to source artwork for his online multiplayer deathmatch game. It’s a game he’s always wanted to make, and one that will generate another Asset Store extension which he’s planning to publish in a couple of months.

His popular uSequencer cutscene tool resulted from work he did on another as yet unpublished title: A rhythm game for mobile inspired by Japanese games he used to play, in which the player’s action and resultant reward are tied to a sequence of game events. uSequencer more or less provides the game’s core architecture.

In the coming weeks, Terry will be visiting a game studio in Tokyo to see how they use uSequencer. He finds it fascinating discovering how the tools he’s made are used in practice, and all those insights are useful when it comes to maintaining and developing his tools.

Indeed, a new version of uSequencer is in the works. Terry’s considering a name change and is working to present his asset more professionally on the Asset Store using services from fiverr.com, because, yet again… he’s not satisfied with the visuals.

Terry’s recipe for publisher success:

  • Tie asset development to game development
  • Develop to fill a need, and find gaps in the market
  • Think carefully about how you name your product
  • Make sure your asset is presented in a professional manner

Best of luck Terry!

May 15

VR pioneers Owlchemy Labs

Owlchemy Labs work at the frontier of VR development. Which, you could say, puts them at the frontier of the frontier of game development. And, they like to boldly go. Studio CTO Devin Reimer enthuses about working with VR as a once-in-a-lifetime chance to shape a medium that’s going to be hugely influential.

Owlchemy Labs have been using Unity since formation in 2010, and to date they’ve made 10 different games. One of these is a WebGL version of alphabetical-list-sure-fire-winner AaaaaAAaaaAAAaaAAAAaAAAAA!!! for the Awesome. Developed in cooperation with Dejobaan Games, it was the first commercially available WebGL title made with Unity.

Adapting AaaaaAAaaaAAAaaAAAAaAAAAA!!! for the Awesome for another new platform (Oculus Rift) and releasing it to Steam opened a further door for the company. In November 2014, Owlchemy Labs were approached by Valve to develop a game that would show off the capabilities of what was then an unannounced platform: SteamVR.

A mountain of NDAs later, and Devin and Studio Founder Alex Schwartz were hard at work on what became Job Simulator; a game which stole lots of hearts when showcased on the HTC Vive at GDC.

“I never expected a video game demo in which I grabbed a tomato (and threw it at a robot) to awe me so deeply. I … wanna play Job Simulator forever” IGN

Playtesting is key

Developing a playable prototype from scratch in a three-month timeframe without the luxury of a polished and tested pipeline to SteamVR, meant that the Owlchemy team had to iterate very fast to get Job Simulator ready on time. Indeed, both Alex and Devin make a point of stressing that rapid and early playtesting are key to VR development generally.

“Oculus has a best practice guide for making VR content that they’re constantly updating and changing. No-one really knows at this stage what will work in VR without playtesting. You simply have to experiment and fail quickly. If, for example, the player in your game is a 50-storey Godzilla wandering around Manhattan, it’s best to prototype that mechanic and get a feeling for what playing it is actually like before you push forward to develop your game,” says Devin.

He recalls how, when developing Job Simulator, he worked alongside a colleague adjusting the size of the game’s microwave. With someone wearing the Oculus Rift headset calling out with feedback, Devin could scale it in realtime from the Unity editor and know that it looked and felt right inside the device: “You just don’t get a proper sense of the size of an object as the user experiences it from a conventional 2D monitor.”

Optimize, optimize, optimize

With up to 5 million pixels being rendered 90 times per second, both Devin and Alex are keen to stress the importance of optimization. Alex likens it to making games for the PS2-era, and generally the studio’s long history of developing games for mobile has prepped them for the unique challenges of VR.

“Understanding how to keep your draw calls to a minimum and your shading simple are really important when you’re developing for VR,” says Devin.

VR for the future

Both Devin and Alex see VR as having the potential to redefine not just gaming, but industries, from remote surgery to architectural visualization and beyond.

Indeed, having seen Devin’s grandmother (whose gaming experience is limited to say the least) pick up the HTC Vive headset and immediately and seamlessly interact within the world of Job Simulator, they’re confident that VR headsets will soon be a standard item of consumer electronics.

“I get asked the question, why VR? Why take the risk on such an unproven platform? And my feeling is that if we spent our time developing another me-too mobile title, then we’d be putting the studio at greater risk. By being amongst the first movers on a new platform that we truly believe in, we’re securing the future of our business. We’re in it for the long game with VR,” says Alex.

Best of luck to the Owlchemy Labs team!

Community Contributions You Won’t Want to Miss

Xamarin developers not only love building amazing mobile apps in C#, they also love helping the developer community at large. Whether through building great open-source libraries, components, and plugins or sharing their experience in forums, blog posts, and podcasts, our community consistently steps up to make development with Xamarin a pleasure. The links below will take you to some of our favorite content from our community over the past few weeks.

Podcasts

Great Community Posts

Tools & Frameworks

Xamarin.Forms

Thanks to these developers for sharing their knowledge and insight with the rest of the Xamarin community! If you have an article or blog post about developing with Xamarin that you’d like to share, please let us know by tweeting at @XamarinHQ.

The post Community Contributions You Won’t Want to Miss appeared first on Xamarin Blog.

May 14

Holographic development with Unity

In January of this year, Microsoft made public their most innovative and disruptive product in quite some time: HoloLens, an augmented reality headset that combines breakthrough hardware, input, and machine learning so that you can bring mixed reality experiences to life using the real world as your canvas. These are not just transparent screens placed in the center of a room with an image projected on them, but truly immersive holograms that enable you to interact with the real world. This is a truly innovative product with a rich set of APIs that enable you to develop Windows Holographic applications that blur the line between the real world and the virtual world.

As impressive as this may sound, Microsoft has been very quiet about this technology, only allowing a few videos and bits of information to be released. But at the most recent Microsoft developer conference, Build 2015, they allowed a select group of people, around 180 in total including me, to try out this new technology.

Crafting 5 Star iOS Experiences With Animations

When I think about the iPhone apps I use the most, they all have one thing in common: they use custom animations to enhance the user experience. Custom animations provide an immersive experience, which can add a whole new element of enjoyment to your app.

iOS is jam-packed with beautiful and subtle animations, visible from the moment you unlock the phone. Subtlety is the key to delivering the best experience, as Apple is very clear that developers should avoid animations that seem excessive or gratuitous.

Original keynote design

With over 1.2 million apps on the iOS app store alone, if you want your app to get noticed, it needs to stand out from the crowd. The easiest way to do this is with a unique user interface that goes beyond the generic built-in controls and animations.

In this blog, I’m going to show you how you can easily prototype and add custom animations to your iOS apps. Before we get started on the technical details, it’s worth discussing a tip used by some of the best mobile app designers.

Prototype in Keynote or PowerPoint

It’s no secret that to create something that appears simple often requires a large amount of iteration to refine it to its simplest form. This is definitely the case when it comes to UI design and animation design. Many UX designers will use tools like Keynote or PowerPoint, which include built-in animations, for prototyping. During this part of the design process, you are free from thinking about the complexities of the implementation and can focus on the desired result. It’s a step in the design process I highly recommend to everyone who is creating custom animations and transitions. Below is an animation movie exported from Keynote, which you can use to compare to the final animation.

keynote designed animation

Implementation

Once you’ve designed your animations, you’ll need to start implementing them. Rest assured, though, as iOS has a fantastic animation API that makes the experience very straightforward. You’ll most likely be reusing many of the animations you create across your app, which Apple actually recommends in the iOS Human Interface Guidelines.

The implementation for your views relies on Apple’s Core Animation framework. This framework consists of a number of extremely powerful APIs that cater to different requirements. For example, you can create block-based, keyframe, explicit, and implicit animations with Core Animation. Another option is to animate your views using UIKit, which is an approach I use a lot in Dutch Spelling.

For example, here I change the position of a button as well as its visibility over 0.2 seconds.

// Move the button asynchronously and fade it out
UIView.AnimateAsync(0.2, () =>
{
    button.Frame = CalculatePosition(_buttonsUsedInAnswer.IndexOf(button));
    button.Alpha = 0.0f;
});

The snippet above deals with the button animation; below is a video of the animation running on the device.

final animation

Another example from Dutch Spelling is shrinking views over time. I use this to draw attention to other visual elements within the UI.

// Shrink WordView asynchronously and then set the font.
public void ShrinkWord()
{
    // Animating back to the identity transform returns the previously
    // scaled-up view to its normal size.
    var transform = CGAffineTransform.MakeIdentity();
    transform.Scale(1f, 1f);
    UIView.AnimateAsync(0.6, () =>
    {
        _word.Transform = transform;
        _title.TextColor = "3C3C3C".ToUIColor();
    });
    _word.Font = UIFont.FromName("Raleway-Regular", 32);
}

Further Reading

You can find more examples to help you get started building your own 5-star app animations in our Core Animation documentation here.

The post Crafting 5 Star iOS Experiences With Animations appeared first on Xamarin Blog.

May 13

RSVP for Xamarin’s Google I/O 2015 Party

Join the Xamarin team on May 27th at Southside Spirit House from 7-10pm to kick off Google I/O!

Google I/O

Spend the night before Google I/O with the Xamarin Team and fellow mobile developers and check out the Xamarin Test Cloud wall in person to see how easy mobile testing can be.

Xamarin Test Cloud Wall

When: Wednesday, May 27th, 7pm–10pm
Where: Southside Spirit House, 575 Howard St, San Francisco, CA, 94105

RSVP

In the Bay Area but not attending Google I/O? Stop by anyway! You and your friends are welcome. Make the most of your time at Google I/O and schedule dedicated time with the Xamarin team while you’re in town for the conference. We’d love to meet you, learn about your apps and discuss ways we can help.

The post RSVP for Xamarin’s Google I/O 2015 Party appeared first on Xamarin Blog.

IL2CPP Internals: A Tour of Generated Code

This is the second blog post in the IL2CPP Internals series. In this post, we will investigate the C++ code generated by il2cpp.exe. Along the way, we will see how managed types are represented in native code, take a look at runtime checks used to support the .NET virtual machine, see how loops are generated and more!

We will get into some very version-specific code that is certainly going to change in later versions of Unity. Still, the concepts will remain the same.

Example project

I’ll use the latest version of Unity available, 5.0.1p1, for this example. As in the first post in this series, I’ll start with an empty project and add one script file. This time, it has the following contents:

using System;
using UnityEngine;

public class HelloWorld : MonoBehaviour {
  private class Important {
    public static int ClassIdentifier = 42;
    public int InstanceIdentifier;
  }

  void Start () {
    Debug.Log("Hello, IL2CPP!");

    Debug.LogFormat("Static field: {0}", Important.ClassIdentifier);

    var importantData = new [] { 
      new Important { InstanceIdentifier = 0 },
      new Important { InstanceIdentifier = 1 } };

    Debug.LogFormat("First value: {0}", importantData[0].InstanceIdentifier);
    Debug.LogFormat("Second value: {0}", importantData[1].InstanceIdentifier);
    try {
      throw new InvalidOperationException("Don't panic");
    }
    catch (InvalidOperationException e) {
      Debug.Log(e.Message);
    }

    for (var i = 0; i < 3; ++i) {
      Debug.LogFormat("Loop iteration: {0}", i);
    }
  }
}

I’ll build this project for WebGL, running the Unity editor on Windows. I’ve selected the Development Player option in the Build Settings, so that we can get relatively nice names in the generated C++ code. I’ve also set the Enable Exceptions option in the WebGL Player Settings to Full.

Overview of the generated code

After the WebGL build is complete, the generated C++ code is available in the Temp\StagingArea\Data\il2cppOutput directory in my project directory. Once the editor is closed, this directory will be deleted. As long as the editor is open though, this directory will remain unchanged, so we can inspect it.

The il2cpp.exe utility generated a number of files, even for this small project. I see 4,625 header files and 89 C++ source code files. To get a handle on all of this code, I like to use a text editor which works with Exuberant Ctags. Ctags will usually generate a tags file quickly for this code, which makes it easier to navigate.

Initially, you can see that many of the generated C++ files are not from the simple script code, but instead are the converted version of the code in the standard libraries, like mscorlib.dll. As mentioned in the first post in this series, the IL2CPP scripting backend uses the same standard library code as the Mono scripting backend. Note that we convert the code in mscorlib.dll and other standard library assemblies each time il2cpp.exe runs. This might seem unnecessary, since that code does not change.

However, the IL2CPP scripting backend always uses byte code stripping to decrease the executable size. So even small changes in the script code can cause many different parts of the standard library code to be used or not, depending on the situation. Therefore, we need to convert the mscorlib.dll assembly each time. We are researching better ways to do incremental builds, but we don’t have any good solutions yet.

How managed code maps to generated C++ code

For each type in the managed code, il2cpp.exe will generate one header file for the C++ definition of the type and another header file for the method declarations for the type. For example, let’s look at the contents of the converted UnityEngine.Vector3 type. The header file for the type is named UnityEngine_UnityEngine_Vector3.h. The name is created based on the name of the assembly, UnityEngine.dll followed by the namespace and name of the type. The code looks like this:

// UnityEngine.Vector3
struct Vector3_t78 
{
  // System.Single UnityEngine.Vector3::x
  float ___x_1;
  // System.Single UnityEngine.Vector3::y
  float ___y_2;
  // System.Single UnityEngine.Vector3::z
  float ___z_3;
};

The il2cpp.exe utility has converted each of the three instance fields, and done a little bit of name mangling to avoid conflicts and reserved words. By using leading underscores, we are using some reserved names in C++, but so far we’ve not seen any conflicts with C++ standard library code.

The UnityEngine_UnityEngine_Vector3MethodDeclarations.h file contains the method declarations for all of the methods in Vector3. For example, Vector3 overrides the Object.ToString method:

// System.String UnityEngine.Vector3::ToString()
extern "C" String_t* Vector3_ToString_m2315 (Vector3_t78 * __this, MethodInfo* method) IL2CPP_METHOD_ATTR

Note the comment, which indicates the managed method this native declaration represents. I often find it useful to search the files in the output for the name of the managed method in this format, especially for methods with common names, like ToString.

Notice a few interesting things about all methods converted by il2cpp.exe:

  • These are not member functions in C++. All methods are free functions, where the first argument is the “this” pointer. For static functions in managed code, IL2CPP always passes a value of NULL for this first argument. By always declaring methods with the “this” pointer as the first argument, we simplify the method generation code in il2cpp.exe and we make invoking methods via other methods (like delegates) simpler for generated code.
  • Every method has an additional argument of type MethodInfo* which includes the metadata about the method that is used for things like virtual method invocation. The Mono scripting backend uses platform-specific trampolines to pass this metadata. For IL2CPP, we’ve decided to avoid the use of trampolines to aid in portability.
  • All methods are declared extern “C” so that il2cpp.exe can sometimes lie to the C++ compiler and treat all methods as if they had the same type.
  • Types are named with a “_t” suffix. Methods are named with a “_m” suffix. Naming conflicts are resolved by appending a unique number to each name. These numbers will change if anything in the user script code changes, so you cannot depend on them from build to build.

The first two points imply that every method has at least two parameters, the “this” pointer and the MethodInfo pointer. Do these extra parameters cause unnecessary overhead? While they clearly do add overhead, we haven’t seen anything so far which suggests that those extra arguments cause performance problems. Although it may seem that they would, profiling has shown that the difference in performance is not measurable.

We can jump to the definition of this ToString method using Ctags. It is in the Bulk_UnityEngine_0.cpp file. The code in that method definition doesn’t look too much like the C# code in the Vector3::ToString() method. However, if you use a tool like ILSpy to reflect the code for the Vector3::ToString() method, you’ll see that the generated C++ code looks very similar to the IL code.

Why doesn’t il2cpp.exe generate a separate C++ file for the method definitions for each type, as it does for the method declarations? This Bulk_UnityEngine_0.cpp file is pretty large, 20,481 lines actually! We found the C++ compilers we were using had trouble with a large number of source files. Compiling four thousand .cpp files took much longer than compiling the same source code in 80 .cpp files. So il2cpp.exe batches the methods definitions for types into groups and generates one C++ file per group.

Now jump back to the method declarations header file and notice this line near the top of the file:

#include "codegen/il2cpp-codegen.h"

The il2cpp-codegen.h file contains the interface which generated code uses to access the libil2cpp runtime services. We’ll discuss some ways that the runtime is used by generated code later.

Method prologues

Let’s take a look at the definition of the Vector3::ToString() method. Specifically, it has a common prologue that is emitted in all methods by il2cpp.exe.

StackTraceSentry _stackTraceSentry(&Vector3_ToString_m2315_MethodInfo);
static bool Vector3_ToString_m2315_init;
if (!Vector3_ToString_m2315_init)
{
  ObjectU5BU5D_t4_il2cpp_TypeInfo_var = il2cpp_codegen_class_from_type(&ObjectU5BU5D_t4_0_0_0);
  Vector3_ToString_m2315_init = true;
}

The first line of this prologue creates a local variable of type StackTraceSentry. This variable is used to track the managed call stack, so that IL2CPP can report it in calls like Environment.StackTrace. Code generation of this entry is actually optional, and is enabled in this case by the --enable-stacktrace option passed to il2cpp.exe (since I set the Enable Exceptions option in the WebGL Player Settings to Full). For small functions, we found that the overhead of this variable has a negative impact on performance. So for iOS and other platforms where we can use platform-specific stack trace information, we never emit this line into generated code. For WebGL, we don’t have platform-specific stack trace support, so this variable is necessary to allow managed code exceptions to work properly.

The second part of the prologue does lazy initialization of type metadata for any array or generic types used in the method body. So the name ObjectU5BU5D_t4 is the name of the type System.Object[]. This part of the prologue is only executed once and often does nothing if the type was already initialized elsewhere, so we have not seen any adverse performance implications from this generated code.

Is this code thread safe though? What if two threads call Vector3::ToString() at the same time? Actually, this code is not problematic, since all of the code in the libil2cpp runtime used for type initialization is safe to call from multiple threads. It is possible (maybe even likely) that il2cpp_codegen_class_from_type function will be called more than once, but the actual work it does will only occur once, on one thread. Method execution won’t continue until that initialization is complete. So this method prologue is thread safe.

Runtime checks

The next part of the method creates an object array, stores the value of the x field of Vector3 in a local, then boxes the local and adds it to the array at index zero. Here is the generated C++ code (with some annotations):

// Create a new single-dimension, zero-based object array
ObjectU5BU5D_t4* L_0 = ((ObjectU5BU5D_t4*)SZArrayNew(ObjectU5BU5D_t4_il2cpp_TypeInfo_var, 3));
// Store the Vector3::x field in a local
float L_1 = (__this->___x_1);
float L_2 = L_1;
// Box the float instance, since it is a value type.
Object_t * L_3 = Box(InitializedTypeInfo(&Single_t264_il2cpp_TypeInfo), &L_2);
// Here are three important runtime checks
NullCheck(L_0);
IL2CPP_ARRAY_BOUNDS_CHECK(L_0, 0);
ArrayElementTypeCheck (L_0, L_3);
// Store the boxed value in the array at index 0
*((Object_t **)(Object_t **)SZArrayLdElema(L_0, 0)) = (Object_t *)L_3;

The three runtime checks are not present in the IL code, but are instead injected by il2cpp.exe.

  • The NullCheck code will throw a NullReferenceException if the array reference is null.
  • The IL2CPP_ARRAY_BOUNDS_CHECK code will throw an IndexOutOfRangeException if the array index is not correct.
  • The ArrayElementTypeCheck code will throw an ArrayTypeMismatchException if the type of the element being added to the array is not correct.

These three runtime checks are all guarantees provided by the .NET virtual machine. Rather than injecting code, the Mono scripting backend uses a platform-specific signaling mechanism to handle these same runtime checks. For IL2CPP, we wanted to be more platform agnostic and support platforms like WebGL, where there is no platform-specific signaling mechanism, so il2cpp.exe injects these checks.

Do these runtime checks cause performance problems though? In most cases, we’ve not seen any adverse impact on performance and they provide the benefits and safety which are required by the .NET virtual machine. In a few specific cases though, we are seeing these checks lead to degraded performance, especially in tight loops. We’re working on a way now to allow managed code to be annotated to remove these runtime checks when il2cpp.exe generates C++ code. Stay tuned on this one.
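For context, the C# being compiled in this section is the body of Vector3::ToString(), which reads roughly like this (a hedged reconstruction based on what a decompiler such as ILSpy shows, not code from this post):

public override string ToString()
{
    // The generated C++ above covers the new object[3], the boxing of x,
    // and the store into index 0; y and z follow the same pattern.
    return string.Format("({0:F1}, {1:F1}, {2:F1})", new object[] { x, y, z });
}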

Static Fields

Now that we’ve seen how instance fields look (in the Vector3 type), let’s see how static fields are converted and accessed. Find the definition of the HelloWorld_Start_m3 method, which is in the Bulk_Assembly-CSharp_0.cpp file in my build. From there, jump to the Important_t1 type (in the AssemblyU2DCSharp_HelloWorld_Important.h file):

struct Important_t1  : public Object_t
{
  // System.Int32 HelloWorld/Important::InstanceIdentifier
  int32_t ___InstanceIdentifier_1;
};
struct Important_t1_StaticFields
{
  // System.Int32 HelloWorld/Important::ClassIdentifier
  int32_t ___ClassIdentifier_0;
};

Notice that il2cpp.exe has generated a separate C++ struct to hold the static field for this type, since the static field is shared between all instances of this type. So at runtime, there will be one instance of the Important_t1_StaticFields type created, and all of the instances of the Important_t1 type will share that instance of the static fields type. In generated code, the static field is accessed like this:

int32_t L_1 = (((Important_t1_StaticFields*)InitializedTypeInfo(&Important_t1_il2cpp_TypeInfo)->static_fields)->___ClassIdentifier_0);

The type metadata for Important_t1 holds a pointer to the single instance of the Important_t1_StaticFields type, and that instance is used to obtain the value of the static field.
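Based on the field comments in the generated structs, the original C# probably looked something like this (a hedged reconstruction, since the post doesn’t show the source):

public class HelloWorld
{
    public class Important
    {
        // Shared by all instances; ends up in Important_t1_StaticFields.
        public static int ClassIdentifier;

        // Per-instance; ends up directly in Important_t1.
        public int InstanceIdentifier;
    }
}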

Exceptions

Managed exceptions are converted by il2cpp.exe to C++ exceptions. We have chosen this path to again avoid platform-specific solutions. When il2cpp.exe needs to emit code to raise a managed exception, it calls the il2cpp_codegen_raise_exception function.

The code in our HelloWorld_Start_m3 method to throw and catch a managed exception looks like this:

try
{ // begin try (depth: 1)
  InvalidOperationException_t7 * L_17 = (InvalidOperationException_t7 *)il2cpp_codegen_object_new (InitializedTypeInfo(&InvalidOperationException_t7_il2cpp_TypeInfo));
  InvalidOperationException__ctor_m8(L_17, (String_t*) &_stringLiteral5, /*hidden argument*/&InvalidOperationException__ctor_m8_MethodInfo);
  il2cpp_codegen_raise_exception(L_17);
  // IL_0092: leave IL_00a8
  goto IL_00a8;
} // end try (depth: 1)
catch(Il2CppExceptionWrapper& e)
{
  __exception_local = (Exception_t8 *)e.ex;
  if(il2cpp_codegen_class_is_assignable_from (&InvalidOperationException_t7_il2cpp_TypeInfo, e.ex->object.klass))
  goto IL_0097;
  throw e;
}
IL_0097:
{ // begin catch(System.InvalidOperationException)
  V_1 = ((InvalidOperationException_t7 *)__exception_local);
  NullCheck(V_1);
  String_t* L_18 = (String_t*)VirtFuncInvoker0< String_t* >::Invoke(&Exception_get_Message_m9_MethodInfo, V_1);
  Debug_Log_m6(NULL /*static, unused*/, L_18, /*hidden argument*/&Debug_Log_m6_MethodInfo);
// IL_00a3: leave IL_00a8
  goto IL_00a8;
} // end catch (depth: 1)

All managed exceptions are wrapped in the C++ Il2CppExceptionWrapper type. When the generated code catches an exception of that type, it unpacks the C++ representation of the managed exception (which has type Exception_t8). In this case, we’re looking only for an InvalidOperationException, so if we don’t find an exception of that type, a copy of the C++ exception is thrown again. If we do find the correct type, the code jumps to the implementation of the catch handler and writes out the exception message.
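For comparison, the C# that produces this generated code is probably something like the following (a hedged reconstruction; the message string is a placeholder for whatever _stringLiteral5 actually contains):

using System;
using UnityEngine;

public class HelloWorld : MonoBehaviour
{
    void Start()
    {
        try
        {
            throw new InvalidOperationException("something went wrong");
        }
        catch (InvalidOperationException ex)
        {
            // Matches the Exception_get_Message / Debug_Log calls in the C++ above.
            Debug.Log(ex.Message);
        }
    }
}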

Goto!?!

This code brings up an interesting point: what are those labels and goto statements doing in there? Structured programming has no need for such constructs! However, IL does not have structured programming concepts like loops and if/then statements. Since it is lower-level, il2cpp.exe follows these lower-level concepts in its generated code.

For example, let’s look at the for loop in the HelloWorld_Start_m3 method:

IL_00a8:
{
  V_2 = 0;
  goto IL_00cc;
}
IL_00af:
{
  ObjectU5BU5D_t4* L_19 = ((ObjectU5BU5D_t4*)SZArrayNew(ObjectU5BU5D_t4_il2cpp_TypeInfo_var, 1));
  int32_t L_20 = V_2;
  Object_t * L_21 =
Box(InitializedTypeInfo(&Int32_t5_il2cpp_TypeInfo), &L_20);
  NullCheck(L_19);
  IL2CPP_ARRAY_BOUNDS_CHECK(L_19, 0);
  ArrayElementTypeCheck (L_19, L_21);
*((Object_t **)(Object_t **)SZArrayLdElema(L_19, 0)) = (Object_t *)L_21;
  Debug_LogFormat_m7(NULL /*static, unused*/, (String_t*) &_stringLiteral6, L_19, /*hidden argument*/&Debug_LogFormat_m7_MethodInfo);
  V_2 = ((int32_t)(V_2+1));
}
IL_00cc:
{
  if ((((int32_t)V_2) < ((int32_t)3)))
  {
    goto IL_00af;
  }
}
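For reference, the C# that produced this loop was probably something like this (a hedged reconstruction; the format string stands in for _stringLiteral6, which the post doesn’t show):

for (int i = 0; i < 3; i++)
{
    // The boxed loop index ends up in the object[] passed to Debug.LogFormat.
    Debug.LogFormat("Iteration {0}", i);
}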

Here the V_2 variable is the loop index. It starts off with a value of 0, then is incremented at the bottom of the loop in this line:

V_2 = ((int32_t)(V_2+1));

The ending condition in the loop is then checked here:

if ((((int32_t)V_2) < ((int32_t)3)))

As long as V_2 is less than 3, the goto statement jumps to the IL_00af label, which is the top of the loop body. You might be able to guess that il2cpp.exe is currently generating C++ code directly from IL, without using an intermediate abstract syntax tree representation. If you guessed this, you are correct. You may have also noticed in the Runtime checks section above that some of the generated code looks like this:

float L_1 = (__this->___x_1);
float L_2 = L_1;

Clearly, the L_2 variable is not necessary here. Most C++ compilers can optimize away this additional assignment, but we would like to avoid emitting it at all. We’re currently researching the possibility of using an AST to better understand the IL code and generate better C++ code for cases involving local variables and for loops, among others.

Conclusion

We’ve just scratched the surface of the C++ code generated by the IL2CPP scripting backend for a very simple project. If you haven’t done so already, I encourage you to dig into the generated code in your project. As you explore, keep in mind that the generated C++ code will look different in future versions of Unity, as we are constantly working to improve the build and runtime performance of the IL2CPP scripting backend.

By converting IL code to C++, we’ve been able to strike a nice balance between portable and performant code. We can have many of the nice developer-friendly features of managed code, while still getting the benefits of the quality machine code that a C++ compiler provides for various platforms.

In future posts, we’ll explore more generated code, including method calls, sharing of method implementations and wrappers for calls to native libraries. But next time we will debug some of the generated code for an iOS 64-bit build using Xcode.

May 12

A Scalable Introduction to Vector Drawables

Among the many novelties brought to Android in 5.0 Lollipop, vector drawables are one of my favorite additions. They manage to both solve one of the oldest Android pain points (the wide range of screen densities) and also pave the way for much better interactions in our apps.

Introduction to Vector Drawables

What exactly are vector drawables? As their name implies, vector drawables are based on vector graphics, as opposed to raster graphics.

Developers should already be familiar with raster graphics in the assortment of PNG, JPEG, and other image files that populate your Android apps.

Raster graphics describe (in some encoded form) the actual color value of each pixel of an image, whereas vector graphics contain the recipe, via a series of draw commands, to create the desired result.

To display that recipe on a screen, the system converts it back at run-time to the same pixel data that it would have gotten from a bitmap file through a process called rasterization.

With Android Lollipop, the recipes that make vector drawables are directly written in an XML format very much like their older cousins, shape drawables.

Both vector drawables and shape drawables share the same core benefit of being rendered on-demand by the system, and thus always at the right resolution. As such, unlike bitmap-based images, they don’t need extra variations based on screen densities.

Indeed, where the world was happily spread between LDPI, MDPI and HDPI a few years ago, today we just don’t know how many ‘x’ will be able to fit in front of ‘HDPI’ (at the time of this writing we have reached XXXHDPI).

Additionally, like any other drawable, vector drawables are natively understood by the Android toolchain, which makes their use as seamless as feasible (i.e. you can reference them in layouts, styles, acquire them through Resources.GetDrawable, etc.).

But let’s see an example of what a vector drawable looks like in all its XML glory:

<?xml version="1.0" encoding="utf-8" ?>
<vector xmlns:android="http://schemas.android.com/apk/res/android"
	android:height="96dp"
	android:width="96dp"
	android:viewportHeight="48"
	android:viewportWidth="48" >
	<group>
		<path
			android:fillColor="@color/black_primary"
			android:pathData="M12 36l17-12-17-12v24zm20-24v24h4V12h-4z" />
	</group>
</vector>

This example draws this well known media player action icon:

Skip Next
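Because vector drawables are ordinary drawable resources, loading one from C# works the same as for any bitmap. Here is a minimal Xamarin.Android sketch (the resource and view names are assumptions for illustration, not from this post):

using Android.App;
using Android.OS;
using Android.Widget;

[Activity(Label = "VectorDemo", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        SetContentView(Resource.Layout.Main);

        // Vector drawables resolve through the same Resources API as PNGs.
        var icon = Resources.GetDrawable(Resource.Drawable.ic_skip_next);
        FindViewById<ImageView>(Resource.Id.icon).SetImageDrawable(icon);
    }
}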

The format of the XML file is deliberately modeled after another existing vector graphics file format, whose path expression syntax it shares: SVG.

This means that you can fairly easily reuse any SVG you might have (or that your graphic editor can produce) by massaging it into an Android vector drawable.

Motion with Animated Vector Drawables

Because vector drawables describe a recipe of what you want displayed on the screen, it’s very easy to modify this recipe on the fly to provide a wide range of effects, including animated vector drawables (represented by the AnimatedVectorDrawable type).

If you look back at the XML source of the media player icon, you may have noticed that our path element is actually contained inside another element called a group.

For the sake of displaying the vector drawable, this is not a very interesting element. But if you look at the following list of valid XML attributes that can be set on a group element, you should see something emerging: rotation, scaleX, scaleY, translateX, translateY.

Indeed, you probably recognized that those are the same attributes we use to manipulate our View instances when animating them.

Animated vector drawables are actually more of a meta-type, bridging several other pieces together much like state-list drawables (and like the latter, they are also drawables themselves).

Animated vector drawables are declared using the <animated-vector/> element:

<animated-vector xmlns:android="http://schemas.android.com/apk/res/android"
	android:drawable="@drawable/bluetooth_loading_vector" >
	<target
		android:name="circleGroup"
		android:animation="@anim/circle_expand" />
	<target
		android:name="circlePath"
		android:animation="@anim/circle_path" />
</animated-vector>

The first thing you need to tie in is which vector drawable the animated version is going to use as a base, which is set in the android:drawable attribute at the top level.

The rest of the file contains several different <target/> elements. These set up which animations are run and which part of the vector drawable they run on.
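Once inflated, the resulting drawable can be started through the IAnimatable interface that AnimatedVectorDrawable implements. A minimal sketch, assuming hypothetical resource and view names:

using Android.App;
using Android.Graphics.Drawables;
using Android.Widget;

public class PlayerActivity : Activity
{
    void StartLoadingAnimation()
    {
        var imageView = FindViewById<ImageView>(Resource.Id.loading);
        imageView.SetImageResource(Resource.Drawable.bluetooth_loading_vector_animated);

        // AnimatedVectorDrawable implements IAnimatable, so it can be started directly.
        (imageView.Drawable as IAnimatable)?.Start();
    }
}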

Using animated vector drawables and a widget like ProgressBar means you can quickly and easily build rich spinners like this one:



Powerful Transitions

When I sat down at I/O last year and saw the introduction of Material design, I was highly skeptical of animated transitions.

James and I discussed this during our Material Design session at Xamarin Evolve 2014. At the time of the talk, the only facility we had been given to do animated transitions was through keyframe animations, which are incredibly clunky to maintain.

Thankfully, the introduction of vector drawables, along with the addition of a specialized evaluator allowing path morphing, changed all of this.

The new evaluator is able to understand the path definition used by a vector drawable and create intermediary versions of it. This means that given two specific paths for a vector drawable, we can use an object animator to not only animate transformations or styles as outlined above, but also the actual pathData of the vector itself.

Now before you get too excited, it’s not a miracle evaluator. There are two very strong requirements for it to work properly:

  • The path command list needs to be of the same size
  • Each command in that list needs to be of the same type

Basically, the evaluator treats the path data as an array of floats extracted from each command parameter and uses that to interpolate different paths in between.
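As an illustration, the following pair of path strings (made up for this example, not taken from the post) would satisfy both requirements: the command sequence and parameter counts match, only the coordinates differ.

// Hypothetical morph-compatible pair: both use the command sequence
// M, L, L, z with identical parameter counts.
const string PlayPath = "M 16,12 L 36,24 L 16,36 z";
const string StopPath = "M 14,14 L 34,14 L 14,34 z";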

Thanks to this and the new animation-aware AnimatedStateListDrawable class, it’s very easy to create nice state transitions like this play/pause interaction:



The transition is defined in an XML file much like a traditional state list drawable, with the addition of a section declaring the motion that happens between the various state changes:

<animated-selector xmlns:android="http://schemas.android.com/apk/res/android"
    android:constantSize="true">
    <item
        android:drawable="@drawable/ic_pause"
        android:state_checked="true"
        android:id="@+id/pause_state" />
    <item
        android:drawable="@drawable/ic_play"
        android:id="@+id/play_state" />
	<transition android:fromId="@id/play_state" android:toId="@id/pause_state" android:reversible="true">
		<animated-vector android:drawable="@drawable/ic_play">
			<target android:name="d" android:animation="@anim/play_pause" />
		</animated-vector>
	</transition>
</animated-selector>

Since the framework also keeps track of the running animation for you, you don’t have to worry about any form of lifecycle handling: animations are scheduled and canceled automatically.

Using the same transition facilities, you can also build more complicated interactions involving a stage-like approach like this one:



Compatibility Considerations

As of now, vector drawables are still a Lollipop-specific feature. However, there have been signs that the support library will soon add support for vector drawables, likely announced in time for this year’s Google I/O conference.

It’s not clear if the more advanced animations capabilities will be supported, but basic rendering will still allow developers to use a scalable image format across most API levels, which is a great start.

Further Reading

I kept this blog short to give a very broad overview of what vector drawables are and how they can be used. For more information (including more details on how all of these new pieces fit together), I encourage you to read my previous posts on the subject:



Yep, you can even do cats

The post A Scalable Introduction to Vector Drawables appeared first on Xamarin Blog.

Color Grading with Unity and the Asset Store

It takes a split second for a brain to judge your game’s visual style. Color grading, a post processing visual effect, is often the secret sauce that tells the audience that what they’re looking at has a polish, a unique style or a particular mood. I talked to a developer and an Asset Store publisher to get the recipe.

“To get started, try watching films with colour grading you admire! I’ve watched Iron Man, Blade Runner, Transformers, Battlestar Galactica, Oblivion, and many others – it’s great to pick apart what others are doing, and see why it’s effective,” says Iestyn Lloyd of Lloyd Digital. He also finds Instagram’s filters quite inspiring.

Lloyd’s Unity 5 Dropship demo, before and after color grading. The asset is Orbital Reentry Craft by Andromeda Station.

When you know what kind of style you want to go for, take a screenshot of a scene in your game and open it in Photoshop. Play around with everything under Image > Adjustments until you reach the style you’re happy with. There are plenty of tutorials online (like this one) that can help you along the way.

But how do you transfer that look to your game? Color grading is basically mapping every possible color to another color. Imagine all these mapped colors stored in a cube, with the red value on the X axis, the green value on the Y axis and the blue value on the Z axis.

A 2D representation of this cube is called an unwrapped volume texture, also known as a LUT (color look-up texture). After you perform the same color adjustments on a new neutral LUT as you did on your screenshot, you can save it as a new LUT. Then assign the new LUT to the effect and hit Convert & Apply. More info on this workflow is in the Unity Documentation, but it’s far from the only way to get color grading.
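To make the mapping concrete, here is a minimal C# sketch of what a LUT lookup boils down to, assuming a 16×16×16 cube unwrapped into a 256×16 strip (sizes and names are illustrative; the real effect does this per pixel on the GPU and also interpolates between neighboring cells):

using UnityEngine;

static class LutSketch
{
    // lut is a 256x16 texture: 16 tiles of 16x16, one tile per blue slice.
    public static Color Grade(Color c, Texture2D lut)
    {
        int r = Mathf.Clamp((int)(c.r * 15f), 0, 15);
        int g = Mathf.Clamp((int)(c.g * 15f), 0, 15);
        int b = Mathf.Clamp((int)(c.b * 15f), 0, 15);

        // Blue picks the 16x16 tile, red indexes the column within it,
        // green picks the row.
        return lut.GetPixel(b * 16 + r, g);
    }
}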

“There’s a number of plugins that don’t require Photoshop at all – that’s where I got started!” says Iestyn. “Colorful has a number of easy-to-use presets which can be easily tweaked to give a pleasing result. Chromatica is very clever and is my tool of choice right now. It gives you a number of colour grading tools similar to the ones found in Photoshop right in the Unity editor and allows for split-screen before-and-after previews, among many other features.”

Colorful – Image Effects

For artists and any professionals that have experience with image editing software, he recommends Amplify Color: “Amplify Color makes it extremely easy to send screenshots directly to Photoshop for colour grading. It also supports volumes, which can be a very easy way to change the grading throughout areas of your game.”

Amplify Color is a script you apply to a camera by dragging it onto the correct slot. After you set up a remote connection with Photoshop, you can send a screenshot from Unity to Photoshop with one click, adjust it in Photoshop, and then click Read screenshot from Photoshop in Unity when you’re done.

It also has a “File Mode” that allows users to export an image to any kind of professional software they are familiar with, such as DaVinci Resolve, Nuke, After Effects or even Gimp, grade it, and import it back into Unity.

It’s used in a wide range of projects, from realistic games like The Forest to incredibly stylized creations such as Firewatch, the recent GI Unity Demo or even 2D games such as Night in The Woods.

Night in the Woods

“Grading film and VFX is easier since all the shots are mostly predetermined. Grading an interactive and free-moving game might be quite tricky. Amplify Color offers volume-based color grading and a way to bind third-party effects to the scene mood in order to simplify the process. It also provides support for dynamically generated color grading masks, a great way to isolate and grade specific assets,” says Ricardo Teixeira of Amplify Creations, the publisher of the asset.

Another advantage of Amplify Color is its flexibility. “We offer full source code with all our products, so users can build upon and improve their tools whenever needed. Amplify Color 2.0 is already in the works as a free update, with an easy-to-use LUT editor within Unity and a few other surprises,” he adds. This month, Amplify Color is half-price for all Level 11 members.

Whatever workflow or tool suits you, color grading opens up a whole new world of possibilities for the final look of your game. With so many new games hitting the market these days, showing off an interesting level of visual polish can help you stand out.

“Just feel free to experiment and be creative! It’s so easy to start playing around with colour grading. There are no rights or wrongs – it’s all about the aesthetic you want to convey to the player of your game,” says Iestyn.

More color grading resources:

May 11

Xamarin.Studio 5.9 Enhancements

Our latest release of Xamarin Studio includes many great new features, including a new wizard to walk you through creating projects, a new publishing workflow to simplify publishing, archiving, and code signing, fancy new debugger visualizers, support for C# 6, and much more!

New Project Dialog

The File > New experience has been completely redesigned to make it easier to find the right template for your app, and easier to configure the various options available in each template.

The wizard walks you through platform selection, language selection (such as F#), Android API levels, and platform support in Xamarin.Forms. It’s also easier to add a WatchKit or Android Wear app to your solution.

Debugger Visualizers

The debugging experience is even more interactive with a number of new visualizers for various types including strings, points, sizes, rectangles, colors, map locations, images, bézier curves, and more. Hover over an instance during debugging and click the “eye” icon to preview:


C# 6

Even though C# 6 support isn’t “officially” in Visual Studio until Visual Studio 2015 is released, you can already use C# 6 features in Xamarin apps. Here are a couple of examples of the new syntax:

// Null-conditional operator
var mip = name?.Substring(0, 1);
// Auto-property initializer
public DateTime TimeStamp { get; } = DateTime.UtcNow;
// nameof
throw new ArgumentException ("Not found", nameof(Key));
// Expression-bodied properties and methods
public string Fullname => string.Format ("{0} {1}", fname, lname);
public override string ToString() => string.Format("{0}, {1}", lname, fname);
// String interpolation
var fullname = $"{fname} {lname}";

You can also use the new dictionary initializer syntax, exception filters, await in catch blocks, and using static directives.
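Quick, hedged sketches of those features as well (the class, method, and URL here are made up for illustration):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using static System.Console; // using static: WriteLine instead of Console.WriteLine

class CSharp6Extras
{
    // Dictionary initializer syntax
    static readonly Dictionary<string, int> Ports = new Dictionary<string, int>
    {
        ["http"] = 80,
        ["https"] = 443
    };

    static async Task FetchAsync(string url)
    {
        var client = new HttpClient();
        try
        {
            WriteLine(await client.GetStringAsync(url));
        }
        // Exception filter: the handler only runs when the condition holds.
        catch (HttpRequestException e) when (e.Message.Contains("404"))
        {
            // await is now allowed inside a catch block.
            await Task.Delay(1000);
            WriteLine("Not found: " + url);
        }
    }
}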

To learn more about new C# 6 features check out this video by Mads Torgersen, Language PM for C#:

Sketches

If you haven’t played with Sketches before, now is the time to give it a try with these great new features.


It’s much easier to add resources like images to your Sketch, and Xamarin.Forms support is also improved. Follow along with our Sketches walkthrough to give it a try.

New Publishing Workflow

There is now a more consistent approach for publishing apps across iOS, Android, and Mac projects. Select Archive for Publishing on the project to create a binary in the archive list.

Once the binary has been created choose Sign and Distribute… then follow the publishing steps, which are customized for each platform.


And much more…

You’ll notice lots of other changes too, like the new Mac-native title bar, the Packages folder always appearing in the Solution Explorer, a configurable Components location, lots of iOS Designer improvements, and Release Notes that are now available from the Help menu instead of popping up automatically after installation!

You can discuss these new features in the Xamarin Studio forum, and be sure to read through the full release notes in the Xamarin developer portal.

The post Xamarin.Studio 5.9 Enhancements appeared first on Xamarin Blog.

Case Study: Usability Testing the Audio Mixer

In a previous blog post, “Testing the Audio Mixer”, the Audio Mixer was discussed from a Software Test Engineering point of view. In this blog post we will pick up where we left off: involving real users in evaluating a system in development.

In the Summer and Fall of 2014, we, Stine Kjærbøll (User Experience Team Lead) and Nevin Eronde (User Experience Researcher), ran three rounds of usability testing sessions with audio designers/composers from the local games industry in Copenhagen.

The users that we recruited for usability testing had backgrounds in games and art installations. They were:

  • Audio programmers
  • The “One-Man Army” (programmer/artist/audio designer)
  • Composers
  • Sound designers
  • Audio-visual artists

Think Aloud Protocol – Involving real users

The Think Aloud Protocol is a user research method that involves test subjects thinking aloud as they are performing a set of specified tasks. The test subjects are encouraged to say whatever they are thinking, doing and feeling as they are performing the tasks.

We wanted to perform the Think Aloud test for several reasons:

  • The Audio Mixer is designed for a target audience that ranges from programming experts to people who have never even opened a script. Would those audio designers who are not comfortable with scripting be able to use the Audio Mixer without the help of a programmer? To what extent?
  • Having an Audio Mixer inside of a game development tool is a new concept in itself. Would users understand the concepts of Snapshots, Ducking Volume, Audio Groups and Audio Sources?
  • The Unity editor workflow borrows references from third-party DAW (Digital Audio Workstation) tools. Would the Audio Mixer and its concepts be easy to use for an audio designer who works with other types of DAWs?
  • Would they understand the concept of tweaking audio effects at runtime?

All of our user research is confidential, but this Kids React video is pretty close to what a Think Aloud session can look like:

Biases during development

Another reason why we conducted Think Aloud Protocols was to avoid our own biases. Whenever you are testing, researching and designing a feature, you cannot avoid biases, but you can be aware of them.

Some of the biases we were aware of with regards to the Audio Mixer as well as the users that we recruited from the local games industry were:

  • Test Subjects were working on smaller computer game productions
  • We could not invite audio designers working on large AAA-scale games, since such studios are hard to come by in the Copenhagen area
  • The QA testing that had been done on the Audio Mixer was with specific DAWs in mind
  • Lastly, our own familiarity with the Audio Mixer meant that we were biased with regard to naming conventions and how the system functioned.

This is also where usability testing gets interesting! With the above research questions and biases in mind, we started designing the test.

Designing the Test

When designing usability tests, you always have to decide whether to go for realism, abstraction or something in between. If we had designed a test that used a large Unity project, such as Bootcamp or Angry Bots, that could count as a realistic test scenario, but the complexity of the project might be overwhelming for a test subject unfamiliar with it. They would end up spending a lot of time familiarizing themselves with the project instead of focusing on the Audio Mixer. The Audio Mixer itself is an extensive system, so we decided to go for a minimalist test setup to avoid any added complexity.

We used a scene with a Red Cube and a Blue Cube. All the sound files that were needed were provided in a folder. Not a real-life scenario, but with the complexity removed, the test subjects could focus on the tasks.


The tasks of the test subjects were to:

  • Build up an ambience in the scene
  • Trigger Snapshots
  • Trigger Ducking Volume
  • Explore the Audio Mixer effects and Send/Receive units

Lessons learned during Think Aloud Protocol

Conducting Think Aloud Protocols is always an interesting study in interaction design. We learned many lessons; below are a few examples:

What you think is “obvious” is not obvious to the user

The “Edit in Play Mode” button is a very useful feature for audio designers, as it enables them to tweak and adjust sounds at runtime. In a very early prototype, the button was labelled “Edit Snapshots”. Technically, this is what you are doing when enabling the button and adjusting the Audio Mixer settings at runtime. The problem was that the test subjects did not interpret the “Edit Snapshots” button as a natural part of their workflow. They ignored the button, simply because they did not intuitively associate it with the ability to edit the Audio Mixer settings at runtime. The button was then renamed to “Edit in Play mode”. We ran a new round of Think Aloud tests, and from those tests we could verify that it was more intuitive for our test subjects to enable the “Edit in Play mode” button, as they now interpreted the button label as a part of their workflow.


Simplicity versus complexity 

When developing features, we often think about how to make the feature simple to use, whether the user is advanced or new. One thing we observed during usability testing of the Audio Mixer was that in some cases users wanted the complexity. In the first version of the Ducking Volume, the feature had very few parameters for the user to tweak. The test subjects expressed that “something was missing”. We learned that audio designers are actually used to the complexity of multiple parameters that they can adjust and tweak in their DAWs. A feature with very few parameters was suddenly a complex thing to understand.

Below is an early design example of the Ducking Volume.


After the feedback from our test subjects, the Ducking Volume was redesigned as a sidechained compressor, with the parameters and controls that audio designers are used to from DAWs.


Getting to know you!

All the feedback that we get from our users is so valuable to us! We, the User Experience Team, want to get closer to you! Meet us at Unite Europe 2015 in Amsterdam and help us make Unity even better!

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers