Disclaimer: this is an automatic aggregator which pulls feeds and comments from the blogs of many contributors to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

February 27

Give Us the Gist of It Contest Winners!

Two weeks ago, we asked the community to share the code snippets that help them write amazing apps even faster. Five winners were chosen at random, and here is the gist of it:

Jason Fox:
Snippet Name: Xamarin.iOS Image Blur
Platform: Xamarin.iOS
Function: Image blur extension method for Xamarin.iOS

public static UIImage Blur(this UIImage image, float blurRadius = 25f)
{
  if (image != null)
  {
    // Create a new blurred image.
    var imageToBlur = new CIImage (image);
    var blur = new CIGaussianBlur ();
    blur.Image = imageToBlur;
    blur.Radius = blurRadius;
    var blurImage = blur.OutputImage;
    var context = CIContext.FromOptions (new CIContextOptions { UseSoftwareRenderer = false });
    var cgImage = context.CreateCGImage (blurImage, new RectangleF (new PointF (0, 0), image.Size));
    var newImage = UIImage.FromImage (cgImage);
    // Clean up
    imageToBlur.Dispose ();
    context.Dispose ();
    blur.Dispose ();
    blurImage.Dispose ();
    cgImage.Dispose ();
    return newImage;
  }
  return null;
}
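
Calling it is then a one-liner. Here's a hypothetical call site (the image view and file name below are assumptions, not part of the snippet):

// Hypothetical usage: blur a bundled image and display it.
var blurred = UIImage.FromBundle ("photo.png").Blur (10f);
imageView.Image = blurred;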

Runar Ovesen Hjerpbakk:
Snippet Name: Async and await together with UIAlertController
Platform: Xamarin.iOS
Function: This snippet shows how to use a TaskCompletionSource to enable async and await together with UIAlertController.

public static class CustomerFeelingSheet {
 public static Task<CustomerFeeling> ShowRatingDialogAsync(UIViewController parent) {
   var taskCompletionSource = new TaskCompletionSource<CustomerFeeling>();
   var alert = UIAlertController.Create("howDoYouFeel".T(), null, UIAlertControllerStyle.ActionSheet);
   alert.AddAction(UIAlertAction.Create("likeIt".T(), UIAlertActionStyle.Default,
       a => taskCompletionSource.SetResult(CustomerFeeling.LikeIt)));
   alert.AddAction(UIAlertAction.Create("couldBeBetter".T(), UIAlertActionStyle.Default,
       a => taskCompletionSource.SetResult(CustomerFeeling.CouldBeBetter)));
   alert.AddAction(UIAlertAction.Create("cancel".T(), UIAlertActionStyle.Cancel,
       a => taskCompletionSource.SetResult(CustomerFeeling.DontCare)));
   parent.PresentViewController(alert, true, null);
   return taskCompletionSource.Task;
 }
}
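
On the consuming side, the dialog simply gets awaited. A sketch of a call site inside a UIViewController (the CustomerFeeling enum and the ".T()" localization helper come from the snippet's own project):

// Hypothetical call site inside a UIViewController:
var feeling = await CustomerFeelingSheet.ShowRatingDialogAsync (this);
if (feeling == CustomerFeeling.LikeIt)
    Console.WriteLine ("The customer likes it!");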

Matthieu Guyonnet-Duluc:
Snippet Name: Android Behavior – WPF Style
Platform: Xamarin.Android
Function: Reproduce the beloved WPF behaviors in Android

<com.mycompany.behaviors.ListViewHideKeyboardOnScroll
        android:layout_width="0px"
        android:layout_height="0px"
        local:View="@+id/resultsList" />

public class ListViewHideKeyboardOnScroll : Behavior<AbsListView>
{
    public ListViewHideKeyboardOnScroll(Context context, IAttributeSet attrs)
        : base(context, attrs)
    {
    }

    #region implemented abstract members of Behavior
    public override void OnAttached()
    {
        View.ScrollStateChanged += HideKeyboard;
    }

    public override void OnDetached()
    {
        View.ScrollStateChanged -= HideKeyboard;
    }
    #endregion

    void HideKeyboard(object sender, AbsListView.ScrollStateChangedEventArgs e)
    {
        if (e.ScrollState == ScrollState.TouchScroll)
        {
            var inputManager = (InputMethodManager)this.Context.GetSystemService(Context.InputMethodService);
            inputManager.HideSoftInputFromWindow(View.WindowToken, HideSoftInputFlags.None);
        }
    }
}
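
The snippet assumes a Behavior<AbsListView> base class that isn't shown. A minimal sketch of what such a base class might look like (the names and details below are guesses, not Matthieu's actual implementation): an invisible View that reads the local:View attribute and resolves its target once attached to the window.

using Android.Content;
using Android.Util;
using Android.Views;

// Hypothetical sketch of the Behavior<T> base class the snippet assumes.
public abstract class Behavior<T> : View where T : View
{
    readonly int targetId;

    protected Behavior(Context context, IAttributeSet attrs)
        : base(context, attrs)
    {
        // Read the custom "View" attribute (local:View in the layout),
        // which points at the control this behavior should attach to.
        targetId = attrs.GetAttributeResourceValue(
            "http://schemas.android.com/apk/res-auto", "View", 0);
    }

    protected T View { get; private set; }

    public abstract void OnAttached();
    public abstract void OnDetached();

    protected override void OnAttachedToWindow()
    {
        base.OnAttachedToWindow();
        // Resolve the target view once the hierarchy exists.
        View = RootView.FindViewById<T>(targetId);
        if (View != null)
            OnAttached();
    }

    protected override void OnDetachedFromWindow()
    {
        if (View != null)
            OnDetached();
        base.OnDetachedFromWindow();
    }
}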

Ken Pespisa:
Snippet Name: SQLite Extension methods for Save & Delete
Platform: Xamarin.iOS
Function: Save the specified entity by calling insert or update, if the entity already exists.

public static class SQLiteExtensions
{
   /// <summary>
   /// Save the specified entity by calling insert or update, if the entity already exists.
   /// </summary>
   /// <param name="pk">The primary key of the entity</param>
   /// <param name="obj">The instance of the entity</param>
   /// <typeparam name="T">The entity type.</typeparam>
   public static int Save<T>(this SQLiteConnection db, object pk, object obj) where T : new()
   {
       if (pk == null || db.Find<T>(pk) == null)
       {
           return db.Insert(obj);
       }
       return db.Update(obj);
   }
   /// <summary>
   /// Delete entities based on a predicate function
   /// </summary>
   /// <param name="predicate">The predicate specifying which entities to delete</param>
   /// <typeparam name="T">The entity type.</typeparam>
   public static void Delete<T>(this SQLiteConnection db, Expression<Func<T, bool>> predicate) where T : new()
   {
       var records = db.Table<T>().Where(predicate).ToList();
       foreach (var record in records)
       {
           db.Delete(record);
       }
   }
}
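
Usage against a hypothetical Person entity might look like this (db is assumed to be an open SQLiteConnection; Person is made up for the example):

using SQLite;

// Hypothetical entity:
public class Person
{
    [PrimaryKey]
    public int Id { get; set; }
    public string Name { get; set; }
}

// Insert or update, depending on whether the key already exists:
db.Save<Person>(person.Id, person);

// Delete every matching record:
db.Delete<Person>(p => p.Name == "Bob");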

Ryan Davis:
Snippet Name: InlineTableViewSource
Platform: Xamarin.iOS
Function: A subclass of UITableViewSource that allows you to define UITableViewDataSource and UITableViewDelegate methods inline, rather than subclassing.

var cellId = new NSString("cell");
var tableView = new UITableView(View.Frame, UITableViewStyle.Grouped) {
    Source = new InlineTableViewSource {
            _NumberOfSections = (tv) => 2,
            _RowsInSection = (tv, section) => 5,
            _TitleForHeader = (tv, section) => String.Format("Section {0}", section),
            _GetCell = (tv, indexPath) => {
                var cell = tv.DequeueReusableCell(cellId) ?? new UITableViewCell(UITableViewCellStyle.Default, cellId);
                cell.TextLabel.Text = "hi";
                return cell;
        }
    }
};
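
The class itself isn't shown above, but the idea is a UITableViewSource subclass whose overrides forward to optional delegate properties. A rough sketch of the pattern (the real InlineTableViewSource by Ryan Davis covers many more members):

using System;
using Foundation;
using UIKit;

public class InlineTableViewSource : UITableViewSource
{
    public Func<UITableView, nint> _NumberOfSections { get; set; }
    public Func<UITableView, nint, nint> _RowsInSection { get; set; }
    public Func<UITableView, nint, string> _TitleForHeader { get; set; }
    public Func<UITableView, NSIndexPath, UITableViewCell> _GetCell { get; set; }

    public override nint NumberOfSections (UITableView tableView)
    {
        return _NumberOfSections != null ? _NumberOfSections (tableView) : 1;
    }

    public override nint RowsInSection (UITableView tableView, nint section)
    {
        return _RowsInSection != null ? _RowsInSection (tableView, section) : 0;
    }

    public override string TitleForHeader (UITableView tableView, nint section)
    {
        return _TitleForHeader != null ? _TitleForHeader (tableView, section) : null;
    }

    public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
    {
        return _GetCell (tableView, indexPath);
    }
}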

Find even more speedy code snippets for your apps in the Get The Gist forum thread, and a big thanks to all who participated in the Give Us the Gist of It Contest!

Unity at GDC 2015

GDC is nearly upon us! It’s crazy hectic getting ready for such a big show, but it’s always an incredibly exciting week where we get to meet so many of our current development community and make new friends. As you might imagine, we’ve got a lot going on at the show! Here’s a little bit about it.

Unity Special Event

On Tuesday, March 3 at 8:30 AM PST, we’ll be holding a special event live from San Francisco to kick off our GDC. We’ll be sharing some big news, showing some beautiful demos, and inviting some special guests from the development community on stage.

Event details will be announced soon on our social media channels, so stay tuned!

Unity Party

Don’t miss the Unity Party Wednesday night!
Register here: http://unity-gdc2015-party.eventbrite.com

Unity Dev Day

For those who chose an all-access or summit & tutorial conference pass, don’t miss the “Unity Developer Day”, Tuesday, March 3rd from 10:00am to 5:30pm in Room 2014, Moscone West Hall, 2nd Floor. We’re refining the final agenda, but in short it’ll be about digging deep into Unity 5 and learning from Unity engineers and game developers.
Here’s what we’re preparing:

  • Graphics improvements by Aras Pranckevičius (Rendering Plumber),

  • New audio mixer by Jan Marguc and Wayne Johnson,

  • Future of scripting with IL2CPP by Jonathan Chambers (Scripting Team Developer) and Mantas Puida (iOS Team Lead),

  • Unity Ads & Everyplay by Oscar Clark (Everyplay Evangelist) and Nikkolai Davenport (Dev Relations Engineer),

  • Cloud Build, Analytics and more services by Patrick Curry, John Cheng & Suhail Dutta,

  • Post mortem on producing high-end content by Veselin Efremov (Artist), Torbjorn Laedre (GFX Programmer), Dominic Laflamme (Lead Developer Storytelling),

  • And finally, a “Rebuilding Republique in Unity 5” postmortem by Camouflaj team members Paul Alexander (Producer/Designer), Kevin Call (Engineer) and Stephen Hauer (Art Director)

Unity GDC 2015 Expo Booth – South Hall #1402

As usual, we’re setting up shop in the main expo of GDC. We encourage you to stop by and say hello, ask questions, and check out awesome games from the community and our great partners that make the Unity development ecosystem so amazing. We’ll have a lot of staff at the booth ready to answer questions and give you tours of Unity, including the big new features of Unity 5 and some awesome new demos. We’ll also have daily drawings to win cool prizes.

Talks at Unity booth

Additionally, we’ve got a slew of useful talks scheduled from Unity staff and partners designed to help you get the most out of Unity. We will be announcing the program shortly, and of course you can stop by the booth to see the schedule!

Games Pavilion

As usual, we’re very excited to be hosting several games currently in development by the Unity community of developers at the Unity booth. Come by, check out the games, talk to the guys that made them, and get inspired!

  • Republique Remastered from Camouflaj
  • Ori and the Blind Forest from Moon Studios
  • Gang Beasts from Bone Loaf
  • Mordheim: City of the Damned by Rogue Factor
  • Total War Battles Kingdom from Creative Assembly
  • Space Noir from N-Fusion
  • Super Dungeon Bros from React Games
  • Dyscourse from Owlchemy Labs
  • The Room Three from Fireproof Games
  • The Trace from Relentless
  • Pollen from Mindfield Games
  • Armello from League of Geeks
  • TBA from NVYVE

and two other unannounced titles!

Partner Pavilion

Make sure to stop by and check out the latest technologies and platforms being showcased by our sponsors in the partner pavilion:

Microsoft

This year we’re also pleased to have a new dedicated area for our Asset Store Publishers! The following twelve publishers will be showcasing their tools and technologies for one day during the show, rotating across three kiosks:

Cinema Suite, Houdini, Hutong Games/PlayMaker, Make Code Now, Neat Corporation/Shader Forge, Owlchemy Labs, Polygonmaker, ProCore, Rust Ltd, SonicEther Technologies, TextMesh Pro and Tigar Shark Studios.

February 26

Join Xamarin for GSMA Mobile World Congress 2015

Xamarin will take the stage alongside AirWatch, Box, and Salesforce in Barcelona at Mobile World Congress next week.
 
Xamarin’s Director of Enterprise Mobility, Steve Hall, will join the “AirWatch Presents a New Standard for Enterprise App Development” panel discussion on March 3rd. Employees expect – and need – fast, on-the-go access to company data, and we’ll share how enterprises can successfully build and distribute secure mobile apps.


AirWatch Presents a New Standard for Enterprise App Development featuring Box, Salesforce & Xamarin

Tuesday, March 3, 1:30 – 2:25 pm CET
AirWatch Connect Stand
Hall 3, Stand 3D10
All MWC Attendees are welcome to attend.

See you in Barcelona!

Using history to better explain branch differences

Release BL647 introduced a great step ahead in the way in which branch (and changeset) differences are displayed and explained. Now it is possible to understand where each difference comes from:

Remastering Republique: The Journey to Unity 5

Greetings, fellow Unity developers! We are Camouflaj, a game studio based near Seattle, WA. We are the folks behind République, an episodic stealth action game about governmental surveillance. To date, we’ve shipped three episodes (of five) to an overwhelmingly positive reception.

Back in 2012, we promised to make a “true” PC & Mac version of République that was in no way a simple mobile port. We spent countless hours thinking and experimenting with ways to make our upcoming PC & Mac release all the more special.

Soon after Unity 5 was announced, the team started dreaming up ways we could use that new technology to make a big splash on PC. We wanted to totally remaster the game in Unity 5.

That’s when we approached Unity with a proposal: in exchange for early access to Unity 5’s alpha and beta releases, why doesn’t the Camouflaj team document their journey from Unity 4 to Unity 5? We’d love to leverage République as a standout title on Unity 5, and share the story of our development with the public so they can learn from our successes and failures. Thankfully, the folks at Unity said yes.

Today we are proud to share the developer diary about our journey to remaster Republique in Unity 5!

Each of the five episodes from our dev diary includes a video and a podcast. Ultimately, our modest hope is that our “journey” series is helpful to you.

Here’s a more detailed breakdown of what we cover:

Dev diary 1: République enters the Next Gen

Why we are using Unity 5 to take Republique to PC

Dev diary 2: République Migrates From Unity 4

We’ll explain how we moved our project from Unity 4 to Unity 5 in the midst of a chaotic, 20-person project, starting with our initial investigation.

Dev diary 3: République in Physically Based Shading

This is the really exciting stuff. We’ll go over a little bit of Physically Based Shading for the uninitiated, and explain how we put this to work in our game.

Dev diary 4: République Lighting & More

We’ll walk you through how we made use of Reflection Probes, Global Illumination, cookies and other good stuff, plus we’ll cover some physics and animation refinements.

Dev diary 5: République Ships on Unity 5

We’ll document our push to launch, and how we optimized and (fingers crossed) shipped a fantastic game.

Thank you for taking this journey with us!

-Camouflaj

February 25

Triggers in Xamarin.Forms

Triggers were introduced in Xamarin.Forms 1.3 along with Behaviors, which we covered previously. Triggers allow you to declaratively express actions in XAML that are executed when a specified condition is met. Xamarin.Forms supports four types of triggers:

  • Property Trigger – executed when a property on a control is set to a particular value.
  • Data Trigger – same as the property trigger but uses data binding.
  • Event Trigger – occurs when an event occurs on the control.
  • Multi Trigger – allows multiple trigger conditions to be set before an action occurs.

Let’s take a look at each one in detail.

Property Trigger

Property Triggers (represented by the Trigger element) are added to a control’s Triggers collection. The Setter collection inside is executed when a specified property equals the specified value.

Wouldn’t it be nice to provide some visual indicator that an input control has focus? To achieve this, we can set the BackgroundColor property when the IsFocused property of the Entry element is true.

<Entry Placeholder="enter name">
    <Entry.Triggers>
        <Trigger TargetType="Entry"
             Property="IsFocused" Value="True">
            <Setter
                Property="BackgroundColor"
                Value="Yellow" />
        </Trigger>
    </Entry.Triggers>
</Entry>

Alternatively, we can set them in styles so that they can be attached to every Entry element on the screen.

<ContentPage.Resources>
   <ResourceDictionary>
     <Style TargetType="Entry">
       <Setter Property="AnchorX" Value="0" />
       <Style.Triggers>
         <Trigger  TargetType="Entry"
                   Property="IsFocused"
                   Value="True">
           <Setter Property="BackgroundColor"
                   Value="Yellow" />
         </Trigger>
       </Style.Triggers>
     </Style>
   </ResourceDictionary>
</ContentPage.Resources>

Data Trigger

DataTriggers are very similar to PropertyTriggers, except that instead of specifying the Property, we specify the Binding for the trigger. This Binding generally refers to another VisualElement’s property on the page or it could reference a property in a ViewModel.

The code below shows how to disable the button when the entry’s Text.Length property is 0.

<StackLayout Spacing="20">
<Entry x:Name="emailAddress" Text="" Placeholder="email address"/>
<Button Text="Send">
  <Button.Triggers>
    <DataTrigger TargetType="Button"
         Binding="{Binding Source={x:Reference emailAddress},
                                           Path=Text.Length}"
         Value="0">
      <Setter Property="IsEnabled" Value="False" />
    </DataTrigger>
  </Button.Triggers>
</Button>
</StackLayout>

Event Trigger

Event Triggers execute user-defined code when a specified event occurs.

In the above Property Trigger example, we saw how to change the background color of an Entry element based on the IsFocused property entirely in XAML. Alternatively, we can use an Event Trigger to execute an action written in C# based on the TextChanged event of an entry to perform some basic validation.

Define the TriggerAction in code

Every action that we define has to inherit from TriggerAction<T> where T is the element to which a trigger is attached. When a trigger is fired, the Invoke method will be called. In the code below, we change the Entry’s BackgroundColor to indicate whether the input is valid or not.

public class NumericValidationTriggerAction : TriggerAction<Entry>
{
   protected override void Invoke (Entry entry)
   {
      double result;
      bool isValid = Double.TryParse (entry.Text, out result);
      entry.BackgroundColor =
            isValid ? Color.Default : Color.Red;
   }
}

TriggerAction in XAML

To use the C# code, just declare a namespace for the assembly (xmlns:local in this sample) and add the NumericValidationTriggerAction element to the event trigger:

<Style TargetType="Entry">
<Style.Triggers>
    <EventTrigger Event="TextChanged">
        <local:NumericValidationTriggerAction />
    </EventTrigger>
</Style.Triggers>
</Style>

Multi Trigger

A MultiTrigger looks similar to a Trigger or DataTrigger except there can be more than one condition. All the conditions must be true before the Setters are triggered.

In the code below, we enable the button when either the email or the phone entries are filled in by the user. Each condition is true when the length of the text input is zero (i.e. nothing has been entered). When both conditions are true (i.e. both are empty), the trigger’s Setters are called, which in this case disables the button. When either has text entered, the overall condition becomes false and the button is enabled.

<Style TargetType="Button">
<Style.Triggers>
  <MultiTrigger TargetType="Button">
    <MultiTrigger.Conditions>
      <BindingCondition
          Binding="{Binding Source={x:Reference email},
                            Path=Text.Length}"
          Value="0" />
      <BindingCondition
          Binding="{Binding Source={x:Reference phone},
                            Path=Text.Length}"
          Value="0" />
    </MultiTrigger.Conditions>
    <Setter Property="IsEnabled" Value="False" />
  </MultiTrigger>
</Style.Triggers>
</Style>

To see how to build a “require all” trigger (like you’d use in a login page, for example), check out our Triggers sample on GitHub, which uses an IValueConverter along with a MultiTrigger.
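
As a rough idea of how that sample works (its actual converter may differ), the converter maps each entry’s Text.Length to a bool, so each BindingCondition can require Value="True" when its field is non-empty:

using System;
using System.Globalization;
using Xamarin.Forms;

// Sketch of a length-to-bool converter for use inside a BindingCondition.
public class MultiTriggerConverter : IValueConverter
{
    public object Convert (object value, Type targetType,
                           object parameter, CultureInfo culture)
    {
        // true when the bound Text.Length is greater than zero
        return ((int)value) > 0;
    }

    public object ConvertBack (object value, Type targetType,
                               object parameter, CultureInfo culture)
    {
        throw new NotSupportedException ();
    }
}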

For even more information on Xamarin.Forms, be sure to check out the detailed documentation.

Discuss this post in the Xamarin forums

How to set up an encrypted server

We are happy to announce a new Plastic SCM feature that allows configuring a server with encrypted data.

It means that, in your organization, you can configure a central server where all the data is encrypted. This way, only the users who have the specific key will be able to push/pull data to this server.

It is important to note that this server is created for replication purposes only. Repositories have all of their data encrypted, so if we directly download file content to a workspace, we will only see empty files (the data is encrypted).

This configuration can be very useful when your server is accessible from a public network, or when you just need to be sure that even if an unauthorized person accesses your server, they will not be able to get any information.

February 24

Live APAC Webinar: Go Mobile with Xamarin

Join Xamarin Evangelist Mayur Tendulkar for this live webinar timed just for our APAC customers, where you’ll learn how to leverage your existing Microsoft .NET and C# skills to build iOS, Android, and Windows Phone apps using Visual Studio and Xamarin. We’ll also talk about how to maximize code sharing and reuse existing .NET libraries.

At the end of the webinar, you’ll have the skills you need to create your first iOS and Android apps in C# with Xamarin in Visual Studio.

Wednesday, March 11
11:30 AM – 12:30 PM IST

Register

All registrants will receive a copy of the webinar, so please feel free to register even if you can’t attend.

Gorgeous Arch-Viz in Unity 5

Is it possible to dial up the quality level in Unity 5 high enough to make high-end architectural visualizations?

In response, Alex Lovett (aka @heliosdoublesix) built this gorgeous architectural visualization demo in Unity 5.

It makes good use of the real-time global illumination feature, physically based shading, reflection probes, HDR environment lighting, the linear lighting pipeline, and a slew of post-effects, all in order to achieve the visual fidelity expected in an architectural visualization.

The aim was to push for quality, so very high resolution textures were used and the model has just over 1 million faces.

There is no baked lighting in this scene

The first part of the demo has a fast moving sun. The second part has more localized lighting; a spot light from a fellow maintenance robot lights up the environment in addition to the headlight of the robot the viewer is piloting. In both parts there is considerable environment lighting.

Due to how the scene is laid out, there is a lot of bounced lighting and also quite distinct penumbrae caused by indirect lighting. For example, the v-shaped columns cast a very sharply defined indirect shadow onto the ceiling, which is especially visible in the night time part of the video.

[Screenshot gallery: scene views at day and night, reflection probes, and indirect shadow penumbrae]

Using high resolution real-time lightmaps

When the lighting changes, these penumbrae and the overall lighting gradients have to change significantly. In order to do this with global illumination, the Enlighten-powered real-time lightmaps feature was employed. Traditionally, Enlighten is used in-game at relatively low resolutions (1-2 pixels per meter). This works well because bounced lighting is generally quite low-frequency.

In this demo, a much higher density is used to capture the fine details in the lighting. An overall density of 5 pixels per meter was used. There are about 1.5 million texels in the real-time lightmaps in total. In the resolution screenshot below, you get a sense of the density in relation to the scene size.

At this resolution, the precompute time spent was about 2.5 hrs. The scene is automatically split into systems in order to make the precompute phase parallelizable. This particular level was split into 261 systems. The critical path through the precompute (i.e. the sum of the most expensive job in each stage along the pipeline) is about 6 minutes. So there are significant gains to be made by making the precompute distributed. And indeed going forward, one of the things we will address is distribution of the GI pipeline across multiple computers and in the cloud. We will look into this early in the 5.x cycle.
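
To put those numbers in perspective, the texel count is just surface area times density squared. The area figure below is derived from the stated numbers, not taken from the project:

// Back-of-the-envelope check: 5 px/m means 25 texels per square meter,
// so 1.5 million texels corresponds to roughly 60,000 m² of lightmapped surface.
const double pixelsPerMeter = 5.0;
const double totalTexels = 1.5e6;

double texelsPerSquareMeter = pixelsPerMeter * pixelsPerMeter; // 25
double surfaceArea = totalTexels / texelsPerSquareMeter;       // 60,000 m²

Console.WriteLine("~{0:N0} m² at {1} px/m", surfaceArea, pixelsPerMeter);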

See geometry, GI systems and real-time lightmap UV charting screenshots from the Editor below:

[Editor screenshots: geometry; real-time lightmap systems; real-time lightmap texture density and UV charts]

Interactive lighting workflow

Once the precompute is done, the lighting can be tweaked interactively. Lights can freely be animated, added, removed and so on. The same goes for emissive properties and HDR environment lighting. This demo had two lighting rigs; one for the day time and one for the night time. They were driven from the same precompute data.

“I’m able to move the sun / time of day and change material colors without having to rebake anything. I can play with it in real-time and try combinations out. For a designer like me, working iteratively is not only easier and faster, but also more fun,” says Alex Lovett.

Lighting 1.5 million texels with Enlighten from scratch takes less than a second. And the lighting frame rate is decoupled from the rendering loop, so it will not affect the actual rendering frame rate. This was a huge workflow benefit for this project. Interactive tweaking of the lighting across the animation without interruption drove up the final quality.

To make this a real-time demo, some rudimentary scheduling of updating the individual charts would have to be added, such that visible charts are updated at real-time, while occluded charts and charts in the distance are updated less aggressively. We will look into this early in the 5.x cycle.

Acknowledgements

A big thanks to Alex Lovett, owner of shadowood.uk, who has been tirelessly stress testing the GI workflow since it was in alpha. Also thanks to the Geomerics folks, especially Roland Kuck.

The following Asset Store items were used: SE Natural Bloom & Dirty Lens by Sonic Ether, and Amplify Motion and Amplify Color by Amplify Creations.

Web continuous integration with Plastic SCM and Azure

Microsoft defines Azure Websites as a fully managed Platform-as-a-Service (PaaS) that enables you to build, deploy, and scale enterprise-grade web apps in seconds. Since most modern web development teams promote new code from staging to production environments, it is important to consider techniques to automatically deploy your code as part of your ALM (Application Lifecycle Management) process. The technique that we will focus on in this article is Plastic SCM with GitSync. With this technique, your team can quickly and easily publish changes automatically to Azure Websites from GitHub.

February 23

Adding Real-world Context with Estimote Beacons and Stickers

It’s no secret that iBeacons have created a buzz in the development community. Leveraging these Bluetooth Smart devices enables developers to add contextual awareness to their mobile apps with just a few lines of code. iBeacons were everywhere at Evolve 2014, including at the forefront of the Evolve Quest scavenger hunt and the conference mini-hacks, as well as taking the main stage for an in-depth session.

Estimote, a leader in the iBeacon space, recently introduced Estimote Stickers, a low-powered device to go alongside their traditional beacons. Stickers can be attached to almost anything and turn any everyday item into a “nearable” – a smart object that can transmit data about its location, motion, temperature, and environment to nearby apps and devices. Today, we’re pleased to announce the Estimote SDK for iOS, available on the Xamarin Component Store, enabling developers to easily detect Beacons and Estimote Stickers with a beautiful C# API that includes events and async/await support.

Detecting Nearables

Nearables have a new, simplified API. Each Nearable has a specific NearableType that can be used to detect, for example, Car, Dog, or Bike. You can decide to range for a specific type or all nearby Nearable devices.

Let’s see how easy it is to get up and running with Nearables by scanning for all Nearables that are close by.

Install the Estimote SDK for iOS

The very first task is to set up a new Xamarin.iOS project and add the Estimote SDK for iOS from the component store.

In addition to the SDK, you must specify NSLocationAlwaysUsageDescription or NSLocationWhenInUseUsageDescription in your Info.plist file with a description that will be shown to your users, since iBeacons use CoreLocation functionality.

Setting Up App ID

When you log in to your Estimote Cloud, you are able to manage all of your Beacons and Stickers, in addition to creating API keys for your mobile apps. Once you have an app set up in the Estimote Cloud, you can configure it in your AppDelegate’s FinishedLaunching method:

Config.SetupAppID ("<appId from cloud>", "<appToken from cloud>");

While not required, it’s recommended to set up your app ID so that the SDK can communicate with the Estimote Cloud to pull in unique attributes.

Ranging Nearables

Using the new NearableManager you can easily range for Nearables by subscribing to the RangedNearables event.

NearableManager manager;
public override void ViewDidLoad ()
{
  base.ViewDidLoad ();
  manager = new NearableManager ();
  manager.RangedNearables += (sender, e) => {
    //Nearables detected, load into TableView or pop up alert
    new UIAlertView("Nearables Found", "Just found: " + e.Nearables.Length + " nearables.", null, "OK").Show();
  };
  //Specify the type of Nearable to range for. In this instance return All types.
  manager.StartRanging (NearableType.All);
}

Estimote Nearables Detected

The real power of Nearables is the additional attributes that are received when they are detected, such as their temperature, orientation, acceleration, and more. As an example, you could easily use these attributes to detect a Bike Nearable in motion for over 45 minutes and prompt your user to perhaps take a break.

NearableManager nearableManager;
public override void ViewDidLoad ()
{
  var identifier = "94064be7a9d7c189"; //Identifier ranged earlier
  var durationThreshold = 45 * 60; //45 minutes
  nearableManager = new NearableManager ();
  nearableManager.RangedNearable += (sender, e) => {
    var bike = e.Nearable;
    if(bike.IsMoving && bike.CurrentMotionStateDuration > durationThreshold) {
      Console.WriteLine("Bike is moving and has been in motion for over 45 minutes!");
    }
  };
  nearableManager.StartRanging(identifier);
}

Triggers and Rules

In addition to ranging and monitoring Nearables, there is an advanced trigger system in the Estimote SDK that enables you to specify several rules that will trigger a notification. Let’s say you want to be notified every time a Nearable changes orientation and is laid down in a horizontal position. You would simply create an OrientationRule and use the TriggerManager to wait for the Nearable’s state to change.

TriggerManager triggerManager;
public override void ViewDidLoad ()
{
  var rule = OrientationRule.OrientationEquals (NearableOrientation.Horizontal, NearableType.Shoe);
  var trigger = new Trigger (new Rule[]{ rule }, "TriggerId");
  triggerManager = new TriggerManager ();
  triggerManager.StartMonitoring (trigger);
  triggerManager.ChangedState += (sender, e) => {
    Console.WriteLine("Shoe nearable has been placed horizontal");
  };
}

More complex rules can be configured that are based on DateTime, temperature, proximity, and more.

Enhanced C# Beacon API

Xamarin.iOS has been able to detect iBeacons from any vendor since the feature was introduced with iOS 7 in CoreLocation. However, the Estimote SDK greatly simplifies the task of requesting permission and provides a simplified API for ranging and monitoring beacons. In addition, if you are using Estimote Beacons, you can tap into advanced features such as their accelerometer.

Learn More

The Estimote SDK for iOS has plenty of great samples for both Nearables and Beacons for you to start out with, including a full Getting Started Guide. In addition, Estimote has SDK reference documentation and a developer portal with more information.

If you are interested in adding iBeacon functionality to your Xamarin.Android apps, be sure to check the component store for multiple libraries that you can take advantage of.

Discuss this post on the Xamarin Forums.

February 21

Apple Watch Kit round-up

It's Saturday, a good excuse for a 'fun' post. Here's a little collection of tidbits about the Apple Watch...



Apple: Watch Kit - if you're thinking of developing for the platform, might as well start at the source :)

Wareable: The best Apple Watch apps... - some great screenshots of apps already being built, including Clear, BMW, and Nike. It's interesting to see the UI design approach being taken by different developers. Check out the similar list on ibtimes.com

FastCompany: How the Apple Watch will work... - a couple of thoughts on app design, and screenshots of Todoist.

eleks labs' unofficial Tesla app - more design thoughts and a prototype video (unofficial development, not affiliated with Tesla).

Daring Fireball: On the Pricing of the Apple Watch - so yeah, "starting at $349" sounds like it's going to be the understatement of the year.

WatchKit FAQ - Awesome collection of questions and answers (and cute watch drawings too).

MartianCraft: Designing for the Apple Watch with Briefs - even if you don't use the tool (which looks great) this is a lovely post on Watch app design.

If that's got you interested in building apps for the Apple Watch, it's time to check out Xamarin's Watch Kit Preview and how to get started (incl. video), plus my first watch app.



I've also got a couple of samples, including Magic 8 Ball, Calculator, Insta, and Todo for you to try.

^ watch frame screenshots generated with Bezel thanks to the fine folks at infinitapps.

February 20

Nordic Game Jam 2015

A couple of weeks ago, several of us from the Unity Copenhagen office took part in the Nordic Game Jam. With around 730 participants, it’s probably the largest game jam in Europe. I’d been told in the past that I absolutely had to try it, but all the other game jams I’d been to before were much smaller, so I didn’t know what to expect. Here’s what went down!

People from different countries flew in to Copenhagen for the two day jam that took place at Aalborg University, which consists of these two enormous buildings by the water connected by a bridge. The view from the bridge was beautiful and great to catch the sunrise from on a clear day!

[Photos: Polish invasion! Live DJ set at the NGJ pre-party]

While the actual jam kicked off on a Friday, we started getting into Nordic Game Jam mode the day before. A large group of game devs from Poland came to visit us at the Unity Copenhagen office, in what later became known as the Polish Invasion. After a day of hanging out, we gathered the troop and went to the NGJ pre-party in Christiania, where lots of dancing, playing indie games like Progress, and catching up with friends took place. There were a couple of game journalists accompanying the game devs, and they wrote a nice piece about their visit to our office.

Once in place at Aalborg University for NGJ, we set up a booth where participants could stop by and chat with our HR manager Anders about landing a job at Unity and get temporary Unity tattoos! NGJ interviewed Anders about working at Unity.

While part of the audience may have been a bit tired after the pre-party, you could sense the atmosphere of excitement the next day. I gathered a group of friends from Sweden, Poland and Germany, which turned out to be a really cool team.

The theme of the game jam, “OBVIOUS”, was revealed after a day filled with talks, including one from James Portnow of Extra Credits and a keynote from Steve Swink. What’s pretty cool about game jams in general is that anyone can participate, whether you have created several games or are completely new to game development. Extra Credits recently worked with us on a series of videos about getting started with game development, and I believe game jams to some degree fill the same purpose.

Everyone split up into their groups, moved into rooms or spaces for development, started brainstorming game ideas, and worked out what skills everyone had that could be put to use. You could hear lively discussions going on and feel the atmosphere of creative minds interchanging genius thoughts. Different groups had different methods of getting their thought processes going; pinning googly eyes on pineapples and spontaneous dancing took place.

The NGJ organisers made sure any type of game could be created during the jam. There was equipment for creating arcade games, material for board games, 3D printers, Oculus DK2’s, joysticks, and so on. The best part was being able to use the sound lab, which looked insane when I walked in the first time, a dark room covered in enormous spikes pointing directly at you, so quiet I could hear my own thoughts. I could use this room when performing the voice acting for the role of a pregnant woman in our game, which was a pretty interesting experience as well. Getting to scream as loud as I could in a room all by myself is not an everyday activity. My voice did however take quite a beating and I was still recovering the week after from a sore throat. Totally worth it.

Though there were teams that got started on their games on Friday night, my team first decided on a specific game idea during our Saturday morning meeting. It really was a matter of “our deadline to decide on something is before lunchtime and after that we work work work.” And so we did. Feeling confident, we all popped back up into the space we’d taken over the night before and started producing. Having quite a large group, we split the areas up well, so each person was able to dedicate their time to art, code, design, or audio. Group sizes at NGJ varied from 2 to 6 people. A few lovely souls also jumped between teams to help out in any area they could, which was super awesome.

Several groups stayed up late or pulled all-nighters to finish up their games for Sunday’s submission deadline. I believe around 140 games were submitted, so presentations took place in separate rooms where participants were able to vote for their favourite. Several of the games were made using Unity, and you can play many of them on NGJ15’s itch.io site. One of the best things about game jams is that you never know the outcome of what people are working on; the projects typically start out pretty comprehensible, but can quickly turn ridiculous, which makes them that much more memorable.

A ceremony was held after the absolute final voting had taken place and the jury had made their decisions on which games were the best in each category. Awards were handed out, speeches were given, songs were sung and everyone was happy with the results. A great game jam, making new friends and just having a swell time is the summary of the weekend.

But before you stop reading, check out a couple of my favourite submissions:

Look at my drawing

Press F to Win

Hest til fest

Double Trouble

There were many more good games; you can find a complete list and play some of them on NGJ15’s itch.io site.

Here are a couple of games made by the teams that included some Unity folks:

Gone

Express Delivery

Black Hole Battle: not #madewithunity, but a board game instead!

Once again, a big thank you to the organisers for creating such a fun and memorable event. We look forward to participating next year, which also happens to be NGJ’s 10-year anniversary!

February 19

Unity 4.6.3: Metal rendering support and update of IL2CPP for iOS

Today we shipped the public release of Unity 4.6.3. You can get it on our download page. With this release, we’re bringing iOS Metal rendering support to Unity 4.x. Unity 4.6.3 is the first Unity 4.x public version supporting both critical features in the iOS world: iOS 64-bit via IL2CPP and Metal rendering. Unity 4.6.3 also brings critical updates to IL2CPP for iOS 64-bit.

What is Metal rendering?

It is a new low-level rendering API developed by Apple for iOS 8 and later. It focuses on doing less in GPU drivers, so the CPU overhead of making Metal calls is minimal. This way, games can consume less CPU time and do more fancy stuff in the freed-up time.

Here’s a short description from Apple:

“Metal provides the lowest-overhead access to the GPU, enabling you to maximize the graphics and compute potential of your iOS 8 app. With a streamlined API, precompiled shaders, and support for efficient multi-threading, Metal can take your game or graphics app to the next level of performance and capability.”

For more information, please consult the official Apple Metal rendering developer site.

How to enable Metal rendering?

To bring Metal support, Unity takes care of most of the things that happen behind the scenes. Metal will be used by default on capable devices. If you want more control, you can find the Graphics API selector in Player Settings, with values like Automatic, Metal, OpenGL ES 3.0, and OpenGL ES 2.0:

If you want to detect whether you're running on Metal at runtime, do something like if (SystemInfo.graphicsDeviceVersion.StartsWith("Metal")).
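
In context, that check might live in a small script like this (a sketch; SystemInfo.graphicsDeviceVersion is standard Unity API, the class name is made up):

using UnityEngine;

public class GraphicsApiLogger : MonoBehaviour
{
    void Start ()
    {
        // graphicsDeviceVersion reports the active API, e.g. "Metal",
        // "OpenGL ES 3.0" or "OpenGL ES 2.0".
        if (SystemInfo.graphicsDeviceVersion.StartsWith ("Metal"))
            Debug.Log ("Running on Metal");
        else
            Debug.Log ("Running on " + SystemInfo.graphicsDeviceVersion);
    }
}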

We worked really hard to make Metal usage as seamless as possible, but please report issues if you run into them!

Update of IL2CPP on iOS 64-bit

Unity 4.6.3 is a critical update to IL2CPP on iOS 64-bit:

  • Fifty fixes were made for various bugs and crashes. We are very grateful for your feedback which enabled us to move and iterate fast.

  • Missing .NET class support was added for ThreadPool, asynchronous sockets, and WebRequest.

  • Added support for async delegates (BeginInvoke/EndInvoke).

We are committed to fixing and improving IL2CPP support for iOS 64-bit in further Unity 4.6.x patches and public releases, as well as in Unity 5, so if you have any issues, do not hesitate to report them and ping us on the forums.

Other goodies

The Unity 4.6.3 release is not limited to Metal rendering and IL2CPP on iOS. It has a number of fixes and improvements for Android, iOS, 2D, animation, shaders, UI, and more. For a full list of changes, please consult the release notes.

Code Sharing Strategies for iOS & Mac

I fell in love with the Xamarin approach to mobile development because sharing code across platforms provided me with huge productivity gains when developing apps. With Xamarin you can share an average of 75% of app code across all mobile platforms, and in this blog post I’m going to give you some strategies to help you share even more code between iOS and OS X. If you’ve recently developed an app for iPhone or iPad using the traditional approach (sans Xamarin.Forms), you might be surprised to learn how much code you can share between iOS and OS X. With the Mac platform becoming increasingly popular, there’s never been a better time to consider whether your apps could benefit from targeting a new platform. Let’s learn some tips and tricks for sharing more code between the platforms, all in the context of C#.

General Code Sharing Strategies

It’s common knowledge that both iOS and OS X share a common architecture, which results in a great code sharing story. Many classes are compatible on both platforms without modification and with the recent release of the Unified API, we’ve made it even easier to share code between OS X and iOS.

Before we get started on linking all of our existing Xamarin.iOS code in a new Xamarin.Mac project, we need to first look at what we should share and what should remain platform dependent. The most common architectural pattern for iOS and OS X development is the Model View Controller (MVC) pattern. This increases the amount of code reuse in our app as many of the models and controllers will still be relevant regardless of the underlying platform.

Conditional Compilation (Shimming)

Sharing our view code is a little more involved, but it is possible with a couple of techniques used by Apple in apps such as Keynote. Your existing iOS apps will be using the UIKit namespace, which is the framework that provides the window and view architecture needed to manage your app’s user interface. Here you will find labels, buttons, colors, and other classes that you’ll build your app’s UI with. UIKit is only available on iOS, which means any code utilizing this framework will not run on the Mac without some modification. Let’s take a look at the simple example of sharing colors between platforms.

iOS

var xamarinBlue = UIColor.FromRGB(0.26f, 0.83f, 0.31f);

Mac

var xamarinBlue = NSColor.FromCalibratedRgba(0.26f, 0.83f, 0.31f, 1f);

The above example is fairly consistent with what I find when looking at customers’ projects. With a little bit of trickery, we can share colors between platforms. Apple calls this “shimming” but you might know it as “conditional compilation”.

public class Color
{
#if __MAC__
    public static NSColor FromRGB(nfloat r, nfloat g, nfloat b)
    {
        return NSColor.FromCalibratedRgba(r, g, b, 1f);
    }
#endif
#if __IOS__
    public static UIColor FromRGB(nfloat r, nfloat g, nfloat b)
    {
        return UIColor.FromRGB(r, g, b);
    }
#endif
}

Now, in both our iOS and OS X apps, we can use the following to create our blue color.

var xamarinBlue = Color.FromRGB(0.26f, 0.83f, 0.31f);

If this is running on iOS, it will return a UIColor, and on OS X we will get an NSColor. This is an approach I apply in many areas of UIKit and AppKit. You could, for example, extend this to UIImage and NSImage.

public static CGImage ToCGImage(string imageName)
{
#if __MAC__
    return NSImage.ImageNamed(imageName).CGImage;
#endif
#if __IOS__
    return UIImage.FromFile(imageName).CGImage;
#endif
}

Sharing Your UI with CALayers

If you want to maximize your code sharing, then you might want to investigate using CALayers. UIViews are built on CALayers, which can be accessed via the Layer property of the UIView. The benefit of using CALayers over UIViews is that CALayers can very easily be ported to OS X, and there is no performance loss compared with using UIViews or NSViews. Apple’s Keynote canvas uses CALayers, which allows them to share over 1M LOC between OS X and iOS.

In the example below, I’ve inherited from a CALayer and overridden the DrawInContext method to get the view setup. I set the background color, using my shimming method, to be purple. I then override the HitTest, which allows me to respond to touch or click events. In this sample, I want to change the background color of the layer every time the user interacts with it. Despite being a basic example, this code works on both iOS and OS X without any modification.

public class ColorChanger : CALayer
{
    int count = 0;

    public override void DrawInContext(CGContext ctx)
    {
        base.DrawInContext(ctx);
        BackgroundColor = Color.FromRGB(0.65f, 0.22f, 0.72f).CGColor;
        count = 0;
        this.Contents = Image.ToCGImage("xamagon.png");
    }

    public override CALayer HitTest(CGPoint p)
    {
        switch (count)
        {
            case 0:
                BackgroundColor = Color.FromRGB(0.2f, 0.52f, 0.89f).CGColor;
                count++;
                break;
            case 1:
                BackgroundColor = Color.FromRGB(0.26f, 0.83f, 0.31f).CGColor;
                count++;
                break;
            case 2:
                BackgroundColor = Color.FromRGB(0.65f, 0.22f, 0.72f).CGColor;
                count = 0;
                break;
        }
        return base.HitTest(p);
    }
}

Conclusion

With Xamarin, you’ve always been able to share approximately 75% of your code between the different platforms, and now with the above tips you can share even more. If you’re looking for basic drawing between platforms, you may find Frank Krueger’s CrossGraphics library useful, as it allows for drawing graphics on Android, iOS, Mac, Windows, and ASP.Net using .Net.

February 18

Xamarin App Video Spotlight: Curse Inc.

At Xamarin Evolve 2014, I had the opportunity to speak with Xamarin customer Curse, a multimedia technology company that builds websites and software for gamers. With 50 million users on their websites and 6 million users on their desktop client, Curse turned to Xamarin to help them build out Mac, Android, and iOS apps for their new product, Curse Voice.

Watch the video below to get a better understanding of how the Curse team was able to take their existing Windows code and get their innovative Curse Voice apps up and running quickly on Mac, iOS, and Android with Xamarin.

Learn More

Try out Curse Voice, from Curse, here.

To get started developing with the Xamarin platform, check out our developer documentation, or get live online training with Xamarin University.

Working with Physically-Based Shading: a Practical Approach

Throughout the development of Unity 5, we’ve used our Viking Village project internally as a testing ground for shading and lighting workflows.

If you’re using the Unity 5 beta, you can download the Viking Village package from the Asset Store to get insights into how you can assemble and illuminate a scene in Unity 5. We also present some of our learnings below.

Creating a template environment

In order to ensure that your texturing and shader configuration behaves appropriately, we recommend that you use a simple scene with a variety of lighting setups. This could mean differing skyboxes, lights, etc. – anything that contributes to illuminating your model.

When you open Unity 5, you’ll notice that any new empty scene has a procedural sky as well as default ambient and reflection settings. This provides a suitable starting point.

For our template environment we used:

  • HDR camera rendering

  • A few scattered reflection probes (for localized reflections on objects)

  • A group of light-probes

  • A set of HDR sky-textures and materials, as well as procedural skies. The sky which ships with this project was custom-made for Unity by Bob Groothuis, author of Dutch Skies 360.

  • Off-white directional lights with matched intensity and HDR sky color

Adjusting sky texture panoramas

Most sky textures include the sun (along with flares, etc.); thus, light from the sun gets reflected by surfaces. This has the potential to cause three issues:

1) The Directional light you use to represent the sun must match the exact direction of the sun painted onto the skybox or there will be multiple specular hotspots on the material.

2) The reflected sun and the specular hotspot overlap, causing intense specular highlights.

3) The baked-in sun reflection is not occluded when the surface is in shadow and it becomes overly shiny in darkness.

The sun is erased from the sky texture and re-added using a directional light and a lens flare.

As a result, the sun highlight, flares, sunrays and HDR values need to be edited out of the sky texture and reapplied using Directional Lights.

Authoring physically-based shading materials

To avoid the guesswork involved in emulating real world materials, it is useful to follow a reliable known reference. The Standard Shader supports both a Specular Color and a Metallic workflow. Both define the color of the reflections leaving the surface. In the Specular workflow, the color is specified directly, whilst in the Metallic workflow, the color is derived from a combination of the diffuse color and the metallic value set in the Standard Shader controls.

For the Viking Village project, we used the Standard Shader’s Specular Color Workflow. Our calibration scene, which you can download from the Asset Store, includes some handy calibration charts. We referenced the charts regularly when designing our materials.

When approaching materials, you can choose between what we call the Specular and the Metallic workflows, each with its own set of values and a reference chart. In the Specular workflow you choose the color of the specularly reflected light directly; in the Metallic workflow you choose whether the material behaves like a metal when it is illuminated.
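
For illustration, here’s how the two workflows map onto Standard Shader properties when materials are created from script (a sketch using Unity 5’s built-in shader and property names; the colors and values are placeholders, not the project’s actual materials):

using UnityEngine;

public class WorkflowExample : MonoBehaviour
{
    void Start ()
    {
        // Metallic workflow: reflection color is derived from albedo + metallic.
        var copper = new Material (Shader.Find ("Standard"));
        copper.color = new Color (0.95f, 0.64f, 0.54f);
        copper.SetFloat ("_Metallic", 1f);
        copper.SetFloat ("_Glossiness", 0.8f); // smoothness

        // Specular workflow: the specular reflection color is chosen directly.
        var wood = new Material (Shader.Find ("Standard (Specular setup)"));
        wood.color = new Color (0.49f, 0.35f, 0.24f);
        wood.SetColor ("_SpecColor", new Color (0.05f, 0.05f, 0.05f)); // dark gray: insulator
        wood.SetFloat ("_Glossiness", 0.4f);

        GetComponent<Renderer> ().material = copper;
    }
}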

The specular value chart:

[Image: specular value chart]

The metallic value chart:

[Image: metallic value chart]

Choosing between the Specular and Metallic workflows is largely a matter of personal preference; you can usually get the same result whichever workflow you choose to use.

Aside from charts and values, gathering samples of real world surfaces is highly valuable. It is of great help to find the surface type you are trying to imitate and try to get an understanding of how it reacts to light.

Setting up the material

When starting out, it’s often useful to create a plain but tweakable representation of the materials using colors, values and sliders derived from the calibration charts. Then, you can apply textures while keeping the original material as a reference to confirm that characteristics are preserved.

Top row: untextured. Bottom row: textured. Left to right: Rock, Wood, Bone, Metal.

The traditional approach to creating textures

Textures in the Viking Village were authored both using traditional manual methods (photos + tweaking) and from scanned diffuse/albedo, specular, gloss, and normal map images provided to us by Quixel.

Be careful when adding detail in the texture channels of the material. For example, it usually pays to avoid placing lighting (Ambient Occlusion, shadows etc.) in your textures: remember that the physically based rendering approach provides all the lighting you should need.

Naturally, retouching photographs is more demanding than using scanned data, especially when it comes to PBS-friendly values. There are tools that make the process easier, such as Quixel Suite and Allegorithmic Substance Painter.

Scanned data

PBS-calibrated scanned textures alleviate the need for editing, since data is already separated into channels and contains values for albedo, specular and smoothness. It is best if the software that provides the PBS-calibrated data contains a Unity profile for export. You can always use the reference charts as a sanity check and as a guide if you need to calibrate the values using Photoshop or a related tool.

Material examples

The Viking Village scene features a large amount of content while trying to stay within a reasonable texture memory budget. Let’s take a look at how we set up a 10-meter-high wooden crane as an example.

Notice that many textures, especially specular and diffuse textures, are homogeneous and require different resolutions.

Example 1: This crane object has 2 materials: 2 diffuse, 1 specular-smoothness, 2 occlusion, and 2 detail textures.

Example 2: The shield prop has 1 material: 1 diffuse, 1 specular-smoothness, 1 occlusion, and no detail textures.

On the left: crane Inspector for both materials. Rightmost: the shield prop material.

  • Albedo texture: In the specular workflow it represents the color of diffuse light bounced off the surface. It does not necessarily need to be highly detailed as seen in the left image (crane), whereas the right texture (shield) includes significant unique detail.

Painted crane diffuse map snippet with plain wooden color and intensity; contains a modest amount of detail. Right image: shield diffuse texture with higher (ppi) unique detail.
Diffuse value (no texture) for the crane material

  • Specular: Non-metals (insulators) are comparatively dark and in grayscale while metal values are bright and could be colored (remember that rust, oil and dirt on a metal are not metallic). Specular for the wood surface did not benefit extensively from a specular texture, so a value was used instead of inputting a map.

Crane specular values for wood.

Crane specular map for metal (not using the metallic shader). Right: shield specular texture.

  • Smoothness is a key element in PBS materials. It contributes variation, imperfections and detail to surfaces and helps represent their state and age.
    For the crane, smoothness happened to be fairly constant across the surface and was therefore substituted by a value. This delivered a reasonable texture memory gain.

Crane smoothness values for wood. No textures required!

Crane smoothness map for metal (not using the metallic shader). Right: shield smoothness map with mixed metal and wood surfaces.

  • Occlusion indicates how exposed different points of the surface are to the light of the surrounding environment. Ambient Occlusion brings out surface detail and depth by muting ambient and reflection in areas with little indirect light.
    Keep in mind that there’s also the option of using SSAO (Screen Space Ambient Occlusion) in your scene. Using SSAO and AO could result in double darkening of certain areas, in which case you may want to consider treating the AO map as a cavity map.
    An AO map that would emphasise deep cracks and creases may be the best option if the game uses SSAO and/or lightmapped Ambient Occlusion.

1: Lightmapped AO, 2: Occlusion texture, 3: Occlusion in diffuse, 4: Image-effect SSAO

Secondary Textures and resolution

Secondary Textures can be used to increase the level of detail or provide variation within the material. They can be masked using the Detail Mask property.

Because the primary diffuse wood texture in the Crane example is low resolution, the secondary texture set is crucial: it adds the fine detail to the final surface. In this instance, the detail maps are tiled and kept at a reasonably low resolution. They are reused on many other wooden surfaces, delivering a major texture memory saving.

Secondary albedo and normal maps compensate for the low-resolution main diffuse and normal maps. Both textures reduce overall texture memory by being widely “overlaid” and tiled on wooden surfaces throughout the village. Be cautious about baking lighting information into a diffuse detail map, as this has a similar effect to adding such information to the primary diffuse.

Crane wooden surface with (left) and without (right) secondary texture maps.
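Along the same lines, here is a hedged sketch of wiring a shared, tiled detail set into the Standard Shader’s secondary slots from script; the component name, texture fields and tiling factor are illustrative rather than the project’s actual assets.

using UnityEngine;

public class SharedWoodDetail : MonoBehaviour
{
    public Texture2D detailAlbedo;   // shared secondary albedo map
    public Texture2D detailNormal;   // shared secondary normal map

    void Start()
    {
        var mat = GetComponent<Renderer>().material;
        mat.SetTexture("_DetailAlbedoMap", detailAlbedo);
        mat.SetTexture("_DetailNormalMap", detailNormal);
        // Tile the detail set far more densely than the primary UVs so one
        // small texture covers many large wooden surfaces.
        mat.SetTextureScale("_DetailAlbedoMap", new Vector2(8f, 8f));
        mat.EnableKeyword("_DETAIL_MULX2");   // enable the detail path
    }
}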

These workflows certainly helped us when designing the Viking Village project. We hope you also find them useful, and look forward to reading your comments!

Acknowledgements

The Viking Village project was launched in partnership with the creative team at Quixel, developer of HDR surface capture technology and the Quixel Megascans library of PBS-ready textures.

Big thanks to the very talented Emmanuel “Manu” Tavares and Plamen “Paco” Tamnev for bringing this scene to life.

Go and download the project at the Asset Store. Be aware that it’s optimised for Unity 5.0.0 RC2. Pre-order customers and subscribers can download this beta version of Unity here, for Mac and Windows.

February 17

StepCounter Gets in Shape

Last year we announced the release of My StepCounter for iOS. Since then, the iOS landscape has changed considerably: HealthKit was announced, two new iPhones have been introduced, and the iPad now supports the same CoreMotion API as the iPhone.

The original app was designed purely with the iPhone 5s in mind, as this was the only supported hardware available at the time. With an increase in the number of devices that support the step counting API, I thought it was time to make an update to ensure the app works perfectly on these new devices.

While updating the app, I opted to migrate the user interface from Apple’s older Xib format to a single Storyboard. With this change, you can now visualize how My StepCounter will look from within both Xamarin Studio and Visual Studio. The new approach is great for developers using Visual Studio, as it minimizes the time spent interacting with Xcode on their Mac build host.


Not only does My StepCounter now support Storyboards, but a number of images have also been replaced with custom views drawn using code generated from PaintCode. This change has cut down the number of artwork assets the app ships with, reducing the final binary size.

The benefit is huge: the app looks great on any screen size without the binary bloating from additional images. One of my favorite things about using a tool like PaintCode is that the control is live-rendered within the storyboard designer, so you can instantly see how your app will look without deploying to the simulator or a device.
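To give a flavor of the approach, here is a hedged Xamarin.iOS sketch of a control drawn entirely in code; PaintCode emits similar UIKit/CoreGraphics drawing calls. The class name and visual design are invented for illustration and are not taken from My StepCounter.

using System;
using System.Drawing;
using MonoTouch.UIKit;

public class StepRingView : UIView
{
    // Fraction of the daily step goal reached (0..1); hypothetical property.
    public float Progress { get; set; }

    public override void Draw (RectangleF rect)
    {
        base.Draw (rect);
        // Background ring, resolution independent at any screen size.
        var ring = UIBezierPath.FromOval (RectangleF.Inflate (rect, -4f, -4f));
        ring.LineWidth = 6f;
        UIColor.LightGray.SetStroke ();
        ring.Stroke ();
        // Progress arc, starting at 12 o'clock and sweeping clockwise.
        var center = new PointF (rect.X + rect.Width / 2f, rect.Y + rect.Height / 2f);
        var radius = Math.Min (rect.Width, rect.Height) / 2f - 4f;
        var arc = UIBezierPath.FromArc (center, radius,
            -(float)Math.PI / 2f,
            -(float)Math.PI / 2f + Progress * 2f * (float)Math.PI, true);
        arc.LineWidth = 6f;
        UIColor.Blue.SetStroke ();
        arc.Stroke ();
    }
}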

A few other small additions to the app include integration with Xamarin Insights, a new share option so you can tweet or post your step count, and improved animations across the entire app.

All of the code is up on GitHub for you to download and explore today.

PlasticDrive – dynamic readonly workspaces as windows drives

PlasticDrive is a tool that mounts a changeset as a Windows drive and lets you quickly browse the code using your favorite tools (Visual Studio, Eclipse, IntelliJ…). Files are downloaded from the server on demand (then cached), so the mount happens immediately; there is no need to wait for a big update to finish.

Production workflow improvements

The upcoming Unity 5.0 release brings a myriad of new features. It also includes a number of improvements to workflows and production, especially for teams.

Scene Conflicts

One of the biggest annoyances when working in a team is conflicts in scene files. These can easily happen when multiple people work on the same scene at the same time. Most often the culprit was all the objects that prefab instances would add to the scene file prior to Unity 5.0. Simply opening a scene and saving it again, without modifying anything, would make Unity write all the instance objects of every prefab to the scene file, which would all but guarantee conflicts if someone else did the same.

These conflicts in instance objects were often hard to resolve, because they involved IDs and other internal properties where the correct resolution was not apparent.

The fact is that we don’t really need to write all the objects a prefab instantiates to the scene file, but for historical reasons we did so up until Unity 5.0.

In Unity 5.0, instance objects are no longer written to the scene file; only the prefab object containing the instance modifications is written, along with any instance objects that are referenced from within the scene. Referenced instance objects are written in a stripped form, purely to ensure consistent reference IDs.

Prefab instance data in the scene file: Unity 4.6 (left) vs. Unity 5.0 (right).

This is illustrated in the image above. I created a simple scene in Unity 4.6 containing a prefab with a Cube, a Sphere and a Capsule GameObject. On the left you can see how all the instance objects are written to the scene; on the right, only the prefab object with its instance modifications is stored in the scene file.

The only prefab-related conflicts will now be in the prefab modifications, which means two people modified the same prefab instance in a scene; those conflicts should be a lot simpler to resolve.

A second source of conflicts that are hard to resolve, and a long-standing problem for teams, is lightmapping.

In Unity 4.6, lightmapping properties such as lightmap offset and scaling are stored in each Renderer included in the lightmap, which means they end up in the scene file. If two artists were changing the same scene and baking lightmaps, these properties would almost certainly conflict, producing lots of tiny conflicts all over the scene file even when the changes to the GameObjects themselves did not clash. You can’t simply pick one version of the scene file, because you might lose work done in the other; yet resolving each and every conflict could be even more time consuming than making that choice and redoing some of the work.

As of Unity 5.0, all generated lightmapping values, such as UV offsets, UV scaling and lightmap indices, are stored in a separate asset. If two people modify a scene and then bake the lighting, the lightmapping asset will of course conflict, but chances are the changes in the scene file will not, and if they do they should be much easier to resolve. The lighting will still need to be rebaked, but that is most likely less time consuming than resolving the conflict or redoing modifications in the scene.

Scene Merging Tool

Of course there will still be situations where scene files conflict, but with prefab instances and lightmapping properties eliminated, these conflicts will mostly stem from actual changes made by an artist and should be much simpler to resolve. To make this even easier, we have developed a scene merging tool that understands the semantics of a scene file and merges scenes based on objects rather than simple text comparison. The tool works as a pre-merger: you can set up your version control system to merge a scene with it before launching a three-way merge tool, or, if you are using the built-in VCS support in Unity, you can enable the scene merging tool through the Editor Settings.

Of course, all of the above only applies when you are using text serialisation. If you keep scenes and other serialised files in binary format, conflicts cannot be resolved and scenes cannot be merged.
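For reference, the scene merger ships alongside the editor as UnityYAMLMerge. A typical git fallback configuration, roughly following Unity’s Smart Merge documentation (the executable path varies per install, so it is left as a placeholder below), looks like this:

[merge]
    tool = unityyamlmerge

[mergetool "unityyamlmerge"]
    trustExitCode = false
    cmd = '<path to UnityYAMLMerge>' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"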

Tracking scene dirty state

Conflicts are not the only annoyance: constantly being asked to save the scene because it is marked dirty, even though you did not make any changes, is another. Things like changing the docking position of a window or inspecting certain assets could make the scene appear dirty when you obviously had not changed anything in it. Indirect changes such as rebaking the lightmapping would also mark the scene dirty. In Unity 4.6 this was actually correct behaviour, but since all lightmapping-related data has moved to an asset, Unity will no longer require you to save the scene for it.

To fix the issue of falsely marking the scene as dirty, we have moved the tracking of the scene’s dirty state to the undo system. Everything you do related to scenes in the editor is undoable. When a new item is added to the undo stack, the UndoManager inspects it to see whether the change relates to the scene or to an asset. If it relates to the scene, the UndoManager increases the dirtiness of the scene. When you then perform an undo, the dirtiness actually decreases, so you can make a bunch of changes to your scene, undo them all, and the scene will no longer be marked as dirty. Redoing will, of course, make the scene dirty again.
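To make the bookkeeping concrete, here is a small illustrative sketch in plain C#. It is not Unity source code, just the counting idea described above: scene-related undo items increase a dirtiness counter, undo decreases it, and the scene is clean when the counter returns to zero. All names are hypothetical.

using System.Collections.Generic;

enum ChangeTarget { Scene, Asset }

class UndoItem { public ChangeTarget Target; }

class UndoManagerSketch
{
    readonly Stack<UndoItem> undoStack = new Stack<UndoItem>();
    readonly Stack<UndoItem> redoStack = new Stack<UndoItem>();
    int sceneDirtiness;   // 0 means the scene is clean

    public bool SceneIsDirty { get { return sceneDirtiness != 0; } }

    // Called whenever the editor records an undoable change.
    public void Record(UndoItem item)
    {
        undoStack.Push(item);
        redoStack.Clear();
        if (item.Target == ChangeTarget.Scene) sceneDirtiness++;
    }

    // Undoing a scene change lowers dirtiness; undoing everything
    // brings the counter back to zero and the scene reads as clean.
    public void Undo()
    {
        if (undoStack.Count == 0) return;
        var item = undoStack.Pop();
        redoStack.Push(item);
        if (item.Target == ChangeTarget.Scene) sceneDirtiness--;
    }

    // Redo makes the scene dirty again, mirroring the behaviour above.
    public void Redo()
    {
        if (redoStack.Count == 0) return;
        var item = redoStack.Pop();
        undoStack.Push(item);
        if (item.Target == ChangeTarget.Scene) sceneDirtiness++;
    }
}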

But wait, there is more

The hierarchy window and scene view selection have been heavily optimized, which makes Unity feel more responsive when working in large scenes. 

AudioClip now supports multi-selection editing, so you no longer have to change the same property on all clips one by one. Just select all the clips you want to change and get it over with all at once.

On first import of FBX files, the scaling settings in the file are now taken into account, so the workflow no longer requires you to import an FBX, change the scaling and then reimport it. If the scale settings in the file are set up correctly, the model should look the way you intended after the first import.

Last but not least, the editor is moving to 64-bit on all platforms, which means working with projects that previously produced out-of-memory issues should become less frustrating.

In conclusion, the changes mentioned above, and many more besides, should make your day-to-day work with Unity even more productive and smooth.

February 16

Pollen VR: Developing high-end visuals with Unity 5

More stories from the adventures of an EMEA field engineer! Today I want to share the development of Pollen VR. Mindfield Games are using Unity 5’s new graphics features, and I have been able to follow their progress very closely.

I spoke with Ville Kivistö, CEO and Co-Founder of Mindfield Games, who is also their technical coder.

What Unity 5 features are you utilising for Pollen VR to achieve such high visual quality?

The new physically based shading and realtime lighting are the definitive features of Unity 5, and we are using them to the fullest. We have material maps for basically every surface, and due to design restrictions, our environments need to be dynamically lit. With the new global illumination everything looks gorgeous, and combined with reflection probes, the surfaces really come alive. Having a 64-bit editor is also crucial, as developing a high-end PC game can consume a huge amount of memory. We used to have tons of out-of-memory crashes with Unity 4, but those days are long gone.


What techniques did you use for the foliage?

Foliage is a rather simple GPU particle effect. What makes it interesting, though, is how we got the lighting to work with it properly. As Graphics.DrawProcedural doesn’t integrate into Unity’s lighting passes and the new CommandBuffer API doesn’t support compute buffers, we had to come up with a somewhat funky solution.

We have a cube the size of the foliage bounds, so we know that whenever the cube is visible, the foliage needs to be rendered as well. Whenever the cube’s OnWillRenderObject() is called, we render the compute buffer particles to two render targets in a single pass with MRT, using the settings of the currently rendering camera. One texture holds diffuse and roughness data, the other normal and depth. When we get to rendering the actual cube, the cube shader receives those buffers as parameters and outputs the corresponding data. Depth is written manually, so we get perfectly Z-clipped output. The lighting of the leaves is affected by all lights and shadows, making them look exactly the way we want.

And because all the leaves are GPU particles, animating them is really cheap. They aren’t just a static mesh, but can react to the environment realistically (naturally, with some limitations).
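Here is a hedged sketch of the setup Ville describes; it is not Mindfield’s actual code, and the component, material, buffer and texture names are all assumptions. The compute buffer is assumed to be filled elsewhere by a compute shader, and the cube shader that consumes the two targets is not shown.

using UnityEngine;

[RequireComponent(typeof(MeshRenderer))]   // the cube sized to the foliage bounds
public class FoliageProxy : MonoBehaviour
{
    public Material particleMaterial;      // writes to both targets in one pass
    public ComputeBuffer particleBuffer;   // filled elsewhere by a compute shader
    public int particleCount;

    RenderTexture diffuseRough;            // RGB: diffuse, A: roughness
    RenderTexture normalDepth;             // RGB: normal,  A: depth

    // Unity calls this whenever any camera is about to render the cube,
    // which is exactly when the particle G-buffer data must be refreshed.
    void OnWillRenderObject()
    {
        var cam = Camera.current;
        EnsureTargets(cam.pixelWidth, cam.pixelHeight);

        // Bind both textures as MRT and draw every particle in a single pass.
        var mrt = new[] { diffuseRough.colorBuffer, normalDepth.colorBuffer };
        Graphics.SetRenderTarget(mrt, diffuseRough.depthBuffer);
        GL.Clear(true, true, Color.clear);

        particleMaterial.SetBuffer("particles", particleBuffer);
        particleMaterial.SetPass(0);
        Graphics.DrawProcedural(MeshTopology.Points, particleCount);

        // The cube's own shader then samples these targets, outputs the
        // corresponding data and writes depth manually for correct Z clipping.
        RenderTexture.active = null;
    }

    void EnsureTargets(int width, int height)
    {
        if (diffuseRough != null && diffuseRough.width == width && diffuseRough.height == height)
            return;
        diffuseRough = new RenderTexture(width, height, 24, RenderTextureFormat.ARGBHalf);
        normalDepth  = new RenderTexture(width, height, 0,  RenderTextureFormat.ARGBHalf);
    }
}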


Any techniques or workflows you can share with Global Illumination?

For us it worked very well straight out of the box. The default values provide a good balance between bake times and quality. For day-to-day use we keep quite conservative values to enable quick baking, but our automatic build system scales them up to produce even better quality GI for our nightly build.

Are you using a mixture of realtime and baked lighting?

Our game design requires that all lighting is realtime. We don’t have a single static lightmap in the game.

In Unity 4, we used to work around that restriction by placing dozens of ambient point lights around the scene, trying to fake global illumination that way. In the end it was horrible, because the maintenance and design were very tedious and time consuming, and it was pretty much impossible to keep all the lights from leaking through the walls.

The number of draw calls and the fillrate requirements caused by so many lights were also getting quite high, which hampered the framerate. When we got our hands on the Unity 5 beta, we simply enabled Enlighten and removed all the fake GI point lights; everything looked better, ran faster, and GI worked perfectly in real time when disabling or animating lights.

We had also built a custom cubemap reflection system, with box projection and everything. It worked rather well, but using it required dozens of custom shaders, and the editor side was constantly missing features. We are very happy that the new reflection probes have essentially replaced our own system: they require less maintenance and the workflow is much simpler.


How did you achieve the glow effect in the corridors?

We have a few different volumetric effects that we use throughout the game. For spotlights we like to use Robert Cupisz’ implementation, which gives very nice volumetrics with very good performance. As spotlights are very versatile, it’s easy to use them for height-based fog as well.

In some parts of the game you might see fluid volumetrics, for which we use Fluidity from the Asset Store. We use it at all scales, from a lighter’s flame to filling a room with gas. It looks awesome, and physics simulations always look yummy.

For outdoors, we use our own custom post-process volumetric fog solution, as we want the player to feel the density of Titan’s atmosphere.


What tools do you use to generate your PBR Textures?

Adobe Photoshop and Quixel Suite along with some reference materials. We have our own “playground” scene where artists can inspect models in various lighting conditions.

Have you made modifications to Unity’s new Standard Shader? If so, any tips’n’tricks?

As our base is not tied to a single point in time and space, we need to be able to render it differently. Being a small indie studio, we don’t have the resources to build multiple sets of assets that look different depending on their age. Our solution was to create an extended version of the new Standard Shader that adds “grittiness” to materials. Best of all, it works for all objects, and with the new material pipeline we can apply custom grittiness per material type. Just one shader and a few lines of code, and we can change the look and feel completely.

Unity doesn’t currently support code injection into the Standard Shader, so our solution is simply to copy the built-in shader and keep our code in include files; if the Standard Shader changes, we only have to write a couple of #includes back into strategic places.


How did you deal with anti-aliasing?

As MSAA doesn’t cope that well with deferred rendering, we have to rely on post-processing solutions. The one we have chosen is SMAA, as it provides a nice, clean resolve with good performance. Even though it lacks subpixel and temporal anti-aliasing, the final result is good enough even for the Oculus Rift.

Unity 5’s new graphics features are high-end; how has performance been?

Unity 5’s new MRT-based deferred path increased the framerate by roughly 30% compared to the legacy two-pass deferred path. Our draw call counts can climb quite high, and skipping a pass helps a lot to keep them down. As virtual reality is very important to us (we recommend playing Pollen with the Oculus Rift), it’s crucial that the framerate be as high as possible. We therefore provide as many options as possible for tweaking the visuals to match the player’s hardware and framerate requirements. At maximum settings Pollen can really turn on those GPU fans, but you can scale the options down a bit and play on an older GPU with the Oculus Rift if you like.


Any advice on optimisation for such a large scene?

Unity’s occlusion culling is very efficient and handles most of what we need. However, occlusion culling deals only with rendering, and only in play mode.

As we have spent a lot of time making everything behave physically correctly, be it books, a basketball, a microwave or a punching bag, we have a huge number of physics objects in the game. Unfortunately, Umbra doesn’t help us much with physics, so we wrote our own custom portal/culling system. Because the rooms of our moon station are neatly separated by safety doors, we were able to build a simple portal system based on those doors: we simply disable all the rooms the player can’t see. This helps with physics, and even with Umbra, as it has less culling work to do. In the editor we can also easily activate a single room to keep draw calls and poly counts low and the editor responsive.
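A minimal sketch of that door-based idea follows; the class and field names are hypothetical, and the real system doubtless tracks door state and player position more carefully.

using UnityEngine;

public class SafetyDoor : MonoBehaviour
{
    public GameObject roomA;   // the two rooms this door separates
    public GameObject roomB;
    public bool isOpen;

    // Call when the player changes rooms or the door opens/closes.
    public void UpdateRooms(GameObject roomWithPlayer)
    {
        // The player's room is always active. A room behind a closed door
        // is disabled entirely, which culls its renderers *and* its physics.
        roomA.SetActive(roomA == roomWithPlayer || isOpen);
        roomB.SetActive(roomB == roomWithPlayer || isOpen);
    }
}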

Out of the box, Unity 5 handled everything else for us; we didn’t need to do any additional optimisation for our large scenes. Everything just worked!

Thanks to Ville for talking to me. I can’t wait to see this game released!

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.
