Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

February 10

Continuous Integration with Bitrise

At Xamarin, we realize that great CI is core to the mobile development experience, which has led us to work with leading mobile CI provider Bitrise to bring the benefits of continuous integration to your Xamarin apps. As a special promotion, we’re offering three months of Bitrise service for free, simply for being a valued Xamarin customer. There are only a few open spots for this promotion, so act now to participate.

When you set up your Xamarin apps on Bitrise, you can:

  • Build your app every time you commit new code to your repo
  • Validate your build by running your automated acceptance tests on Xamarin Test Cloud
  • Have your app monitored against alpha and beta channels of Xamarin Platform, helping ensure breaking changes are caught before they affect you

Sign up for this offer here to get started today.

The post Continuous Integration with Bitrise appeared first on Xamarin Blog.

Valve Brings SteamVR to the Unity Platform

Today, during the opening keynote of the inaugural Vision VR / AR Summit, Valve and Unity Technologies announced a new collaboration to offer native support for SteamVR in the Unity Platform, giving developers new reach at no extra cost. Additionally, we will be adding a new VR rendering plugin to further enhance functionality.

The collaboration means that all of Unity’s developers will have access to native support for Valve’s upcoming SteamVR platform. Beyond SteamVR support, Valve has developed an advanced rendering plugin for Unity to further enhance fidelity and performance, bringing consumers more realistic experiences.

Valve co-founder Gabe Newell announced the news during a special video address at the Vision Summit, adding: “We made many of our Vive demos using Unity, and continue to use it today in VR development. Through that process, and in working with VR developers, we found some opportunities to make Unity even more robust and powerful for us and really want to share those benefits with all VR content creators.”

Unity CEO John Riccitiello went on to discuss the news during the Vision Summit opening keynote: “Valve and Unity are both dedicated to creating the highest quality VR experiences possible. That means giving developers every possible chance to succeed, and our collaboration with Valve is designed to do just that.”

Valve will also be providing a talk at Vision, and to celebrate the launch, they are surprising every developer at the conference with a free HTC Vive Pre, the latest SteamVR development system. For more information, please visit http://visionsummit2016.com/

February 9

Join Apple Co-Founder Steve Wozniak at Xamarin Evolve 2016

Join us at Xamarin Evolve 2016 for a once-in-a-lifetime opportunity to see Apple co-founder and computing pioneer Steve Wozniak take the stage to share his unique perspective on the software industry’s past, present, and future.

An electronics genius in his early teens, Steve Wozniak (“Woz”) teamed up at age 19 with 14-year-old Steve Jobs for some hacking, which ultimately turned into the Apple I, a kit computer for hobbyists. Based on early interest, they came out with the Apple II and launched one of the most successful personal computing technologies in the world.


Fast forward nearly 40 years, and mobile technology is ushering in another huge shift in how we work and play. “I see a lot of similarities between the dawn of personal computing and what’s happening at this beginning of the mobile era,” says Wozniak. “I’m excited to be a part of Xamarin Evolve, speaking with the developers who are leading one of the most important revolutions in human computer interaction we’ve ever seen.”

Woz joins a great lineup of Xamarin Evolve speakers who will cover a broad array of mobile topics:

  • Grant Imahara, former MythBusters co-host, will speak about the convergence of robotics and mobile, drawing from his experience with movies and television, including R2-D2 and The Energizer Bunny.
  • Legendary technical author Charles Petzold, whose book Programming Windows taught an entire generation of developers how to code, will present on mobile app architectures.
  • Josh Clark, renowned UX designer and author of Tapworthy: Designing Great iPhone Apps and Designing for Touch, will help attendees reimagine user experiences that are finger and thumb-based, focusing on how handheld devices demand entirely new design patterns.
  • In addition to the general conference sessions, Julie Ask, Vice President and Principal Analyst at Forrester Research and co-author of the book The Mobile Mind Shift: Engineer Your Business to Win in the Mobile Moment, will present at this year’s invite-only Xamarin Evolve Mobile Leaders Summit.

Don’t miss this chance to see Woz in person at Xamarin Evolve 2016!
 

Register Now

The post Join Apple Co-Founder Steve Wozniak at Xamarin Evolve 2016 appeared first on Xamarin Blog.

Mono 4 upgrade

As you probably know by now, Plastic SCM requires Mono, the open source .NET Framework implementation, in order to run on non-Windows platforms: Linux, Mac, OpenBSD... Our public repositories have distributed Mono version 3.0.3 so far, which we thoroughly tested to be completely sure it could run Plastic SCM flawlessly.

However, as our GTK# GUI grew, we bumped into compatibility problems and minor issues that prevented users from having the best Plastic SCM experience. This is the reason why we decided to upgrade the shipped Mono version to version 4.3.0. Quite a leap, isn't it? :D

We've built new packages to allow Fedora, RedHat/CentOS, OpenSUSE, Debian and Ubuntu users to easily upgrade their installations, starting with version 5.4.16.726. These new packages will conflict with the previous Mono 3 ones, so the upgrade operation might require manual package dependency resolution. Don't worry, we'll walk you through it!

Alternatively, if you've never installed Plastic SCM on your Linux machine before, you just need to follow the installation instructions available on our website.

Important: This walkthrough assumes that both Plastic SCM and SemanticMerge are currently installed on your machine. If that’s not the case, you can of course adapt the instructions to skip the software you don’t currently have.

sql-migrate slides

I recently gave a small lightning talk about sql-migrate (a SQL Schema migration tool for Go), at the Go developer room at FOSDEM.

Annotated slides can be found here.


Launching a successful product on the Asset Store

Have you ever thought about making some extra money on the Asset Store? Or do you just want more people to see your awesome art and tools? How do you get attention without spamming everybody you’ve ever met and their grandma? Get advice from three publishers who’ve mastered the art of promoting their amazing creations! Put together, they’ve grossed more than $75,000 since October 2015.

There’s no silver-bullet solution for all your marketing and communication needs. Everything depends on what you’re making and who you’re talking to. So take these examples more as inspiration and less as a to-do list.

First of all, how do you even decide what to make? It seems the best way to success is taking the long road: looking into something that inspires you, running lots of experiments, and talking to the community about what you’re making.

“I have always enjoyed beautiful environments like Skyrim and with Unity 5, it is possible to create them. But the tools to do this are difficult to learn and use, so I decided to make my own. It’s been a long journey and three rewrites later, I have finally launched Gaia,” says Adam Goodrich, whose terrain editing tool took the Asset Store by storm in the fall of 2015.


Similarly, Pärtel Lang has been experimenting with physics-based ragdoll behaviors since 2011, but didn’t actually release PuppetMaster until November 2015. “Developing PuppetMaster has been more like an idée fixe to me than a rational strategy,” he says.

None of the developers I talked to did any sort of sophisticated market research before embarking on making their hit assets. “I’ve learned that assets which come with a specific art theme are not bestsellers, because they target only people who are interested in that particular style. But I’m not in this business just for the money – releasing good quality products and personal satisfaction are equally important,” says Tom Lassota of Beffio, who recently released the slick looking Space Journey set.

That doesn’t mean, however, that it’s best to just rely on your gut. Adam Goodrich admits that the current version of Gaia is much better due to the input from fellow Unity developers. “Take advantage of the Works In Progress forum while you are in beta – show your work, demonstrate value and invite comments. Get people involved in the process. I really can’t stress how important this is! Gaia is a much better tool today thanks to them. As a bonus, I now have a bunch of new friends.”

Around a third of his development time was focused on usability. Understanding the negative feedback was key. “The people with the biggest issues using your tool are your greatest asset. Put yourself into their shoes and find a way to solve their problem,” Adam Goodrich says.


Pärtel Lang started his own “Works In Progress” thread much later, roughly a month before the expected launch. It started to slowly gather interest and questions from potential users: “The best way to answer those questions is to make some showcase videos and post them on a YouTube channel and the forum thread. That will keep interested people coming back to the thread, keep it visible and help explain what the product can and can’t do.”

Using social media is important, but not vital. For successful Asset Store publishers, it’s a way of getting their name out there and building what marketing pros call “brand awareness”. Adam Goodrich sees a lot of potential there, if you have something to say: “Facebook, YouTube and Twitter are an awesome way to grow awareness. Create interesting and relevant stories and images. If someone takes the time to engage with you then it’s a gift – engage right back!”

For Tom Lassota, social channels helped raise awareness for Beffio, but didn’t directly lead to sales: “There still might be some channels for actually boosting the sales, but I have not found them yet – so make sure you let me know when you do!”


Pärtel Lang hardly ever promotes his assets: “I have tweeted about PuppetMaster just once, so other than the Unity Community and my YouTube channel I have nothing.”

Their approach to creating promotional content is also very different. An artist needs images and videos that show his assets in the best light, even if it means spending a lot of extra resources. “I’ve spent 30% of total production time on creating marketing materials alone. Eye-catching design is a must. For a customer, the first glance is the most important. If you don’t have a knack for creating promotional art, I would strongly recommend hiring somebody who could create quality materials for you instead,” says Tom Lassota.

However, the opposite can be true for an editor extension, as Pärtel Lang found out: “Once I bought some assets from the Store and really spent a lot of time to make a proper cinematic video, but that has not received nearly as many views as the simple voice-over tutorials I made later. Tutorials are really the best way to show the asset in depth, are really appreciated by the clients and considerably help to relieve my support load. So just cut the music and the fancy effects, turn on the microphone and speak to the people – it can do wonders.”

The last thing to remember before you release your assets is to set up Google Analytics. “Check out the demographics of who is looking at your pages and work out where they are coming from. This will provide you with useful audiences to engage further with,” recommends Adam Goodrich.

Just like with design, coding and a lot of other aspects of game development, the best thing to do when deciding on the optimal marketing strategy is to run a lot of tests and get the data you need to make a rational decision.

February 8

Contest: Show Us Your Favorite C# 6 Feature

C# 6 includes a wealth of new features to make you a more productive developer, simplify your code, and vastly increase readability. This builds on an already great foundation that makes C# the best language for mobile development, such as easy asynchronous operations with async/await, generics support, and more.

If you use Xamarin Studio or Visual Studio to build applications, you’re already using C# 6, so you can start incorporating these features into your applications today! In this contest, we invite you to share your favorite C# 6 feature in action.

How to Enter

To enter, tweet a photo of your favorite C# 6 feature being used in your code, along with the hashtags #Xamarin and #CSharp, as seen below:
 

Prize

The first 50 participants with a valid entry will receive an adorable plush Xamarin code monkey!

Xamarin code monkey hard at work on his own tiny MacBook

Need some ideas?

Check out Xamarin University’s “What’s New in C# 6” video, or pick from one of the features listed below:

  • Expression-bodied functions
  • String interpolation
  • Null-conditional operators (elvis operators)
  • Auto properties
  • Property initializers
  • Using static
  • Nameof expressions
  • Index initializers
  • Exception filters
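For a taste of how several of these features combine, here is a small console sketch (the Monkey class and its members are purely illustrative):

```csharp
using System;

class Monkey
{
    // Auto-property with an initializer
    public string Name { get; set; } = "Xam";

    // Expression-bodied property using string interpolation
    public string Greeting => $"Hello, {Name}!";

    // Expression-bodied method combining the null-conditional
    // ("elvis") operator with null-coalescing
    public static int NameLength(Monkey m) => m?.Name?.Length ?? 0;
}

class Program
{
    static void Main()
    {
        var monkey = new Monkey();
        Console.WriteLine(monkey.Greeting);          // "Hello, Xam!"
        Console.WriteLine(Monkey.NameLength(null));  // 0, no NullReferenceException
        Console.WriteLine(nameof(Monkey.Name));      // "Name", refactor-safe
    }
}
```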

Rules

All submissions must be made by Monday, February 22 at 12 pm EST. A valid entry consists of a tweet containing the hashtags above, along with an image of your favorite C# 6 feature in action in your mobile application. The first 50 submissions with a valid entry will receive a plush Xamarin code monkey. To be eligible, you must follow @XamarinHQ so we can DM you for private follow-up. There is no purchase necessary to enter the “Show Us Your Favorite C# 6 Feature” contest.

The post Contest: Show Us Your Favorite C# 6 Feature appeared first on Xamarin Blog.

February 5

Podcast: Simplify Your Code With C# 6

This week on the Xamarin Podcast, Mike and I are joined by James Montemagno to give an overview of all of the fantastic features introduced in C# 6 to simplify your code and bring readability to a new level.

Subscribe or Download Today

Knowing the latest in .NET, C#, and Xamarin is easier than ever with the Xamarin Podcast! The Xamarin Podcast is available from iTunes, Stitcher, and SoundCloud. Do you have an interesting story, project, or advice for other .NET mobile developers? If so, we’d love to share it with the Xamarin community! Tweet @pierceboggan or @MikeCodesDotNet to share your blog posts, projects, and anything else you think other mobile developers would find interesting. Be sure to download today’s episode breaking down all the awesome features of C# 6, and don’t forget to subscribe!

The post Podcast: Simplify Your Code With C# 6 appeared first on Xamarin Blog.

Report an Exception with Xamarin Insights Contest Winner

Proactively monitoring the health of your mobile apps is crucial to ensuring a bug-free, positive user experience. Xamarin Insights makes it extremely simple to do this by identifying what issues real users are facing and how to fix them. Xamarin Insights was promoted to general availability in Xamarin 4, giving all Xamarin subscribers access to free crash reporting with detailed crash reports and crashed-user identification.

Two weeks ago, we invited you to start monitoring the health of your apps by adding Xamarin Insights to your mobile app(s) with just a few lines of code and tweeting an unexpected exception discovered with your free crash reporting.

There were some exceptional entries, but I’m happy to announce that the winner of the “Report an Exception with Xamarin Insights” contest, and brand new Xamarin swag bag, is Ken Pespisa for his submission:


A big thank you from all of us here at Xamarin to everyone who entered the “Report an Exception with Xamarin Insights” contest and shared how Xamarin Insights came to the rescue for their mobile app! Everyone who submitted a valid entry will be receiving 10 Xamarin Test Cloud hours to help test their mobile apps on thousands of devices.

Didn’t get a chance to enter in this contest?

Be sure to follow us on Twitter @XamarinHQ to keep up with Xamarin announcements, walkthroughs, case studies, contests, and more!

The post Report an Exception with Xamarin Insights Contest Winner appeared first on Xamarin Blog.

The Making of The Robot Factory

We talked to Tinybop’s Rob Blackwood, lead iOS engineer, Jessie Sattler, production designer, and Cameron Erdogan, iOS engineer, about their experience using Unity to build The Robot Factory.

The Robot Factory was the 2015 iPad App of the Year on the App Store. It is the sixth app (of eight total, now) that the studio launched and the first in a new series of creative building apps for kids. As the first app in that series, it was also the first they built with Unity. It will also be the first Tinybop app made available for Apple TV.

Moving to Unity enabled the development and design teams to work together more quickly and efficiently. Rob Blackwood, lead iOS engineer, and Jessie Sattler, production designer, walk through how they work together to bring the app to life. Cameron Erdogan, junior iOS engineer, chimes in about preparing The Robot Factory for tvOS with Unity. Every app Tinybop builds is a learning process that helps refine and improve their approach to the next app.

Building tools for development & design

Rob: As software engineers, it’s our duty to architect solutions for all the concepts that make up the app. We need to identify the app’s systems and rules and implement them through code. For example, a system we created in The Robot Factory determines the way a robot moves. In an app about plants, we created systems to represent the different seasons of the deciduous forest. There are also many cases where we must create tools for tweaking these systems and rules. These tools can then be used by a production designer to create the right look and feel for the app.

Jessie: As a production designer at Tinybop, I’m in charge of putting together the visual elements that live inside the app (to put it simply). We commission a different illustrator for each app, which gives us a range of styles and techniques. It’s my job to translate all the artwork into interactive, moving parts. I build scenes in Unity, animate characters and objects, and make sure everything runs smoothly between the art/design and the programming/development of our apps.

Rob: When we began our first app, we were a very small team using a development environment that required pure programming to develop for. We developed a tool for layout and physics simulation but it was not very sophisticated. As our team grew, we realized we had a bottleneck on the engineering side since most everything had to be programmed and then built manually before it could be tested. Not having immediate visual feedback when developing also meant a lot more iteration on the code, a time-consuming task. Not having an automated build system, like Unity Cloud Build, meant an engineer had to sink time into manually delivering a build to devices or sending it up to Testflight.

Jessie: Our previous editor lacked a friendly interface for someone who wasn’t primarily working in the code. I relied heavily on the engineers to perform simple tasks that were not accessible to me. Unity has alleviated the engineers of menial production tasks, and at the same time enabled me to perfect things to the slightest detail. We also couldn’t see the result of what we were making until we built to device, whereas in Unity I can live preview the project as I work.

Rob: The most important thing Unity has done for us is allow us to easily separate engineering from production design. Unity is a very graphics-driven environment, which means production can do much of the visual layout before ever having to code a single line. This also allows us to continually integrate and iterate as the engineers develop more and more systems. The production team can get immediate feedback as they design because Unity lets you play and stop the app at any moment you like. We also use Unity Cloud Build, which lets us push new builds out to actual iOS devices as frequently as every 20 minutes. So, everyone can test and give feedback on the current state.

Jessie: Using Unity has made collaboration with engineers a dream! I can specify what visual effects I want to achieve. Then, we work together to build tools and scripts for me to use directly in the editor. The visual nature of Unity makes it much easier for me to have the control I need as an artist to get the projects to look the way we want them to. It also facilitates our iterative process. I can go back and forth with our engineers to find solutions that meet both our aesthetic and technical requirements.


In The Robot Factory, giving robots locomotion based on the parts they were created with was a big challenge. Using pure physics to move the robots made it too difficult to control, and having pre-planned walk cycles was boring and predictable. I worked with the engineers to create tools to draw the path of motion for each robot part, within a set of restraints, as well as each part’s gravity and rotational limits. We were able to maintain enough physics-based movement to get unique locomotion, but users still had enough control and part reliability to navigate their robots through a world.

Adapting for different apps & artwork

Rob: We’ve always given priority to the art and we try to not funnel the artist toward too many particulars, style-wise. This can sometimes be difficult from a technical standpoint because it means our strategies for creating the feel of animations and interactions often need to change. The Robot Factory artwork has a lot of solid-colored shapes and hard edges. We were able to identify a fairly small set of re-useable elements that could be combined to create most every robot part—each one comprising as many as 50 small pieces—that could then be animated independently. (This was important because real robots have a ton of moving parts, as everyone knows!) This contrasted sharply in our most recent app, The Monsters, where we wanted the monsters kids created to appear more organic and even paintable. In this instance, we created actual skeletons and attached something akin to skin so that they could bend naturally and be colored and textured dynamically when a child interacted with it. So while there are many challenges to adapting to different artistic styles, the benefit is that we are much closer to the artist’s vision, which is always more interesting.


Jessie: Each illustrator brings a different style, thus a different set of challenges for every app. On the production side, we have to decide what aspects of the art are integral to keep intact, and what can be translated through simulations and programmatically generated art. A lot comes down to balancing three needs: widely scoped content, efficient development, and beautiful visuals. It’s a big challenge to create reusable techniques that we can carry over app to app. Many instances call for unique solutions. Where we would rig, bone, and animate meshed skeletons in one case, another app might need large, hi-res sprites, or small repeatable vector shapes and SVGs. Having disparate techniques means longer production time, but because we deem the quality of art and design in our apps so important, it is a necessary step in the process.

Moving on over to tvOS with Unity

Cameron: I didn’t have to change much code to get The Robot Factory up and running on Apple TV. After downloading the alpha build of Unity with tvOS, I made a branch off of our original app’s repository. After a day or two, I was able to get the app to compile onto the Apple TV. I had to remove a few external, unsupported-on-Apple TV libraries to get it to work, but the majority of the work was done by Unity: I merely switched the build target from iOS to tvOS. Pretty much all of the classes and frameworks that work with Unity on iOS work on tvOS, too.

After I got it to compile, I had to alter the controls and UI to make sense on TV. To do that, I used Unity’s Canvas UI system, which played surprisingly nicely with the Apple TV Remote. The last main thing I did was add cloud storage, since Apple TV has no local storage. To do that, I wrote a native iOS plug-in, which again was integrated easily with Unity.


Looking ahead

Jessie: We currently build apps with 2D assets in 3D space. This allows us to create certain dimensional illusions that help bring life to our apps. I’ve been experimenting with using more 3D shapes in our apps and working with new 3D particle controls in Unity 5.3. I’m excited about tastefully enhancing 2D worlds with 3D magic.

Rob: As we look to the future, we’d like to expand our apps to even more platforms. Unity attempts to make this step as seamless as possible by exporting to multiple platforms with just a little bit of additional engineering on our end. Like our experience moving to the tvOS platform, we hope Unity will do much of the heavy lifting for us. And by the way, we’re hiring senior Unity engineers right now. If you love Unity and building advanced simulations, look us up at http://www.tinybop.com/jobs.

The Robot Factory is available for iOS and Apple TV on the App Store:

Congratulations to Tinybop and thanks for sharing your story.

February 4

Consulting Partners Bring Real-World Experiences to Xamarin Evolve 2016

Since launching the Xamarin Consulting Partner program in 2012, the network has grown to over 350 partners worldwide. We’re excited to showcase the expertise from the following partners at Xamarin Evolve 2016 and we encourage you to attend to learn from these successful companies.

Zühlke: Is Your App Secure?

There’s a lot of discussion about security on the web, but what about app security? What do developers need to look out for when attempting to write a secure app? How should we handle sensitive data? What should we consider when designing an API consumed by a mobile app?

Kerry W Lothrop, Lead Software Architect at Zuehlke Group will demonstrate the different security aspects that Android and iOS developers should be aware of, the corresponding infrastructure to consider at the beginning of their projects, and some techniques to help ensure a secure mobile app.

Magenic: Understanding Implications of Build Options

Xamarin.iOS and Xamarin.Android have several build options that can have a large impact on runtime performance, compile times, and even the size of an app binary. What changes when I switch between linker options or select the SGen generational garbage collector? Should I enable incremental builds? Kevin Ford, Mobile Practice Lead at Magenic, will compare these different options and discuss how to prepare your libraries for linking, or how to deal with a library that wasn’t. Understanding these build options can have huge benefits for the application you deploy.

Pariveda Solutions with their client Compass Professional Health Services: Healthcare Redefined: How We Used Xamarin to Make Healthcare Simpler and Smarter

With the rise of mobile technology and consumers’ desire to manage their healthcare via mobile devices, Compass saw an opportunity to transform their business and brought in technology consulting firm Pariveda Solutions to help them execute their mobile-first vision.

Given the diversity of the Compass client base (from truck drivers to CEOs), the first design consideration was that the app had to support multiple devices from the beginning. Working with Pariveda, Compass was able to meet the goal of deploying across multiple devices, while also significantly reducing development time, lowering testing costs, and enabling data-backed decisions from analytics captured with Xamarin Insights.
You won’t want to miss the expertise shared by these partners at Xamarin Evolve 2016, so be sure to register today to reserve your spot!

Register Now

The post Consulting Partners Bring Real-World Experiences to Xamarin Evolve 2016 appeared first on Xamarin Blog.

February 3

Easy App Theming with Xamarin.Forms

Beautiful user interfaces sell mobile apps, and designing a successful user experience for your app is a great first step for success. But what about all of the little details that combine to create a fantastic design, such as colors and fonts? Even if you create what you believe to be the perfect design, users will often find something to dislike about it.

Why not let the user decide exactly how they would like their app to look? Many popular apps have taken this approach. Tweetbot has light and dark modes and the ability to change fonts to find the one that works best on the eyes during late-night Twitter sessions. Slack takes user customization to the next level by allowing users to customize the entire theme of the app through hexadecimal color values. Properly supporting theming also brings some tangible benefits to code, such as minimizing duplicated hardcoded values throughout apps to increase code maintainability.

Xamarin.Forms allows you to take advantage of styling to build beautiful, customizable UIs for iOS, Android, and Windows. In this blog post, we’re going to take a look at how to add theming to MonkeyTweet, a minimalistic (we mean it!) Twitter client, by replicating Tweetbot’s light and dark mode as well as Slack’s customizable theming.

Introduction to Resources

Resources allow you to share common definitions throughout an app to help you reduce hardcoded values in your code, resulting in massively increased code maintainability. Instead of having to alter every value in your app when a theme changes, you only have to change one: the resource.

In the code below, you can see several duplicated values that could be extremely tedious to replace and are ideal candidates for using resources:
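A hypothetical layout in that spirit, with the same color and size values hardcoded on every control (the labels are illustrative):

```xml
<StackLayout BackgroundColor="#33302E">
    <Label Text="Monkeys love C#!" TextColor="White" FontSize="24" />
    <Label Text="So do we." TextColor="White" FontSize="24" />
    <Label Text="MonkeyTweet agrees." TextColor="White" FontSize="24" />
</StackLayout>
```

Changing the theme here means touching every single attribute by hand.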

Resources are grouped together and stored in a ResourceDictionary, a key-value store that is optimized for use with a user interface. Because a ResourceDictionary is a key-value store, you must supply the XAML keyword x:Key for each resource defined:

<ResourceDictionary>
    <Color x:Key="backgroundColor">#33302E</Color>
    <Color x:Key="textColor">White</Color>
    <x:Double x:Key="fontSize">24</x:Double>
</ResourceDictionary>

You can define a ResourceDictionary at both the page and app-level, depending on the particular scope needed for the resource at hand. If a particular resource will be shared among multiple pages, it’s best to define it at the app-level in App.xaml to avoid duplication, as we do below with the MonkeyTweet app:


<Application xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MonkeyTweet.App">
    <Application.Resources>
        <ResourceDictionary>
            <Color x:Key="backgroundColor">#33302E</Color>
            <Color x:Key="textColor">White</Color>
        </ResourceDictionary>
    </Application.Resources>
</Application>

Now that we have defined reusable resources in our application ResourceDictionary, how do we reference these values in XAML? Let’s take a look at the two main types of resources, StaticResource and DynamicResource, and how we can utilize them to add a light and dark mode to MonkeyTweet.

Static Resources

The StaticResource markup extension allows us to reference predefined resources, but it has one key limitation: resources from the dictionary are fetched only once, during control instantiation, and cannot be altered at runtime. The syntax is very similar to that for bindings; just set the property’s value to “{StaticResource Resource_Name}”. Let’s update our ViewCell to use the resources we defined:
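A sketch of what that might look like, assuming a simple cell bound to a tweet’s Text property:

```xml
<ViewCell>
    <StackLayout BackgroundColor="{StaticResource backgroundColor}" Padding="10">
        <Label Text="{Binding Text}"
               TextColor="{StaticResource textColor}" />
    </StackLayout>
</ViewCell>
```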

Dynamic Resources

StaticResources are a great way to reduce duplicated values, but what we need is the ability to alter the resource dictionary at runtime (and have those resource updates reflected where referenced). DynamicResource should be used for dictionary keys associated with values that might change during runtime. Additionally, unlike static resources, dynamic resources don’t generate a runtime exception if the resource is invalid and will simply use the default property value.

We want MonkeyTweet’s user interface to be able to switch between light and dark modes at runtime, so DynamicResource is perfect for this situation. All we need to do is change StaticResources to DynamicResources. Updating our resources on-the-fly is super easy as well:

App.Current.Resources ["backgroundColor"] = Color.White;
App.Current.Resources ["textColor"] = Color.Black;

Users can now switch between a light and dark theme with the click of a button:
Monkey Tweet with a dark and light theme applied via dynamic resources.
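A click handler for such a button might look like this (the handler and field names are assumptions, not taken from the sample):

```csharp
bool isDarkTheme = true;

// Hypothetical click handler for a theme-toggle button.
void OnToggleThemeClicked(object sender, EventArgs e)
{
    isDarkTheme = !isDarkTheme;

    // Any property bound with DynamicResource picks these up immediately.
    App.Current.Resources["backgroundColor"] = isDarkTheme ? Color.FromHex("#33302E") : Color.White;
    App.Current.Resources["textColor"] = isDarkTheme ? Color.White : Color.Black;
}
```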

Introduction to Styles

When building a user interface and theming an app, you may find yourself repeatedly configuring controls in a similar way. For example, all controls that display text may use the same font, font attributes, and size. Styles are a collection of property-value pairs called Setters. Rather than having to repeatedly set each of these properties to a particular resource, you can create a style, and then simply set the Style property to handle the theming for you.

Building Custom Styles

To define a style, we can take advantage of the application-wide resource dictionary to make this style available to all controls. Just like resources, each style must contain a unique key and target class name for the style. A style is made up of one or more Setters, where a property name and value for that property must be supplied. The TargetType property defines which controls the theme can apply to; you can even set this to VisualElement to have the style apply to all subclasses of VisualElement. Setters can even take advantage of resources to further increase maintainability.

<Application xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MonkeyTweet.App">
    <Application.Resources>
        <ResourceDictionary>
            <Color x:Key="backgroundColor">#33302E</Color>
            <Color x:Key="textColor">White</Color>
            <Style x:Key="labelStyle" TargetType="Label">
                <Setter Property="BackgroundColor" Value="{DynamicResource backgroundColor}" />
                <Setter Property="TextColor" Value="{DynamicResource textColor}" />
            </Style>
        </ResourceDictionary>
    </Application.Resources>
</Application>

We can apply this style by setting the Style property of a control to the name of the style’s unique key. All properties from the style will be applied to the control. If a property is explicitly defined on the control that is also part of a referenced style, the property set explicitly will override the value in the style.
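For example, assuming a style with the key labelStyle (the key name here is an assumption), applying it and explicitly overriding one of its setters could look like this:

```xml
<!-- The explicit TextColor wins over the style's TextColor setter. -->
<Label Text="{Binding Name}"
       Style="{DynamicResource labelStyle}"
       TextColor="Gray" />
```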

Our style is a dynamic resource behind the scenes, so it can be altered at runtime. I’ve created a custom page that allows users to enter their own hexadecimal colors to theme MonkeyTweet thanks to Xamarin.Forms resources and styles:


Conclusion

In this blog post, we took a look at theming applications with Xamarin.Forms styles by giving our MonkeyTweet application a customizable, user-defined theme. We only just scratched the surface of styling; there are lots of other cool things you can do with styling, including style inheritance, implicit styling, platform-specific styling, and prebuilt styles. Be sure to download the MonkeyTweet application to apply your own theme and see just how easy it is to build beautiful, themed UIs with Xamarin.Forms!

The post Easy App Theming with Xamarin.Forms appeared first on Xamarin Blog.

Live Webinar: Xamarin vs. Hybrid HTML: Making the Right Choice for the Enterprise

Selecting the right mobile platform for your enterprise can be a high-risk gamble that will affect thousands of your employees and millions of your customers. Building the right app will either digitally transform your business or derail your efforts and force you to start over while the industry and customers leave you behind.

The two leading choices for building cross-platform native apps are Xamarin or hybrid mobile solutions that utilize HTML and JavaScript. How do you know which option is the best fit for you? Which solution provides a superior user experience (UX), better performance, faster development time, full hardware access, and a lower TCO?

Magenic, a leading solution provider, built an enterprise-focused application using the Xamarin Platform and a hybrid HTML framework to quantitatively compare the differences between the two approaches. In this webinar, Steven Yi from Xamarin and Kevin Ford of Magenic will break down the essential advantages, roadblocks, and observations they found to help you make the best choice for your strategic mobile initiatives.

Sign up below to join us on Thursday, February 18, 2016 at 8:30 am PT / 11:30 am ET / 4:30 pm GMT.
 

Register

About the Speakers

Kevin Ford
Kevin Ford is the Mobile Practice Lead with Magenic, leading development with native mobile technologies, Xamarin, and Cordova. He has worked with application development using the Microsoft stack for over twenty years. He is an accomplished architect, speaker and thought leader.
 
 
Steven Yi, Xamarin
Steven Yi is the Head of Product Marketing at Xamarin. Prior to Xamarin he held senior leadership roles in product management and strategy for Microsoft Azure and Red Hat, as well as architecting and developing large-scale applications.

The post Live Webinar: Xamarin vs. Hybrid HTML: Making the Right Choice for the Enterprise appeared first on Xamarin Blog.

Light Probe Proxy Volume: 5.4 Feature Showcase

Unity 5.4 has entered beta and a stand out feature is the Light Probe Proxy Volume (LPPV). I just wanted to share with you all what it is, the workflow and some small experiments to show it in action.

Correct as of 30.01.2016 – Subject to changes during 5.4 beta.

What Is A Light Probe Proxy Volume?

The LPPV is a component which allows for more light information to be used on larger dynamic objects that cannot use baked lightmaps, think Skinned Meshes or Particle Systems. Yes! Particle Systems receiving Baked Light information, awesome!

How To Use The LPPV Component?

The LPPV component is a dependency of the Light Probe Group. The component is located under Component -> Rendering -> Light Probe Proxy Volume. By default, the component looks like this:
Light Probe Proxy Volume Component_1

It’s a component you will need to add to a GameObject, such as a Mesh or even a Light Probe Group. The GameObject you want to be affected by the LPPV needs to have a MeshRenderer / Renderer with its Light Probes property set to “Use Proxy Volume”:

Light Probe Proxy Volume Component_3

You can also borrow an existing LPPV component used by another GameObject via the Proxy Volume Override: just drag and drop it into the property field of each Renderer that should use it. For example, if you added the LPPV component to the Light Probe Group object, you can then share it across all renderers with the Proxy Volume Override property:

Use Proxy Volume

Setting up the Bounding Box:

There are three options for setting up your bounding box:

  • Automatic Local
  • Automatic World
  • Custom

Automatic Local:

Default property setting – the bounding box is computed in local space, and interpolated light probe positions are generated inside this box. The bounding box computation encloses the current Renderer and all the Renderers down the hierarchy that have the Light Probes property set to Use Proxy Volume (the same behaviour applies to Automatic World).

Light Probe Proxy Volume Component_1

Automatic World:

A world-aligned bounding box is computed. The Automatic World and Automatic Local options should be used in conjunction with the Proxy Volume Override property on other Renderers. Additionally, you could have a whole hierarchy of GameObjects that use the same LPPV component set on a parent in the hierarchy.

The difference between this mode and Automatic Local is that in Automatic Local the bounding box is more expensive to compute when a large hierarchy of GameObjects uses the same LPPV component from a parent GameObject, but the resulting bounding box may be smaller, meaning the lighting data is more compact.

Custom:

Empowers you to edit the bounding box volume yourself, by changing the size and origin values in the Inspector or by using the tools in the Scene view. The bounding box is specified in the local space of the GameObject. In this case, you will need to ensure that all the Renderers are within the bounding box of the LPPV.

Light Probe Proxy Volume Component

Setting Up Resolution / Density:

After setting up your bounding box, you then need to consider the density / resolution of the Proxy Volume. To set this, there are two options available under Resolution Mode:

Automatic:

Default property setting – set a value for the density, i.e. the number of probes per unit. The number of probes along each of the X, Y, and Z axes is then calculated from the bounding box size.

Custom:

Set up custom resolution values for the X, Y, and Z axes using the drop-down menus. Values start at 1 and increase in powers of 2 up to 32, so you can have up to 32x32x32 interpolated probes.

Interpolating Probes
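The same settings can also be driven from script; here is a sketch against the LightProbeProxyVolume scripting API (property names as of the 5.4 beta, so treat them as indicative):

```csharp
using UnityEngine;

// Sketch only: configures an LPPV from script.
public class LppvSetup : MonoBehaviour
{
    void Start()
    {
        var lppv = GetComponent<LightProbeProxyVolume>();

        // Switch from automatic density to an explicit grid.
        lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Custom;

        // Powers of 2, from 1 up to 32 per axis.
        lppv.gridResolutionX = 4;
        lppv.gridResolutionY = 4;
        lppv.gridResolutionZ = 4;
    }
}
```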

Performance Measurements To Consider When Using LPPV:

Keep in mind that interpolation costs around 0.15 ms of CPU time (i7 @ 4 GHz, at the time of profiling) for every batch of 64 interpolated light probes. Light probe interpolation is multi-threaded; anything less than or equal to 64 interpolated light probes will not be multi-threaded and will run on the main thread.

Using Unity’s built-in Profiler, you can see BlendLightProbesJob on the main thread in the Timeline viewer; if you increase the number of interpolated light probes to more than 64, you will see BlendLightProbesJob on the worker threads as well:

BlendLightProbesJob

With just one batch of 64 interpolated light probes, the work runs only on the main thread; with more batches (>64 probes), one is scheduled on the main thread and the others on the worker threads. This behaviour applies per LPPV, however: if you have many LPPVs with fewer than 64 interpolated light probes each, they will all run on the main thread.

Hardware Requirements:

The component will require at least Shader Model 4 graphics hardware and API support, including support for 3D textures with 32-bit floating-point format and linear filtering.

Sample shader for particle systems that uses ShadeSHPerPixel function:

The Standard shaders have support for this feature. If you want to add it to a custom shader, use the ShadeSHPerPixel function. Check out this sample to see how to use it:

Shader "Particles/AdditiveLPPV" {

Properties 
{
    _MainTex ("Particle Texture", 2D) = "white" {}
    _TintColor ("Tint Color", Color) = (0.5,0.5,0.5,0.5)
}

Category 
    {
    Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
    Blend SrcAlpha One
    ColorMask RGB
    Cull Off Lighting Off ZWrite Off

    SubShader 
    {
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_particles
            #pragma multi_compile_fog
            // Don’t forget to specify the target
            #pragma target 3.0

            #include "UnityCG.cginc"
            // You have to include this header to have access to ShadeSHPerPixel
            #include "UnityStandardUtils.cginc"

            fixed4 _TintColor;
            sampler2D _MainTex;

            struct appdata_t 
            {
                   float4 vertex : POSITION;
                   float3 normal : NORMAL;
                   fixed4 color : COLOR;
                   float2 texcoord : TEXCOORD0;
            };

            struct v2f 
            {
                   float4 vertex : SV_POSITION;
                   fixed4 color : COLOR;
                   float2 texcoord : TEXCOORD0;
                   UNITY_FOG_COORDS(1)
                   float3 worldPos : TEXCOORD2;
                   float3 worldNormal : TEXCOORD3;
            };

            float4 _MainTex_ST;
            v2f vert (appdata_t v)
            {
                  v2f o;
                  o.vertex = UnityObjectToClipPos(v.vertex);
                  o.worldNormal = UnityObjectToWorldNormal(v.normal);
                  o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                  o.color = v.color;
                  o.texcoord = TRANSFORM_TEX(v.texcoord,_MainTex);
                  UNITY_TRANSFER_FOG(o,o.vertex);
                  return o;
             }
            
             fixed4 frag (v2f i) : SV_Target
             {
                    half3 currentAmbient = half3(0, 0, 0);
                    half3 ambient = ShadeSHPerPixel(i.worldNormal, currentAmbient, i.worldPos);
                    fixed4 col = _TintColor * i.color * tex2D(_MainTex, i.texcoord);
                    col.xyz += ambient;
                    UNITY_APPLY_FOG_COLOR(i.fogCoord, col, fixed4(0,0,0,0)); // fog towards black due to our blend mode
                    return col;
             }
             ENDCG
         }
      }
   }
}

February 2

Turn Events into Commands with Behaviors

Utilizing data binding in mobile apps can greatly simplify development by automatically synchronizing an app’s data to its user interface with minimal set up. Previously, we looked at the basics of data binding, and then explored some more advanced data binding scenarios where values are formatted and converted as they are passed between source and target by the binding engine. We then examined a Xamarin.Forms feature called commanding, that allows data bindings to make method calls directly to a ViewModel, such as when a button is clicked.

In this blog post, I’m going to explore a Xamarin.Forms feature called behaviors, which in the context of commanding, enables any Xamarin.Forms control to use data bindings to make method calls to a ViewModel.

Introduction to Behaviors

Behaviors let you add functionality to UI controls without having to subclass them. Instead, the functionality is implemented in a behavior class and attached to the control as if it was part of the control itself. Behaviors enable you to implement code that you would normally have to write as code-behind, because it directly interacts with the API of the control in such a way that it can be concisely attached to the control and packaged for reuse across more than one app. They can be used to provide a full range of functionality to controls, from adding an email validator to an Entry, to creating a rating control using a tap gesture recognizer.

Implementing a Behavior

The procedure for implementing a behavior is as follows:

  1. Inherit from the Behavior<T> class, where T is the type of control that the behavior should apply to.
  2. Override the OnAttachedTo method and use it to perform any set up.
  3. Override the OnDetachingFrom method to perform any clean up.
  4. Implement the core functionality of the behavior.

This results in the structure shown in the following code example:

public class CustomBehavior : Behavior<View>
{
	protected override void OnAttachedTo (View bindable)
	{
		base.OnAttachedTo (bindable);
		// Perform setup
	}
	protected override void OnDetachingFrom (View bindable)
	{
		base.OnDetachingFrom (bindable);
		// Perform clean up
	}
	// Behavior implementation
}

The OnAttachedTo method is fired immediately after the behavior is attached to the UI control. This method is used to wire up event handlers or perform other set up that’s required to support the behavior functionality. For example, you could subscribe to the ListView.ItemSelected event and execute a command when the event fires. The behavior functionality would then be implemented in the event handler for the ItemSelected event.

The OnDetachingFrom method is fired when the behavior is removed from the UI control and is used to perform any required clean up. For example, you could unsubscribe from the ListView.ItemSelected event in order to prevent memory leaks.

Consuming a Behavior

Every Xamarin.Forms control has a behavior collection to which behaviors can be added, as shown in the following code example:

<Editor>
	<Editor.Behaviors>
		<local:CustomBehavior />
	</Editor.Behaviors>
</Editor>

At runtime the behavior will respond to interaction with the control, as per the behavior implementation.

Invoking a Command in Response to an Event

In the context of commanding, behaviors are a useful approach for connecting a control to a command. In addition, they can also be used to associate commands with controls that were not designed to interact with commands. For example, they can be used to invoke a command in response to an event firing. Therefore, behaviors address many of the same scenarios as command-enabled controls, while providing a greater degree of flexibility.

The sample application contains the ListViewSelectedItemBehavior class, which executes a command in response to the ListView.ItemSelected event firing.

Implementing Bindable Properties

In order to execute a user specified command, the ListViewSelectedItemBehavior defines two BindableProperty instances, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
	public static readonly BindableProperty CommandProperty =
            BindableProperty.Create ("Command", typeof(ICommand), typeof(ListViewSelectedItemBehavior), null);
	public static readonly BindableProperty InputConverterProperty =
            BindableProperty.Create ("Converter", typeof(IValueConverter), typeof(ListViewSelectedItemBehavior), null);
	public ICommand Command {
		get { return (ICommand)GetValue (CommandProperty); }
		set { SetValue (CommandProperty, value); }
	}
	public IValueConverter Converter {
		get { return (IValueConverter)GetValue (InputConverterProperty); }
		set { SetValue (InputConverterProperty, value); }
	}
    ...
}

When this behavior is consumed by a ListView, the Command property should be data bound to an ICommand to be executed in response to the ListView.ItemSelected event firing, and the Converter property should be set to a converter that returns the SelectedItem from the ListView.

Implementing the Overrides

The ListViewSelectedItemBehavior overrides the OnAttachedTo and OnDetachingFrom methods of the Behavior<T> class, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
    ...
	public ListView AssociatedObject { get; private set; }
	protected override void OnAttachedTo (ListView bindable)
	{
		base.OnAttachedTo (bindable);
		AssociatedObject = bindable;
		bindable.BindingContextChanged += OnBindingContextChanged;
		bindable.ItemSelected += OnListViewItemSelected;
	}
	protected override void OnDetachingFrom (ListView bindable)
	{
		base.OnDetachingFrom (bindable);
		bindable.BindingContextChanged -= OnBindingContextChanged;
		bindable.ItemSelected -= OnListViewItemSelected;
		AssociatedObject = null;
	}
    ...
}

The OnAttachedTo method subscribes to the BindingContextChanged and ItemSelected events of the attached ListView. The reasons for the subscriptions are explained in the next section. In addition, a reference to the ListView the behavior is attached to is stored in the AssociatedObject property.

The OnDetachingFrom method cleans up by unsubscribing from the BindingContextChanged and ItemSelected events.

Implementing the Behavior Functionality

The purpose of the behavior is to execute a command when the ListView.ItemSelected event fires. This is achieved in the OnListViewItemSelected method, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
    ...
	void OnBindingContextChanged (object sender, EventArgs e)
	{
		OnBindingContextChanged ();
	}
	void OnListViewItemSelected (object sender, SelectedItemChangedEventArgs e)
	{
		if (Command == null) {
			return;
		}
		object parameter = Converter.Convert (e, typeof(object), null, null);
		if (Command.CanExecute (parameter)) {
			Command.Execute (parameter);
		}
	}
	protected override void OnBindingContextChanged ()
	{
		base.OnBindingContextChanged ();
		BindingContext = AssociatedObject.BindingContext;
	}
}

The OnListViewItemSelected method, which is executed in response to the ListView.ItemSelected event firing, first executes the converter referenced through the Converter property, which returns the SelectedItem from the ListView. The method then executes the data bound command, referenced through the Command property, passing in the SelectedItem as a parameter to the command.

The OnBindingContextChanged override, which is executed in response to the ListView.BindingContextChanged event firing, sets the BindingContext of the behavior to the BindingContext of the control the behavior is attached to. This ensures that the behavior can bind to and execute the command that’s specified when the behavior is consumed.

Consuming the Behavior

The ListViewSelectedItemBehavior is attached to the ListView.Behaviors collection, as shown in the following code example:

<ListView ItemsSource="{Binding People}">
	<ListView.Behaviors>
		<local:ListViewSelectedItemBehavior Command="{Binding OutputAgeCommand}"
            Converter="{StaticResource SelectedItemConverter}" />
	</ListView.Behaviors>
</ListView>
<Label Text="{Binding SelectedItemText}" />

The Command property of the behavior is data bound to the OutputAgeCommand property of the associated ViewModel, while the Converter property is set to the SelectedItemConverter instance, which returns the SelectedItem of the ListView from the SelectedItemChangedEventArgs.
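The converter itself isn't reproduced in this post; a minimal implementation might look like the following (the class name is an assumption, and the sample app may differ in detail):

```csharp
using System;
using System.Globalization;
using Xamarin.Forms;

public class SelectedItemEventArgsConverter : IValueConverter
{
    // Unwrap the SelectedItem from the event args the behavior passes in.
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var eventArgs = value as SelectedItemChangedEventArgs;
        return eventArgs != null ? eventArgs.SelectedItem : null;
    }

    // One-way conversion only; the behavior never converts back.
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}
```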

The result of the behavior being consumed is that when the ListView.ItemSelected event fires due to an item being selected in the ListView, the OutputAgeCommand is executed, which updates the SelectedItemText property that the Label binds to. The following screenshots show this:

ExecuteCommand

Generalizing the Behavior

It’s possible to generalize the ListViewSelectedItemBehavior so that it can be used by any Xamarin.Forms control, and so that it can execute a command in response to any event firing, as shown in the following code example:

<ListView ItemsSource="{Binding People}">
	<ListView.Behaviors>
		<local:EventToCommandBehavior EventName="ItemSelected" Command="{Binding OutputAgeCommand}"
            Converter="{StaticResource SelectedItemConverter}" />
	</ListView.Behaviors>
</ListView>
<Label Text="{Binding SelectedItemText}" />

For more information, see the EventToCommandBehavior class in the sample application.

Wrapping Up Behaviors

To wrap up: behaviors let you connect any control to a command, even a control that was never designed to interact with commands, by invoking the command in response to an event firing. This makes them a flexible alternative to command-enabled controls, all without resorting to code-behind.

For more information about behaviors, see our Working with Behaviors document.

The post Turn Events into Commands with Behaviors appeared first on Xamarin Blog.

Texturing high-fidelity characters: Working with Quixel Suite for Unity 5

Following the release of the assets from our demo “The Blacksmith”, we got many requests for more information on the artistic side of creating high-end characters.

Check out this tutorial, recently published by the team at Quixel.
It shows how to texture your character with Quixel Suite and set it up in Unity 5.

The video uses the main character from “The Blacksmith” demo as a sample, and covers the process step-by-step, with useful tips along the way.

If you want to try it yourself, remember you can download the characters and the environments from “The Blacksmith” from the Unity Asset Store, free of charge, and you are welcome to use them for any purpose, including commercial.

Happy texturing!

February 1

Xamarin Events in February

Join one of these many user groups, conferences, webinars, and other events to help celebrate something we all love this February — native mobile development in C#!

February 2016 Banner
Here are just a handful of the many developer events happening around the world this month:

SWETUGG se

  • Stockholm, Sweden: February 2
  • Get Started Building Cross-Platform Apps with Xamarin (Xamarin MVP Johan Karlsson speaking)

XLSOFT Japan Japan

  • Tokyo, Japan: February 5
  • Xamarin with NuGet and CI

Xamarin Costa Rica Mobile .NET Developer Group cr

  • San Jose, Costa Rica: February 8
  • Introduction to Mobile Development with C#/.NET and Xamarin

Gauteng Xamarin User Group za

  • Johannesburg­, South Africa: February 9
  • Xamarin 4: Everything You Need to Build Great Apps

South Florida Xamarin User Group us

  • Fort Lauderdale, FL: February 9
  • Demo Day: Share Your Xamarin Apps

Mobile-Do Developers Group do

  • Santo Domingo, Dominican Republic: February 12
  • Data Persistence with SQLite and SQLite.Net

Concord .Net User Group us

  • Concord, NH: February 16
  • Cross-Platform .NET Development with Carl Barton – Xamarin MVP

Orlando Windows Phone User Group us

  • Orlando, FL: February 17
  • Xamarin.Forms for .NET Developers

.NET Coders Brazil

  • São Paulo, Brazil: February 18
  • Deliver Top Quality Apps Using Xamarin Test Cloud and Xamarin Test Recorder

Boston Mobile C# Developers’ Group us

  • Boston, MA: February 18
  • Powerful Backends with Azure Mobile Services

Mobilize Enterprise Applications with Oracle and Xamarin au

  • Melbourne, Australia: February 23
  • Build Better, More Engaging Mobile Apps with Oracle and Xamarin in Melbourne

Chicago .NET Mobile Developers us

  • Chicago, IL: February 24
  • FreshMvvm : A Lightweight MVVM Framework for Xamarin.Forms

Mobilize Enterprise Applications with Oracle and Xamarin au

  • Sydney, Australia: February 26
  • Build Better, More Engaging Mobile Apps with Oracle and Xamarin in Sydney

XHackers in

  • Bangalore, India: February 27
  • MVVM & DataBinding + Intro to Game Dev with Xamarin

Didn’t see an event in your area?

Not to worry! Check out the Xamarin Events Forum for even more Xamarin meetups, hackathons, and other events happening near you.

Interested in getting a developer group started?

We’re here to help! Here are a few tools to help you out:

Also, we love to hear from you, so feel free to send us an email or tweet @XamarinEvents to let us know about events in your neck of the woods!

The post Xamarin Events in February appeared first on Xamarin Blog.

Profiling with Instruments

In the Enterprise Support team, we see a lot of iOS projects. At some point in any iOS development, developers end up running their game and sitting there thinking “Why the hell is this running so slowly?”. There are some great sets of tools for analysing performance out there, and one of the best is Instruments. Read on to find out how to use it to find your issues!

To use Instruments, or any of Xcode’s debugging tools, you will need to build a Unity project for the iOS build target (with the Development Build and Script Debugging options unchecked). Then you will need to compile the resulting Xcode project in Release mode and deploy it to an attached iOS device.

After starting Instruments (by either a long press on the Run button, or selecting Product > Profile), select the Time Profiler. To begin a profiling run, select the built application from the application selector, then press the red Record button. The application will launch on the iOS device with Instruments connected, and the Time Profiler will begin recording telemetry. The telemetry will appear as a blue graph on the Instruments timeline.

blogpic

P.S. To clean up the call hierarchy, the Details pane on the right-hand side of the Call Tree has two options, located in the “Settings” submenu (click on the gear icon in the Details pane). Select Flatten Recursion and Hide System Libraries.

A list of method calls will appear in the detail section of the Instruments window. Each top-level method call represents a thread within the application.

In general, the main method is the location of all hotspots of interest, as it contains all managed code.

Expanding the main method will yield a deep tree of method calls. The major branch is between two methods:

  • [startUnity] and UnityLoadApplication (These method names sometimes appear in ALL CAPS).
  • PlayerLoop

[startUnity] is of interest as it contains all time spent initializing the Unity engine. A method named UnityLoadApplication will be found beneath it. It is beneath UnityLoadApplication that startup time can be profiled.

image00

Once you have a nice time-slice of your application profiled, pause the Profiler and start expanding the tree. As you work down the tree, you will notice the time in ms reduces in the left-hand column. What you are looking for are items that cause a significant reduction in the time: these are your performance hotspots. Once you have found one, you can go back to your codebase and find out WTF is going on that is taking so much time. It could be that it is a totally necessary operation, or it could be that some time in the distant past you hacked some pre-production code in that has made it over to your production project, or… well… it could be for a million reasons really. How/if you decide to fix this hotspot is largely up to you, as you know your codebase better than anyone :D.

Instruments can also be used to look for performance sinks that are distributed broadly — ones that lack a single large hotspot, but instead show up as a few milliseconds of lost time in many different places in a codebase.  To do this, type either a partial or full function name into Instruments’ symbol search box, located above and to the right of the call tree. If profiling a slice of gameplay, expand PlayerLoop and collapse all the methods beneath it. If profiling startup time, expand UnityLoadApplication and collapse the methods beneath it.  The total number of milliseconds wasted on a specific operation can be roughly estimated by looking at the total time spent in PlayerLoop or UnityLoadApplication and subtracting the number of milliseconds located in the self column.

Common methods to look for:
– “Box(” and “box” — these indicate that C# value boxing is occurring; most instances of boxing are trivially fixed
– “Concat” — string concatenation is often easily optimized away
– “CreateScriptingArray” — All Unity APIs that return arrays will allocate new copies of arrays. Minimize calls to these methods.
– “Reflection” — reflection is slow. Use this to estimate the time lost to reflection and eliminate it where possible.
– “FindObjectOfType” — Use this to locate repeated or unnecessary calls to FindObjectOfType, or other known-slow Unity APIs.
– “Linq” — Examine the time lost to creating and discarding Linq queries; consider replacing hotspots with manually-optimized methods.
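To illustrate the first two entries, here is a small sketch of the kind of line that produces those symbols in a profile, together with its trivial fix:

```csharp
int score = 42;

// Shows up as Box(...) and Concat in the profile: the int is boxed
// into an object before String.Concat(object, object, object) runs.
string slow = "Score: " + score + " points";

// No boxing: convert explicitly so only String.Concat(string, ...) is called.
string fast = "Score: " + score.ToString() + " points";
```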

As well as profiling CPU time, Instruments also allows you to profile memory usage. Instruments’ Allocations profiler provides two probes that offer detailed views into the memory usage of an application. The Allocations probe permits inspection of the objects resident within memory during a specific time-span. The VM Tracker probe permits monitoring of the dirty memory heap size, which is the primary metric used by iOS to determine when an application must be forcibly closed.

Both probes will run simultaneously when selecting the Allocations profiler in Instruments. As usual, begin a profiling run by pressing the red Record button.

To set up the Allocations probe correctly, ensure the following settings are correct in the Detail tab on the right-hand side of Instruments.   Under Display Settings (middle option), ensure Allocation Lifespan is set to Created & Persistent.  Under Record Settings (left option), ensure Discard events for freed memory is checked.

The most useful display for examining memory behavior is the Statistics display, which is the default display when using the Allocations probe. This display shows a timeline. When used with the recommended settings, the graph displays blue lines indicating the time and magnitude of memory allocations which are still live. By watching this graph, you can check for long-lived or leaked memory by repeating the scenario under test and ensuring that no blue lines persist between runs.

Another useful display is the Call Trees display. It displays the line of code at which allocations are performed, along with the amount of memory consumption the line of code is responsible for.

Below you can see that about 25% of the total memory usage of the application under test is due solely to shaders. Given the shaders’ location in the loading thread, these must be the standard shaders bundled with default Unity projects, which are loaded at application startup time.

image01

As before, once you have identified a hotspot, what you do with it is totally dependent on your project.

So there you go.  A brief guide to Instruments. 1000(ish) words and no A-Team references. We don’t want to get into trouble like last time. Copyright violations are officially Not Funny™.

The Enterprise Support team is creating more of these guides, and we will be posting the full versions of our Best Practice guides in the coming months!

We love it when a plan comes together.

January 31

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/

Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org


Comments | More on rocketeer.be | @rubenv on Twitter

January 29

Unity Comes to New Nintendo 3DS

We announced our intention to support Nintendo’s recently released New Nintendo 3DS platform at Unite Tokyo and we’ve been very busy in the meantime getting it ready.  Now we’re pleased to announce it’s available for use today!

The first question people usually ask is “do you support the original Nintendo 3DS too?”  To which the answer is a qualified “yes”. We can generate ROM images which are compatible with the original Nintendo 3DS, and there are certainly some types of game which will run perfectly well on it, but for the majority of games we strongly recommend targeting the New Nintendo 3DS for maximum gorgeousness.

We’ve been working very closely with select developers to port a few of their existing games to New Nintendo 3DS. We’ve been busy profiling, optimizing, and ironing out the niggles using real-world projects, so you can be confident your games will run as smoothly as possible. In fact, one game has already successfully passed through Nintendo’s exacting mastering system; Wind Up Knight 2 went on sale at the end of last year!

Wind Up Knight 2

Wind Up Knight 2 – Japanese Version. (c) 2016 Robot Invader

Unity’s internal shader code underwent a number of significant changes in the transition from version 5.1 to 5.2.  This brought many benefits, including cleaner and more performant code, and also fixed a number of issues we had on console platforms.  We’re not able to retrofit those fixes to the 5.1-based version, so we shall only be actively developing our shader support from version 5.2 onwards.

We’ve been putting Unity for New Nintendo 3DS version 5.2 through its paces for a few months, and it’ll be made available once it’s proved itself by getting a game through Nintendo’s mastering system too.  That should be in the near future, but it’s not something that’s easy to put a date on.

So far, we’ve been in development with a Nintendo 3DS-specific version of the Unity editor, but now we’ve switched our focus towards upgrading to the latest version, with a view to shipping as a plug-in extension to the regular editor.  We have a 5.3 based version running internally, and we’re working hard to get it merged into our mainline code-base.

It should be mentioned that some features are not yet implemented in this first public release, notably UNet and Shadow Maps (although Light-Maps are supported). We’re prioritising new features according to customer demand, but right now our main goal is to get into the regular editor.

In common with other mobile platforms, there are some limitations as to what can be achieved with the hardware. For instance, Unity’s Standard Shader requires desktop-class graphics hardware so it’s not something we can support on Nintendo 3DS. However, as with other platforms, if you try to use a shader which is unsupported then Unity will fall-back to a less complex shader that gives the best possible results.

Preparing your game for New Nintendo 3DS

This platform is unique in several ways, so games will need some modification to make best use of its features.

  • There are two screens, so you will need to redesign your user interface to accommodate the additional display.  The lower screen is touch sensitive, so it makes sense to put menus and other interactive UI items there.

  • The device’s coolest feature is that the picture is 3D, without needing glasses!  However, this does mean that the distance of objects is visible to the player in a way that it isn’t on other platforms.  So graphical effects which “cheat” to simulate distance won’t work.  For example, 2½-D games which use an orthographic projection and parallax layers will show up as completely flat.

  • There is less memory available than on other platforms, but that’s not as big an issue as it might seem at first. Textures can be down-sized drastically since the screen resolution is much lower than typically found on smartphones and tablets.
  • Unity for New Nintendo 3DS was one of the first platforms to use our in-house IL2CPP technology exclusively; we don’t use Mono at all. This brings substantial performance benefits, but there are a couple of downsides:

All compilation is done AOT (when the project is built). We don’t support JIT compilation (at runtime).

Various other platforms are also AOT-only, so if you’re porting a game from one of those platforms then you won’t have any problems. However, if you’re porting from a platform which does allow JIT compilation, then you might run into issues. In particular, some middleware JSON parsers which use introspection can be problematic. The good news is that Unity now comes with its own high-performance JSON parser, which doesn’t suffer from such issues.
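That built-in parser is `JsonUtility`, which serializes plain `[Serializable]` types with public fields and so needs no runtime code generation, making it safe under AOT/IL2CPP. A minimal sketch (the `SaveData` type here is a hypothetical example, not a Unity API):

```csharp
using UnityEngine;

[System.Serializable]
public class SaveData
{
    public string playerName;
    public int level;
}

public static class SaveExample
{
    public static string Save(SaveData data)
    {
        // Serializes public fields of a [Serializable] type to a JSON string.
        return JsonUtility.ToJson(data);
    }

    public static SaveData Load(string json)
    {
        // Deserializes back into a new SaveData instance; no reflection-heavy
        // introspection of arbitrary types, which is what trips up some
        // third-party parsers under AOT.
        return JsonUtility.FromJson<SaveData>(json);
    }
}
```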

hammers

Opening a celebratory barrel of sake, at Unite Tokyo.

How to Get Involved

Unity for New Nintendo 3DS is available at no charge. Just like with Nintendo’s Wii U, if you sign up to develop games for the platform, you get to use Unity for free!

Simply visit Nintendo’s Developer Portal and enrol in the Nintendo Developer Program*, then you’ll be able to download Unity for New Nintendo 3DS.

Of course, you will need some development hardware too. Devkits and testing units can also be purchased via Nintendo’s Developer Portal.

* Conditions apply, see site for details.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.
