
360|iDev 2016 Conference Report

I wrote this report as part of Stack Overflow’s conference policy.  Enjoy!

This year I attended 360|iDev in Denver, the largest independent iOS developer conference.  Videos aren’t up yet but when they are, I will add the links.

Talks I loved

Conference Proposal Writing Workshop

This was a hands-on session by Chiu-Ki Chan and Cate Huston of the Technically Speaking newsletter.  I would highly encourage you to sign up for the newsletter at techspeak.email.  The workshop went through the process of coming up with ideas for a talk, writing a proposal, and writing a bio.

Key takeaways:

  • We completed this worksheet to help surface ideas and it turns out I have a lot of things I can write a talk about.
  • I should be blogging a lot.  It’s a good way to work through and clarify ideas and create public artifacts along the way.  Blog posts don’t have to be masterpieces but can just be a note about something that you found helpful, a description of an idea you tried, or a thought larger than 140 characters.  (I’m cross-posting this on my blog, for example.)
  • A proposal is a sales pitch, not a talk summary.  When working on a talk, you should identify (1) why your talk is important, (2) what people should take away, and (3) what you plan to cover, but you should only include (1) and (2) in your proposal.  (1) is really the hook to engage the reader and works well as a relatable story.  In blind screenings, a proposal needs to stand on its own and tell reviewers why your talk is better than other talks on the same subject.
  • A bio is your way of differentiating yourself as an expert in second round screenings when proposals are ranked about equal.  You can reuse parts of your bio for different talks, but each one should call out your specific experience in the topic you are proposing.
  • Submit early, submit often.  Proposal writing and speaking are skills: the more you do them, the better you’ll get and the easier it will be to get accepted.  You can submit and present the same talk to multiple conferences.

Teaching an iPhone to See: Adventures in Machine Learning

Michael Schneider of Hivebrain Software walked us through a machine learning project he tackled: teaching an iPhone to watch the map on an FPS HUD so that a haptic helmet could give you a tap if you were about to be shot in the back.

The basic problem was that they had a helmet that could provide haptic feedback, they knew it got great results when integrated with a first-person shooter, and game studios were completely uninterested in doing that integration.  Their idea: point an iPhone camera at the screen and have it read the map.

Machine learning was the right tool for the job, and every job, and life, the universe, and everything.  JK, but it’s good for the problem of looking at a thing and finding other, smaller things in it, like a bunch of red triangles in a circle.  Mike went with OpenCV because of prior experience and used cascading classifiers because they are best suited to the things-in-things problem.  Cascading in this case means doing quick passes to weed out obvious failures, then finer passes with more detail to weed out more nuanced failures.  This is the same approach used in facial recognition: if you can’t find a head, don’t look for eyes.
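To make the cascade idea concrete, here’s a tiny conceptual sketch in Swift.  It’s not the actual OpenCV API, and the names and checks are made up, but it shows the shape of a coarse pass followed by a fine pass:

```swift
// Conceptual sketch of a cascade (not the real OpenCV API; names are made up):
// run a cheap test over every candidate region, then only spend time on the
// expensive test for the regions that survive.
struct Region { var x: Int; var y: Int; var width: Int; var height: Int }

func roughlyMarkerSized(_ region: Region) -> Bool {
    // Cheap pass: e.g. is this region even big enough to be a map marker?
    return region.width > 4 && region.height > 4
}

func matchesTrainedFeatures(_ region: Region) -> Bool {
    // Fine pass: detailed comparison against the trained feature set.
    return true  // placeholder for the expensive check
}

func detectMarkers(in candidates: [Region]) -> [Region] {
    return candidates
        .filter(roughlyMarkerSized)        // weed out obvious failures quickly
        .filter(matchesTrainedFeatures)    // then examine only the survivors
}
```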

The biggest failure point he ran into came from not filtering the data first.  Because OpenCV only works on greyscale images, it couldn’t easily tell the difference between a yellow friend and a red foe.  Applying a filter to expose hue in the image fixed this.

The next biggest challenge was the sheer number of positive and negative samples needed to construct the model.  To help with this, he started saving a snapshot of every hit the model detected, feeding successes and failures back into it so it could learn from its mistakes.  In the end, 20,000 positive and 20,000 negative samples were needed to construct a reliable model, requiring a day of calculation to produce a model that fit in a few KB of XML.

A last important point is that compute time for these models can be huge, with a major factor being how big of an image you need to process.  Essentially, if you can’t scale the image to 48×48 and still make out what you want the machine to see, you may need another approach.
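For a sense of what that preprocessing looks like, here’s a minimal UIKit sketch (my own helper, not from the talk) that downscales a captured frame to 48×48 before it goes anywhere near a detector:

```swift
import UIKit

// Minimal sketch: shrink a captured frame to 48×48 before feeding it to a detector.
// If a human can still make out the target at this size, the detector has a chance.
func downscaledSample(from image: UIImage, side: CGFloat = 48) -> UIImage? {
    let size = CGSize(width: side, height: side)
    UIGraphicsBeginImageContextWithOptions(size, true, 1.0)  // opaque, 1x scale
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}
```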

And… it was successful.  It only took 3 weeks to go from idea to implementation, and because of what they learned, the second game was much faster to model than the first.

A Tale of Two Tracers (code)

I really enjoyed this talk by Jeff Biggus and my former teacher Jonathan Blocksom, where they went through their individual experiences implementing ray tracers in Swift and then implementing one in Metal.

The idea of ray tracing is to take every point in an image, shoot a ray from your camera through that point into whatever it hits first, figure out where the light landing on that object would be coming from and how much of it the object would re-emit, and trace that line out until you either hit the sky or hit your recursion limit.

There are techniques like anti-aliasing, which involves sending multiple rays per pixel and averaging the colors to smooth out edges, as well as a bounce-depth limit to avoid stack overflows or painfully long recursion in corners.
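To give a flavor of it, here’s a minimal Swift sketch of that loop.  The names and the toy camera are mine, loosely following the structure of Ray Tracing in One Weekend, with a bounce-depth limit and multi-sample anti-aliasing:

```swift
import simd

// Shoot a ray, find the nearest hit, bounce diffusely, and recurse until we
// hit the sky or the depth limit.  Jittered samples per pixel are averaged.
struct Ray {
    let origin: SIMD3<Double>
    let direction: SIMD3<Double>
    func point(at t: Double) -> SIMD3<Double> { origin + t * direction }
}

struct Sphere {
    let center: SIMD3<Double>
    let radius: Double

    // Nearest positive solution of |origin + t*direction - center|² = radius².
    func hit(_ ray: Ray) -> Double? {
        let oc = ray.origin - center
        let a = dot(ray.direction, ray.direction)
        let b = 2 * dot(oc, ray.direction)
        let c = dot(oc, oc) - radius * radius
        let discriminant = b * b - 4 * a * c
        guard discriminant > 0 else { return nil }
        let t = (-b - discriminant.squareRoot()) / (2 * a)
        return t > 0.001 ? t : nil
    }
}

func randomUnitVector() -> SIMD3<Double> {
    while true {
        let v = SIMD3<Double>(.random(in: -1...1), .random(in: -1...1), .random(in: -1...1))
        if dot(v, v) > 0, dot(v, v) <= 1 { return normalize(v) }
    }
}

func color(for ray: Ray, in world: [Sphere], depth: Int) -> SIMD3<Double> {
    guard depth > 0 else { return .zero }  // bounce-depth limit

    // Find the nearest sphere this ray hits, if any.
    var nearest: (t: Double, sphere: Sphere)?
    for sphere in world {
        if let t = sphere.hit(ray), t < (nearest?.t ?? .infinity) {
            nearest = (t, sphere)
        }
    }

    if let (t, sphere) = nearest {
        // Diffuse bounce: scatter around the surface normal and recurse.
        let point = ray.point(at: t)
        let normal = normalize(point - sphere.center)
        let bounced = Ray(origin: point, direction: normalize(normal + randomUnitVector()))
        return 0.5 * color(for: bounced, in: world, depth: depth - 1)
    }

    // No hit: blend white and blue for the "sky" based on ray height.
    let t = 0.5 * (normalize(ray.direction).y + 1)
    return (1 - t) * SIMD3<Double>(1, 1, 1) + t * SIMD3<Double>(0.5, 0.7, 1.0)
}

// Anti-aliasing: average several jittered rays per pixel.
func pixelColor(x: Int, y: Int, width: Int, height: Int,
                world: [Sphere], samples: Int = 16, maxDepth: Int = 8) -> SIMD3<Double> {
    var sum = SIMD3<Double>.zero
    for _ in 0..<samples {
        let u = (Double(x) + .random(in: 0..<1)) / Double(width)
        let v = (Double(y) + .random(in: 0..<1)) / Double(height)
        // Toy camera: rays from the origin through a 4×2 image plane at z = -1.
        let direction = SIMD3<Double>(4 * u - 2, 2 * v - 1, -1)
        sum += color(for: Ray(origin: .zero, direction: direction), in: world, depth: maxDepth)
    }
    return sum / Double(samples)
}
```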

Jeff and Jonathan took different approaches to some of the problems, which they discussed: using the same or different structs for points and vectors, using enums or protocols for materials, using simd or not, different math for the intersection calculations, using Unicode for custom operators, and other fun stuff.
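As an example of the Unicode operator trick, Swift will happily let you define • as a dot-product operator (the symbol choice here is mine; their actual operators may have differed):

```swift
import simd

// Define a Unicode dot-product operator on simd vectors.
infix operator •: MultiplicationPrecedence

func • (lhs: SIMD3<Double>, rhs: SIMD3<Double>) -> Double {
    return dot(lhs, rhs)
}

let a = SIMD3<Double>(1, 0, 0)
let b = SIMD3<Double>(0.5, 0.5, 0)
let projection = a • b   // 0.5
```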

Then they did the same problem again with Metal, Apple’s low-level shading language based on a subset of C++14.  They were able to translate a lot of the code from Swift to Metal pretty easily, but some changes were required because of memory and thread limits, the lack of a random number generator, and the lack of subclassing.  Byte alignment was also an important factor in passing data between the two languages.  In the end, the Metal implementation was about 60 times faster than the Swift one on a 2012 MacBook Pro.
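On the byte-alignment point, the Swift side of that check might look something like this sketch (field names are mine); MemoryLayout makes the padding visible before you hand a buffer to Metal:

```swift
import simd

// When sharing structs between Swift and a Metal kernel, the layouts have to
// agree byte-for-byte, including padding.
struct SphereData {
    var center: SIMD3<Float>   // occupies 16 bytes in memory, not 12
    var radius: Float
}

// Stride is what actually separates elements inside a buffer.
print(MemoryLayout<SphereData>.size,       // bytes of one value
      MemoryLayout<SphereData>.stride,     // spacing between elements in an array/buffer
      MemoryLayout<SphereData>.alignment)  // required alignment
```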

On a personal note, I went ahead and implemented the same ray tracer from this talk, Ray Tracing in One Weekend by Peter Shirley, and can say (1) it’s really fun, (2) the math is really interesting, and (3) it really does take a weekend’s worth of time.  I just did a Swift Playground version (expect a blog post on it) and would love to do a Metal version if I can find the time.  Here’s what I got with 418 lines of code and 88 minutes of processing:

[Image: the rendered scene from my playground ray tracer]

Other notable ideas

In Crafting Great Accessible Experiences, Sally Shepard pointed out that because accessibility is built into iOS, your app is already shipping an accessible experience; it may just be a terrible one.  By taking the time to try out your app with accessibility features turned on, you can see where it is failing and work to improve it.
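The fix often starts with something as small as giving elements labels VoiceOver can actually read; a minimal sketch (hypothetical button, not from the talk):

```swift
import UIKit

// Give VoiceOver something meaningful to say for an image-only control.
let deleteButton = UIButton(type: .system)
deleteButton.setImage(UIImage(named: "trash"), for: .normal)

// Without a label, VoiceOver may read the asset name or nothing useful.
deleteButton.accessibilityLabel = "Delete draft"
deleteButton.accessibilityHint = "Permanently removes the current draft"
```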

In Fast, Fun, and Professional Audio in Your Apps, two developers of AudioKit showed how to synthesize and really play with audio in a Swift Playground using AudioKit.  It’s a really impressive framework and you should see what McDonald’s did with it.  Of note was the idea of earcons: using different sounds to provide confirmation to users.  For example, the iOS 10 keyboard has different sounds for letter keys, the spacebar, and backspace.  Facebook also employs this heavily for actions like liking, drafting a comment, or hitting back.  It’s an interesting idea I’d like to try out in our app.
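For reference, here’s a rough sketch of an earcon in a playground using the AudioKit 3-era API as I remember it (the framework has changed since, so treat the exact calls as an assumption):

```swift
import AudioKit
import Foundation

// A short, distinct tone per action — the "earcon" idea in miniature.
// Uses the AudioKit 3-era API (later versions changed the setup calls).
let oscillator = AKOscillator()
AudioKit.output = oscillator
AudioKit.start()

func playEarcon(frequency: Double) {
    oscillator.frequency = frequency
    oscillator.start()
    Thread.sleep(forTimeInterval: 0.12)  // ~120 ms blip
    oscillator.stop()
}

playEarcon(frequency: 880)   // e.g. "message sent"
playEarcon(frequency: 440)   // e.g. "went back"
```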

Better Core Animation Using Swift Playgrounds by Russell Mirabelli was a whirlwind too fast to follow, but the moral of the story was that Swift Playgrounds are a great way to quickly iterate on animations.  All the playgrounds from his presentation are on GitHub.  Similarly, in Play First Development, Kendall Gelner demonstrated the idea of using playgrounds rather than unit tests as an approach to TDD, copying the final playground code into a unit test when you’re done.  A big takeaway was that if you add a playground to a project, it can import any of the frameworks used by that project.
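Here’s the kind of thing that makes the playground workflow so quick; a minimal sketch (my own example, not one of his) using the live view to iterate on a CABasicAnimation:

```swift
import UIKit
import PlaygroundSupport

// Make a view the playground's live view and tweak the animation on each run.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
container.backgroundColor = .white

let box = UIView(frame: CGRect(x: 20, y: 130, width: 40, height: 40))
box.backgroundColor = .blue
container.addSubview(box)

PlaygroundPage.current.liveView = container

// Slide the box across the container; change values and re-run instantly.
let slide = CABasicAnimation(keyPath: "position.x")
slide.fromValue = 40
slide.toValue = 260
slide.duration = 1.0
slide.repeatCount = .infinity
box.layer.add(slide, forKey: "slide")
```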

In Developing Wearable Software, Conrad Stoll walked through enhancing a watchOS 2 app for watchOS 3 and made the important point that for a wearable app, you want your users to tap in, quickly pick what to do, and have the confidence to drop their wrist.

Because iOS development has its roots in C, projects largely consist of one massive directory with hundreds of source files.  Xcode then lets you group files in virtual folders (literally called groups) in the project navigator.  In Writing Code for Humans, not Compilers (slides), René Cacheaux demonstrated grouping files based on the user’s journey and the visual placement of components rather than by functionality.  I’ve started restructuring my app this way and it already feels much better.
