Now that WWDC 2017 has officially come to a close, I thought I would write a follow-up recapping the wishlist from my previous article, Attending WWDC 2017, and then talk about some of the other big developer announcements from this year as well. First, let's recap Xcode command line tools. Xcode 9 had a lot of really nice updates this year. One of the biggest updates announced for the command line tools is the ability to manually specify provisioning profiles with xcodebuild, instead of letting Xcode decide the appropriate profile based upon the assigned signing team. This should help avoid a lot of confusion in continuous integration systems by clearly indicating which profile is signing which build.
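As a sketch of what this looks like in practice, Xcode 9's export options plist gains a `provisioningProfiles` dictionary for manual signing. The bundle identifier and profile name below are placeholders; substitute your own:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Use manual signing instead of letting Xcode pick a profile -->
    <key>signingStyle</key>
    <string>manual</string>
    <key>method</key>
    <string>app-store</string>
    <!-- Map each bundle identifier to an explicit provisioning profile name -->
    <key>provisioningProfiles</key>
    <dict>
        <key>com.example.MyApp</key>
        <string>MyApp App Store Profile</string>
    </dict>
</dict>
</plist>
```

You would then pass this file to `xcodebuild -exportArchive` via the `-exportOptionsPlist` flag, and the build log will state exactly which profile was used.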
Next, let's talk about networking enhancements. From what I saw, there were quite a few networking enhancements made this year, from HTTPS/TLS improvements to certificate revocation checking and background scheduling in URLSession. The new scheduling APIs give the developer the ability to tell the system the earliest time a request should run, instead of firing requests on an interval in the background. A delayed request is also passed through a delegate method, where it can be allowed to continue, replaced, or cancelled if it no longer makes sense to send it.
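A minimal sketch of these two pieces as they appear in the iOS 11 SDK is below; the URL, session identifier, and one-hour delay are placeholder values:

```swift
import Foundation

class DeferredRequestDelegate: NSObject, URLSessionTaskDelegate {
    // Called when a delayed task is about to start: we can let it continue,
    // swap in a fresh request, or cancel it if it no longer makes sense.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    willBeginDelayedRequest request: URLRequest,
                    completionHandler: @escaping (URLSession.DelayedRequestDisposition, URLRequest?) -> Void) {
        completionHandler(.continueLoading, nil)
    }
}

let config = URLSessionConfiguration.background(withIdentifier: "com.example.sync")
let session = URLSession(configuration: config,
                         delegate: DeferredRequestDelegate(),
                         delegateQueue: nil)

let task = session.downloadTask(with: URL(string: "https://example.com/feed.json")!)
// Ask the system not to start this task before a given date,
// rather than polling on a timer in the background.
task.earliestBeginDate = Date(timeIntervalSinceNow: 60 * 60)
task.resume()
```

The nice part of this design is that the decision point moves to start time: by the time `willBeginDelayedRequest` fires, your app may know the request is stale and can cancel or replace it instead of wasting the network transfer.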
Next, let’s talk about Swift 4. To be honest, not a lot was uncovered at WWDC about Swift that I did not already know from following Swift Evolution. I attended the What’s New in Swift talk and tried to make the Swift panel across the street from WWDC, but nothing came up in the conference that I was not already following on the mailing list or in the evolution proposals. It’s funny: I was having a discussion with an Apple developer evangelist about Swift, and he told me that the Swift community is so vibrant that there is not a large need for Apple to support developers writing Swift. The community does an amazing job of that already.
As you can see, I did not do too badly: at least two of the five items on my wishlist will be coming out with iOS 11. Now, let's discuss a few of the new features that were not on my wishlist but that I am very excited about.
1) Multipath TCP:
Multipath TCP (MPTCP) is an excellent new developer API in iOS 11 that, when configured, allows the operating system to use whichever interface currently has the lowest latency, Wi-Fi or cellular, rather than defaulting to one over the other. Taking advantage of the API looks very simple: it appears to be just a property or two on URLSessionConfiguration, plus server software that supports Multipath TCP (nginx on a suitably configured host, for example) on the other end to process requests. I am very excited about this new API and about trying it out in my applications.
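On the client side, opting in really does come down to one property. A minimal sketch, assuming a placeholder endpoint and a server with MPTCP enabled (the app also needs the Multipath entitlement from Apple):

```swift
import Foundation

let config = URLSessionConfiguration.default
// .handover prefers Wi-Fi and fails over to cellular;
// .interactive aims for the lowest-latency path at any moment.
config.multipathServiceType = .interactive

let session = URLSession(configuration: config)
let task = session.dataTask(with: URL(string: "https://example.com/api")!) { data, response, error in
    // Handle the response as usual; path selection happens below the API.
}
task.resume()
```

Everything above the socket stays the same, which is what makes this so appealing: existing URLSession code gets multipath behavior without restructuring.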
2) CoreML:
CoreML is a new Apple framework built specifically for developers to bring machine learning into their applications on macOS, iOS, tvOS, and watchOS. The reason CoreML is so exciting is that Apple now provides developer tools for the complex procedures and algorithms you previously had to implement yourself, object detection, running trained models, language processing, all in one framework. Apple has been refining these capabilities in its own applications like Camera, iMessage, and Siri over the last three years, and now they are available for developers to integrate into their own projects.
I can specifically remember using OpenCV and other third-party libraries to implement object recognition and image-based neural networks in my iOS applications over the years, and I remember it being a very big headache working out the performance and optimization bugs that come out of expensive procedures like that. With CoreML, and the Vision framework specifically, this looks fairly straightforward, and it is all tuned for Apple's hardware, which is even better! I cannot wait to get started with CoreML!
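To show how little code this takes compared to a hand-rolled OpenCV pipeline, here is a minimal sketch of running an image classifier through Vision. `MyClassifier` is a placeholder for whatever compiled .mlmodel you add to the project (for instance one converted with Apple's coremltools):

```swift
import Vision
import CoreML
import UIKit

func classify(_ image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    // Wrap the Xcode-generated model class so Vision can drive it.
    let model = try VNCoreMLModel(for: MyClassifier().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns ranked classification observations.
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("\(best.identifier): \(best.confidence)")
        }
    }

    // Vision handles scaling and cropping the image to the model's input size.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

Vision doing the image preprocessing for you is exactly the kind of fiddly work that used to eat days of optimization time.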
3) ARKit:
Having had a good amount of experience with augmented reality throughout my career, I am well aware of some of the challenges in making a great augmented reality experience on mobile. The first challenge is tracking: your app has to track well, track consistently, and track without flickering. The second challenge is scale and experience pinning: your experience really needs to scale up and down with your 3D environment, and it needs to either pin to your marker or respond well to your touch. The third challenge is the ease of experience creation: if the developers and artists creating your experiences are blocked by the complexity of producing something enriching for your markers, then sacrifices often have to be made and the final outcome suffers.
Having attended a few ARKit sessions at WWDC, looked at the sample code, and tested the ARKit APIs, it really looks like ARKit is knocking it out of the park on at least two of the three challenges and meaningfully lowering the bar on the third. The first major win is tracking: ARKit tracks consistently and holds well even from long distances. The second major win is scaling and experience pinning: ARKit does an excellent job of holding an experience to a marker and scaling it up and down to match your camera position relative to the overlaid content. The third improvement, though not a complete win, is the ability to create experiences with SceneKit, SpriteKit, and Metal 2. This is a major win if you are already familiar with those frameworks, as ARKit now provides a complete end-to-end workflow. However, there is still a lot of uncovered territory with engines like Unity and Unreal; Apple has said there is full support for these platforms, but the full scope of that support is yet to be determined.
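For the SceneKit path specifically, getting a tracked, pinned scene running is remarkably short. A minimal sketch of a world-tracking session in an ARSCNView with horizontal plane detection (the view controller setup here is illustrative, not from Apple's sample code):

```swift
import ARKit
import SceneKit
import UIKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking fuses camera frames with motion data;
        // plane detection produces anchors to pin experiences to.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a new anchor, such as a horizontal plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Attach SceneKit geometry here; ARKit keeps it pinned to the anchor.
    }
}
```

Everything hard, the sensor fusion, the drift correction, the plane estimation, happens inside `session.run`, which is why the tracking wins feel so effortless from the API side.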
Well, that is it for my recap of WWDC 2017. I had an absolute blast and certainly look forward to digging into some of these new APIs very soon. As always, if you have any questions or concerns, please be sure to leave a comment and I will get back to you as soon as possible.