• Swift 4 Codable with Alamofire & PromiseKit

    I’ve been really excited about Swift 4’s Codable since it landed earlier this year. Finally, now that the Xcode 9 GM is out, I’m ready to start converting the rather large Swift 3 project I work on day to day. It’s full of the usual init?(dict: Dictionary<String,Any>) initializers you know and love.
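
    To give a flavor of it (FeedPage’s fields here are hypothetical stand-ins), one of those models looks something like this, and its Codable replacement is a one-line conformance:

    // Before: the hand-rolled failable initializer
    struct FeedPage {
        let title: String
        let items: [String]

        init?(dict: Dictionary<String, Any>) {
            guard let title = dict["title"] as? String,
                let items = dict["items"] as? [String] else { return nil }
            self.title = title
            self.items = items
        }
    }

    // After: the conformance is synthesized and the initializer goes away entirely
    //
    // struct FeedPage: Codable {
    //     let title: String
    //     let items: [String]
    // }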

    Converting our model classes to Codable has been a big win and allowed me to delete a lot of code, but my API functions based around Alamofire and PromiseKit immediately broke without the custom dictionary initializers.

    // The old
    func getFeed() -> Promise<FeedPage> {
        return Alamofire.request("https://null.info/feed").responseJSON().then { json in
            guard let jsonDict = json as? Dictionary<String, Any>,
                let feedPage = FeedPage(dict: jsonDict) else {
                    return Promise(error: NSError(domain: "net.skyebook", code: -1, userInfo: [NSLocalizedDescriptionKey: "Server Error"]))
            }
            
            return Promise(value: feedPage)
        }
    }
    

    While it worked, this old way is kind of gross. It requires you to manually initialize each object and to specify its type twice: once in the function signature’s return type (Promise<FeedPage>) and again when deserializing the response (FeedPage(dict: jsonDict)).

    Since Codable provides a uniform interface for deserialization, we can clean this up in a rather nice way that will be reusable across projects. The same way Alamofire gives us the really nice responseJSON() for returning a dictionary, let’s create a responseCodable(), which can return any class or struct that conforms to Codable.

    extension Alamofire.DataRequest {
        // Return a Promise for a Codable
        public func responseCodable<T: Codable>() -> Promise<T> {
            
            return Promise { fulfill, reject in
                responseData(queue: nil) { response in
                    switch response.result {
                    case .success(let value):
                        let decoder = JSONDecoder()
                        do {
                            fulfill(try decoder.decode(T.self, from: value))
                        } catch let e {
                            reject(e)
                        }
                    case .failure(let error):
                        reject(error)
                    }
                }
            }
        }
    }
    

    This is a fairly routine use of generic types. The really cool bit is using it in practice. Since we already have a bunch of function signatures specifying the expected return type (in this case, a FeedPage), Swift can infer the generic type to use when calling responseCodable() based on the return type. Have a look:

    // The new
    func getFeed() -> Promise<FeedPage> {
        return Alamofire.request("https://null.info/feed").responseCodable()
    }
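
    At the call site, nothing about the PromiseKit flow changes; a hypothetical consumer chains as usual:

    getFeed().then { page -> Void in
        // page is a fully decoded FeedPage
        print(page)
    }.catch { error in
        // Transport errors and decoding errors both land here
        print("Feed request failed: \(error)")
    }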
    

    This has allowed me to delete nearly 50% of the code in the API class and has turned out to be the really huge win of the conversion, wiping out boilerplate, error-prone code.

  • WWDC 2017 Wish List

    Listening to this week’s ATP and considering their hopes and dreams for this year’s WWDC got me thinking more about the things I wish were a bit better about developing on Apple’s platforms. I present this not as a set of predictions but as a more formal list of observations and suggestions than my afternoon Twitter complaints.

    Auto Layout

    SnapKit should be the API, really. As someone who went from setting frame, to setting center, to being really intrigued [and ultimately turned off] by Auto Layout when it was released, I find this the most natural layout API I’ve ever used. Over the past two years, typing .snp in Xcode has never stopped being novel.

    Tell me that this isn’t an improvement:

    let box = UIView()
    let rightBox = UIView()
    let container = UIView()
    
    container.addSubview(box)
    container.addSubview(rightBox)
    
    box.snp.makeConstraints { (make) -> Void in
        make.size.equalTo(50)
        make.center.equalTo(container)
    }
    
    rightBox.snp.makeConstraints { (make) -> Void in
        make.size.equalTo(box)
        make.left.equalTo(box.snp.right).offset(8)
    }
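
    For contrast, here’s roughly the same layout for the same three views written against stock NSLayoutAnchor (one reasonable equivalent; there are wordier ones):

    box.translatesAutoresizingMaskIntoConstraints = false
    rightBox.translatesAutoresizingMaskIntoConstraints = false

    NSLayoutConstraint.activate([
        box.widthAnchor.constraint(equalToConstant: 50),
        box.heightAnchor.constraint(equalToConstant: 50),
        box.centerXAnchor.constraint(equalTo: container.centerXAnchor),
        box.centerYAnchor.constraint(equalTo: container.centerYAnchor),

        rightBox.widthAnchor.constraint(equalTo: box.widthAnchor),
        rightBox.heightAnchor.constraint(equalTo: box.heightAnchor),
        rightBox.leftAnchor.constraint(equalTo: box.rightAnchor, constant: 8)
    ])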
    

    Swift 4 Codables

    The acceptance of SE-0166 and SE-0167 will bring about a new protocol in Swift 4 called Codable. This will allow for direct mapping of JSON data to Swift types (as well as property list support). While NSKeyedArchiver/NSKeyedUnarchiver have added support for the new protocol, it would be great if this were taken a step further by adding support to CloudKit, where Codables could be loaded straight into CKRecord. Considering CKRecord’s supported data types, there are a few cases where there would be some work involved to add support:

    • [✅] NSString
    • [✅] NSNumber
    • [✅] NSArray
    • [✅] NSDate
    • [❓] NSData
    • [❓] CKReference
    • [❓] CKAsset
    • [✅] CLLocation

    I particularly worry about support for NSData and CKReference. Creating many-to-one relationships in JSON is already kind of gross. Building a CloudKit-specific solution to handle CKRecord for a problem like this outside of the new core JSONEncoder would seem to me like a bad idea. It will be interesting to see where this gets adopted around the system frameworks (and how Objective-C interop is handled).
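
    In the meantime, the closest thing to Codable-on-CloudKit is round-tripping through JSONEncoder yourself. A rough sketch (the Note type and "payload" key are made up for illustration):

    import CloudKit

    struct Note: Codable {
        let title: String
        let created: Date
    }

    // Hypothetical manual bridge: encode the whole value into a single
    // data field on the record. It works, but you lose per-field
    // queryability and CKReference relationships entirely.
    func makeRecord(from note: Note) throws -> CKRecord {
        let record = CKRecord(recordType: "Note")
        record["payload"] = try JSONEncoder().encode(note) as NSData
        return record
    }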

    Also, it supports ISO 8601 date encodings out of the box… at the encoder level. 👍

    let encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601
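
    The decoding side mirrors it:

    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601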
    

    Gonna be sweet.

    SceneKit “Bug Fixes and General Improvements”

    SceneKit is great. So much of it feels like using jMonkeyEngine again, except it isn’t open source and I can’t just jump in and fix bugs. The bugs totally exist and some of the API is kind of weird, so let’s talk about that.

    Initializing SCNView with no frame

    Derp. This should work. I love using the no-argument initializer on UIViews; it keeps my code clean and serves as a nice informal convention for “this view is under the control of Auto Layout”. I first happened upon this issue in the iOS 9 days; hopefully it gets cleaned up in iOS 11.

    class BestAppEverViewController: UIViewController {
        // MARK: - I fail!
        //let sceneView = SCNView()

        // MARK: - I work!
        let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
    }

    OpenGL textures as SCNMaterial contents

    SCNMaterial allows you to attach a GLKTextureInfo as the material content. Unfortunately, Apple’s only officially supported way of creating a texture info object is the rather limited GLKTextureLoader, which allows loading from files or image representations in memory (raw data or CGImage). With no way to simply specify a texture ID, there aren’t many options.

    If you’re adventurous, you might subclass GLKTextureInfo with writable properties and notice that it totally works. It might also scare the hell out of you as you eagerly await the iOS update that topples the house of cards.
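
    A hedged sketch of that hack (the property set is from memory, and this is exactly the kind of unsupported trick an OS update could break):

    import GLKit

    // A GLKTextureInfo subclass whose read-only properties report values
    // we choose, letting an arbitrary GL texture name masquerade as a
    // texture loaded by GLKTextureLoader. Entirely unsupported.
    class ArbitraryTextureInfo: GLKTextureInfo {
        private let textureName: GLuint
        private let textureWidth: GLuint
        private let textureHeight: GLuint

        init(name: GLuint, width: GLuint, height: GLuint) {
            textureName = name
            textureWidth = width
            textureHeight = height
            super.init()
        }

        override var name: GLuint { return textureName }
        override var target: GLenum { return GLenum(GL_TEXTURE_2D) }
        override var width: GLuint { return textureWidth }
        override var height: GLuint { return textureHeight }
    }

    // material.diffuse.contents = ArbitraryTextureInfo(name: glTextureID, width: 256, height: 256)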

    In most cases, this shouldn’t be an issue for folks. Shader modifiers are a pretty incredible way of having fun with pixels. Unfortunately this won’t be of much use for video considering the state of SKVideoNode.
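
    For the unfamiliar, a shader modifier is a snippet injected into SceneKit’s own shaders at a fixed entry point. A minimal sketch:

    import SceneKit

    // Tint every fragment of this material toward red.
    let material = SCNMaterial()
    material.shaderModifiers = [
        .fragment: "_output.color.rgb = mix(_output.color.rgb, vec3(1.0, 0.0, 0.0), 0.5);"
    ]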

    PhotoKit Asset Status

    I love iCloud Photo Library. As much as I truly miss Aperture, it’s made my traditional photo-management nightmare much easier to deal with.

    Unfortunately, writing apps to deal with these photo libraries isn’t quite as lovely as actually using the service. Without a way to tell whether a PHAsset is cached on-device, the code for handling these assets (especially in the case of video) becomes increasingly complicated the more you try to improve the user experience of accessing one of them. There are some tricks you can try to make an educated guess as to whether or not the file exists on device, like attempting to load the asset and watching what the progress/completion callbacks do. Still, this makes it nearly impossible to have nice animations for assets already on device, as there’s always a bit of time between the request for the asset and when you actually get data.

    Then there’s also the curious case of making resource requests if you actually need the raw data for a video and attempting to use PHAssetResourceProgressHandler. I have yet to see this progress handler report progress correctly or consistently. If you’ve hit this and are looking for a fix, by the way, you can concurrently make an AVAsset request to PHCachingImageManager and use its progress handler, PHAssetVideoProgressHandler, which actually works. Thankfully, the progress and completion for the asset request will match what should be happening within the resource request. A fix for this would be super duper as well.
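
    A sketch of that workaround, assuming you want the raw bytes with a usable progress callback (the function and its names are my own):

    import Photos

    func fetchVideoData(for asset: PHAsset,
                        progress: @escaping (Double) -> Void,
                        completion: @escaping (Data?) -> Void) {
        // Parallel AVAsset request, made purely because its progress
        // handler (PHAssetVideoProgressHandler) actually fires reliably.
        let videoOptions = PHVideoRequestOptions()
        videoOptions.isNetworkAccessAllowed = true
        videoOptions.progressHandler = { fraction, _, _, _ in
            progress(fraction)
        }
        PHImageManager.default().requestAVAsset(forVideo: asset, options: videoOptions) { _, _, _ in }

        // The actual resource request for the raw data. Its own
        // progressHandler is the one that misbehaves, so we don’t set it.
        guard let resource = PHAssetResource.assetResources(for: asset)
            .first(where: { $0.type == .video }) else {
                completion(nil)
                return
        }
        var data = Data()
        let resourceOptions = PHAssetResourceRequestOptions()
        resourceOptions.isNetworkAccessAllowed = true
        PHAssetResourceManager.default().requestData(for: resource,
                                                     options: resourceOptions,
                                                     dataReceivedHandler: { chunk in
            data.append(chunk)
        }, completionHandler: { error in
            completion(error == nil ? data : nil)
        })
    }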

    Perspective

    I could go on longer about shortcomings, irritations, and bugs, but when I put the last few years of iOS development in context I see a whole picture that’s pretty darn positive. The blemishes on the system frameworks don’t feel any more severe than what you’d find in something like the Android SDK, and the nastiest bugs I’ve encountered live in the corner cases of use. Those are the projects I love, though: the ones where I can take the sealed box and see how hard I can smash it without it breaking. I’m looking forward to WWDC and the chance to talk over some of these things (and the lack of Swift refactoring in Xcode, holy smokes do I want that) in the labs. See you in San Jose!

  • Mythbusting Java String Interning

    Aleksey Shipilëv:

    In almost every project we were taking care of, removing String.intern from the hotpaths was the very profitable performance optimization. Do not use it without thinking, okay?

    I’ve worked on projects where .intern() is called on almost every String and found it baffling (and never got a good answer out of anyone as to why it was used, besides that it was already their convention). In the past I’ve seen crazy StackOverflow posts like this where answers have long comment threads contradicting each other.

    This series of tests, designed and run by someone who really understands the JVM’s internals, sheds much-needed light on the side effects of a large String Table. Seeing GC pauses in the range of 13ms is enough to make heavy use of interning a non-starter for anything involving real-time graphics.

  • AI Drives the Rise of Accelerated Computing in Data Centers | NVIDIA Blog

    In which Nvidia responds to Google’s public benchmarks of their Tensor Processing Unit:

    To update Google’s comparison, we created the chart below to quantify the performance leap from K80 to P40, and to show how the TPU compares to current NVIDIA technology.

    The P40 balances computational precision and throughput, on-chip memory and memory bandwidth to achieve unprecedented performance for training, as well as inferencing. For training, P40 has 10x the bandwidth and 12 teraflops of 32-bit floating point performance. For inferencing, P40 has high-throughput 8-bit integer and high-memory bandwidth.

    The updated chart is worth looking at, but one of the main takeaways is 2x inferencing performance at 3x the power usage. For workstation builds that seems like a fair tradeoff (especially since you can’t go out and buy a Google TPU for yourself), but in the data center this appears to confirm Google’s argument that it helped them build fewer data centers (lower power = less heat = higher density).

    In broader terms, it’s been neat over the last 10 or so years seeing GPUs being used (and bragged about) for more than pushing pixels. I think back to Stanford’s Folding@Home project and what a boon video cards with programmable pipelines became for mapping out proteins. Deep learning is now bringing about changes in how graphics cards are designed, which is pretty amazing.

  • Apple Building a GPU

    This has been kicking around my drafts since December 2015, as the point I was trying to make about Metal and Apple in the context of the rest of the industry didn’t seem to have an obvious proof. It all became relevant again today, but first, the original context:

    I came across rumors of Apple building its own GPU on Fudzilla last night. I’m not usually one to pay much attention to what’s on Fudzilla, but I think there’s a case to be made that Apple is probably pursuing this.

    I [initially] thought it would make sense for Apple to snap up Imagination Technologies, which owns PowerVR. Thinking more on it today, the huge distribution of the Imagination workforce all over the planet would likely be an issue for any acquisition. Based on job postings, I’d wager that the biggest chunk of design and engineering is happening in the UK. The jobs in their UK office are clearly the ones any buyer would want: FPGA design, video drivers, OpenCL engineering. These are the people Apple needs, and they’re 5,300 miles away from the mothership. That’s a lot of families that need convincing to move a long way. One of the most attractive things about Apple doing their own GPU tech would be the tighter integration in their own SoC; I’m not sure I see a way forward where the Imagination/PowerVR team stays in the UK and is able to effectively integrate with the existing semiconductor team at Apple.

    Another check in the “yeah, they’re probably doing this” column has to do with Apple’s Metal API and Khronos’ Vulkan API. While it looks like Mantle, DirectX 12, Metal, and Vulkan are cut roughly from the same cloth, the sense I get is that Apple has kept its distance from the Vulkan group more than any of the other players.

    Almost a year and a half later, it looks like Apple is happy enough with what they’ve made: Imagination Technologies has put out a press release saying Apple has informed them they’ll no longer be using their IP in “15 months to two years time”. This seems quite damning for Imagination and an obvious move for Apple. The point I was trying to make [and never succeeded in making] about Metal and Apple becomes clearer in Apple’s description of MetalPerformanceShaders (introduced in iOS 9, with neural net support added in iOS 10):

    Add low-level and high-performance kernels to your Metal app. Optimize graphics and compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family.

    It’s a tremendous amount of work to optimize for each GPU family, and the complexity in doing so is betrayed by the MPS framework’s limited device support: none on the iPhone 5S (which indeed supported Metal) and zero Mac support. Having control over the hardware will provide the same advantages that shipping the first iPad with the A4 chip in 2010 provided. I wouldn’t be surprised if, shortly after Apple’s graphics parts start appearing in iPhones, they start appearing on the Mac as well, even if Intel remains in the picture. GPUs are the interesting chips now, and graphics is one of the areas where performance and power consumption are on a more radical trajectory than the CPU. It only makes sense that Apple would want to be in it.