• UIBarButtonItem Sizing in iOS 11

When setting a UIBarButtonItem in previous versions of iOS, I was able to reliably control the size of buttons by using a UIButton initialized with a frame. In iOS 11 this has changed, as UINavigationItem now [finally] uses AutoLayout.

The easy solve for this, using SnapKit, looks like:

    let button = UIButton()
    button.setImage(UIImage(named: "btn-img"), for: .normal)
    button.addTarget(self, action: #selector(buttonAction), for: .touchUpInside)
    let barItem = UIBarButtonItem(customView: button)
barItem.customView?.snp.makeConstraints { (make) in
    make.width.height.equalTo(22)
}

    And if you’re using vanilla AutoLayout:

    let button = UIButton()
    button.setImage(UIImage(named: "btn-img"), for: .normal)
    button.addTarget(self, action: #selector(buttonAction), for: .touchUpInside)
    let barItem = UIBarButtonItem(customView: button)
barItem.customView?.widthAnchor.constraint(equalToConstant: 22).isActive = true
barItem.customView?.heightAnchor.constraint(equalToConstant: 22).isActive = true

Extra bonus: you can also safely connect your bar buttons to custom navigation item titleViews to avoid buttons overlaying your title content. This is a great improvement in iOS 11 and should obviate the need for the UINavigationBar hacking of yore.

  • Swift 4 Codable with Alamofire & PromiseKit

I’ve been really excited about Swift 4’s Codable since it landed earlier this year. Finally, now that the Xcode 9 GM is out, I’m ready to start converting the rather large Swift 3 project I work on day to day. It’s full of the usual init?(dict: Dictionary<String,Any>) initializers you know and love.

    Converting our model classes to Codable has been a big win and allowed me to delete a lot of code, but my API functions based around Alamofire and PromiseKit immediately broke without the custom dictionary initializers.

    // The old
func getFeed() -> Promise<FeedPage> {
    return Alamofire.request("https://null.info/feed").responseJSON().then { json in
        guard let jsonDict = json as? Dictionary<String, Any>,
            let feedPage = FeedPage(dict: jsonDict) else {
                return Promise(error: NSError(domain: "net.skyebook", code: -1, userInfo: [NSLocalizedDescriptionKey: "Server Error"]))
        }
        return Promise(value: feedPage)
    }
}

    While it worked, this old way is kind of gross. It requires you to manually initialize each object and to specify its type twice: once in the function signature’s return type (Promise<FeedPage>) and again when deserializing the response (FeedPage(dict:jsonDict)).

Since Codable provides us a uniform interface for deserialization, we can clean this up in a rather nice way that will be reusable across projects. The same way Alamofire gives us the really nice responseJSON() for returning a dictionary, let’s create a responseCodable() that can return any class or struct conforming to Codable.

extension Alamofire.DataRequest {
    // Return a Promise for a Codable
    public func responseCodable<T: Codable>() -> Promise<T> {
        return Promise { fulfill, reject in
            responseData(queue: nil) { response in
                switch response.result {
                case .success(let value):
                    let decoder = JSONDecoder()
                    do {
                        fulfill(try decoder.decode(T.self, from: value))
                    } catch let e {
                        reject(e)
                    }
                case .failure(let error):
                    reject(error)
                }
            }
        }
    }
}

    This is a fairly routine use of generic types. The really cool bit is using it in practice. Since we already have a bunch of function signatures specifying the expected return type (in this case, a FeedPage), Swift can infer the generic type to use when calling responseCodable() based on the return type. Have a look:

    // The new
    func getFeed() -> Promise<FeedPage> {
    return Alamofire.request("https://null.info/feed").responseCodable()
}
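The type-inference half of this is plain Swift and easy to try outside of Alamofire. Here's a toy sketch (the FeedPage shape and the decodeResponse helper are made up purely for illustration):

```swift
import Foundation

// Stand-in model; the real FeedPage has whatever fields the API returns.
struct FeedPage: Codable {
    let title: String
}

// Generic decode helper, analogous to responseCodable() minus the networking.
func decodeResponse<T: Codable>(_ data: Data) throws -> T {
    return try JSONDecoder().decode(T.self, from: data)
}

// T is inferred as FeedPage from the declared return type;
// no need to name the type again at the call site.
func getFeed(from data: Data) throws -> FeedPage {
    return try decodeResponse(data)
}
```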

This has allowed me to delete nearly 50% of the code in the API class; the really huge win has been getting rid of boilerplate, error-prone code.

  • WWDC 2017 Wish List

    Listening to this week’s ATP and considering their hopes and dreams for this year’s WWDC got me to thinking more about the things I wish could be a bit better developing on Apple’s platforms. I present this not as a set of predictions but as a more formal list of observations and suggestions than my afternoon twitter complaints.


    SnapKit should be the API, really. As someone who went from setting frame, to setting center, to being really intrigued [and ultimately turned off] by Auto-Layout when it was released, this is the most natural layout API I’ve ever used. Over the past two years, typing .snp in Xcode has never stopped being novel.

    Tell me that this isn’t an improvement:

    let box = UIView()
    let rightBox = UIView()
    let container = UIView()
box.snp.makeConstraints { (make) -> Void in
    make.top.left.equalTo(container)
    make.size.equalTo(CGSize(width: 50, height: 50))
}
rightBox.snp.makeConstraints { (make) -> Void in
    make.left.equalTo(box.snp.right).offset(10)
    make.top.size.equalTo(box)
}

    Swift 4 Codables

    The acceptance of SE-0167 will bring about a new protocol in Swift 4 called Codable. This will allow for direct mapping of JSON data to Swift classes (as well as property list support). While NSKeyedArchiver/NSKeyedUnarchiver have added support for the new protocol, it would be great if this were taken a step further by adding support to CloudKit where Codables could be loaded straight into CKRecord. Considering CKRecord’s supported data types, there are a few cases where there would be some work involved to add support:

    • [✅] NSString
    • [✅] NSNumber
    • [✅] NSArray
    • [✅] NSDate
    • [❓] NSData
    • [❓] CKReference
    • [❓] CKAsset
    • [✅] CLLocation

    I particularly worry about support for NSData and CKReference. Creating many-to-one relationships in JSON is already kind of gross. Building a CloudKit-specific solve to handle CKRecord for a problem like this outside of the new core JSONEncoder would seem to me like a bad idea. It will be interesting to see where this gets adopted around the system frameworks (and how Objective-C interop is handled).
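To make the idea concrete, here's a hypothetical sketch of what such a bridge might look like for the easy cases, using JSONEncoder as the intermediary (the Profile type and the recordFields helper are mine, and the CKRecord step is only a comment since no such API actually exists):

```swift
import Foundation

struct Profile: Codable {
    let name: String
    let joined: Date
}

// Hypothetical bridge: encode any Codable to JSON, then read it back as a
// [String: Any] dictionary whose values could be copied onto a CKRecord.
func recordFields<T: Codable>(from value: T) throws -> [String: Any] {
    let data = try JSONEncoder().encode(value)
    let object = try JSONSerialization.jsonObject(with: data)
    guard let fields = object as? [String: Any] else {
        throw NSError(domain: "net.skyebook", code: -1, userInfo: nil)
    }
    // A real bridge would then map each value to a CKRecordValue, e.g.
    // fields.forEach { record[$0.key] = $0.value as? CKRecordValue }
    return fields
}
```

This round-trip only covers the string/number/array cases from the checklist above; NSData, CKReference, and CKAsset are exactly where a JSON intermediary falls apart.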

Also, it supports ISO 8601 date encodings out of the box… at the encoder level. 👍

    var encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601
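For instance, a quick round-trip (the Post type here is just for illustration):

```swift
import Foundation

struct Post: Codable {
    let title: String
    let published: Date
}

let encoder = JSONEncoder()
encoder.dateEncodingStrategy = .iso8601

let post = Post(title: "WWDC Wishes", published: Date(timeIntervalSince1970: 0))
let json = String(data: try! encoder.encode(post), encoding: .utf8)!
// The date serializes as "1970-01-01T00:00:00Z" instead of a raw epoch double.

let decoder = JSONDecoder()
decoder.dateDecodingStrategy = .iso8601
let decoded = try! decoder.decode(Post.self, from: json.data(using: .utf8)!)
```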

    Gonna be sweet.

    SceneKit “Bug Fixes and General Improvements”

    SceneKit is great. So much of it feels like using jMonkeyEngine again, except it isn’t open source and I can’t just jump in and fix bugs. The bugs totally exist and some of the API is kind of weird, let’s talk about that.

    Initializing SCNView with no frame

    Derp. This should work. I love using the no-argument initializer on UIViews, it keeps my code clean and serves as a nice informal convention for “this view is under the control of auto-layout”. I first happened upon this issue in the iOS 9 days, hopefully this gets cleaned up in iOS 11.

class BestAppEverViewController: UIViewController {
    // MARK: - I fail!
    // let sceneView = SCNView()
    // MARK: - I work!
    let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
}

    OpenGL textures as SCNMaterial contents

SCNMaterial allows you to attach a GLKTextureInfo as the material content. Unfortunately, Apple’s only officially supported way of creating a texture info object is by using the rather limited GLKTextureLoader, which allows for loading from files or image representations in memory (raw data or CGImage). With no way to simply specify a texture ID, there aren’t many options.

If you’re adventurous, you might subclass GLKTextureInfo with writable properties and notice that it totally works. It might also scare the hell out of you as you eagerly await the iOS update that topples the house of cards.

    In most cases, this shouldn’t be an issue for folks. Shader modifiers are a pretty incredible way of having fun with pixels. Unfortunately this won’t be of much use for video considering the state of SKVideoNode.

    PhotoKit Asset Status

I love iCloud Photo Library. For as much as I truly miss Aperture, it’s made my traditional photo-management nightmare much easier to deal with.

Unfortunately, writing apps to deal with these photo libraries isn’t quite as lovely as actually using the service. Without a way to tell if a PHAsset is cached on-device, the code for handling these assets (especially in the case of video) becomes increasingly complicated the more you try to improve the user experience of accessing one of them. There are some tricks you can use to make an educated guess as to whether the file exists on device, like attempting to load the asset and seeing what the progress/completion callbacks do. But this makes it nearly impossible to have nice animations for assets already on device, as there’s always a bit of time between the request for the asset and when you actually get data.

Then there’s also the curious case of making resource requests when you actually need the raw data for a video and attempting to use PHAssetResourceProgressHandler. I have yet to see this progress handler report progress correctly or consistently. If you’ve hit this and are looking for a fix, by the way, you can concurrently make an AVAsset request to PHCachingImageManager and use its progress handler, PHAssetVideoProgressHandler, which actually works. Thankfully, the progress and completion for the asset request will match what should be happening within the resource request. A fix for this would be super duper as well.


I can go on longer about shortcomings, irritations, and bugs, but when I put the last few years of iOS development in context I see a whole picture that’s pretty darn positive. The blemishes on the system frameworks don’t feel any more severe than looking at something like the Android SDK, while the nastiest bugs I’ve encountered are around the corner cases of use. Those are the projects I love though, the ones where I can take the sealed box and see how hard I can smash it without it breaking. I’m looking forward to WWDC and the chance to talk over some of these things (and the lack of Swift refactoring in Xcode, holy smokes do I want that) in the labs. See you in San Jose!

  • Mythbusting Java String Interning

    Aleksey Shipilëv:

    In almost every project we were taking care of, removing String.intern from the hotpaths was the very profitable performance optimization. Do not use it without thinking, okay?

I’ve worked on projects where .intern() is called on almost every String and found it baffling (and never got a good answer out of anyone as to why it was used, besides that it was already their convention). In the past I’ve seen crazy StackOverflow posts like this one, where answers have long comment threads contradicting each other.

    This series of tests designed and run by someone who really understands the JVM internals sheds much-needed light on the side-effects of a large String Table. Seeing GC pauses in the range of 13ms is enough to make heavy use of this a non-starter for anything involving real time graphics.

  • AI Drives the Rise of Accelerated Computing in Data Centers | NVIDIA Blog

    In which Nvidia responds to Google’s public benchmarks of their Tensor Processing Unit:

    To update Google’s comparison, we created the chart below to quantify the performance leap from K80 to P40, and to show how the TPU compares to current NVIDIA technology.

    The P40 balances computational precision and throughput, on-chip memory and memory bandwidth to achieve unprecedented performance for training, as well as inferencing. For training, P40 has 10x the bandwidth and 12 teraflops of 32-bit floating point performance. For inferencing, P40 has high-throughput 8-bit integer and high-memory bandwidth.

    The updated chart is worth looking at, but one of the main takeaways is 2x inferencing performance at 3x the power usage. For workstation builds that seems like a fair tradeoff (especially since you can’t go out and buy a Google TPU for yourself), but in the data center this appears to confirm Google’s argument that it helped them build fewer data centers (lower power = less heat = higher density).

In broader terms, it’s been neat over the last 10 or so years seeing GPUs being used (and bragged about) for more than pushing pixels. I think back to Stanford’s Folding@home project and what a boon video cards with programmable pipelines became to mapping out proteins. Deep learning is now bringing about changes in how graphics cards are designed, which is pretty amazing.