This has been kicking around my drafts since December 2015 because the point I was trying to make about Metal and Apple, in the context of the rest of the industry, didn’t seem to have an obvious proof. It all became relevant again today, but first, the original context:
I came across rumors of Apple building its own GPU on Fudzilla last night. I’m not usually one to pay much attention to what’s on Fudzilla, but I think there’s a case to be made that Apple is probably pursuing this.
I [initially] thought it would make sense for Apple to snap up Imagination Technologies, which owns PowerVR. Thinking more on it today, the distribution of the Imagination workforce all over the planet would likely be an issue for any acquisition. Based on job postings, I’d wager that the biggest chunk of design and engineering is happening in the UK. The jobs in their UK office are clearly the ones any buyer would want: FPGA design, video drivers, OpenCL engineering. These are the people Apple needs, and they’re 5,300 miles from the mothership. That’s a lot of families that need convincing to move a long way. One of the most attractive things about Apple doing its own GPU tech would be tighter integration in its own SoC, and I don’t see a way forward where the Imagination/PowerVR team stays in the UK and still integrates effectively with the existing semiconductor team at Apple.
Another check in the “yeah, they’re probably doing this” column has to do with Apple’s Metal API and Khronos’ Vulkan API. While it looks like Mantle, DirectX 12, Metal, and Vulkan are cut roughly from the same cloth, the sense I get is that Apple has kept its distance from the Vulkan group more than any of the other players.
Almost a year and a half later, it looks like Apple is happy enough with what they’ve made: Imagination Technologies has put out a press release saying Apple has informed them it will no longer be using their IP in “15 months to two years time”. This seems quite damning for Imagination and an obvious move for Apple. The point I was trying to make [and never succeeded in making] about Metal and Apple is now easier to see in Apple’s description of MetalPerformanceShaders (introduced in iOS 9, with neural net support added in iOS 10):
Add low-level and high-performance kernels to your Metal app. Optimize graphics and compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family.
It’s a tremendous amount of work to optimize for each GPU family, and the complexity in doing so is betrayed by the limited device support of the MPS framework: no support on the iPhone 5S (which does support Metal) and zero Mac support. Having control over the hardware will provide the same advantages that shipping the first iPad with the A4 chip in 2010 provided. I wouldn’t be surprised if, shortly after graphics parts start appearing in iPhones, they start appearing on the Mac, even if Intel remains in the picture. GPUs are the interesting chips now, and graphics is one of the areas where performance and power consumption are on a more radical trajectory than the CPU. It only makes sense that Apple would want to be in it.
Republicans in Congress just voted to reverse a landmark FCC privacy rule that opens the door for ISPs to sell customer data. Lawmakers provided no credible reason for this being in the interest of Americans, except for vague platitudes about “consumer choice” and “free markets,” as if consumers at the mercy of their local internet monopoly are craving to have their web history quietly sold to marketers and any other third party willing to pay.
The only people who seem to want this are the people who are going to make lots of money from it. (Hint: they work for companies like Comcast, Verizon, and AT&T.) Incidentally, these people and their companies routinely give lots of money to members of Congress.
So here is a list of the lawmakers who voted to betray you, and how much money they received from the telecom industry in their most recent election cycle.
It’s an incredible table of data, and I’m always amazed at how cheap it is to purchase votes in Congress. This is one of the most egregious uses of lobbying I’ve ever seen, for something that offers no benefit to anyone outside the telecom industry. Glad to see that the swamp has been so thoroughly drained.
I recently encountered my first need for on-device NDK debugging. While the setup is kind of a pain, it’s super cool to be able to attach to a remote process over a network socket and have access to the full power of the GNU Debugger. Hopefully this quick guide takes some pain out of the initial setup (root required!).
First, let’s load the ARM64 build of gdbserver onto the phone. It can be found wherever you’ve placed the Android NDK (in my case, the latest version is hanging out in my downloads folder).
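The push looks roughly like this; the NDK version (android-ndk-r13b is a guess at a then-current release) and the path to gdbserver inside it vary by release, so adjust to your layout:

```shell
# From the desktop; a prebuilt gdbserver ships inside the NDK.
cd ~/Downloads/android-ndk-r13b
find . -name gdbserver                                  # locate the per-ABI builds
adb push prebuilt/android-arm64/gdbserver/gdbserver /data/local/tmp/
```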
Now let’s jump into the phone, gain root access, and have a look at the PATH. For the sake of convenience, we’ll want to move gdbserver to somewhere the system looks for executables.
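Something like the following, assuming a rooted device with a su binary installed:

```shell
adb shell            # drop into a shell on the device
su                   # gain root (root required!)
echo $PATH           # where does the system look for executables?
```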
Looks like there are some options for where to put it. Locations like /system/bin are read-only, but /su/bin is writable. Let’s move gdbserver there and make it executable.
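A sketch of the move, assuming gdbserver was pushed to /data/local/tmp earlier:

```shell
# In the root shell on the device:
mv /data/local/tmp/gdbserver /su/bin/
chmod 755 /su/bin/gdbserver
```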
With the debug server in place and executable, let’s check that it can execute.
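Invoking it is enough to confirm it launches:

```shell
gdbserver --version   # prints the GNU gdbserver version banner
```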
Great! Now for something cooler. Find the Android Talk process and attach the debugger. The ps command lists processes, and we’ll use grep to filter. The second column in the result is the PID of the com.google.android.talk process; let’s start the debug server and attach it to that process.
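Roughly like so; the PID 12345 is illustrative, and 5039 is just the conventional Android debugging port:

```shell
ps | grep talk
# the second column of the matching line is the PID of com.google.android.talk
gdbserver :5039 --attach 12345   # listen on port 5039, attach to that PID
```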
Now in another terminal from your desktop, we’ll forward the debug ports over the Android USB bridge and try connecting.
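Assuming the server is listening on port 5039, and using the host gdb bundled with the NDK (the path below is a guess for a macOS NDK install):

```shell
adb forward tcp:5039 tcp:5039    # forward desktop port 5039 to the device
~/Downloads/android-ndk-r13b/prebuilt/darwin-x86_64/bin/gdb
(gdb) target remote :5039        # connect through the forwarded port
```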
The system symbols download and we’re presented with an entry point for the debugger. At this point, you can use GDB in exactly the same way as if you were using it locally or from within an IDE. The Quick Reference is your friend, especially if you’ve spent the last few years in Xcode living the lldb high life.
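A few commands to get reoriented if you’re coming from lldb (these are standard GDB, nothing Android-specific; someFunc is a hypothetical symbol):

```shell
(gdb) info threads     # lldb: thread list
(gdb) backtrace        # lldb: bt
(gdb) break someFunc   # lldb: b someFunc
(gdb) continue         # lldb: c
(gdb) detach           # disconnect and leave the process running
```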
I spent some time debugging a strange issue today where outputMediaDataWillChange was not being consistently called in my AVPlayerItemOutputPullDelegate implementation. Without this delegate function behaving properly, it is impossible to suspend and resume the CADisplayLink driving our display updates.
On looking at the initial configuration of the AVPlayerItemOutput, I noticed a race condition:
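My exact code isn’t reproduced here, but the shape of the problem was roughly this (videoOutput, outputQueue, and playerItem are stand-ins from my project, and the advance interval is illustrative):

```swift
import AVFoundation

let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)

// Racy ordering: the change notification is requested before a
// delegate exists, so the callback can be dropped with no one listening.
videoOutput.requestNotificationOfMediaDataChange(withAdvanceInterval: 0.1)
videoOutput.setDelegate(self, queue: outputQueue)

playerItem.add(videoOutput)
```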
With the above code it is completely possible (and happens with some frequency) for the AVPlayer to attempt a call to its video output’s delegate before the delegate has been set.
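The fix is a reordering: set the delegate before requesting the notification (videoOutput, outputQueue, and playerItem here are stand-ins, not the exact code):

```swift
// Delegate first, then ask for the media-data-change callback.
videoOutput.setDelegate(self, queue: outputQueue)
videoOutput.requestNotificationOfMediaDataChange(withAdvanceInterval: 0.1)

playerItem.add(videoOutput)
```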
Setting the delegate before requesting the media data change notification works, but it remains pretty strange, as you wouldn’t expect the order of operations to matter before the output is added to an AVPlayerItem. The world is a weird place.
I’ve been having some fun in the last few days blitting video into SceneKit textures. Going back to AVPlayerLayer in the same project today, I found an internal SceneKit assertion failing:
Assertion failed: (renderSize.x != 0), function -[SCNRenderContextMetal _setupDescriptor:forPass:isFinalTechnique:], file /BuildRoot/Library/Caches/com.apple.xbs/Sources/SceneKit/SceneKit-332.6/sources/Core3DRuntime/NewRenderer/SCNRenderContextMetal.mm, line 688.
After checking that my SceneKit properties weren’t being used (sceneView isn’t even added to the view hierarchy), I changed the initialization of my SCNView to use an explicit frame.
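The change amounts to something like this; the 1×1 size is arbitrary, anything nonzero avoids the assertion:

```swift
import SceneKit

// SCNView(frame: .zero) trips the renderSize != 0 assertion once the
// Metal renderer spins up, even when the view is never shown.
// Any nonzero frame avoids it:
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
```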
Sure enough, using a frame with some pixels in it works, even if your SCNView is not on screen. This definitely seems to be a SceneKit bug in iOS 9.3, and perhaps earlier versions as well.