It’s WWDC week. I’m back home after a short week in San Jose. Monday’s keynote was a big event, with great news for Mac and iPad users, but I keep coming back to the most surprising “miss” in all of the announcements, and the one thing that I thought was surely a lock for the conference: SiriKit.
When I was on The Talk Show last week with Brent Simmons, John Gruber wrapped up the episode by asking about expectations for WWDC. I’m a big fan of the Amazon Echo, and I think Siri is behind in responsiveness and extensibility. I predicted at least 20 new domains and intents for SiriKit, if not a more open architecture that could grow into as big a platform as Alexa’s thousands of skills.
This week should’ve been a great one for Siri. Instead, we got not the hoped-for 20 new domains but just two, for taking notes and managing to-do lists. HomePod will have no support for apps at all, and it will initially ship in only a few English-speaking countries, erasing the traditional localization advantage Siri had over Alexa.
There are good ideas in SiriKit. I’m excited about experimenting with creating “notes” in my app, and I like that in session 214 and session 228 they highlighted some of the new tweaks such as alternative app names and vocabulary. But there has to be more. Siri deserves several sessions at WWDC and much more attention from Apple.
Voice assistants represent the first real change in years to how we interact with computers, perhaps as important as the original graphical user interface. The company that created the Knowledge Navigator concept video should be firmly in the lead today. A year from now, at WWDC 2018, the lack of significant improvements to Siri will have stretched to two years, and that delay is going to look like a mistake even to Siri’s most vocal defenders.
This week on Core Intuition, Daniel and I talked about recent Apple news:
Daniel and Manton react to the European Union’s €13B retroactive tax demand to Apple, talk about the impact of tax laws on indies and small companies, and weigh in on Apple’s purported AI and machine learning triumphs. Finally they catch up on their ambitions to be more productive as the busy summer transitions to fall.
I wondered whether Apple is so obsessed with privacy that they are blinded to what is possible with more computation and extensibility in the cloud. I judge their efforts not only by the remarkable work the Siri team has done, and by what Google and Amazon are building, but also by Apple’s own gold standard: the Knowledge Navigator video from 1987. That vision is too ambitious for one company to develop all the pieces for. We eventually need a more open Siri platform to get us there.
I’ve owned an Amazon Echo since it first shipped and it’s great. I also use Siri and like it, though I use it less often for the kind of random questions I might ask Alexa. But after watching yesterday’s Google I/O keynote, I can’t help but feel that eventually Google is going to be far ahead of Amazon and Apple in this space.
Here’s John Gruber writing at Daring Fireball about the keynote:
Google is clearly the best at this voice-driven assistant stuff. Pichai claimed that in their own competitive analysis, Google Assistant is “an order of magnitude” ahead of competing assistants (read: Siri and Alexa). That sounds about right.
The problem with a voice assistant is that the better it gets, the more you want it to do. You continue to ask it more complicated questions, pushing at the limits of the assistant’s capabilities.
Only Google has the expertise in web services and the massive amount of data to keep going beyond basic questions. I expect both Siri and Alexa will hit brick walls that Google will get past, especially in conversational queries that let the user drill down below the most popular, superficial facts.
That is, unless Apple can open up Siri. Not just plugging in new trigger keywords like Alexa’s “skills” API (which would be a good start), but maybe a complete way to extend Siri with third-party code that feels seamless to the user. Surfacing voice-enabled apps automatically through natural queries might be on the same scale of app discoverability as we saw when the App Store launched.
As Ben Thompson lays out well in today’s essay, Google faces a different internet than the open web on which they built their search engine. The default for all these new platforms — from Facebook to Siri to the App Store — is to be closed. There’s a narrow window, right now, for someone to take the lead on creating an open voice assistant standard built on the open web.
We posted episode 228 of Core Intuition this week. From the show notes:
Daniel and Manton discuss the iPhone SE’s evident popularity, touch on the challenges of designing for extremes in screen size, and bemoan some of Siri’s shortcomings when compared to competitors. The two also discuss tax time as an indie software developer, weigh the merits of heading to SF for WWDC, and finally delve into some deep reflections about the psychology of going too long without shipping.
We talked a lot about Siri and the Amazon Echo — the problems with both and where voice software may be headed. After we recorded, Daniel wrote a great post with additional ideas for using Siri with distance-based reminders, for example the ability to ask Siri while driving “remind me in 15 miles to get gas”:
How would this be solved? By introducing a notion of distance-relative reminders in iOS and by extension in Siri. In the same way that Siri allows you to set a reminder for a specific time or for a relative time from now, it should offer the same functionality for distance.
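The underlying logic here is simple to sketch. As a rough illustration, not anything Siri or iOS actually exposes (the `DistanceReminder` class and `haversine_miles` helper are hypothetical names), a distance-relative reminder could accumulate great-circle distance across incoming GPS fixes and fire once the requested mileage has been covered:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class DistanceReminder:
    """Fires a reminder once cumulative travel reaches a mileage threshold."""

    def __init__(self, miles, message):
        self.remaining = miles
        self.message = message
        self.last_fix = None   # previous (lat, lon), once we have one
        self.fired = False

    def update(self, lat, lon):
        """Feed a new GPS fix; returns the message when the threshold is hit."""
        if self.fired:
            return None
        if self.last_fix is not None:
            self.remaining -= haversine_miles(*self.last_fix, lat, lon)
        self.last_fix = (lat, lon)
        if self.remaining <= 0:
            self.fired = True
            return self.message
        return None
```

So "remind me in 15 miles to get gas" becomes `DistanceReminder(15, "get gas")`, fed location updates as you drive. A real implementation on iOS would presumably lean on Core Location rather than polling raw fixes, but the shape of the feature is the same.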
I hope you enjoy the podcast. I’ve been thinking lately that maybe the secret with Core Intuition is that it’s not actually a developer podcast. It’s a tech podcast with major tangents into software development and business.