Redesigning Siri and adding multitasking features to iOS

WWDC style. (That's the summary of this story)

Kévin Eugène · Published in UX Collective · 13 min read · Aug 1, 2018

After working on a redesign of YouTube and publishing my concept for macOS, I decided I should take care of one of my most beloved platforms of all time: iOS for the iPhone.

Back in 2009, hardly anyone around me knew what an iPhone was: the next big thing in tech had not hit the shores of France yet. The first time I saw one with my own eyes was at a restaurant with my aunt (yes, the same one who had a huge iMac in her living room). She had brought it with her and I spent the whole evening on it. I remember how amazed I was by the responsiveness of the screen and how natural the multitouch gestures felt to me. Until then, touch screens had meant resistive technology (you know, the kind where you had to press really hard to do anything), which clearly didn't allow for such a smooth experience. Back home, I begged my parents to buy me an iPhone and a few months later, I happily received my glorious iPhone 3G for my birthday.

The phone lacked crucial features, like a video camera, and sending MMS was not even possible at the beginning, but I couldn't have cared less: the user experience was miles ahead of the rest of the industry, and the App Store had a lot of potential for the years to come.

Almost ten years later, iOS has improved so much, and the excitement lives on as Apple concocts a new version bundled with new features each year. This time around, I wanted to imagine an update that I would personally be excited about if it showed up at WWDC, and this is what I came up with.

Let me introduce you to iOS Mogi.

This is Mogi, a beautiful fishing village near Nagasaki in Japan. I took this picture last year.

« Hey Siri, help me »

The first part of this concept is focused on Siri. The idea here is not to create new commands, but rather to present existing voice requests that already work well (like « Find me a good restaurant nearby » or « Get me pictures of Japan I took last year ») in a different way, so they can be more useful to the user.

In iOS Mogi, Siri has been designed around a concept I call parallel help. The idea is to have a voice assistant that is non-intrusive (it won't take over the whole screen like it does today), context-aware, and able to do things in the background for the user while they are doing something else.

Since images speak louder than words, here's a very simple example:

Using Siri in Messages.

When using apps, Siri takes the shape of a notification so as to be as unobtrusive as possible (if summoned from the lock screen or the home screen, it will still be fullscreen).

Siri in iOS Mogi.

In the example above, I ask Siri to show me pictures of Japan as I want to send one to my friend Yannick. Once the request is fulfilled, the result is displayed in the Siri notification so I can continue to do what I was doing without being interrupted. I can swipe down the notification to reveal more and select the photos I want to send.

Selecting pictures before sending them.

Or, I can directly drag a picture from the expanded notification to the app below (finally putting the drag-and-drop API on the iPhone to good use):

Dragging a photo from Siri to Messages. To cancel the drag-and-drop, drag the object to the screen’s borders.
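To give an idea of the plumbing involved, here is a minimal sketch using UIKit's existing drag-and-drop API (iOS 11+). The SiriPhotoCell class is a hypothetical cell inside the expanded Siri notification; note that drag interactions are disabled by default on iPhone, and system-wide drops between processes are exactly the kind of gap this concept plays with.

```swift
import UIKit

// A minimal sketch, assuming a hypothetical SiriPhotoCell inside the expanded
// Siri notification. It uses UIKit's drag-and-drop API to let a photo be
// picked up and dropped onto the app below.
final class SiriPhotoCell: UICollectionViewCell, UIDragInteractionDelegate {

    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        contentView.addSubview(imageView)
        imageView.isUserInteractionEnabled = true

        let drag = UIDragInteraction(delegate: self)
        drag.isEnabled = true // drag interactions are off by default on iPhone
        imageView.addInteraction(drag)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Wrap the photo in an item provider; the receiving app (Messages, Mail, …)
    // decides how to consume it.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = imageView.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}
```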

If it has not been used for a while, the notification shrinks down to the top:

Shrinking down.

And it is possible to open it again by swiping down from the top of the screen (we’ll come back to it later):

Opening Siri again. The last request pops up again.

What’s really cool about the new Siri is that the results it displays can be used in another app very easily if needed:

When switching between apps, the result of the request remains accessible so it can be used again.

I believe designing Siri to be non-intrusive allows for so many use cases and could really set it apart from its competitors, thanks to its deep integration into the OS. Here are a few examples of what could be achieved with this new Siri:

In response to “Find restaurants nearby”. Tap the bubble for more info, or drag to share it. Swipe to see other restaurants nearby.
Proactive Siri. If Siri detects an address or determines you are running late, it pops up and lets you take action. In this case, tap the notification to send a message to Craig, or swipe down the notification to see your options. An action sheet will let the user choose among their favorite messaging apps.

And there's even more to it. In iOS Mogi, it is possible to ask Siri to show pages from one app while doing something else in another. So, for instance, I could be writing an email and want to add a picture that a friend sent me in iMessage. Here's what it would look like in iOS Mogi:

Opening a conversation while writing an email thanks to Siri.
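A plausible way for apps to expose such pages to Siri already exists: donating NSUserActivity objects, the same mechanism Siri Shortcuts builds on in iOS 12. The sketch below is only an assumption of how a messaging app could describe a conversation screen; the activity type string and userInfo keys are made up for illustration.

```swift
import Foundation

// A minimal sketch, assuming a messaging app donates one NSUserActivity per
// conversation screen. Siri could then surface that page on request without
// the user leaving the app they are currently in. The activity type and
// userInfo keys are hypothetical.
func donateConversationActivity(contactName: String) -> NSUserActivity {
    let activity = NSUserActivity(activityType: "com.example.messages.open-conversation")
    activity.title = "Conversation with \(contactName)"
    activity.userInfo = ["contact": contactName]
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true // iOS 12: lets the system suggest it
    activity.becomeCurrent()                // donate it as the current activity
    return activity
}
```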

That's the beginning of true multitasking on a mobile device, and I think it is better suited to the constraints of mobile than simply splitting the screen vertically in two.

« Hey Siri, I want to… »

You know how you sometimes feel frustrated when you try to send a message with Siri, and end up picking up your phone and typing the text yourself? In iOS Mogi, instead of relying completely on Siri to do things for you, you can ask it to help you get things done faster. No more wandering through the UI: simply begin your sentence with « I want to… » and Siri will let you do it, without leaving what you were doing (in iOS Mogi, what you are doing is really precious).

Writing a message to Yannick while writing an email.

And it works right from the lock screen:

Writing to Yannick from the Lock Screen with Siri.

« Hey Siri, scroll down a bit »

With Siri now being completely non-intrusive, new use cases show up. One of them is Siri actions.

Siri actions basically translate any touch gesture into a voice command. From tapping to scrolling, everything can now be performed using voice alone. So I can ask Siri to scroll down my list of albums, for instance, and open High as Hope by Florence + The Machine.

Navigating in Apple Music with Siri.
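To make the idea concrete, here is a purely hypothetical sketch of how such commands could be routed to the frontmost app: the system resolves a phrase like « scroll down a bit » into a generic UI action, and the app applies it to its main scroll view. None of these types exist in SiriKit; they only illustrate the shape of the idea.

```swift
import UIKit

// Hypothetical: a resolved Siri action, produced by the system from a phrase
// such as « scroll down a bit » or « open High as Hope ».
enum SiriUIAction {
    case scroll(dy: CGFloat)
    case tap(elementLabel: String)
}

extension UIScrollView {
    // Apply a resolved action to this scroll view.
    func apply(_ action: SiriUIAction) {
        switch action {
        case .scroll(let dy):
            // « Scroll down a bit » → nudge the content by roughly half a screen.
            let target = CGPoint(x: contentOffset.x, y: contentOffset.y + dy)
            setContentOffset(target, animated: true)
        case .tap:
            // Tapping a labelled element would go through the accessibility tree,
            // much like VoiceOver already does.
            break
        }
    }
}

// Example: albumsScrollView.apply(.scroll(dy: albumsScrollView.bounds.height / 2))
```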

Elementary use cases like this one were previously not possible, as Siri would take over the whole screen and was not context-aware. Now, when writing an email for instance, I can ask Siri to change the recipient on the go, modify my signature, or even change the text style, all without stopping what I am doing:

Editing an email on the go with Siri.

I think it would be a huge step forward for disabled people in particular. Siri actions would make it easier than ever for them to navigate the OS. Even for non-disabled people, I think it would be really useful when your hands are busy, like when cooking, or simply to make repetitive tasks easier (as seen above in Mail).

And things could go even further thanks to ARKit 2, which has proved precise enough to track eye movement (but let's try to keep things simple for this concept).

I believe that Siri actions are very much in line with the original purpose of Siri, which was to let users perform simple actions with their voice. Making Siri non-intrusive just takes it a step further.

Siri actions also come with a new ability for Siri: saving elements from anywhere to use them elsewhere. Open a photo, and say « Save this photo » for Siri to save it.

Ask Siri to save elements and it will keep them for you.

Open another app, and drag the photo from the Siri notification to the app:

Dragging a saved picture into Medium.

And it works for copy-pasting too:

When content is copied from somewhere, it appears in a Siri notification so it can be used elsewhere easily. Swipe down the notification to see all of your previous saved content, like text, images, or emails.

Simply tap to paste your text into the current app, right where the cursor is. You can also drag the content wherever you want:

Pasting content from Notes to Medium in a second thanks to Siri.

You can ask Siri to show you all your saved elements with the sentence « Show me all my saved elements »:

Just ask “Show me my saved elements” and Siri will display them in front of you. You can drag them onto the app below or tap them to edit before use.

Once saved elements are used elsewhere, they disappear from the list.
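Under the hood, this could be as simple as a small store of item providers that empties itself as elements are consumed. The sketch below is purely hypothetical and only models the behaviour described above.

```swift
import Foundation

// Purely hypothetical: a model of Siri's « saved elements » list. Items are
// kept until they are dragged or pasted somewhere, then removed.
struct SavedElement {
    let id = UUID()
    let provider: NSItemProvider // text, image, email, …
    let savedAt = Date()
}

final class SavedElementsStore {
    private(set) var elements: [SavedElement] = []

    // « Save this photo » → wrap the content in an item provider and keep it.
    func save(_ provider: NSItemProvider) {
        elements.insert(SavedElement(provider: provider), at: 0)
    }

    // Called once an element has been dropped or pasted into another app.
    func consume(_ id: UUID) {
        elements.removeAll { $0.id == id }
    }
}
```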

And that's it for the new Siri. Now, what if we applied the same principles of non-intrusive multitasking to other areas of the OS? Surely, the new gestures and visual language could be used elsewhere. What could we do with them? Before digging further into this idea, let's see if we can improve the experience in a particular area where I personally spend a lot of time: Apple Maps.

Apple Maps

Don't hate me, but I really enjoy Apple Maps. Although some of its data may not be as trustworthy as it should be, especially here in France, I find the user experience and interface very delightful. The drawer at the bottom, introduced in iOS 10 when the app was redesigned, is truly a brilliant idea. No wonder we have started to see it appear in other apps, like Lyft for instance.

However, something that has been bothering me from time to time is the lack of context awareness, especially when using public transport. More specifically, when I'm riding a line and try to get new directions, Apple Maps doesn't know I'm on that line and gives me directions based solely on my current location, when it should base its computations on both my current location and my current line. Very often, I end up with directions from starting points that are unreachable from where I am; at that point, navigating with Apple Maps becomes useless.

Surely there must be ways for Apple Maps to infer which line I am currently using. Lines are already known to Apple Maps; maybe it could simply check my position over time and compare it to the trajectory of each line. Once the algorithm determines that I may be using, say, the Keiō Line in Tokyo, here's what could happen:

When Apple Maps detects the user is in a line.
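As a rough illustration of that inference, the sketch below compares a handful of recent locations against each candidate line's track and keeps the line that stays closest on average. The TransitLine type and the distance threshold are assumptions; a real implementation would also use schedules, direction of travel, and motion data.

```swift
import CoreLocation

// A rough sketch of the inference described above. `TransitLine` and the
// 150 m threshold are assumptions for illustration only.
struct TransitLine {
    let name: String
    let track: [CLLocationCoordinate2D] // ordered points along the line
}

func likelyLine(for recentLocations: [CLLocation],
                among lines: [TransitLine],
                maxAverageDistance: CLLocationDistance = 150) -> TransitLine? {
    // Distance from a location to the closest point of a line's track.
    func distanceToTrack(_ location: CLLocation, _ line: TransitLine) -> CLLocationDistance {
        line.track
            .map { CLLocation(latitude: $0.latitude, longitude: $0.longitude).distance(from: location) }
            .min() ?? .greatestFiniteMagnitude
    }

    // Score each line by the average distance of the recent positions to it.
    let scored = lines.map { line -> (line: TransitLine, average: CLLocationDistance) in
        let total = recentLocations.map { distanceToTrack($0, line) }.reduce(0, +)
        return (line, total / Double(max(recentLocations.count, 1)))
    }

    // Only suggest a line if the user's path has stayed close enough to it.
    return scored.min(by: { $0.average < $1.average })
        .flatMap { $0.average <= maxAverageDistance ? $0.line : nil }
}
```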

A prompt could ask the user whether they are really using the Keiō Line. The prompt can be dismissed by swiping it down; if the line is correct but not the direction, it is possible to swipe left or right until the right direction is found. Once the user confirms their line, Apple Maps would calculate new directions based on this information:

Once the line is confirmed, itineraries will be calculated using the line as a starting point.

3D Touch the small black button to have a linear view of your position on the line in real time:

Linear view of the line with the position in real time.

Why am I suddenly talking about Apple Maps, and how is it related to the previous part, you may ask? Well, here's the thing: iOS lacks ways of keeping users updated about things that are happening right now. Sure, it uses some tricks here and there to do so, but they feel like hacks, as we'll see a bit later. For instance, I think it would be really interesting for users to easily keep track of their route while doing something else (like playing or reading in the subway), and they should also be able to quickly go back to Apple Maps (or their favorite navigation app) without having to open the App Switcher or go back to the Home Screen.

In response to this issue, let me introduce you to Live Notifications.

Live Notifications, or multitasking for mobile

First and foremost, let’s highlight how iOS tries to inform the user that things are happening in the background.

There’s the timer:

During a phone call/Facetime:

During an itinerary:

Yes, that’s the Lock Screen. This one definitely feels like a hack. How come third-party apps like Google Maps or Waze can’t take control of the Lock Screen as well?

When using the phone as a personal hotspot:

When the screen or a memo is being recorded:

And… that’s it (I didn’t put the music player in the list as it is treated a bit differently, and I don’t see why it should be changed).

What about when an Uber is on its way? When food is about to be delivered? When a video is playing on YouTube or Netflix? Or when… we're using public transport and need to easily keep track of the route? (😜)

It's good that Apple made phone calls and Apple Maps itineraries easily accessible (in the status bar or on the Lock Screen). However, these mechanisms are too restrictive (they are limited to a few native apps, and no action can be triggered without opening the corresponding app), they should be accessible anywhere and anytime without interrupting the current task, and some of them are not visually coherent or well integrated into the system. In a nutshell, they feel more like quick fixes that hack the system than true features, and there should be a better, more elegant way to handle these situations and more (one that is open to third-party apps as well!).

Live Notifications were designed to solve this issue.

Let’s take our earlier example of a user reading a book on Apple Books while on a metro line. Here’s what it could look like to have a peek at the current route:

Checking the route while reading thanks to a Live Notification. After a while of non-interaction, the blue bar fades away. Also, the name of the app in the Live Notification can be changed to tell more about the situation (here, “Apple Maps” has been changed to “Using Keiō Line”).

To have a glimpse at their current position on the line, the user simply needs to drag down the small bar at the top of the screen.

To open Notification Center, drag further:

Think of this gesture like the old iOS 10 one: you drag down a bit from the top, you feel a haptic tick letting you know your Live Notification is open, and if you drag further, you open Notification Center.
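As a small illustration of that two-stage pull, here is a sketch built on a pan gesture recognizer and a haptic generator. The threshold values are made up; only the general shape of the interaction matters.

```swift
import UIKit

// A minimal sketch of the two-stage pull-down: a short drag reveals the
// Live Notification (with a haptic tick), a longer one continues into
// Notification Center. Thresholds are invented for illustration.
final class PullDownController {
    private let liveNotificationThreshold: CGFloat = 60
    private let notificationCenterThreshold: CGFloat = 180
    private let haptic = UIImpactFeedbackGenerator(style: .light)
    private var didOpenLiveNotification = false

    func handle(_ pan: UIPanGestureRecognizer) {
        let dragDistance = pan.translation(in: pan.view).y

        switch pan.state {
        case .changed:
            if !didOpenLiveNotification, dragDistance > liveNotificationThreshold {
                didOpenLiveNotification = true
                haptic.impactOccurred() // tells the user the Live Notification is open
                // …reveal the Live Notification here…
            }
        case .ended:
            if dragDistance > notificationCenterThreshold {
                // …continue into Notification Center…
            } else if didOpenLiveNotification {
                // …settle on the expanded Live Notification…
            }
            didOpenLiveNotification = false
        default:
            break
        }
    }
}
```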

And just like that, in a few seconds and without leaving Apple Books, the user can know where they are on the line and not miss their stop.

These gestures should feel familiar to you. If you remember, they are the same ones used for Siri. In fact, in iOS Mogi, Siri is a Live Notification as well. So, all the gestures that were introduced before are also available for other tasks.

For instance, in this case, the user can swipe down the Live Notification to reveal their position on the map in real time. They can also trigger actions right from there:

The Live Notification once it’s open, showing in real time the position on the map. (I personally have never found it convenient to 3D Touch an element located near the top of the screen. I prefer a good old swipe, especially now that iPhones are longer than ever. However, 3D Touching notifications in the Notification Center makes sense, as they are more accessible there)

If they want, they can also tap the Live Notification, just like a normal notification, to open the corresponding app:

Opening Apple Maps with a tap.

Live Notifications are the natural evolution of Rich Notifications introduced in iOS 10. Now more than ever, they feel like mini apps.

How does one trigger Live Notifications? The user has nothing to do: apps decide whether to trigger a Live Notification or not, depending on the situation. When a Live Notification is activated, this is what happens:

The stopwatch transforming into a Live Notification.

The app shrinks down into a Live Notification to notify the user, then into a little bar at the top of the screen to stay out of the way.
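If this were opened to third-party developers, the API could be as small as a start/update/end trio. The sketch below is entirely hypothetical; none of these types exist in iOS, they only describe the contract an app like Uber or Google Maps would need.

```swift
import UIKit

// Entirely hypothetical: the contract a third-party app could use to publish
// and update a Live Notification while it is in the background.
struct LiveNotificationContent {
    var title: String      // e.g. "Using Keiō Line"
    var body: String       // e.g. "Next stop in 3 min"
    var tintColor: UIColor // bar color: green for calls, blue for navigation, …
}

protocol LiveNotificationPublishing {
    func start(with content: LiveNotificationContent) -> String // returns an identifier
    func update(_ identifier: String, with content: LiveNotificationContent)
    func end(_ identifier: String)
}

// Example usage from a navigation app moving to the background:
// let id = liveNotifications.start(with: LiveNotificationContent(
//     title: "Using Keiō Line",
//     body: "Next stop in 3 min",
//     tintColor: .systemBlue))
// …later, as the position changes…
// liveNotifications.update(id, with: updatedContent)
```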

Here are a few examples of how things that are happening in the background could be handled by the OS thanks to Live Notifications:

Phone calls. Notice that the bar can change its color depending on the type of activity (green for phone or FaceTime calls, blue for navigation, etc…).
Thanks to Live Notifications, Uber can tell you in real time when your driver will arrive, even if you’re not in the Uber app. Notice that the style of the Live Notification can be customised by the developers to fit their apps’ aesthetics.
Checking the itinerary on Google Maps while writing a note (Notes is in the background and the Live Notification has been expanded).
Medium's Now Reading Live Notification. (Dear Medium team, if you read this, please introduce a feature where we can quickly pick up a story where we left off!)

Live Notifications can also be used to quickly share something:

Medium’s Now Reading Live Notification pops up when Medium is closed. You can then share the article with a simple drag-and-drop.

Same for screenshots which can now be shared with a single drag-and-drop:

Sharing screenshots with drag-and-drop.

When multiple Live Notifications are running at the same time, they stack in iOS 12 fashion:

Multiple Live Notifications running.

They can be easily rearranged with drag-and-drop so you have your favorite Live Notification on top of the rest:

Tap the grouped Live Notifications and rearrange them with drag-and-drop. You can also just have a look at all your Live Notifications with this ungrouped view.

And of course, Live Notifications are also available directly from the Lock Screen, so it’s easy for you to have a glimpse at your itinerary or get back to where you were:

Reading an article on Medium while using the Keio Line. On the Lock Screen, developers can choose to have their Live Notifications expanded by default, for itineraries for instance where it’s important to have directions. (I have also slightly updated the style of the Lock Screen to match the new bolder style of iOS)

Wrapping up

That's it for iOS Mogi. iOS Mogi is the third iteration of a concept I started about two years ago (at the same time I started working on macOS Newton). Back then, it was called iOS Fuji.

Since the beginning, I have wanted to find an elegant way to bring multitasking to mobile, and splitting the screen was never an option. I wanted something more coherent with the mobile approach, and I hope you find that Live Notifications are a good step in that direction.

As always, generous, genuine and generative feedback is appreciated. :)

EDIT: My new article is out! This time, I try to rethink live collaboration on iOS. If you want to see how Live Notifications could be used elsewhere, please head to this story.
