Why UX is booming in the automotive industry

Part 2: Human Machine Collaboration

Paul Schouten
UX Collective


Future traffic? — https://vimeo.com/106226560

This is the second part of a three-part series on why UX design is more relevant than ever in the automotive industry. In part 1, I discussed how the future car infotainment system will have to seamlessly integrate our digital needs. In this part, I will focus on the car's (current) main function: driving!

Driving is not becoming easier. There are more and more people on the road and our digital lives are constantly distracting us. Luckily, cars can help us, as they are becoming intelligent. They're getting eyes and ears (sensors) and a brain (AI) to make sense of the environment around them. The challenge for UX designers is to design the interaction, or better, the collaboration we will have with these virtual co-pilots that will help make driving safer.

Sensing the context

Cars have been packed with computers and sensors for years. A high-end car today can have more than 100 electronic control units (ECUs). These sensors and ECUs take care of things like traction control, active suspension, automatic climate control and park assist, you name it.

However, due to developments such as sensor fusion and machine learning, these computers are becoming crazy smart. Instead of focusing on one task only, as they did previously, they are able to make an intelligent interpretation of the car's full context. Yeah, just like you, only in all directions, all of the time!

A Tesla Model S, for instance, has 8 surround cameras, 12 ultrasonic sensors and a forward-facing radar, and some brands are even integrating lidar in their cars. Through neural networks, we can make sense of all the input from these sensors. And by fusing all this information together, our understanding of the context becomes more robust, especially since different kinds of sensors complement each other. Sensor coverage like Tesla's allows for 360 degrees of visibility around the vehicle, up to hundreds of meters away.
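To give a feel for the idea, here is a minimal sketch of sensor fusion, not production code: each sensor reports a distance estimate with a variance, and the estimates are combined with an inverse-variance weighted average so that the more confident sensor counts more. All names and numbers below are illustrative assumptions; real systems use far more sophisticated filters.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's estimate of the distance to an obstacle (illustrative)."""
    sensor: str        # e.g. "camera", "radar", "ultrasonic"
    distance_m: float
    variance: float    # lower = more confident

def fuse(detections: list[Detection]) -> float:
    """Inverse-variance weighted average: confident sensors count more.

    A toy stand-in for real sensor fusion (e.g. a Kalman filter).
    """
    weights = [1.0 / d.variance for d in detections]
    total = sum(weights)
    return sum(w * d.distance_m for w, d in zip(weights, detections)) / total

# A radar is precise at range; a camera may be less so in fog, for example.
readings = [
    Detection("radar", distance_m=48.2, variance=0.5),
    Detection("camera", distance_m=51.0, variance=4.0),
]
print(f"fused distance: {fuse(readings):.1f} m")  # lands close to the radar's estimate
```

Notice how the complementary strengths of the sensors are exactly what makes the fused estimate more robust than either sensor alone.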

TomTom’s representation of the digital map

To get an even better understanding of the environment, we can correlate all this information with virtual representations of the world, a.k.a. digital maps, something we make at TomTom. By doing this, the car can better understand where other road users are, which traffic rules apply and which future scenarios are possible.

Another cool thing about these digital maps is that they allow the car to look beyond the coverage of those onboard sensors. Using maps, the car can see far, far away: around corners, or past elements that block the view, such as big trucks. And what other cars see can be communicated back to the map as well. Think of more precise locations of other road users, the status of traffic lights, or hazards on the road such as black ice, all in real time.
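As a hedged sketch of this "seeing beyond the sensors" idea: given a planned route and hazards reported in a live map layer, the car can flag what lies ahead, far past sensor range. The data structures and the sensor-horizon figure below are made up for illustration; they are not TomTom's actual API.

```python
from dataclasses import dataclass

@dataclass
class MapHazard:
    """A hazard reported in the live map layer (illustrative, not a real API)."""
    kind: str              # e.g. "black_ice", "stopped_vehicle"
    route_offset_m: float  # distance along the planned route

SENSOR_RANGE_M = 250.0  # rough onboard sensor horizon (assumed)

def hazards_beyond_sensors(hazards: list[MapHazard],
                           vehicle_offset_m: float,
                           lookahead_m: float = 2000.0) -> list[MapHazard]:
    """Return upcoming hazards the onboard sensors cannot see yet."""
    lo = vehicle_offset_m + SENSOR_RANGE_M
    hi = vehicle_offset_m + lookahead_m
    return [h for h in hazards if lo < h.route_offset_m <= hi]

live_map = [
    MapHazard("black_ice", route_offset_m=1800.0),
    MapHazard("stopped_vehicle", route_offset_m=300.0),  # sensors will catch this one
]
for h in hazards_beyond_sensors(live_map, vehicle_offset_m=100.0):
    print(f"heads up: {h.kind} in {h.route_offset_m - 100.0:.0f} m")
```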

All these sensors, combined with a digital map, are great for assessing the context outside the vehicle. But cars are also getting eyes on the inside. They can keep an eye on the driver and passengers. Are the seatbelts on, are the driver's hands on the wheel and eyes on the road, or is he or she secretly checking a smartphone?

Driving Assistant

All these sensors create a lot of contextual data! What can we possibly do with it all? I know, the car could drive itself! Well… that might still take a while. And it will take even longer before it is adopted by the majority of cars. Until then, the virtual co-pilot will not drive for us; it will help us drive. It's called ADAS (Advanced Driver Assistance Systems) and it comes in four flavours, described below.
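The four flavours form an escalating ladder of assistance. As a minimal sketch, they could be modelled as an ordered enum; this framing is my own illustration, not an industry-standard API.

```python
from enum import IntEnum

class Assistance(IntEnum):
    """The four flavours of ADAS, ordered by how strongly the car steps in."""
    INFORM = 1     # e.g. a navigation instruction
    WARN = 2       # e.g. a forward collision warning
    INTERVENE = 3  # e.g. automatic emergency braking
    AUTOMATE = 4   # e.g. adaptive cruise control with lane keeping

# Being an IntEnum, levels can be compared, which is handy for escalation logic.
print(Assistance.WARN < Assistance.INTERVENE)  # True
```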

Inform

Remember those great moments on holiday when your co-driver was struggling with a way too clumsy paper map in the passenger seat, informing you to take the exit you just passed?

No? You're lucky, kiddo, as this is nowadays done by your virtual co-pilot in the form of navigation. The navigation device (probably your phone) knows the best route for you and where you currently are. So, by knowing this, it can give you the right instruction at the right time.

And this technology is still improving. Our maps are becoming more detailed and positioning is becoming more precise. Future navigation systems will not only get you on the right road, they will get you in the right lane. And they will tell you what to do at complex intersections, taking into account traffic rules, traffic lights and all other road users present.

Informing the user of possible future scenarios

I hear you thinking: "I'm not stupid, I can see this information for myself!" Sure. But when technology filters the information from the environment for you, it can lower the cognitive workload required while driving. In complex and unfamiliar driving contexts, this can make driving a lot safer. People also started to prefer navigation over signs and paper maps at some point…

Warn

Of course, informing the driver to stop at a traffic light is pointless when he or she is already slowing down. Therefore, informing can be quite subtle and unobtrusive. But when the driver is not slowing down, the level of criticality rises. Now, instead of informing, we need to warn the user.
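As a toy sketch of that escalation logic: if the driver is already reacting, stay subtle; if not, escalate as time runs out. All thresholds and signal names here are assumptions for illustration, not real vehicle interfaces.

```python
def traffic_light_alert(distance_m: float, speed_mps: float,
                        decelerating: bool) -> str:
    """Pick an alert level for an upcoming red light (illustrative thresholds)."""
    time_to_light_s = distance_m / max(speed_mps, 0.1)
    if decelerating:
        return "inform"      # subtle icon: the driver is already on it
    if time_to_light_s > 6.0:
        return "inform"      # plenty of time, keep it unobtrusive
    if time_to_light_s > 2.5:
        return "warn"        # sound plus a flashing icon
    return "intervene"       # last resort: emergency braking

print(traffic_light_alert(distance_m=80.0, speed_mps=20.0, decelerating=False))
# -> "warn": four seconds to the light and no sign of braking
```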

There are already quite a lot of ADAS features on the market that do just that: warn the user when something is about to go wrong. And this is great, since sometimes we forget to check our blind spot before switching lanes, or we don't see the car in front of us braking because we are distracted, or, even worse, we simply fall asleep. Yes, the car checks both the outside and the inside of the vehicle. Often these systems direct your attention with a loud noise or some flashy lights, so that you can respond as fast as possible.

Blind Spot Detection System by KIA — https://www.youtube.com/watch?v=1Wl5fgTpXTk

Intervene

And when the driver is not able to respond to the warning fast enough, the situation gets even more critical. Advanced systems can then respond in time as a last safety measure; think of emergency braking or steering maneuvers. And these intelligent systems, made up of sensors and lots of computing power, are way faster than people: faster at analyzing the situation and at responding adequately to it.

Automate

Driving can be quite boring, especially when you have to drive for long stretches at a time. Some cars are now able to do this boring part for you: keeping the car at the right speed, at a safe distance from the car in front of you, and between the lines.

Is this merely a comfort feature, or does it make driving safer? Maybe it does, since the driver is freed up from some of the cognitive workload of performing these simple tasks. However, they really are simple, and a driver often performs them on auto-pilot (pun intended).

In fact, the car is doing something a 5-year-old could do. The hard part of the driving task is still left to the driver: staying vigilant for unexpected events, understanding traffic rules, predicting what other road users will do, and so on. Again, it's a team effort, a human machine collaboration.

Status quo

This all sounds very promising, right? Well, it is. But the real-world implementation is not really optimal yet.

ADAS features are often treated separately. When front-facing radars were introduced in cars, things like Forward Collision Avoidance came with them. Ultrasonic sensors enabled other features, like Blind Spot Detection and Park Assist. And these luxury features would each be sold separately on top of the standard car package; business is business.

From different interaction models for cruise control (left) to all kinds of icons and buttons (right)

So, each ADAS function gets its own fancy name, button, icon and settings. And for every brand and model this has to be different, in order to differentiate from the competition. And, even more importantly, the logic of how such a safety feature works, and what its limitations are, differs for every car.

Do you see the issue? For every new car, I have to learn how to enable or disable these safety features, or how to change their settings. I need to build a mental model that allows me to understand how each system works and what its limitations are, since there aren't really standards or conventions as of yet.

This is a lot to digest, and I only want the car to get me from A to B. So, what do I do? Yup, I don't even bother using them. But things can get quite dangerous if I'm unaware of a safety system that is still active. A confusing alarm or an unexpected intervention can scare the hell out of me, potentially causing me to crash. Quite the opposite of what you'd like from a safety system.

This all doesn't sound ideal for the future we are heading towards, in which we will not own our cars but step into a different one every time we want to drive. Car sharing is the future, if we are to believe the trends. But you know what, this issue isn't limited to shared cars. People often don't even know which ADAS features are in their own car. And when they do, they often don't understand their functions or limitations.

Let’s design holistically

So, we end up with a lot of separate systems in cars. We have infotainment features that are part of the car. People often expand these with external devices such as phones, music players or navigation devices. And on top of that, we are adding many safety systems. And yes, these systems often function in isolation.

How nice would it be if all these systems co-existed in pure harmony? A smartphone serves as an OK example of this. When I get an instruction from my favorite navigation application, my beloved music application lowers its volume for the duration. When my mom gives me a call, the video I was watching on the interwebs pauses. That's a great user experience!

So, should my forward collision avoidance system hold its warning until the voice of the navigation system is done talking? Well, maybe not. An important chunk of the user experience in a car is called safety. That's what makes UX design for cars different from most consumer tech.

Information hierarchy

First of all, it's important to have a clear hierarchy of information. A collision warning is very critical and requires the user to respond adequately within milliseconds. A lane departure warning is less critical when no big danger is foreseen. A navigation instruction is even less critical; nobody gets hurt when an exit is missed, at least with friendly passengers. Infotainment is the least critical; an incoming text message should never distract a driver.
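One way to make that hierarchy concrete is a simple priority ordering, so that a collision warning always preempts a navigation instruction, which in turn preempts infotainment. This is a sketch; the numeric levels and example messages are my own illustration.

```python
import heapq

# Lower number = more critical (the illustrative hierarchy from the text above)
PRIORITY = {"collision_warning": 0, "lane_departure": 1,
            "navigation": 2, "infotainment": 3}

queue: list[tuple[int, str, str]] = []
for kind, text in [("infotainment", "New text message"),
                   ("navigation", "Take the next exit"),
                   ("collision_warning", "Brake now!")]:
    heapq.heappush(queue, (PRIORITY[kind], kind, text))

# The most critical message is always presented first, whatever arrived when.
while queue:
    _, kind, text = heapq.heappop(queue)
    print(f"{kind}: {text}")
```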

Low-fidelity user test setup for trying out different interfaces

The right interface

There are multiple ways of communicating information to the user. We can use audio, haptic or visual modalities, but often it is a combination of two. Take a warning: when using audio only, it's quite hard to explain what's happening. You could use a voice assistant explaining that "there is a car in your left blind spot", but that sentence might be a bit too long and doesn't sound as critical as it should. An alarming sound alone can cause confusion when not explained. So, combining an alarm with something visual can work, like a flashing car icon in the left side mirror.

In general, for information that has to be understood ASAP, as in a critical situation, it is wise to use icons, colors, shapes or sounds that are easy to understand. It would therefore be great if the car industry adopted some conventions…

A navigation instruction is less critical, but often needs a more elaborate explanation. For such information, it is better to use rich interfaces. Some people like voice instructions and some people don't, but most people want to see a map. Although an instruction can be shown on a HUD (head-up display) in a simplified way, it's often better to use a screen that can show a detailed, schematic representation of the real world, especially in complex traffic situations.

The position of the interface is also relevant. As mentioned earlier, a flashing car icon in the left side mirror, combined with an audio alarm from the left speaker, directs the user's attention to the correct location. A HUD, for instance, is in the driver's line of sight; use it to show the most critical information in a way that is understood at a glance, like a collision warning, not for Twitter messages.
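Pulling these observations together, the criticality of a message could drive both the modality and the placement of the interface. The mapping below is my reading of the examples above, not a standard; treat it as a sketch.

```python
def present(criticality: str) -> dict:
    """Map message criticality to modality and placement (illustrative)."""
    if criticality == "critical":      # e.g. a collision warning
        return {"modality": ["alarm_sound", "icon"],
                "where": "HUD or the relevant side mirror"}
    if criticality == "elevated":      # e.g. navigation at a complex intersection
        return {"modality": ["voice", "detailed_map"],
                "where": "cluster display"}
    return {"modality": ["notification"],  # infotainment waits its turn
            "where": "center display, when the situation is calm"}

print(present("critical"))
```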

The situationally aware co-pilot

So, it is important to understand what kind of information we can communicate to the user, and where and how we should communicate it. But it's also relevant to know when to inform; the context of a vehicle changes constantly as it is in motion.

This is where the virtual co-pilot enters: one that is fully aware of the context inside and outside the vehicle at all times. Getting an incoming call while you are in a stressful traffic situation? Let's mute that distracting ringing sound for a bit. Your navigation tells you to change lanes, but the lane is fully occupied? Since you have your turn signal on and you are slowing down, there's no need to remind you of the navigation instruction again and again. I can think of a thousand examples of how the user experience could be much more relaxed if a situationally aware co-pilot curated the flow of information.
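A minimal sketch of such curation, using the two examples above: distracting notifications are held back while the driving situation demands attention. The context signals are hypothetical; a real co-pilot would derive them from the sensors described earlier.

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    """Hypothetical context signals a situationally aware co-pilot might use."""
    traffic_complexity: float  # 0.0 calm .. 1.0 stressful
    turn_signal_on: bool
    decelerating: bool

def should_deliver(message_kind: str, ctx: DrivingContext) -> bool:
    """Curate the flow of information based on the current situation."""
    if message_kind == "incoming_call" and ctx.traffic_complexity > 0.7:
        return False  # mute the ringing in a stressful situation
    if (message_kind == "lane_change_reminder"
            and ctx.turn_signal_on and ctx.decelerating):
        return False  # the driver is clearly already acting on it
    return True

busy = DrivingContext(traffic_complexity=0.9, turn_signal_on=True,
                      decelerating=True)
print(should_deliver("incoming_call", busy))         # False: hold the call
print(should_deliver("lane_change_reminder", busy))  # False: no nagging
```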

And that curation could also make driving safer. We've been doing low-fidelity tests on this topic, to see whether "adding" more information, such as traffic rules, traffic lights or other road users, to our navigation experience could be helpful to the user. We've done this in EU Horizon 2020 projects such as ADAS&ME and VI-DAS, with several partners in the automotive industry. This research is relevant, as lots of accidents still occur due to misinformed drivers. They focus on the wrong traffic light, don't give right of way at complex intersections, or miscalculate highway entry ramps. In these situations, we can assist the user with our map data.

User test on different screen variations (cluster & center display)

Based on the context, the co-pilot can also assess whether this "extra" information is helpful or not. It might be the first time the driver encounters a situation, but by the fourth time, maybe not. And in a less complex situation, when there are no other road users to worry about, it might even be distracting. This doesn't mean the extra information has to be left out; it could still be good to communicate it, but more subtly. In the end, the information should help the driver, not distract him or her.

When the co-pilot is 'driving' in automated mode, we might want to show everything the car sees, instead of filtering. Why? Because the driver will now be checking on the co-pilot, instead of the other way around. When the co-pilot communicates what it will do next, it allows the driver to understand what to expect and what the limitations of the system are. We've seen a lot of accidents with 'automated cars' in the news so far. In these situations the driver wasn't checking on the co-pilot, so the co-pilot needs to check on the driver too, to see whether he is checking on the co-pilot… Woah, that's a lot of checking. But it is essential to a good collaboration between the two.

In the end, the driver and the co-pilot are driving together. But as technology evolves, the co-pilot will become able to drive… autonomously. This is what the industry calls level 3 and above. It will bring a lot of challenges, but also a lot of exciting, wild ideas, which I will write about in the next and final article in this series. So, stay tuned!

Thanks for reading!

I’m Paul, a UX designer with a passion for mobility. I’m currently working on exciting things for the automotive market @ TomTom.

Have a question or want to share your thoughts? Feel free to reach out to me at paul.schouten@tomtom.com
