
Episode 10

In this episode we break down the different types of fear, both irrational and legitimate, that people exhibit about AI during user testing. We cover what causes each fear and what to change in your designs to address it.

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Have you ever had this happen to you? You created a new AI product, you have made sure everything flows well, it solves the user’s needs, and the model accuracy is spot on. The only thing standing between you and being swarmed by venture capitalists is that they want to see one last test of the product with users.
When you run the test, everyone keeps comparing your app to killer robots from movies. They talk about Elon Musk, Stephen Hawking, and Bill Gates warning them about evil AIs taking over, and your product is the first step in that direction… somehow?
Let’s make sure that doesn’t happen. In today’s episode we will be covering how to separate out the noise of the fear of AI in user testing.
This podcast is called Design for AI
It is here to help define the space where machine learning intersects with UX, where we talk to experts and discuss topics around designing a better AI.
music is by Rolemusic
I’m your host, Mark Bailey
Let’s get started
music
When doing any user research for AI, you should ask the people you are talking to to describe what they think AI is versus what they think the app is. Ask them to define AI, machine learning, and machine intelligence. Then ask them to define the AI in their phone (for example Siri, or OK Google). Most people will define AI as “what hasn’t been created yet”, and AI models that already exist as just technology. These descriptions will help to level set what is general fear versus which fears are awoken by your product.
Now, when I talk about these fears in this episode, and when you ask these questions, don’t discount them. One of the problems developers and researchers can run into is being so deep in the product that it seems silly that people are worried about things like killer robots. But the way to a better product is to recognize the concerns your customers have instead of discounting their fears and writing them off.
First let’s cover the best case scenario. The reality is there are just too many movies out there where the AI is out to get you. No one thinks of AI as AI when it is working well, like in Star Trek, because, well, it just works. Even in cases where the AI is buggy, like C-3PO and R2-D2 in Star Wars, people don’t think of that as AI.
Any time anyone reads about AI in the news it is always paired with a picture of the Terminator. So if you are doing user testing and someone brings up a scary AI movie, that is normal; it is part of American culture. From my experience, if they don’t bring up the Terminator, then they are not familiar with AI at all. I use it as a litmus test to gauge the person’s knowledge level of machine learning.
That is not to say that large companies don’t try to avoid that association. If you look at Google’s ads, they refer to everything as machine learning to avoid it. Apple calls the chips in their iPhones neural engines, and Amazon uses the term “smart” instead of machine learning for everything: smart speaker, smart display, even smart home. So all of the big companies avoid the word AI as much as possible, and depending on your product and your users’ skill level, that might help your design strategy too.

Irrational Fears

So what if you do that and people still seem apprehensive about using your app? Well, I have broken down the different types of fear, both irrational and legitimate, that people usually exhibit about AI during user testing. We will cover what causes each fear and what to change in the design to take care of the problem.
We will cover the irrational fears first. The broad categories are:
  • Fear of the unknown
  • Mass unemployment
  • Bad actors
  • Uncaring super intelligence

Fear of the unknown

Let’s start with fear of the unknown, or fear of change. This fear has always existed when there is a large shift in society. Right now it is a fear of AI because that is what is on the news. Before that there was a general anxiety about new tech in general. Back in the ’60s it would have been a fear about nuclear power. Before that you can find old articles about people’s fear of mass media when newspapers became popular. This fear can be traced all the way back to the industrial revolution. In other words, it is normal.
When the person you are interviewing has a fear of the unknown, it can be frustrating because it is so vague; they will have a hard time conveying why they don’t trust your app, but they are sure there is some reason not to trust it. If you are running into this a lot with your target users, then most likely the app is doing a bad job of telegraphing its intentions.
Your product uses machine learning to take shortcuts. These shortcuts take the tedious steps away from the user. But you still need to tell the user what you are doing and the steps you took to arrive at the answer you are giving them. Without this, there is just a black box doing things they don’t understand.
A good example of this is Kayak.com. When a user searches for airline tickets, there is a whole lot going on in the background. Kayak could just throw up a progress bar, but instead it shows some of the steps it is taking to filter down flights based on the search parameters. So think of what your AI model is doing for the user and how you can write those steps out to make them obvious.
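To make that concrete, here is a minimal sketch (Python, with hypothetical stage names) of the same pattern: each step of the pipeline announces what it is doing in plain language, so the interface can narrate the work instead of hiding it behind a progress bar.

```python
# Each stage is a (user-facing message, function) pair. The UI can display the
# message before the stage runs, so the user sees what the model is doing.
from typing import Callable, Iterator, List, Tuple

Stage = Tuple[str, Callable[[list], list]]

def run_with_status(stages: List[Stage], results: list) -> Iterator[Tuple[str, list]]:
    """Run each pipeline stage and yield a human-readable status with its output."""
    for message, stage in stages:
        yield message, results
        results = stage(results)
    yield "Done: showing your best matches", results

# Placeholder stages; a real app would fetch, filter, and rank actual flights.
stages: List[Stage] = [
    ("Searching partner airlines…", lambda flights: flights),
    ("Removing flights with long layovers…", lambda flights: flights),
    ("Ranking by price and duration…", lambda flights: flights),
]

for status, _partial in run_with_status(stages, results=[]):
    print(status)
```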

Mass unemployment

The next fear is mass unemployment.
Pretty much everyone you talk to will probably bring this up as a fear. The older the person, the more likely they are to think it will affect someone else, not them. The younger someone is, the more likely they are to plan for it affecting them. If this is a strong concern, or if the worries you hear don’t fit that age pattern, then ask them about their view on how e-commerce or mobile disruption changed the jobs available. Their fear of AI should match the level of job disruption they saw there.
If they don’t match, there is a good chance you will need to look at the user journeys. Your product is probably doing something for the user that they want to do themselves. Find out which parts of the task people think are important and which are tedious, and only do the tedious parts for the user. If you are doing some of the important tasks, design in ways for the user to see everything that is happening and to take over at any time.
A good example of this is Mailchimp. It does a lot of things for you automatically. But at any time you can jump in and take over the configuration, and the most important step, sending the bulk email, is left up to the user and must be confirmed.

Bad actors

Next let’s talk about bad actors.
This covers all the people who would use AI for nefarious purposes. If ever there was a problem with a huge need for design, this is it. Currently there is a real problem with state-sponsored AI models trying to spread misinformation through deep fakes or impersonations. But I am guessing if you are listening to this podcast you are not trying to create that type of app, since it definitely breaks good design rules.
But with Russia admitting to influencing elections worldwide, this is definitely something that is in the news more and more, so there is a good chance it will come up during a user interview since it will be on the user’s mind.
Before doing user testing, you should have, as part of the experience, a way to recognize people or other systems trying to game your AI model. You don’t need to go into specifics, but it is good to let the user know how you are protecting them from bad actors.
Fears of cyber warfare and viruses can make users reluctant to give their information to you. So if your model requires collecting a lot of personal information, make sure to show how your app is protecting user data. If you can use techniques like federated learning, you can reassure the user that no matter what happens, since you never have their data, neither will anyone else.

AI super intelligence

The last irrational fear is the uncaring AI super intelligence.
This fear is based around the idea that AI is going to take over the world, and that when it does, machine learning models will have become “more human than human.” The idea here is that AI will be able to adapt faster than we can, which will at some point cause the AI to see people as a threat.
A way this fear can be expressed during user testing is by complaining about a lack of control, or an expectation of betrayal by the computer. Basically, the AI will be your friend until it isn’t. Again, these are people who want to have control over the system. Rather than giving them control over everything because they don’t trust the AI to make decisions for them, in this case it is better to strive for better transparency.
Obviously the user needs to feel like they have control over the system, so review what the user wants to do versus what is seen as tedious in the user journey. Another thing to make sure of: don’t try to be too human. The closer your model gets to the uncanny valley, the more likely users are to become suspicious.
For transparency, once you have the user journey mapped out, write up the steps and integrate them into the app, so that as the user goes through the app they know what you are doing for them and what to expect in the future steps of the journey.

Valid Fears

Now that we have covered all of the irrational fears, there are some valid fears people can have too. These fears might come up during user testing. From a design standpoint these are extra things you need to worry about if you are using AI in your product. They are:
  • Need for data safeguards
  • Need for data protection
  • Avoiding dark patterns
  • and loss of skills

Data safeguards

Let’s start with the need to design in data safeguards. I’ll link an example in the show notes. https://www.kiro7.com/news/local/woman-says-her-amazon-device-recorded-private-conversation-sent-it-out-to-random-contact/755507974
In this example, a couple’s Amazon Alexa malfunctioned and sent an audio recording of their private conversation to someone on their contact list.
Now obviously you can’t know where all the bugs will creep in. This is especially true with machine learning models since they are non-deterministic (non-deterministic means you can’t predict exactly what they will do). But you can know what normal behavior looks like, and machine learning models are great at detecting anomalies. If you had a model watching how Alexa was working, sending audio clips to the contact list would definitely stick out as an anomaly and could be shut down before the user noticed, or trigger a request for confirmation before continuing.
So start with your user journey and map out the expected behavior for your model. Even if you don’t know the exact actions your model will take, you should still be able to classify the types of answers. When you are running your beta tests you can find the norms for the types of actions your model is expected to take. A monitoring model can then make sure the model that is interacting with your users is acting within those norms.
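As a rough sketch of that idea (assuming scikit-learn is available; the feature names here are hypothetical), you could fit an anomaly detector on the actions observed during beta and pause for confirmation whenever a new action falls outside those norms:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Actions observed during beta, described as [contacts_messaged, seconds_of_audio_sent].
normal_actions = np.array([
    [0, 0], [1, 4], [0, 0], [1, 6], [1, 5], [0, 0], [1, 3],
    [0, 0], [1, 4], [1, 5], [0, 0], [1, 6],
])

monitor = IsolationForest(contamination=0.01, random_state=0)
monitor.fit(normal_actions)

def action_allowed(action):
    """Return False for actions the monitor flags as anomalous, so the app can pause and ask."""
    return monitor.predict([action])[0] == 1

print(action_allowed([1, 5]))    # a typical action seen during beta
print(action_allowed([40, 90]))  # audio sent to 40 contacts, far outside the norms
```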

Data hacking

The next real fear is losing data to hacking. Either the customer can get hacked or the company can. Either way, this is a real problem that happens more frequently than anyone wants to admit, and the consequences only get more severe as more data is collected.
This can be broken down into three different areas to verify: the company servers, the customer’s computer, and the communication between them. The first area is keeping things protected on the company servers. If you can’t do this, the company won’t be in business for long. The good news is that I’ve covered this previously: federated learning is an easy way for a company to protect itself. If the information isn’t stored on the servers, it can’t be hacked from them. Also look at how people interact with the model. If someone sends bad information, can they improve their own outcome to the detriment of others? You need to make sure the design doesn’t allow gaming the system.
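To show the federated idea in miniature (plain NumPy, a toy example rather than a production framework like TensorFlow Federated), each client fits on its own data and only the averaged weight updates ever reach the server:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a client's private data; only the delta leaves the device."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return -lr * grad

rng = np.random.default_rng(0)
true_weights = np.array([1.0, -2.0, 0.5])   # what the clients' data encodes
weights = np.zeros(3)                        # the server's shared model

for _round in range(20):                     # communication rounds
    deltas = []
    for _client in range(5):                 # each client's data stays on-device
        X = rng.normal(size=(10, 3))
        y = X @ true_weights + rng.normal(scale=0.1, size=10)
        deltas.append(local_update(weights, X, y))
    weights += np.mean(deltas, axis=0)       # server only ever sees averaged deltas

print(weights)   # close to true_weights, without the server seeing any X or y
```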
Verifying that the user’s system is protected is a little harder. Requiring strong authentication and encrypting all the data locally should be the default. There is a good chance you will also need to make sure the model isn’t compromised locally; verifying file integrity will tell you that the local model is running with the right information.
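A minimal sketch of that local integrity check (the file name and the published hash are placeholders; in practice the expected digest would ship with a signed update):

```python
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "replace-with-the-digest-published-alongside-the-model-update"

def model_file_is_intact(path: Path, expected_sha256: str) -> bool:
    """Hash the model file in chunks and compare against the published digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if not model_file_is_intact(Path("model.onnx"), EXPECTED_SHA256):
    raise RuntimeError("Local model failed its integrity check; refusing to load it.")
```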
Communication between the model and the servers also needs authentication, encryption, and data integrity checks on updates. It is impossible to cover all of this in one podcast, since data security is its own subfield of computing, so I am only trying to raise awareness. Simple causes of data breaches, like insecure servers or missing authentication, happen every year at big companies who should know better, so obviously awareness is not high enough.

Addictive AI

The next fact-based fear is technology that is purposely addictive. People feel like they are losing their human connection with other people and becoming more and more dependent on AI. Even companies with good intentions can cause this problem if they pick the wrong metrics to pivot on.
Maybe the biggest example of misdirected metrics happened with many social media companies. They start with the stated goal of helping people connect with each other. But you can’t help people connect if your company goes out of business, so to maximize profit they create an AI model to show people the information they want so they will also see ads. The model’s metric is set to maximize people’s time on site. The model gets very good at this because it finds that conflict means longer view times. People stop connecting with each other and just consume more information; everyone gets split into splinter groups that yell at each other because they’ve learned the resulting conflict carries their message farther. The end result is the exact opposite of the stated goal, all because of one metric.
So be extra careful with the metrics you implement. The law of unintended consequences can be harsh with AI models. Most developers will optimize their models for some form of accuracy or precision. To lower the chances of unintended consequences, add metrics for customer happiness, fairness, model regression testing, and faster iteration times.
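As a small sketch of what that can look like (the metric names and sample data here are hypothetical), report a model’s accuracy alongside a simple fairness gap and a satisfaction score, so no single number can quietly be optimized at the expense of the others:

```python
from statistics import mean

def accuracy(preds, labels):
    return mean(p == l for p, l in zip(preds, labels))

def fairness_gap(preds, labels, groups):
    """Largest accuracy difference between any two user groups (0 is best)."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        per_group[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
    return max(per_group.values()) - min(per_group.values())

# Toy evaluation data: model predictions, true labels, and each user's group.
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
survey_scores = [4, 5, 3, 4]   # e.g. in-app "was this helpful?" ratings, 1 to 5

report = {
    "accuracy": accuracy(preds, labels),
    "fairness_gap": fairness_gap(preds, labels, groups),
    "avg_satisfaction": mean(survey_scores),
}
print(report)
```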
I dug into this more in the last episode so look there for more details, but for now, know that to have long term customers at a low cost of acquisition, the easiest way is to design the metrics to think of the customer first.

Loss of skill

The last real fear that we will talk about is the loss of skills to technology. As people become more dependent on AI models that complete simple tasks for them, they will forget how to do those tasks themselves. I agree this will happen; it is a real fear. But I don’t see it as the problem people fear it to be.
As cell phones became ubiquitous, studies showed that people no longer memorize as many phone numbers. Calculators mean people don’t learn as many formulas by heart. Neither of these has anything to do with machine learning models, but the outcome is the same: people adjust to the tools they have at their disposal. I think the same will happen with wider adoption of AI.
Because of this, allaying this fear ends up looking a lot like handling the fear of the unknown discussed earlier. The best advice I can give is to let the user see exactly what the AI model is doing. Knowing the steps the model is completing for you is a good transitional interface until everyone sees machine learning models as just another tool.
And on that note, what fears have you encountered with your users?
That’s all we have for this episode, but I would love to hear back from you on how you were able to work around people’s fear of AI in your products.
Use your phone to record a voice memo,
then email it to podcast@designforai.com
That is also an awesome way to let me know what you like and would like to hear more of, or if you have questions or comments, record a message for those too.
If you would like to see what I am up to, you can find me on Twitter at @DesignForAI
Thank you again
and remember, with how powerful AI is,
let’s design it to be usable for everyone