I’ve gotten this question a lot: will AI replace designers? The scenario that everyone seems to come up with is:
Sure, at first it was just the repetitive jobs that got replaced by AI.
Then GANs started generating everything.
Who needs a designer when a computer can put out 1000 designs a second?
Obviously, I wouldn’t be talking about this if I thought this was a problem.
Today we are covering how AI will change UX and design
This podcast is called design for AI
It is here to help define the space where Machine learning intersects with UX. Where we talk to experts and discuss topics around designing a better AI.
music is by Rolemusic
I’m your host, Mark Bailey
Let’s get started
I want to start with an example
Everyone knows of Deep Blue, the chess computer that first got everyone’s attention by beating the world’s best chess player.
Since then, AI has beaten the best Go player and can beat anyone at competitive video games.
But do you know who has beaten the AI systems?
Human and AI hybrids.
The top-ranked chess systems right now, the ones that can beat any pure AI out there, are all human-AI hybrids.
The human brain and an AI system both take shortcuts.
They take them in different ways.
So they do better filling in for each other’s weak spots.
The best system is always augmentation, not replacement.
I’m not the only one who thinks this
IBM CEO Ginni Rometty recently put it this way: “If I considered the initials AI, I would have preferred augmented intelligence.”
Now while AI isn’t going to take over UX or make it obsolete, I do think a lot will change. In the scenario I talked about, GANs generate 1000 designs a second. This is actually the case; they can. But automated generation doesn’t mean good.

A few years back there was a company called The Grid that promised to get rid of the need for website design. It delivered underwhelming results. But, you might say, another company could do better. Google tried something similar. You may have heard of them testing 41 shades of blue against each other to find just the right blue with the best response rate. That was successful. But when they tried expanding analytics-based design past those very basic items, they kept hitting a wall.
So why is this the case?
To get better AI we need better UX
There is a mutual benefit on both sides that runs in a cycle:
As AI starts getting used more,
The ML model produces more useful data
The new model is trained off of that data
AI becomes more useful
ML models start sprouting up everywhere, delivering unneeded advice and tasks that add to confusion instead of solving problems
The need for a better UX becomes more important
A better UX is created and refined
AI gets used more, the cycle starts again.
So if the cycle shows that there is still a need for UX, how will the job itself change? Well, just like most other jobs, it is the boring, repetitive, monotonous parts that are going away. There will be an automation of design. The part in the scenario about GAN models generating 1000 designs a second? That already exists. The new UX designer will become more of a curator instead of a generator of designs.
This has already started to happen as design tools get better and UX design matures anyway. There is no reason to redesign the same widgets over and over for an entire career. The need for design systems designers is evidence of this. They design a component once, and it is used and customized by everyone using that system. The same will hold true for AI-generated designs.
I’m going to talk about this from the three main areas of the field: design, research, and management.
For the designs you do create, nothing new here, but concentrate on empathy. The problem with ML models is understanding humanness. How can you figure out how the app should react to the user’s current context and mood?
Some of the cutting-edge research right now is on ensemble models. The basic idea is that ML models are good at doing one thing well, so if you take a bunch of models, put them together, and add another model that decides which model to use, you create a more robust experience. This is what needs to be designed. Every change in context is going to need a different ML model.
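To make the idea concrete, here is a minimal sketch in Python of a gated ensemble. The class names and contexts are hypothetical illustrations, and a real gating model would be a trained classifier rather than a simple lookup.

```python
class SpecialistModel:
    """A stand-in for a narrow ML model trained to do one thing well."""
    def __init__(self, context):
        self.context = context

    def predict(self, features):
        # A real model would run inference here.
        return f"{self.context} response for {features}"


class GatingModel:
    """Decides which specialist should handle the current request."""
    def route(self, context):
        # Hypothetical: a real gate would classify the user's context.
        return context


class Ensemble:
    """Bundles narrow models behind one gate for a more robust experience."""
    def __init__(self, specialists):
        self.specialists = {m.context: m for m in specialists}
        self.gate = GatingModel()

    def predict(self, context, features):
        # Every change in context routes to a different narrow model.
        key = self.gate.route(context)
        return self.specialists[key].predict(features)


ensemble = Ensemble([SpecialistModel("driving"), SpecialistModel("at_home")])
print(ensemble.predict("driving", {"speed": 60}))
```

The design work is in deciding which contexts deserve their own specialist and how the experience should feel when the gate switches between them.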
As part of knowing the context, I covered previously knowing when to tell jokes; that depends completely on context. Another area where context is important is when the model gets something wrong: you will need to design how the app admits when it is wrong.
Context matters for the device UI too. Know the device the user is on and the differences between devices. I’ve heard of a tool called https://applitools.com that helps to test on all the different platforms. How does the context change based on which device they are using?
You will need to keep abreast of what new devices are released, because new devices mean UI changes and new features. Know the features available and customize the experience for them. Amazon Alexa is an example. When it first came out it was just a speaker. Now it can have a screen, or interact with different screens around the house. The interaction needs to be designed, because AI is only good at things from the past: it depends on data about things that have already happened. It can predict how things might go, but it takes a while to adapt to a new normal, like a new product coming out. These experiences need to be designed for.
Also know that AI does not do transitions well. Because models have to focus on a very narrow area, there is a need to transition from model to model to cover the whole user journey. How well this transition happens will be up to you. If the modality changes, that will need to be designed as well. For example, if the user moves from a laptop to mobile, it is hard for one model to hand off all the needed info to the other model, so deciding what is important for that experience will be part of the design.
AI does not do edge cases well either. As I have talked about previously, accessibility is treated as just a group of edge cases, and edge cases get smoothed out by an ML model. Ignoring accessibility can open you up to lawsuits and cuts out about 12-15% of your customers. It might also be adding noise to your model, depending on what your model does: muscular disabilities can throw off interaction-recognition models, and cognitive disabilities can throw off data answers. For all these reasons, it is a no-brainer to differentiate the accessibility personas even more than before.
The first thing a researcher should do is look over the data being collected to train the model. Does the data match the user’s intent? When the data was collected, what was the reason for collecting it? Will that affect its accuracy for how it is being used to train the model? You will need to watch for holes in the data when you compare it to data from the field. Otherwise, the model could be completely accurate, but not for the customers you are trying to target with the app.
For UX researchers, again, the more things change the more they stay the same. The user journey is hugely important, more so now than before. Machine learning is there to automate the boring stuff, so find out how to let people do the actions they are passionate about. Do not automate the areas people enjoy. Knowing what those areas are takes research. Like I said previously, machine learning has problems with humanness, so knowing the motivation behind behavior and the pain points is information that comes from the UX researcher.
Probably the biggest problem for AI is trust. Doing the right thing at the right time requires knowing the user journey and covering the different context changes that affect it. To build up trust, as a researcher you need to find out which steps in the user journey are the important ones. Where does accuracy need to be high? Where is it OK if the model is only right 70 or 80% of the time? Being able to differentiate the importance of the different steps of the user journey will help the developers know where to focus their work.
Researchers need to know all the different personas. Up until now, the guiding wisdom has been to boil personas down into 3 or 4 archetypes. This needs to change because of the narrowness of AI. The data used to train the model needs to be gathered from populations similar to the ones you plan to target. Otherwise, the holes in the data may be exactly where your customers fit in.
For internationalization, every target market will need its own model. So you will need to know how those target markets differ. How will the developers need to build the models differently for the specifics of each group? The answers to these questions will need to be added to your reports.
First, benchmark the current product, and benchmark competing products. A lot of the methods I have talked about adapt to process changes. With the inner workings of ML models being unknowable, the new process is to compare metrics to a baseline. So first decide what the important metrics are, then benchmark them. Models have a way of becoming un-runnable fast. Libraries get updated quickly, and trying to fire up an old model for comparison might be impossible, so do it now.
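In practice, baseline comparison can be as simple as recording the chosen metrics once and flagging regressions on every new model. This is a minimal sketch under assumed metric names and a hypothetical 5% tolerance.

```python
# Baseline metrics recorded while the current model still runs.
# Metric names and values are hypothetical examples.
baseline = {"task_success_rate": 0.82, "time_on_task_s": 41.0}

def compare_to_baseline(current, baseline, tolerance=0.05):
    """Flag any metric that regressed more than `tolerance` (relative)."""
    regressions = {}
    for name, base in baseline.items():
        now = current[name]
        if "rate" in name:
            worse = now < base * (1 - tolerance)   # higher is better
        else:
            worse = now > base * (1 + tolerance)   # lower is better (durations)
        if worse:
            regressions[name] = (base, now)
    return regressions

new_model = {"task_success_rate": 0.75, "time_on_task_s": 39.0}
print(compare_to_baseline(new_model, baseline))
# flags task_success_rate, since 0.75 < 0.82 * 0.95
```

The key point from the episode stands regardless of the tooling: capture the baseline now, while the old model can still be run.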
Managing the process also has some extra things that will need to be covered, the biggest probably being ethics. Should you be building it? I’m not talking about whether the problem can be solved without using AI (though you will need to know that answer too). If the product is built, are there any unintended consequences? Is this product going to be the best thing for the user? How are you influencing the actions of the users?
There are also new risks: machine learning models can be deceived. If some users can trick the model, it can cause a worse experience for other users. You will need to make sure the model isn’t getting exploited, by knowing the data going in and comparing it to the data coming out. Otherwise, bad actors can find exploitable shortcuts within the model. Other points of security include the training data, data sources, and the algorithms interacting with the models, along with the models themselves. It is also a good idea to know how to return to a known good state if something does go wrong, and to make sure the model cannot alter itself.
Transparency is something you want to think about even if users are not asking for it. Most likely they just have not thought of it yet, and when they do, opinion can change quickly if the app lacks transparency. How to expose the process will differ depending on the app, as will the amount and types of info you want to give. Just keep it as part of the design process: any time data is used to answer a question or complete a task, you have to ask, can we reveal to the user where the data came from and how we process it? Since AI tries to do the boring tasks for the user, a good way to help transparency is telling the user when you helped them.
AI safety seems to be an area being handled by UX too. There are different types of safety. For businesses, know that AI can be unexplainable, so if the app involves government or regulated industries like banking, it can cause problems with regulators. Mission-critical systems also can’t test 1000 iterations and hope that one works, so designers will need to create safety scaffolding for the ML model to operate within, to keep it inside boundaries.
AI safety for users means you need to recognize contexts where getting the answer wrong could cause harm. In those situations, design into the experience a way to hand off to human intervention or shut down the interaction.
There is an increased need for recognizing human biases. Data comes from people, and people are biased, so the data is too. Training data, labeling data, the way the data is collected, the way the data is cleaned, and the format the data is output in can all be tainted by bias. A good way to verify is to take the needs found in research and turn them into stories. Look at the data ingestion process at every transformation and see if it still matches the intent of the story. That makes bias easier to find.
I have tried to cover all the areas I can think of that are changing, so I’m going to end on a caveat. There are a lot of predictions in this episode. Like most strategy plans, I can only say what is happening in the next three to five years. After that, this industry is moving so fast that it gets hard to know what is coming, but that is what makes it interesting.
Another thing I didn’t talk about is AGI (artificial general intelligence). If you are unfamiliar with the term, it is basically the AI in movies that can think for itself, as opposed to the narrow AI we have now, which does one task well. Since there is so much controversy over whether AGI will ever be possible, all I can add is this: if ever there was a real need for making sure design is human centered, this is it. All the topics of AI transparency, ethics, and safety are important to build into the tools that build the ML models. Even without the movie scare of AGI, machine learning is hugely powerful. I’ve got a blurb on my website about AI being like nuclear power: it can be super powerful, but only if designed correctly. Starting with understanding why data was collected and translating that into something helpful will require much better tools than the beginnings we have now. And better design is how we will get there.
And on that note, how do you think UX design will improve AI? And how do you think AI will improve UX?
That’s all we have for this episode, but I would love to hear back from you on how you think things will change. Use your phone to record a voice memo. That is also an awesome way to let me know what you like and would like to hear more of, or if you have questions or comments, record a message for those too.
If you would like to see what I am up to, you can find me on Twitter at @DesignForAI
Thank you again
and remember, with how powerful AI is,
let’s design it to be usable for everyone