
6 – AI personality

Episode 6

Why creating a personality for your AI is important, whether it is a recommendation system or AGI. We cover the steps needed to evaluate your system and come up with the best personality for your users.

Music: The Pirate And The Dancer by Rolemusic

Background research links

Transcripts

Today's episode is about personality, so I thought it best to start with a scenario. Say you are in the market for a lawyer, and like most people looking for a lawyer you need to watch your money. You've heard good things about some companies providing virtual lawyer services. You download the top-rated one because it seemed so friendly. You start explaining your background, and the back and forth is full of jokes from the lawyer. But the jokes just seem off. Then you need to find some more information and take the device down into the basement; the virtual lawyer says it lost its network connection and just starts laughing maniacally. Maybe somebody finds this funny, but if they messed up this badly on the humor, you have no confidence that they got the legal part right.
Delete that one; obviously friendly was not the way to go. You download the next one, rated totally professional. You start the process, but it is taking forever. You have to go through one question at a time. This thing feels like it is reading War and Peace off of a DMV form. You find yourself getting lost in the monotony and realize you skipped over the most important nuance. This isn't professional, this is fingernails slowly scraping a blackboard. Ugh, there is no way you'll make it through the process and remember everything.
Another failure, money wasted, and you still need to talk to a lawyer. Let's make sure this doesn't happen.
Today we are covering personalities for AI.
This is Design for AI,
a podcast to help define the space where machine learning intersects with UX, where we talk to experts and discuss topics around designing a better AI.
Music is The Pirate And The Dancer by Rolemusic.
I'm your host, Mark Bailey.
Let's get started.
Today we are discussing how to design your AI personality.
We will cover the process step by step for what is important and what to avoid.
Some people associate finding the right personality with something hippie or new age.
This is not that. If you want the book answer, the personality is the distinctive tone, manner, and style in which your app will communicate. It is defined by a set of attributes that shape how it looks, sounds, and feels: the right language and tone that embody your app and differentiate it from competitors.
Look, there is a good chance your app and company already have a personality. Your current web or app design already defines the personality of the company. Color choices, type choices, UI layout, documentation, and errors all make up the brand.
Basically, it's the company personality that dictates the brand. So the next step is to take that personality,
which up to now has been used for the brand, and translate it over to training the AI. Some companies don't have a defined personality right now. A common reason is that they've used a template for their site or app; there are plenty of templates for websites and default frameworks for building app widgets.
There just isn't a template for this yet in AI, so creating a personality still has to be done on a case-by-case basis.
Because the world does not need another Clippy. It was an avatar that tried to keep things light by telling jokes along with the help it gave. The problem was that the brand for Microsoft Word is much more corporate, which created anger at the unexpected behavior. Jokes or wacky interface quirks only increase users' interest or desire to explore the application if they are what users expect.
Personality sells though, so it will pay for itself if you get it right. People can tell when a company has enthusiasm and passion for what they're doing. The tide will turn soon enough, and a bland AI will stick out like a sore thumb. Following best practices will earn you a spot in the middle of the pack, and the problem is that most users are not happy with an app that is merely "not terrible."
If personality is important when hiring employees, why wouldn't it be important when creating an AI personality? AI-centered companies are already working on this. Google is hiring creatives to bring humor and storytelling to human-to-machine interactions, and Microsoft Cortana's writing team includes a poet, a novelist, a playwright, and a former TV writer. Skills to build a personality can come from writers, designers, actors, comedians, playwrights, psychologists, and novelists. Not the normal job descriptions you would expect at tech companies.
The integration of these skills into tech roles has given rise to titles such as conversation designer, persona developer, and AI interaction designer.
So now that we have established the need, let's talk about the creation process for a personality. If you want some long-term planning, here are some predictions. At some point in the future, companies will probably have many personalities, letting people choose their preferred voice or body depending on the AI UI. Different personalities will become popular, similar to Material Design from Google or Metro from Microsoft. That will lead to templatizing personalities the way WordPress templates exist now, and it's only a matter of time before one company sues another for copying its AI personality, similar to brand infringement today. Personally, I am waiting for the day when enough UX research has been done that we know which custom AI personality works best for each interaction modality. So while it might sound silly that the best way to get legal information from someone is if they are talking to a salty sailor, there is no way to know which personalities become associated with which interaction modalities without creating one for your use case first.
What should you not do?
The biggest temptation to avoid is usually: shouldn't I just use my own personality? Or the founder's personality? There are a couple of problems with this. In true UX fashion, remember: you are not your customer. The ability to create a company does not always translate into good customer interaction, for a variety of reasons. Another reason it usually does not work is that you can't measure your own personality. Most people only identify with their positive traits; unfortunately, there are usually blind spots that go along with them.
So how do we find what personality would work best for the users?
Well, we ask them. Poll people to select descriptive words. Standardized lists, like the word association test from Microsoft, work best,
because it takes time to balance positive and negative words and make sure all the areas are covered. I think word associations are the easiest and fastest way to move forward, but if that doesn't sound right for you, I have also heard of people who have successfully used Myers-Briggs to describe character traits. I know it has been debunked because it oversimplifies personality types, but that actually helps here, since it simplifies the choices that need to be made. Another way to gather the information is a tool called Spectrum.
Created by Ari Zilnik, it builds on the Five Factor Model, which defines personality as a combination of openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.
Be aware that answers usually skew positive, so pay more attention to the negative feedback. Basically, you are asking users to choose the words they associate with your brand, company, and app. The five different areas to measure are listed below (a small code sketch for tallying the responses follows the list):
  • Awareness – How aware is the user of the company, the product, and their need for the product?
  • Consideration – Perception of quality and value of the product; do users misunderstand or fail to find features?
  • Preference – How do features differentiate the product from competitors?
  • Action – Getting stuff done.
  • Loyalty – Will the user want to use your app again?
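To make the tallying concrete, here is a minimal Python sketch of counting word-association picks per touch-point and weighting the negative picks separately. The response format and the example words are invented for illustration, not from any standard survey instrument.

```python
# A minimal sketch of tallying word-association picks per touch-point.
# The survey format and example values are illustrative assumptions.
from collections import Counter, defaultdict

# Each response: (touch_point, word, sentiment) chosen by a participant.
responses = [
    ("awareness", "friendly", "positive"),
    ("awareness", "confusing", "negative"),
    ("consideration", "reliable", "positive"),
    ("loyalty", "forgettable", "negative"),
    ("loyalty", "forgettable", "negative"),
]

by_touch_point = defaultdict(Counter)
negatives = Counter()

for touch_point, word, sentiment in responses:
    by_touch_point[touch_point][word] += 1
    if sentiment == "negative":
        # Negative picks deserve extra attention since answers skew positive.
        negatives[(touch_point, word)] += 1

for touch_point, words in by_touch_point.items():
    print(touch_point, words.most_common(3))

print("Most frequent negative associations:", negatives.most_common(3))
```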
Depending on the actions the AI is created to help with, sound, haptics, visuals, or AR/VR can be aspects of your interface for the user. If anything like this exists for the current interaction, get feedback on that as well. Sound associations can be done by comparison to companies that have trademarked their sounds, like Porsche or Harley-Davidson.
When talking to users to get the word associations, get a really good sense of your customers' personalities: what are their goals, what stage are they at in their lives, and most importantly, who do they aspire to be? This will come into play later. The next step is to run the same word association tests with internal people, but from the angle of aspiration: where are the decision makers trying to take the product? This works well because it is what PMs and stakeholders are thinking about anyway.
Now comes the comparison.
How are you currently perceived vs. how do you want to be seen in the future? Take stock of the responses and how they stack up to expectations. You shouldn't expect the word associations to be exactly alike, but they shouldn't be too far apart either. If there is too much drift from the customers' perception, then there either needs to be a frank discussion about where your company stands on product excellence,
or there needs to be a whole lot of work on the fundamentals. The reason you don't want to reach too far is that it comes off as untrustworthy. I mean, you are who you are. Also, with too far a reach, the chance of getting it wrong starts to skyrocket.
If you get it wrong, this is a lot of work that goes to waste.
Once you have the aspirational view of how you are perceived, check those goals against the customer's goals. These should also be close.
A good example is to go into a teen clothing store. The employees tend to be clones of the people from the ads. It's not a coincidence: stores choose to mirror their target aspirational demographic in their customer interactions.
  • So from that point of view what would your employees look like?
  • How do your customers align with their peers?
  • What motivates the people to do what they do?
Areas of personality that need to be defined include
  • Professional vs casual
    • You don't want to take all the personality out if you are going fully professional.
    • Also be aware that if you are going casual, informality changes across groups and cultures, so make sure to gather information from all your markets.
  • Humor level and type
    • Do you use dry humor or silly humor?
    • The best example I can give of why this is a difficult question: try to name two comedians with the same style of humor.
  • Generalist or a specialist
    • Are you trying to reach a conversion quickly and effectively? Or is the whole bot experience crafted to engage long term as part of a larger creative campaign?
  • Brief or long discussions
    • Unless the destination is the personality you don’t want to slow down the interaction. Aim for minimal clutter and fuss.
  • Understated vs extroverted
  • Cautious vs go where others fear to tread
  • Individual creativity vs group consensus
  • Strong opinions vs easy going
Now, none of the word associations need to be shown to the developers. Paraphrasing brand guidelines always gets reduced down to BS words like innovative and progressive. You will not have the brand guidelines next to you when writing dialogs.
Developers won't have them next to them when writing code.
It needs to be easier.
If your organization were a famous person, who would it be?
Since you already have all the personality traits and aspirations, who do they describe? It can't be Stephen Fry or Barack Obama. Those are not good choices; they are the equivalent of boiling everything down to the words innovative and progressive. You want to choose a different personality for at least the five main touch-points: awareness, consideration, preference, action, and loyalty. Your app might be more specialized, so your areas might differ depending on your needs.
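To make the touch-point idea concrete, here is a tiny Python sketch of keeping one persona description per touch-point so writers and developers can look it up. The keys and persona descriptions are invented for illustration, not recommendations.

```python
# Illustrative only: one persona description per touch-point, so writers and
# developers share a concrete reference instead of abstract brand words.
TOUCH_POINT_PERSONAS = {
    "awareness": "an enthusiastic museum guide: welcoming, quick, curious",
    "consideration": "a patient financial advisor: thorough, plain-spoken",
    "preference": "a trusted friend who compares options without pressure",
    "action": "an efficient concierge: brief, confirms each step",
    "loyalty": "a familiar regular at the counter: remembers you, low-key",
}

def persona_for(touch_point: str) -> str:
    """Look up the persona a dialog writer should channel at this stage."""
    return TOUCH_POINT_PERSONAS.get(touch_point, TOUCH_POINT_PERSONAS["action"])

print(persona_for("consideration"))
```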
Now that you know who the AI should act like in different situations, let's talk about some things to avoid. First, let's talk about humanness, or better titled: when to convey that a bot is a bot. Most companies have a code of conduct for how employees interact with customers. I am surprised that the same companies will often not put the same thought into the personality of the AI that is the first interaction point with their company. Right now, conversational AIs are good enough to pass for human. If you need an example, I'll link to how Google wowed people with their 2018 I/O demo of Google Duplex (https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018), but the next week the articles' tone changed quickly, saying the voice was trying to trick people (https://www.wired.com/story/google-duplex-phone-calls-ai-future/). Nothing changed in the demo, just the people talking about it. Current culture was caught off guard by the quality of the humanness, and it is human nature to label something as tricky when you are caught off guard. Sooner or later, with current tech, your AI is going to fall onto the wrong side of the uncanny valley. So it is easier not to claim humanness when asked, but also not to deny it outright. This is in the principles of Google Assistant's personality: don't shut down the conversation by denying humanness, and don't lie and claim human preferences. Use the artful dodge.
The next hurdle is to make sure to take internationalization into account. Currently, for websites, type choices and layout mostly carry over across cultures, so this can catch some people off guard. Humor does not cross borders well, or even cross regions. I know China has different formats for standup comedy depending on which city you are in. Cues for informality change across groups and cultures. An example of this is the sound your mouth makes while your brain is processing: in the US, depending on the region, it can be "um" or "ahh"; in China it is "nèige."
Putting the wrong pause word for a region into speech, in an attempt to be more casual, can put you on the wrong side of the uncanny valley.
The third topic is situational awareness
For example, how should your AI act offline vs. online? How does the interaction change when it isn't connected to the network? The level of interaction also depends on how much cognitive capacity the user can devote to the interaction. If you can detect they are driving, your responses should probably be shorter. There is a lot of complexity and nuance to this, and it helps that the AI can detect more information: what is the emotional context of how the person feels at that point? (A small sketch of adapting responses to context follows the list below.)
    • how they feel right now
    • What can you detect in voice and body language?
    • What can you know from context of user journey?
    • What do you know from user profile?
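Here is a rough Python sketch of shaping a response based on detected context. The context signals, thresholds, and example wording are assumptions for illustration, not a real detection pipeline.

```python
# A rough sketch of adapting response length and tone to detected context.
# The context signals and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Context:
    is_driving: bool = False
    is_offline: bool = False
    detected_frustration: float = 0.0  # 0..1, e.g. from voice/sentiment analysis

def shape_response(full_answer: str, ctx: Context) -> str:
    if ctx.is_offline:
        return "I'm offline right now; I'll pick this up when we reconnect."
    if ctx.detected_frustration > 0.6:
        # Acknowledge the emotion before the content; skip any humor.
        return "I hear this is frustrating. " + full_answer
    if ctx.is_driving:
        # Keep it short when the user has little attention to spare.
        return full_answer.split(". ")[0] + "."
    return full_answer

print(shape_response("Your filing deadline is Friday. I can prepare the form now.",
                     Context(is_driving=True)))
```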
The last topic I want to cover is errors. When the server is down, humor makes the problem worse. People do not feel like they are being taken seriously, and they will lose trust in your AI since it is not acting appropriately. Instead of humor, try to empathize. Acknowledging and validating an emotion is often enough to make customers feel understood and release the negativity of the bad situation caused by the error.
So once you have created and implemented your personality, how do you know it is working? Let's talk about testing personality success. You are trying to find out: do decision-makers select your products and services more or less? Currently there is a lot of counting tweets or Instagram pictures. I would recommend against that; they are hard to quantify because of the high noise. Ways that I would measure (a small sketch follows the list):
  • Sentiment analysis with AI.
  • Measure brand strength through qualitative and quantitative surveys.
  • A/B testing choices are good to compare against the baseline. This is a good time to pull out the brand values. You can gather word associations for the changed personality to see how it affects word choice.
  • And of course, keep track of your analytics for unsure answers. Does the personality help raise the confidence level of the decisions the AI makes from the information gathered from the user?
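As an illustration, here is a small Python sketch comparing two personality variants by fallback rate and average model confidence. The log format and field names are assumptions, not a real analytics schema.

```python
# A small sketch of comparing two personality variants in an A/B test by
# fallback ("I'm not sure") rate and average model confidence.
from statistics import mean

conversation_log = [
    {"variant": "A", "confidence": 0.82, "fallback": False},
    {"variant": "A", "confidence": 0.41, "fallback": True},
    {"variant": "B", "confidence": 0.77, "fallback": False},
    {"variant": "B", "confidence": 0.88, "fallback": False},
]

for variant in ("A", "B"):
    turns = [t for t in conversation_log if t["variant"] == variant]
    fallback_rate = sum(t["fallback"] for t in turns) / len(turns)
    print(variant,
          "fallback rate:", round(fallback_rate, 2),
          "mean confidence:", round(mean(t["confidence"] for t in turns), 2))
```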
So that's all we have for this episode. If you have questions or comments, use your phone to record a voice memo, then email it to podcast@designforai.com. If you would like to see what I am up to, you can find me on Twitter at @DesignForAI.
Thank you again
and remember, with how powerful AI is, let's design it to be usable for everyone

5 – Michelle Carney, founder of MLUX

Episode 5

I talk with Michelle Carney, founder of MLUX, lecturer on AI design at the d.school at Stanford, and Sr. UXR in Google's AIUX group. We discuss the resources available and the needs that have not been filled yet.

 

Upcoming Events:

 

Machine Learning and UX (MLUX) Meetup Resources:

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Coming soon

4 – Improving the UX of conversational UIs

Episode 4

We cover all the steps needed for creating a conversational UI like a chatbot or Siri, Alexa, Google Voice, or Cortana. We make sure to cover making a plan so a good user experience is the top priority.

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Hello and welcome to Design for AI, I’m Mark Bailey, Welcome to episode 4
Let’s talk conversational UI.
A lot of people think chatbot; other people think Siri, Alexa, Google Voice, or Cortana. In the current gold rush climate that is AI right now, this seems to be the first step where a lot of companies dip their toe in. Sounds like a good topic to cover to me, so I'll go through the steps needed to avoid mistakes.

1st step: start with a plan.
If you want to have a conversational interface you need a plan. Think of a good plan as a stop-off point on the way to the voice interaction that everyone says is just around the corner. More practically, think of the plan as a list of immediate needs, then turn that around and look at it from the user's point of view. Who uses a conversational UI? People using voice interfaces right now don't want to be bothered. They don't want to be bothered to wait to talk to a live person, bothered by downloading your app, bothered by opening their computer, not even bothered to get off the couch. Your UI needs to make their life more convenient. The way to think of your plan is: how will you get what you need AND make it more convenient for the user?

The first part of the plan is how this benefits your company. What is your motivation for building the interface? Your reason will be specific to you, so I can only cover the general cases. It could be improving media buys by understanding customers, or reducing call center time. There are a lot of industry-specific choices, and conversational UIs are easier to apply to certain industries than others. Some industries are easy fits for this kind of interface: if you are running a CRM, reduce call center times; for established media IP, the personality is already there, and the set expectations make the personality a lot easier.

The next part of coming up with a plan is deciding what to measure. Again, this is very specific to your industry. Do you want to know the length of engagement, and should it be higher or lower? Do you want to increase return users? A lot of the time you will be getting some analytics about the user. Do you want to compare the information gathered through your UI to the analytics in the user profile? What can you add to the user profile? Do you want to increase the number of recommendations made to other people? No, I'm not talking about Net Promoter Score; I'm talking about using referral codes to get real numbers. You can even measure the emotions of users leaving.

Once you have your plan
The next thing you need is a DMP, a data management platform, to store the information you are collecting from your app. If you do not have one, now is the time to create it. You probably want to hire a data scientist, because DMPs can have a high noise level; to get any usefulness out of them you will need to be running experiments with the data. DMPs work better when cross-referencing pieces of information against each other instead of doing straight search. Now is also the time to try rolling your own natural language processing (NLP) project. Siri, Alexa, Google Voice, and Cortana all have their own sandboxes that are not compatible with each other. You can try developing for a couple of them to see the differences between the systems, or a good open source one to get started with is called Mycroft.
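To show the shape of the problem an NLP layer handles for you, here is a toy keyword-based intent matcher in Python. This is not the Mycroft API or any real framework, just an illustration with made-up intents and keywords.

```python
# Not a real framework's API — just a toy keyword-based intent matcher to show
# the kind of mapping from utterance to intent that an NLP layer provides.
INTENTS = {
    "check_status": {"status", "where", "tracking"},
    "reset_password": {"password", "reset", "login"},
    "talk_to_human": {"human", "agent", "person"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_intent if best_score > 0 else "fallback"

print(classify("I forgot my password and can't log in"))  # reset_password
```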

So now that you have a plan and a platform to move forward with, what's next? You need to create a personality. This is going to depend heavily on your company brand and what you are trying to accomplish. Think of what the motivation is going to be for the AI you are building; its motivation will affect how it answers and guides the conversation. It also depends on the situation your users will be in while having the conversation. You don't want a mechanic in the middle of a job getting asked 100 questions to get the response they want just so they don't need to clean off their hands. It might sound like we are designing a person, and there is an argument that goes back and forth on how human you should make your AI. It is too much to talk about here, so I will cover it in a future episode.
The short story is: don't fake being a real person. Also know that personality and humanness are different. In this case we want a strong personality, so what's motivating your AI to give the answers it gives is important. A strong personality is important because it helps to hide the holes in the AI, but not in the way you think. Technology is not at the point yet where conversational AI can answer any question, and people really like to test the limits of conversational AIs. Using a strong archetype personality takes the fun out of pushing the limits.
You wouldn't ask an auto mechanic plumbing questions, but you would take joy in asking a know-it-all a question they didn't know the answer to. So a strong personality keeps people from poking at areas you haven't thought of.

Once you have written down all the important aspects of your personality, the next step is to create the golden path. You don't want to get into the AI yet, and we are not thinking of edge cases either. The golden path is the perfect, friction-free conversation: the user asks all the perfect questions, and your AI knows all the answers and the questions needed to gather the information required to reach the goal. Once you have the golden path, you can start breaking the conversation down into dialogics.

Dialogics
For a description of what dialogics are, think of them as the interchangeable small parts of the script. A stream of conversation gets broken down into a trigger, then steps 1, 2, 3, and so on until you reach the goal. This is the part that is UX; it is the use cases,
and since the personality dictates the dialogic use cases, that's why you need to work on the personality first.

This is when you create the script. What do you want to know? Take your golden path conversation and atomize it into spreadsheets.
Figure out the use cases, break the conversation down into the smallest bits possible, and test it by talking to another person. Once you have the use cases broken down as small as possible, create conversation points along the user journeys. These conversation points are where the analytics will plug in, so you know whether the conversation is going the way you expect it to. One problem to be aware of when you are testing the conversations: users will alter their behavior to fit the AI's requirements. The best example I can think of is the over-pronunciation people used when voice-to-text first came out. So when testing the conversations, make sure the person you are testing with doesn't know what the goals or conversation points are. This is something a lot of people have problems with, because they want to test with co-workers first. Co-workers know the goals of your company; they will subconsciously try to move toward the goal or purposely, artificially move away from it. Neither is a real-world situation.
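Here is one hypothetical way to represent dialogics and their conversation points in Python. The field names and example content are illustrative, not from any particular framework.

```python
# A sketch of one way to represent dialogics and conversation points;
# field names and example content are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Dialogic:
    trigger: str            # what the user says to enter this unit
    steps: list[str]        # the bot's questions/answers, in order
    goal: str               # the information this unit must collect
    analytics_event: str    # conversation point reported to analytics

golden_path = [
    Dialogic(trigger="I need help with a contract",
             steps=["What kind of contract is it?", "Who is the other party?"],
             goal="contract_type_and_parties",
             analytics_event="intake_started"),
    Dialogic(trigger="<contract details provided>",
             steps=["Here are the three clauses to review first."],
             goal="clauses_presented",
             analytics_event="advice_delivered"),
]

for unit in golden_path:
    print(unit.analytics_event, "->", unit.goal)
```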

The next step is the machine learning.
You want to create the algorithms to make it better. This depends on the use cases you came up with in the previous step, and I'll leave it to your developer group to handle.

Once you have done all the machine learning, you are ready to release. Is this the final step? Not even close. This is where you start to specialize the training after release. You need to look at the analytics: where are the conversations getting killed, and where are they lasting longer? You can create multivariate tests for different script choices. For this first release, don't expect the final form. It is good to start beta-testing as a game on Kik or Facebook, or you can create a conversation bot on Reddit. If you want to do a branded beta of your app, that will work too, but you need to advertise the beta or you won't get any training data. The reason for this beta release is to train the AI.
Expect it to take about three months to get ahead of the open source text libraries.
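Here is a small Python sketch of tallying where conversations die per script variant during the beta. The event log format and variant names are assumptions.

```python
# A sketch of tracking where conversations die across script variants
# during the beta. The event log format is an assumption.
from collections import Counter

# Each record: (script_variant, last_conversation_point_reached)
drop_offs = [
    ("formal", "intake_started"),
    ("formal", "intake_started"),
    ("casual", "advice_delivered"),
    ("casual", "intake_started"),
]

deaths_by_variant = {}
for variant, last_point in drop_offs:
    deaths_by_variant.setdefault(variant, Counter())[last_point] += 1

for variant, points in deaths_by_variant.items():
    print(variant, "conversations most often ended at:", points.most_common(1))
```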

The real final step is entering the cycle. Since an AI is more like an employee than a machine, you have to keep checking on it; otherwise the data moves, the model moves, everything moves, and your AI gets worse. There will always be tweaks you can make so the conversation runs smoother. The reality is that the technology isn't quite there for the AI to understand unstructured conversation. Think of it like you are perfecting the telescope that your AI is looking through to see. Basically, there will be some kludges to cover the holes in your model. I'll talk more about the development cycle in future episodes.
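As a rough illustration of that "keep checking on it" loop, a periodic drift check might look like the Python sketch below. The metrics and thresholds are made up; use whatever numbers you recorded at launch.

```python
# A sketch of a periodic drift check: compare recent completion rate and
# confidence against the numbers recorded at launch. Thresholds are made up.
BASELINE = {"completion_rate": 0.72, "mean_confidence": 0.81}

def needs_retraining(recent: dict, tolerance: float = 0.05) -> bool:
    """Flag the model for review if either metric slips past tolerance."""
    return any(BASELINE[k] - recent.get(k, 0.0) > tolerance for k in BASELINE)

print(needs_retraining({"completion_rate": 0.64, "mean_confidence": 0.80}))  # True
```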

Thank you again
and remember, with how powerful AI is, let's design it to be usable for everyone

3 – How to use privacy to improve the UX of your AI apps

Episode 3

I talk about how to use privacy to improve the UX through federated learning.

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Hello and welcome to Design for AI
I'm Mark Bailey. Welcome to episode 3.

Today we will be talking about federated learning.
There is a good chance some of you are wondering what it means,
don’t worry it’s still considered a pretty new topic in AI.
Even the terminology isn't pinned down; Apple calls it 'Differential Privacy'.
so I’ll jump right in to explaining what it is and why it’s important to UX.

The old way, or I guess I should say the normal current way,
that most models store data used for machine learning
is to round up all the data you think you're going to need, plus the data attached to it;
then it all gets uploaded and stored on your servers.
This is the centralized model.
There is the saying going around that data is the new oil,
because the more data you can get your hands on,
the better the accuracy of your model.
Which means you’re at the front of the line for the gold rush,
right?…

Well, not so fast
There are problems
Some people refer to data as the new plutonium, instead of the new oil
There is a high liability for personal data
Releasing an app over the internet is global.
But, laws and regulations change by country.
The new EU privacy laws like the GDPR conflict with the laws in authoritarian countries where they want you to share all your data.
In steps the idea of federated learning
As a quick side note, I am using Google’s term federated learning,
instead of Apple’s term Differential Privacy.
Differential Privacy is a little broader, covering making things outside of machine learning models private,
so in the interest of keeping things as specific as possible
I'll use the term federated learning.
I’ve included links for both Apple and Google’s announcements in the show notes.

Anyway, it is easiest to think of it in terms of using a cell phone,
because that is where all of this got its start for both companies.
On-device storage is small and there is too much data to upload over a slow network.
The phone downloads the current AI model.
Then it improves the model by learning from all the local data on your phone.
Your phone then summarizes the changes as a small update.
Only this small update is sent back instead of all the data.
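Here is a toy Python sketch of that core federated averaging idea: each device computes an update on its own data, and only the small update travels back to be averaged. This is a bare-bones illustration using a made-up least-squares model, not any production system.

```python
# A toy federated-averaging round: each device trains locally and returns only
# a weight delta; the server averages the deltas. Real systems add compression,
# secure aggregation, device sampling, and much more.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One step of least-squares gradient descent on the device's own data."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return -lr * grad            # only this small delta leaves the device

rng = np.random.default_rng(0)
global_weights = np.zeros(3)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(10):              # ten federated rounds
    deltas = [local_update(global_weights, data) for data in devices]
    global_weights += np.mean(deltas, axis=0)   # server only sees aggregates

print(global_weights)
```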
For a non-phone example think of Tesla building their self driving cars.
Every car that Tesla is currently making records 8 different cameras every time that car is driving.
Those video feeds help to train the model Tesla is trying to create for the car to drive itself.
To date, Tesla has sold over 575,000 cars since 2014, when they added the cameras needed for self driving.
Multiply 575,000 by 8, then multiply that by the number of miles all those cars drive.
It becomes obvious that is just too many video feeds to send over their wireless network
much less to record and store on central servers somewhere.
More importantly, no one wants everywhere they have driven,
and every mistake they made to come back to haunt them.
federated learning allows Tesla to push the model out to their cars.
Let the model be trained by data collected in the car,
then the training corrections are sent back to Tesla without needing to send hours upon hours of video.
Privacy and data bandwidth are preserved.
As a side note, Tesla does upload some video of a car’s driving for things like accidents.
We talk about outliers and deciding which parts you keep private later.

So, federated learning allows for global results from local data.
Basically, train on the local device and send aggregated results back.
It allows you to keep sensitive data on the device,
and if you can promise, and deliver, privacy to the user of an AI model
then you have taken care of one of the biggest fears users have for machine learning.
Think about it: keeping data private is one of the biggest concerns people have about using AI.
It is right up there with robots taking over the world,
If we can solve real fears now, we can start working on the science fiction fears next.
This is why it is important to UX
All the benefits of privacy for your customers,
plus all the benefits for the company of well trained models.
Of course offering privacy to your users is a selling point but what are the trade-offs?

For the drawbacks, I am not going to sugarcoat it.
There might be some pushback from developers because it does add an extra layer of abstraction.
There is a good chance the developers have not created a model using federated learning,
so there will be learning involved.
Also, the models created from federated learning are different from models created from a central database, because the amount of data and the types of data collected are usually different.

As far as the benefits:
you don't have to worry about getting sued for accidentally leaking information you never gathered.
Really though, the biggest benefit is usually better, more accurate models, which may seem counterintuitive.
Since all the data stays local you can collect more data.
Also since the model is trained locally the model is better suited for the person using it which is a huge UX benefit.
There are benefits even if your business plan keeps all of your machine learning models centralized,
instead of the models being on your customers computers or phones.
Because data is siloed instead of in one central location,
it is a whole lot easier to comply with local regulations, like medical privacy rules.
You don’t need to worry about the cost of transferring large amounts of data
It is easier to build compatibility with legacy systems since they can be compartmentalized
and you can have joint benefits by working between companies,
with each company able to bring their strengths to the table without revealing their data.
Still since privacy is one of the main benefits, from the UX side of it,
it is important to let people using your app know about the privacy you are offering for peace of mind.
This is not easy since machine learning is already a difficult enough topic to convey to your customers.
For example, privacy is one of the main selling points Apple uses for the iPhone;
protecting your privacy is a big marketing point for them.
They are probably one of the biggest users of this concept, be it Differential Privacy or federated learning.
But I’m guessing that the majority of iPhone users have no clue
that most data for all the machine learning stays on their phone.
And, if Apple, the design focused company,
is having this much trouble conveying the message of one of their main selling points,
it’s obvious it is not an easy thing to accomplish.
The easiest way to convey to the user that you are keeping their privacy
is through transparency inside the app.
Show all the things using federated learning.
Break it down by which features use federated learning.
Show the user where the data goes, or really doesn't go.
For example, one of the limiting factors of federated learning can be turned into one of the selling points.
Since federated learning needs to keep labels local,
it gives you a chance to explain why when you have people correct predictions.
For example choosing who the picture is of on your phone
or choosing which word auto-correct should have chosen.
You can let the user know
they are doing this to keep their own data private.
Now, if privacy is important to your business model,
if it is the thing you are showing as a benefit of using your app,
then it does need to be designed into the app from the beginning.
First, I won't go into the math involved,
but merging multi-device information can still expose private data.
You need to make sure, when the app is designed, that the company can't see individual results,
only the aggregate.
Next, the model can also, over time, possibly learn identifiable information.
When you design the app, make sure that the model limits the influence of individual devices.
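Here is a rough Python sketch of one way to limit a single device's influence: clip each update to a maximum norm and add a little noise before averaging. Real differential privacy requires carefully calibrated noise; the numbers here are placeholders for illustration.

```python
# A sketch of limiting any single device's influence: clip each update to a
# maximum norm and add noise before averaging. The constants are placeholders,
# not a properly calibrated differential-privacy mechanism.
import numpy as np

def privatize(update, clip_norm=1.0, noise_scale=0.1, rng=np.random.default_rng()):
    norm = np.linalg.norm(update)
    if norm > clip_norm:                        # cap one device's influence
        update = update * (clip_norm / norm)
    return update + rng.normal(scale=noise_scale, size=update.shape)

updates = [np.array([0.2, -0.1]), np.array([5.0, 5.0]), np.array([0.1, 0.3])]
aggregate = np.mean([privatize(u) for u in updates], axis=0)
print(aggregate)   # the outsized second update no longer dominates
```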
Another important thing you will need to pay attention to is outliers.
Normally you only want to pay attention to the difference from the average.
There is a difference between the global model and the personalized model:
how much do you want to allow local data to alter the global model's behavior?
That is a decision you need to make based on your use case.
The next big part of improving the UX is deciding how much to split your use cases into different personas;
usually each persona gets its own model.
The best example I can think of is for a language model
train different models for different languages
that helps to reduce the outlier information
This is where accessibility fits in too.
Make sure not to forget it.
Since AI models try to average everything,
accessibility needs can be averaged out as outlier data.
Make sure to work any accessibility needs into specialized personas and models,
to reduce the noise for the model and get a better user experience for those with and without accessibility needs.
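Here is a small, hypothetical Python sketch of routing users to persona-specific models, including accessibility personas, so those needs are served by a dedicated model instead of being averaged away. The persona keys and model names are made up.

```python
# A sketch of routing users to persona-specific models so accessibility needs
# are not averaged away as outliers. Persona keys and model names are made up.
PERSONA_MODELS = {
    "voice_primary": "model_voice_v3",          # e.g. screen-reader / eyes-free users
    "low_vision_large_text": "model_text_lg_v2",
    "default": "model_general_v7",
}

def pick_model(user_profile: dict) -> str:
    persona = user_profile.get("persona", "default")
    return PERSONA_MODELS.get(persona, PERSONA_MODELS["default"])

print(pick_model({"persona": "voice_primary"}))
```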
Outliers also influence how often the app should send back information.
Like I was talking about earlier, usually a model stores up enough information
before it sends it back, either to save on bandwidth costs or to ensure privacy.
If the app is getting a lot of outlier data, though,
you probably want to know about it as soon as possible,
to be able to adapt the model as needed and give a better user experience.
You will need the device to say when it has unusual data,
so the transfer can happen sooner.
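As a rough Python sketch, that device-side decision could look something like this; the batch size and outlier threshold are made-up numbers.

```python
# A sketch of deciding when a device should send its summary early: batch
# updates normally, but sync as soon as recent data looks unusual.
def should_sync_now(pending_updates: int, outlier_fraction: float,
                    batch_size: int = 50, outlier_threshold: float = 0.2) -> bool:
    if outlier_fraction >= outlier_threshold:
        return True                  # unusual data: report sooner
    return pending_updates >= batch_size

print(should_sync_now(pending_updates=12, outlier_fraction=0.3))   # True
print(should_sync_now(pending_updates=12, outlier_fraction=0.02))  # False
```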
Well thank you for listening
and I hope you found this episode interesting
I would love to hear feedback on this topic and
which other topics you would like to hear about
To leave feedback, since this is a podcast,
use the voice recorder app on your phone,
  and make sure to give your name
then email it to podcast@designforai.com

If you would like to know how to help,
Well your first lesson in ML is to learn how to help train your podcast agent,
by just clicking subscribe or writing a positive review on whatever platform you use to listen to this podcast.

Thank you again
and remember, with how powerful AI is,
let's design it to be usable for everyone

1 – Hello! What to expect from ‘Design for AI’

Episode 1

Instead of an interview, the first episode covers what to expect from coming episodes.

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Hello and welcome to Design for AI, I’m Mark Bailey. Welcome to episode 1

These episodes will normally be interviews and discussions about how to address the design problems of machine learning, but for the first episode I wanted to talk about what to expect from this podcast.

For those listening who work in this area, you are familiar with the current landscape: development of ML is still in the gold-rush stage. There is no one dominant player, so a lot of big companies and countries are all competing for any edge possible. The main focus is on getting their app out as quickly as possible. Luckily, we are still in the grace period of a new technology, where the capabilities of ML still impress people enough to overlook a whole lot of rough spots.

With all the focus on getting the product out, no one is looking at how ML changes the user's experience. That's where I come in. So why me? Well, no one else is doing it. I've searched. The podcasts I've found so far talk about the technology side, usually how to develop an AI model, and there are even some podcasts about the business side of ML. So I'm starting this podcast to talk about design. It is something I find super interesting, and I'm surprised no one else is talking about how to make machine learning work better for people.

Well, That is what I hope to accomplish with this podcast anyway. I want to talk to experts in the field to find out how they are dealing with the design challenges of the extra hassles and taking advantage of the extra capabilities that come along with AI.

But I can't do this alone. I'm going to need your help. Like anything else creative, it is better to get started than to get it perfect. I'm an expert in designing software for AI, not podcasts, so on that note, just like any iterative design, I need your feedback to get better. You, yes I'm talking to you, the listener, are part of the discussion. I need to know what you are interested in hearing about. What questions do you have? What can I do better? I need you to let me know.
To leave feedback, use the voice recorder app on your phone, make sure to give your name, then email it to podcast@designforai.com

To give a little bit of backstory on my motivation for this podcast: years ago, when I was working for IBM Research, I really enjoyed designing for accessibility because of the extra complexities required to solve some of the universal design puzzles. Universal design is no joke when you are trying to design for every group of people; of course, the deeper you dive, the more groups with more and different needs you find, which can compound the problem,
or the needs can conflict with each other. Sooner or later, there get to be too many exception cases to juggle; enter machine learning for customizing the UI. Of course, it was early, so machine learning didn't work well, but it was enough to pique my interest. I had to teach myself anything user-experience related for machine learning, since all of the attention in AI has been on getting the technology better. Now the technology is finally getting there.

So who is the target audience for the podcast? Who do I think will find it interesting? I want to help the developers who get stuck developing an ML product without any UX help. I want to help UX designers learn the AI-specific problems that are not issues in normal software development cycles. I want to help PMs know where AI projects can get derailed and what to keep an eye out for. But really, anyone interested in ML should find something interesting. I will try to explain any terms I need any time I dive into a more technical area. One caveat is that I use AI and ML interchangeably.

Pretty much every ML development podcast I’ve listened to explains the difference so I’ll leave it to their better explanations.

That leads us to the things we will be talking about.

For the UX designers out there

Trust, how to build it, how to lose it. How AI can help ux processes for better answers. How to improve the jobs you already do as a UX practitioner with AI. How AI affects the GUI in terms of Interaction design. What to look for in user tests to see if the AI is helping or if you are getting users adapting to the test environment. How your user personas affect which models you should build.

For any developers

Special problems AI presents to the software development process. What to do when you are designing a chatbot or recommendation system. How to choose the right AI algorithm for a better user experience. How different development choices, like the number of layers, can affect the experience. How to ensure consistency of the experience by tracking data, training runs, and models.

For the PMs who are listening some of the topics are.

How to have an AI design strategy for your company. How human you should design the AI to be when people interact with it. How to design around biases in data, people, and AI models. How to optimize AI for marketing campaigns. How and why to create an ethics plan to make sure AI is improving the user's experience. How to tie the UX of AI into the business value so it's not just a flashy word.

…and most importantly for everyone

Identifying the pitfalls that need to be avoided, and of course, when it is OK to be lazy about the unimportant stuff and when the software needs extra attention to be designed well.

Because right now, as you know, there is a lot of mistrust of AI around privacy. Can you trust the motivation of the app? Or the company? It is so easy for a company to lose all customer trust in the blink of an eye if these things are not designed right.

That leads me to my ultimate goal. It has to do with trying to solve the black box problem. For those not familiar with it: even when all the code is known, all the input data is known, and all the hardware and training methods are known, there is still no way to know how the AI will react in every situation. That is the black box. The AI can give unknown results at the most critical moments.

Now, I think it is possible to solve this problem.

We have the information; it's a computer. Reduce it down and it is basically just doing a lot of math as fast as possible. This is the perfect problem to be solved by UX: taking an overflow of information, prioritizing it, then presenting it in an understandable way. Basically, it is one of the world's hardest visualization problems. If you are familiar with machine learning, you know the first two AI winters were caused by overpromising what AI can do. To avoid a third AI winter, the black box problem, I think, needs to be solved. Otherwise people will keep falling back on the trope of AI from movies, because there is no way to know if they can trust machine learning.

What can you do to help?

Your first lesson in ML is to learn how to help train your podcast agent by clicking subscribe or writing a positive review.

I hope you found this first episode interesting and that there were topics I mentioned that you have been wondering about too. If not, well, send me feedback on what you are interested in. If you did hear topics you were interested in, let me know which ones you want covered first.

Also, since this is a podcast, if you would like me to use your voice, use the voice recorder app on your phone, make sure to give your name, then email it to podcast@designforai.com

and remember

ML is such a powerful tool, it only makes sense to design the AI to help people as much as possible.