
Episode 11

In this episode we look at AI principles different companies have implemented, and which ones are the most popular. Then we cover how to implement them for your company so they are followed.

Music: The Pirate And The Dancer by Rolemusic


Have you ever had this happen to you? You want to create the best AI product ever, and you start talking about it with co-workers. You try to get everyone on board, but everyone has their own definition of what “best” means, and some of their ideas worry you about how your customers will react. If you can’t even agree on principles, how are you supposed to implement them? Let’s find out.
This podcast is called Design for AI.
It is here to help define the space where machine learning intersects with UX, where we talk to experts and discuss topics around designing a better AI.
Music is by Rolemusic.
I’m your host, Mark Bailey.
Let’s get started.
In my last episode we talked about the different fears people have around AI. Obviously this is a problem even if you have the best intentions. So what do you point to when your customers come asking? How do you make sure your products don’t destroy your company?
They can be called principles, guidelines, a company charter, values. There just needs to be a way to come up with them and also a way to make sure the company follows them. From a UX standpoint, coming up with these AI principles helps drive the goals you create and the metrics to measure them by. Most of the companies’ AI principles we are going to talk about in this episode are pretty lofty and vague. Vague, in this case, is actually OK. Everything that has to do with machine learning is still moving so fast that it would be almost impossible to come up with hard rules that would not be obsolete six months from now.
On the other hand, how do you keep them from being too vague, or from basically being marketing terms that look good but don’t mean anything? The short answer is to implement them. If you can implement the AI principles, then they are defined enough to follow.
First we will cover creating them. Creating AI principles is not as difficult as it sounds. It is not as bad as creating the brand, like we talked about in a previous episode. It also helps if you have already created the brand, because your company brand will influence which AI principles you adopt.
There was a paper that compared the principles of different associations and companies. There is a good chance that you will use similar key issues as other companies, so first we’ll cover the list from most used by companies to least used, then we will cover what the big companies state specifically.
  • privacy protection
  • accountability
  • fairness, non-discrimination, justice
  • transparency, openness
  • safety, cybersecurity
  • common good, sustainability, well-being
  • human oversight, control, auditing
  • explainability, interpretability
  • solidarity, inclusion, social cohesion
  • science-policy link
  • legislative framework, legal status of AI systems
  • responsible/intensified research funding
  • public awareness, education about AI and its risks
  • future of employment
  • dual-use problem, military, AI arms race
  • field-specific deliberations (health, military, mobility etc.)
  • human autonomy
  • diversity in the field of AI
  • certification for AI products
  • cultural differences in the ethically aligned design of AI systems
  • protection of whistleblowers
  • hidden costs (labeling, clickwork, content moderation, energy, resources)
The big companies that have AI principles of some sort we are going to cover are:
  • Open AI
  • Google
  • Microsoft
  • Deepmind
  • Facebook
The companies people might expect but are missing are Apple and Amazon. They are part of the Partnership on AI association. I think abstracting principles out to an association makes them that much harder to integrate, so since this episode is on how to integrate principles into practice, I will not be covering any of the association’s individual principles.
Open AI Charter –
I decided to start with them since they seem like the most altruistic.
  • Broadly Distributed Benefits – Basically, the output of any AI they create has to benefit the general public and not just a few company owners. The worry is that as AI puts more and more people out of jobs, its gains will flow to fewer and fewer people.
  • Long-Term Safety – The thinking behind this is how to keep AI from hurting anyone or from seeing people as the problem. When your company’s goal is to create an AI smarter than humans, this is an important goal to have.
  • Technical Leadership – Not a surprise, they want to lead AI development.
  • Cooperative Orientation – This one just means they are willing to work with other companies.
Google AI principles –
  • Be socially beneficial – For Google this means business areas including healthcare, security, energy, transportation, manufacturing, and entertainment.
  • Avoid creating or reinforcing unfair bias – Problems like bad data make for biased AI. They are watching out for bias around race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
  • Be built and tested for safety – This means to try to design out unintended consequences.
  • Be accountable to people – Allow for products to get feedback, relevant explanations, and appeal from users.
  • Incorporate privacy design principles – To provide appropriate transparency and control over the use of data.
  • Uphold high standards of scientific excellence – One of the problems with machine learning is that it is nearly impossible to reproduce results.
  • Be made available for uses that accord with these principles – This means keeping the applications true to what they were intended to do.
  • AI applications we will not pursue – Interestingly, this was the only company that set out areas that were off limits:
    • Technologies that cause or are likely to cause overall harm – The benefits need to outweigh the risks.
    • No weapons or other technology to injure people.
    • Technologies that gather or use information for surveillance.
    • Technologies to get around international law and human rights.
Microsoft AI principles –
  • Fairness – AI systems should treat all people fairly
  • Inclusiveness – AI systems should empower everyone and engage people
  • Reliability & Safety – AI systems should perform reliably and safely
  • Transparency – AI systems should be understandable
  • Privacy & Security – AI systems should be secure and respect privacy
  • Accountability – AI systems should have algorithmic accountability
Deepmind AI principles –
  • Social purpose – AI should serve socially beneficial purposes and always remain under meaningful human control.
  • Privacy, transparency, and fairness – protecting people’s privacy and ensuring that they understand how their data is used
  • AI morality and values – Different people hold different values, which makes it difficult to agree on universal principles. Likewise, endorsing values held by a majority could lead to discrimination against minorities.
  • Governance and accountability – new standards or institutions may be needed to oversee its use by individuals, states, and the private sector.
  • AI and the world’s complex challenges – They want to make sure AI can uncover patterns in complex datasets that haven’t been found before.
  • Misuse and unintended consequences – Again to make sure products are not repurposed in unethical or harmful ways.
  • Economic impact: inclusion and equality – They are worried about widespread displacement of jobs and about economies being altered in ways that disproportionately affect some sections of the population.
Facebook AI values –
  • Openness – AI should be published and open-sourced for the community to learn about and build upon.
  • Collaboration – share knowledge with both internal and external partners and cultivate diverse perspectives and needs.
  • Excellence – focus on the projects that we believe will have the most positive impact on people and society.
  • Scale – Products must account for both large scale data and computation needs.
Next we are going to cover implementing AI principles once you have decided which ones you will use. The simple truth is that a lot of people out there think all you need to do is come up with the principles. Surely you can sell your startup before anyone realizes it is all marketing. No one will find out, right?
The problem is, making AI principles public means people dig into them. If your principles don’t match your actions, someone will know. All it takes is one person to leak, or you lock things down so hard the Streisand effect kicks in. Your company will sink, and sink fast, if people don’t trust your AI. Distrust of AI is already high, so it only takes one little doubt to spook your customers. So how do you implement principles?
Well, it comes down more to how to make sure they are being followed. A recent paper asked whether such guidelines have an actual impact on human decision-making in the field of AI and machine learning. The short answer: no, most often not, and the trust levels of companies show it.
Then how do you get a company to follow principles? Every company is different. We are talking directly to how company politics affects what gets done. How do you implement other directives that have an effect on your bottom line?
Since every company is different, I can only speak to what has worked for me. The first thing is to make sure the principles are baked into the model metrics. I guarantee the developers are focusing on accuracy for their models. Whatever principles you or your company come up with, when you are sitting in the weekly meetings on which models are getting developed, make sure that besides accuracy, how well the model follows the principles is one of the metrics you use to decide which model to use.
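To make that concrete, here is a minimal sketch of principle-aware model selection. It uses demographic parity difference, a common proxy metric for a fairness principle, alongside accuracy, so the "best" model is the most accurate one that also stays within a fairness budget. The model names, toy data, and the 0.10 threshold are all illustrative assumptions, not anything from a specific company's process.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between groups A and B."""
    rate = lambda g: (sum(p for p, grp in zip(preds, groups) if grp == g)
                      / groups.count(g))
    return abs(rate("A") - rate("B"))

def select_model(candidates, labels, groups, max_gap=0.10):
    """Pick the most accurate candidate whose parity gap is within max_gap."""
    acceptable = [(name, accuracy(preds, labels))
                  for name, preds in candidates.items()
                  if parity_gap(preds, groups) <= max_gap]
    return max(acceptable, key=lambda t: t[1])[0] if acceptable else None

# Illustrative toy data: labels, group membership, and two candidate models.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
candidates = {
    "model_x": [1, 1, 1, 1, 0, 0, 0, 0],  # predicts positive for group A only
    "model_y": [1, 0, 1, 0, 0, 1, 1, 0],  # equally accurate, treats groups evenly
}

print(select_model(candidates, labels, groups))  # model_x fails the gap check
```

The point of the sketch is the shape of the decision, not the specific metric: whatever principles you adopt, encode each one as a number the weekly model-review meeting can see next to accuracy.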
The final and easiest way is to tie the principles to money. This requires buy-in from higher ups, but if you can get the principles tied to how people get raises then people will go out of their way to find ways to tie what they are doing to the principles.
And on that note, what principles work for your company?
Unfortunately, that’s all the time we have for this episode, but I would love to hear back from you on how you were able to create AI principles for your products. Use your phone to record a voice memo, then email it to the show.
That is also a great way to let me know what you like and would like to hear more of. If you have questions or comments, record a message for those too.
If you would like to see what I am up to, you can find me on Twitter at @DesignForAI
Thank you again
and remember, with how powerful AI is,
let’s design it to be usable for everyone