Tech

A.I. 'bias' could create disastrous results, and experts are working out how to fight it

Key Points
  • While machines are theoretically neutral and without prejudice, there have been cases in recent years that show even algorithms can be biased. Some prejudices held in the real world can filter into AI systems.
  • "Having access to large and diverse data sets helps to train algorithms to maintain the principle of fairness," according to Antony Cook, Microsoft's associate general counsel for Corporate, External and Legal Affairs for Asia.
A robot race took place in Toulouse, France.
Alain Pitton | NurPhoto | Getty Images

Artificial intelligence is projected to shape the world's future as everything from cars to legal systems embraces truly smart technologies.

Some science fiction has predicted that artificial intelligence could one day take over the world and turn on humans, but experts warn there's a far more immediate risk: so-called biased AI. That is, when programs — which are theoretically neutral and without prejudice — rely on faulty algorithms or insufficient data and end up developing unfair biases against certain people.

Recent cases show that such a concern may be a problem of the present.

For one example, facial recognition technology has made headlines for not being racially inclusive. Facial recognition software misclassified nearly 35 percent of images of darker-skinned women, according to a study by the Massachusetts Institute of Technology, while lighter-skinned males faced an error rate of only around 1 percent.
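
In rough terms, what the MIT researchers measured is a gap in error rates between demographic subgroups. The following is a minimal Python sketch of that kind of breakdown, using invented numbers rather than the study's data or code:

```python
# Hypothetical illustration with invented numbers (not the MIT study's data
# or code): breaking classification results down by subgroup to expose a
# gap in error rates.

from collections import defaultdict

# Each record: (subgroup, prediction_was_correct) -- made-up sample results.
results = [
    ("darker-skinned women", False), ("darker-skinned women", False),
    ("darker-skinned women", True),  ("darker-skinned women", False),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{total})")
```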

Bias was also at the center of Google's decision to block gender-based pronouns from its Smart Compose feature — one of its AI-enabled innovations.

The potential problems of AI prejudice go much further, though, and demonstrate how some of the biases held in the real world can influence technology.

Biased outcomes

AI programs are made up of algorithms, or sets of rules that help them identify patterns so they can make decisions with little intervention from humans. But algorithms need to be fed data in order to learn those rules — and, sometimes, human prejudices can seep into the platforms.

"Having access to large and diverse data sets helps to train algorithms to maintain the principle of fairness," according to Antony Cook, Microsoft's associate general counsel for Corporate, External and Legal Affairs for Asia.

However, "the issue of bias is not solely addressed by the generation of large amounts of data but also how that data is used by AI systems," he said.

Olly Buston, CEO of consulting think tank Future Advocacy, explained that machines often reflect human biases.

"For example, if an algorithm used to shortlist people for senior jobs is trained on data that reflects the fact that historically, more senior jobs have been held by men, then the algorithm's future behavior may reflect this, locking in the glass ceiling," said Buston.

Experts have called for more diversity in the AI field, saying it would help overcome biases.

"When we're talking about bias, we're worrying first of all about the focus of the people who are creating the algorithms," Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum told CNBC earlier this year. "We need to make the industry much more diverse in the West."

'Multi-disciplinary approach'

Stakeholders from various fields need to constantly engage in discussions of what constitutes inclusive AI — a human concern that should not be handled only by experts in technology, said Microsoft's Cook.

A "multi-disciplinary approach" is needed "to make sure that you've got the humanists working with the technologists. That way we'll get the most inclusive AI," he said. "Human decisions are not based on ones and zeros ... (but on) social context and social background."

The debate around the right ethical rules to apply to AI should involve technology companies, governments and civil society, Cook added.

Moral implications

Biased AI can have serious life-altering consequences for individuals.

It was reported in 2016 that the COMPAS program — or Correctional Offender Management Profiling for Alternative Sanctions — used by U.S. judges in some states to help decide parole and other sentencing conditions, had racial biases.

"COMPAS uses machine learning and historical data to predict the probability that a violent criminal will re-offend. Unfortunately it incorrectly predicts black people are more likely to re-offend than they do," according to a paper by Toby Walsh, an artificial intelligence professor at the University of New South Wales.

While biases in AI exist, it is important that certain decisions are not left to software, Walsh told CNBC.

That's especially true when such decisions can directly harm a person's life or liberty, he added.

Examples of those decisions include the possibility of AI being used in hiring decisions — or used during military conflicts as part of autonomous weapons.

"If we work hard at finding mathematically precise definitions of ethics, we may be able to deal with bias in AI and so be able to hand over some of these decisions to fairer machines," Walsh said. "But we should never let a machine decide who lives and who dies."

WATCH: We need to make the AI industry more diverse in the west: AI expert

Loopholes in data

AI software is only as good as the data it is trained on. If a company feeds in data points from only one part of the world, the resulting program will not function as well in other places.

"There is a risk that an AI that is trained on data from one population will perform less well when applied to data from a different population," Buston said.

For example, he said, there is a chance "some AI apps that are developed in Europe or America will perform less well in Asia."
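
Buston's point about population mismatch is easy to reproduce with synthetic data. The sketch below is illustrative only, with no real app or dataset implied: a simple classifier fitted to one synthetic "population" scores well on it but much worse on a second population whose data looks different.

```python
# Illustrative sketch with synthetic data (no real app implied): a model
# trained on one population performs worse on a shifted population.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(centre, n=2000):
    """Synthetic two-feature data whose labelling rule follows `centre`."""
    X = rng.normal(centre, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * centre).astype(int)
    return X, y

X_train, y_train = make_population(centre=0.0)   # "population A"
X_shift, y_shift = make_population(centre=3.0)   # differently distributed "population B"

model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy on population A: {model.score(X_train, y_train):.2f}")
print(f"accuracy on population B: {model.score(X_shift, y_shift):.2f}")
```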

Meanwhile, one expert noted that Asian countries' increasing progress in AI means more examples of bias problems are likely to arise from the region.

"So you could imagine, for an example, data that comes from China and India — with combined population of 2.6 billion people when that data becomes widely be available and used — there will be biases that we might not see in the West but may be very salient or very sensitive in our part of the world," said Eugene Tan Kheng Boon, associate professor of law at Singapore Management University.

WATCH: Microsoft says chatbot use in Asia has soared


— CNBC's Saheli Roy Choudhury contributed to this report.