
A.I. has a bias problem that needs to be fixed: World Economic Forum

Key Points
  • As artificial intelligence becomes more ubiquitous, ethical considerations around privacy, bias, transparency and accountability need to be taken into account, according to Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum.
  • The spotlight is now on ethical issues around AI because there have been "obvious problems" with some of the algorithms, she said.
  • Experts have said that biases sometimes creep into programs because human bias influenced the algorithms as they were being written.

Artificial intelligence has a bias problem and the way to fix it is by making the tech industry in the West "much more diverse", according to the head of AI and machine learning at the World Economic Forum.

Just two to three years ago, there were very few people raising ethical questions around the use of AI, Kay Firth-Butterfield told CNBC at the World Economic Forum's Annual Meeting of the New Champions in Tianjin, China.

But ethical questions have now "come to the fore," she said. "That's partly because we have (the General Data Protection Regulation), obviously, in Europe, thinking about privacy, and also because there have been some obvious problems with some of the AI algorithms."

Theoretically, machines are supposed to be unbiased. But there have been instances in recent years that showed even algorithms can be prejudiced.

A few years ago, Google was criticized after its image recognition algorithm identified African Americans as "gorillas." Earlier this year, a Wired report said that Google had yet to fix the issue and had simply blocked its image recognition software from identifying gorillas altogether.

"As we've seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater," Firth-Butterfield said. "One of the things that we're trying to do at the World Economic Forum is really find a way of ensuring that AI grows exponentially, as it is doing for the benefit of humanity, whilst mitigate some of these ethical considerations in privacy, bias, transparency and accountability."

Experts have said that biases sometimes creep into programs because human bias influenced the algorithms as they were being written.

Firth-Butterfield agreed.

The composition of the tech industry creating those algorithms is far from representative.

Silicon Valley has long been criticized for its lack of diversity.

Last year, a U.S. government report found that the U.S. tech sector lags other industries in workforce diversity. The report also said that female, black and Hispanic workers make up a smaller share of jobs related to mathematics, computing and engineering than of the overall workforce.

"When we're talking about bias, we're worrying first of all about the focus of the people who are creating the algorithms," Firth-Butterfield said. "We need to make the industry much more diverse in the West."

One problem is that, for years, women and minorities have stayed away from the sciences, she said, adding that there aren't enough AI engineers to meet demand from the industry. China's push to train more AI engineers and data scientists could fill some of that talent gap, she said.

Artificial intelligence is a rapidly growing technology. Worldwide spending on AI and cognitive systems is forecast to reach about $52.2 billion in 2021, according to research firm IDC.

On Monday, the World Economic Forum predicted that although machines will perform more workplace tasks than humans by 2025, the shift could still create 58 million net new jobs over the next five years.

— CNBC's Eric Rosenbaum contributed to this report.