It's time to confront bias in AI.

Ahead of the next RBC Disruptors event on May 23, “Battling Bias in AI,” our Thought Leadership team is examining the societal and ethical implications of artificial intelligence. In this interview series, John Stackhouse asks Elissa Strome, Executive Director of CIFAR’s Pan-Canadian AI Strategy, about the difference that diversity could make. Their conversation has been edited for length and clarity.

John: AI is reshaping society in many ways, both good and bad. What are you most concerned about?

Elissa: Currently, I am most concerned about the lack of equity, diversity, and inclusion in AI as it’s being developed around the world.

Element AI’s Global AI Talent Report assessed the demographics of AI researchers around the world, as well as the quality and quantity of AI research being undertaken by different countries.

There are essentially only five countries in the world that are advancing the science and applications of AI.

John: Who are the five?

Elissa: It’s Canada, the US, the UK, China, and Australia. These are the countries that are most heavily involved in the development of AI, so the field is very North American-centric and Western European-centric.

China is a huge leader in this space, but much of the world is not as deeply involved and engaged in developing and advancing AI. If we really expect these technologies to have a positive impact on global society, then global society needs to be actively involved in developing them.

The other area where there’s a real problem around equity, diversity, and inclusion in the development of AI is gender balance. If you look at the researchers who were authors at the major international conferences in AI, only 18 percent of them were women.

John: Do you know what that number is in Canada?

Elissa: It’s probably right around that, maybe slightly higher. In the Canada CIFAR AI Chairs program, we currently have 20 percent women named as Chairs. CIFAR is working with partners across the country in a lot of different training programs and learning opportunities, particularly for young women in AI. This is one of the ways that we are addressing the gender gap in Canada.

We work with the Invent the Future program at Simon Fraser University, for instance, which engages high school girls through a two-week summer camp. It gives them an understanding of what AI is, and of the future opportunities for them and their careers.

We also recently announced a new partnership with the OSMO Foundation in Montreal to advance the AI for Good Summer Lab. This is a program that was developed by Doina Precup, who is one of our Canada CIFAR AI Chairs. It is a seven-week training program for undergraduate women in AI. The women enrolled in the lab gain exposure to training and networking opportunities that will serve as a foundation for their future careers.

John: How did we get to this point where four out of five AI scientists are male?

Elissa: It’s a historical problem. In the early 90s, the rate of female computer science students in universities was actually closer to 30 percent. Some of the enrollment rates of women in computer science were higher in the 90s than they are today.

I think there were a variety of reasons for why women became less interested, less encouraged, or less mentored. As the number of women leaders in the field started to decline, fewer women enrolled in these training programs.

I think it’s also partly a cultural thing within computer science. There’s a false image of the AI researcher as a nerdy coding guy. We have a lot of work to do.

John: What would a Canadian AI researcher look like if they’re not a coding nerd?

Elissa: Today, an AI researcher can look like anybody, in almost any discipline. That’s the great thing about AI. It’s a mathematical and computational approach to understanding and leveraging data. And so every discipline of science and research actually has the opportunity to leverage AI, which goes back to my point about the need to increase diversity.

We need to be encouraging students in other disciplines to develop interests and expertise in AI. Not necessarily in coding but thinking about how machine learning can be applied to business questions, or questions in law, humanities, biological sciences, engineering, or physics.

John: Can you help us understand how all of this matters? If I’m thinking of AI recommending something to me on Netflix, why does it matter who is doing the coding?

Elissa: It matters because these algorithms, approaches, and recommendation systems are already being applied, and will increasingly be applied, to many areas of our lives.

When you’re applying for a mortgage at the bank, when you’re applying for insurance, when you’re applying for a job, recommendation systems are already being used, often without the average person knowing it or being aware of the issues.

Recommendation systems are being designed by people who have a very specific perspective on life and the world. They come from a very homogeneous subset of the population, and their perspective is implicit in the systems they develop.

The other important underlying factor is where the data sets are being drawn from. If the data sets used to develop recommendation systems are drawn from samples that already contain biases, whether gender bias, racial bias, bias against disability, or bias against any underrepresented group, then those biases get amplified.

It’s not just that the people who are developing these systems may not represent diverse perspectives, but also the data sets themselves that have problems in their representation.

John: There’s been an explosion of concern about responsible AI, including here in Canada. Did science take a wrong turn in the early years, or is this sudden concern about responsible AI just a function of the field maturing?

Elissa: I wouldn’t say that science took a wrong turn.

I would say that the science, and the adoption and commercialization of the technology, happened very quickly. Because both were advancing at the same time, it has simply taken a while for the policy side of things to keep in step with the development of the technology.

At this point in the growth and adoption of AI, we as a society must take a very hard look at these technologies, and ask questions about the impact of bias. It’s absolutely critical that we invest a lot of effort and resources in research on the societal implications of AI.

At CIFAR, the fourth pillar of the Pan-Canadian AI Strategy is to advance leadership, research, and knowledge around the societal implications of AI – the social, legal, ethical, and economic questions around AI.

John: In 2018, we get the Montreal Declaration for Responsible Development of AI – is that sufficient?

Elissa: It’s a great start, and it’s absolutely wonderful that Canada is pioneering a path towards engaging the public in this discourse.

It is one of the few examples worldwide where the public was so deeply engaged, as were researchers across many different disciplines – not just computer scientists but also social scientists and humanities scholars. But it’s not sufficient.

John: Who should hold science accountable?

Elissa: I think it’s more a question of holding society accountable. Governments have an incredibly important role to play in developing strong domestic policy and regulations around the use and adoption of AI. They also have a role to play in international cooperation, relations, and international policy on this topic, and again Canada is a leader in this space.

Last June, Prime Minister Trudeau and President Macron announced a joint Canada-France initiative on an International Panel on AI, and the first international symposium will be in Paris this fall.

This work will engage G7 countries within the umbrella of the G7 cooperation, but it will be an international collaboration to monitor, observe, and understand best practices around the societal issues related to AI.

John: Are you confident governments can get their heads around this?

Elissa: I am confident that their intentions are good, and I think they understand both the risk and the opportunity. The challenge with governments is the short duration of their terms.

But this is an issue that crosses party lines. This is an issue that affects all Canadians, no matter what your political stripes are. It affects people all across the world, and so it’s something that governments, whatever their ideology, really have to take a leadership role in.

John: What role can Canada play?

Elissa: Canada has a privileged position on the international stage in advancing the responsible use, development, and adoption of AI.

Much of that is based on our long history of pioneering the science and research, and our track record of research leadership. We also have a strong reputation for our work on the world stage in advancing humanitarian issues around justice, social rights, and freedoms. We are respected internationally both on the science and our democratic values.

The world is looking to Canada right now to take a leadership position, and we are out there and we are doing that.