Intive Blog

Diversity and Inclusion in Technology

One of the most recurring topics of conversation during this pandemic has to do with technology as an enabler for work. This is related to the fact that, under these unusual circumstances, certain sectors have had their activities restricted in one way or another.

The pandemic has also sparked debate about the way in which these technologies affect us outside work. Right now, technologies are valued as a tool to build a post-pandemic world that is more equal and inclusive.

However, the desire to rely on staggering technological progress to build a better world is confronted by the fact that this progress might be dangerous if we just let it happen without questioning its ethical and humane elements.

One of the most complex elements has to do with biases in algorithms or mathematical models. What do we mean by “bias”?

What Is the Meaning of the Word “Bias”?

According to the Merriam-Webster dictionary, one of the definitions of the word “bias” is: “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” In other words, bias appears when the data used to build a model is selected in a way that favors certain outcomes, embedding particular prejudices, beliefs and worldviews, whether deliberately or not.

One of the best definitions on this issue is given by Cathy O’Neil in her book Weapons of Math Destruction:

Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.

Algorithms, Anybody?

In the words of Joy Buolamwini, we often believe machines are “neutral”, but her research about racial bias shows bias is also embedded in systems, for example, in AI and facial recognition programs from tech giants such as IBM, Amazon and Google.

Joy herself tells the story of being an MIT student and having to put on a white mask to make certain facial analysis software work, since it couldn’t detect her face because of her skin color.

Her research also shows that, when guessing the gender of a face, all the companies’ systems performed better on male faces, with error rates below 1% for white men. The error rate rose to as much as 35% for darker-skinned female faces, and the systems even misclassified famous women such as Michelle Obama and Serena Williams.
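The kind of disparity described above is easy to miss if you only look at aggregate accuracy. A minimal sketch of a per-subgroup error audit, using invented numbers that loosely echo the disparities reported in that research (not the actual study data), shows why breaking results down by group matters:

```python
# Hypothetical results of a gender-classification audit.
# group: (correct classifications, total samples) -- numbers are invented.
results = {
    "lighter-skinned men":   (990, 1000),  # ~1% error
    "lighter-skinned women": (930, 1000),
    "darker-skinned men":    (880, 1000),
    "darker-skinned women":  (650, 1000),  # ~35% error
}

# Aggregate accuracy can look acceptable on its own...
total_correct = sum(correct for correct, _ in results.values())
total_samples = sum(total for _, total in results.values())
print(f"Overall accuracy: {total_correct / total_samples:.1%}")

# ...while one subgroup bears most of the errors.
for group, (correct, total) in results.items():
    error_rate = 1 - correct / total
    print(f"{group}: {error_rate:.1%} error rate")
```

With these numbers the system is right more than 85% of the time overall, yet its error rate for darker-skinned women is roughly 35 times that for lighter-skinned men; this is the gap that group-level auditing makes visible.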

Similar biases affect voice recognition systems, which are becoming more and more widespread, whether in apps that turn voice into text, video subtitling, hands-free computing or virtual assistants.


And… What about AI voice assistants? Have you noticed anything special about them?

A 2019 UNESCO report titled “I’d Blush If I Could” analyses the impact of female-voiced assistants, such as Amazon’s Alexa and Apple’s Siri, which reinforce and amplify existing gender biases.

“Siri’s ‘female’ obsequiousness –and the servility expressed by so many other digital assistants projected as young women– provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education,” researchers suggest.

According to the report, “Today, women and girls are 25 per cent less likely than men to know how to leverage digital technology for basic purposes, 4 times less likely to know how to program computers and 13 times less likely to file for technology patents.”

In this previous post, Mariana Silvestro told us about the situation of technology and inequality in Argentina nowadays. At a point when all sectors are turning to technology, this gap should raise an alarm for everyone (politicians, educators and society as a whole).

As we’ve seen, these types of mathematical models are used to establish people’s identities, but they’re also present in more and more settings: they estimate the risk of reoffending and assist judges in sentencing, score citizens’ creditworthiness and rate the quality of teachers’ work. Automated decision-making systems, digital profiling, predictive analytics and other AI tools affect our lives to a greater or lesser extent.

Not Everything’s Lost

After all this discouraging information, we should ask ourselves: Can technology be our ally in our quest to transform our society into a more inclusive one?

The answer is: Definitely! The fact that these topics are discussed widely and spread in mass media (instead of being confined to academic environments) is a big step in the right direction. In Buolamwini’s words: “At least, now we’re paying attention to it.”

Many efforts are devoted to modifying these conditions, both in the private and public sectors.

  • At an international level, the UNESCO report calls for decisive intervention by all stakeholders to promote education and digital skills for women and girls; this, in turn, fosters social participation and builds stronger households, communities and economies, supported by better technologies.
  • This year, the World Economic Forum launched a toolkit for leaders: “Diversity, Equity and Inclusion 4.0: A toolkit for leaders to accelerate social progress in the future of work”. It describes how technology can contribute to reducing bias in hiring processes, diversifying talent pools and establishing reference points for diversity and inclusion in companies.
  • In Argentina, we’ve recently learned about the creation of the Observatorio de Impactos Sociales de la Inteligencia Artificial OISIA UNTREF (OISIA UNTREF Observatory of Social Impact of Artificial Intelligence), which is great news and shows that, in our region, there are people monitoring the impact of AI on all aspects of social life.
  • Organizations such as Chicas en Tecnología (Girls in Tech) work to narrow the gender gap in the technology sector in the region.

My maternal grandmother used to say something like “not to see is not to know” (she grew up in the countryside, so she had a saying for everything). I believe that with topics like these, which may seem complex, the first thing we should do is try to understand what they’re about: by reading, investigating and paying attention. We shouldn’t let them go unnoticed.

We have a responsibility to understand these issues and stir debate; we shouldn’t look away, and our contribution should be positive. The more of us, the better.

Claudia Cabrera

Claudia Cabrera has been a UX Designer at intive since October 2019. A Graphic Design graduate of Universidad de La Plata, Claudia volunteers with the Interaction Design Association.

Claudia is a mother of one and has two cats. She is also a fan of K-pop and science fiction.
