Why AI ethics and accountability is everyone’s responsibility

25-11-2019

The mysterious allure of artificial intelligence and machine learning is becoming impossible for businesses to ignore. Although these emerging technologies are still in their infancy, their promise of transformational change in every industry is forcing organisations to think bigger.

Adoption statistics show that AI has already arrived in the business world. But blindly implementing technology without thinking about the repercussions could create more problems than it solves. You may remember that three years ago Microsoft launched an AI chatbot called Tay to improve its understanding of conversational language. In under 24 hours, the global Twitter community transformed Tay into a nasty racist, embarrassing Microsoft into pulling the plug.

Things get darker still when you consider governments weaponising autonomous technologies, the spread of disinformation, and malware attacks powerful enough to take down a power grid.

So what have we learned in the last few years about unleashing powerful technology into the world without thinking about the consequences?

"As positive as we are about AI, we're also aware of its potential for unintended consequences. So, we must design, develop, and deploy AI with a huge amount of care to ensure everyone can benefit from these advances. After all, people will only use AI if they trust it." - Cindy Rose, CEO of Microsoft UK, for The Big Issue

Elsewhere, marketers can be found getting increasingly excited about the personalisation of everything. If Netflix and Spotify can discover in excruciating detail what we want to watch and what music we enjoy (and when), naturally other retailers will want a slice of the pie too. Consumer data is beginning to usher in a new era of customisation where brands will know what product we want and what device we are most likely to buy it on.

As our insatiable desire for instant gratification increases, getting more of what we want can only be a good thing, right? Gartner highlights that combining AI-driven emotion detection with our personal data could lead to exploitative or irresponsible practices. The research firm also predicts that by 2024 the World Health Organisation will classify online shopping as an addiction.

Here in 2019, marketers are focused on changing the world through digital experiences. But how many are thinking about the more significant challenges they could create in the process? The problem is not with the technology itself but the lack of responsibility and accountability that businesses are taking when automating processes.

There needs to be a bigger conversation around the digital ethics of every chatbot and algorithm we implement. In many ways, technology acts as a black mirror, reflecting both the good and the bad sides of humanity.

"The main issue is defining an ethical and regulatory framework to which we can hold AI systems and companies accountable." - Daniel Hulme, CEO of Satalia

Removing human bias from the recruitment process is another big conversation that needs to be brought to the forefront. We can all agree that recruitment bias has always existed. But the jury is still out on whether AI will be the cure or a cause.


Even the early-adopting, enthusiastic teams at Amazon were forced to backtrack and scrap an AI recruitment tool that showed a bias against women. But once again, this revealed more about the data humans fed into it than about the technology itself.

AI bias becomes a corporate social responsibility issue from the moment that engineers feed data into machine-learning engines. When algorithms coldly downgrade individuals based on their qualifications, age, and experience, you are already on the rocky road to dismissing talent on the strength of a few keywords.
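None of the reporting describes exactly how Amazon's tool worked, but a minimal sketch shows how this kind of bias creeps in: a screening model that simply learns keyword weights from past hiring decisions will reproduce whatever skew those decisions contained. Everything below (the toy CVs and the hypothetical keyword_weights and score functions) is purely illustrative, not taken from any real system.

```python
from collections import Counter

# Toy "training data": CVs of people who were hired or rejected in the past.
# The sample is deliberately skewed, as historical hiring data often is.
past_hires = [
    "captain of the men's rugby team, ten years of sales experience",
    "president of the chess society, strong analytics background",
]
past_rejections = [
    "captain of the women's netball team, ten years of sales experience",
    "founder of the women's coding society, strong analytics background",
]


def keyword_weights(hired, rejected):
    """Naive learning: +1 for every word seen in hired CVs, -1 for rejected ones."""
    weights = Counter()
    for cv in hired:
        weights.update(cv.split())
    for cv in rejected:
        weights.subtract(cv.split())
    return weights


def score(cv, weights):
    """Score a new CV by summing the learned keyword weights."""
    return sum(weights[word] for word in cv.split())


weights = keyword_weights(past_hires, past_rejections)

# Two identical CVs, differing only in one gendered word, now score differently:
print(score("captain of the women's rugby team, ten years of sales experience", weights))
print(score("captain of the men's rugby team, ten years of sales experience", weights))
```

Nothing in the scoring logic mentions gender, yet the word "women's" ends up with a negative weight simply because it correlates with past rejections, so the second CV outscores the first. That is essentially the pattern that reportedly caught out Amazon's tool.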

Unilever also faced a backlash when it revealed it was using AI recruitment technology to analyse candidates' facial expressions and language. If businesses are serious about meeting the unique requirements of their audience, they must have a workforce that reflects its diversity.

This means that employees with Asperger's or autism can teach teams a thing or two about innovation, and that there is room for both introverts and extroverts in every team. AI algorithms must reflect this need in the workplace.

Sure, AI could play a valuable role in eliminating bias in hiring. But there must be a considerable focus on understanding and removing the factors that contribute to bias and prejudice. As repetitive and mundane tasks become automated, it is also vital that organisations invest in training rather than replacing their staff to ensure that everyone has a role in the digital age.


"The key must be retraining the workforce. This must be the responsibility not just of the government, which can provide subsidies, but also of corporations and AI's ultra-wealthy beneficiaries." - Kai-Fu Lee, Ex VP at Apple, Microsoft and Google, On Job Displacement in AI

We cannot blame everything on male-dominated engineering teams when it comes to our relationship with technology. Yes, they might be programming submissive and flirty responses into Alexa, but have you stopped to think about the bigger impact of how you and your family speak to your digital assistant?

A study published by the United Nations suggests that we could be unwittingly teaching our children that women are nothing more than subservient, eager-to-please helpers. Anyone who has seen children at the dinner table speak to a woman as they would to Alexa, without pleasantries or manners, will recognise that we could be creating more serious problems than we realise.

Many children who are still learning to communicate are talking to digital assistants in their bedrooms. Female-voiced assistants such as Alexa and Siri are at their beck and call as they shout orders with no need to say please or thank you. Could we be raising an Alexa generation that passes this rudeness on to the people they meet in everyday life?

Ironically, a digital world dominated by AI, machine learning, and automation will demand social skills from future workforces. For children to succeed as adults, they will need to hone people skills and have the natural ability to show politeness and empathy to their colleagues.


Fans of Spider-Man will already know that with great power comes great responsibility. But many of the problems we are creating come from teams naively playing with emerging technology like a small child, blissfully unaware of the consequences and dangers.

From AI engineers to children asking a digital assistant a question, we must all play a part in taking AI ethics seriously. We need to ensure that our children are polite when talking to Alexa, and play an essential role in removing bias from the algorithms that shape our workplaces and recruitment processes.

However, we need to move away from demonising tech and begin embracing accountability for our actions. The mess we have created is very much a human problem too.

(Reference: Dr David Leslie, Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector, The Alan Turing Institute)
