
Ethics and AI
Reading Time: 5 minutes

It’s easy to get caught up in the excitement around the current AI boom in the tech world. There’s no doubt about it: artificial intelligence is bringing lots of new opportunities to the market, from analytics and DevOps automation to driverless automobiles.

These opportunities do come with ethical challenges related to decision making, bias, data protection and cybersecurity.

In this blog post, we’ll look at how AI might affect the employment market, the question of robot rights, and some of the underlying ethical issues associated with specific use cases.

By the time you’ve finished this blog post, you might find yourself looking beyond the next AI headline and taking a minute to consider the ethical issues that underpin the technology!

Download our summary sheet to learn about the ethical considerations of artificial intelligence.

Decision Making and Safeguarding Against Mistakes

One of the worrying aspects of artificial intelligence for some people is the lack of transparency in how an underlying algorithm goes about making its decisions. It’s a valid concern to have, for example, in use cases where AI is used to control vehicles such as driverless cars.

Nvidia

In 2016, researchers at the chipmaker Nvidia developed an autonomous vehicle that operated as a black box of sorts: under the hood, no one knew exactly how the car made its driving decisions.

When a human driver makes a mistake, or worse, causes a road traffic accident, there’s no ambiguity about who is responsible.


In the case of a driverless car, the AI might be programmed to protect its occupants, or alternatively to protect the people in the other vehicle. But who is responsible for the fate of the people involved in a potential crash? Is it the manufacturer? Is it the owner of the car?

It’s an ethical question that researchers and scientists have been exploring for years and that has been the subject of many papers.

Discrimination, Bias and Inequality

Artificial intelligence can process data at speeds far greater than humans ever will, and it is improving business workflows and DevOps automation around the world. That said, a machine is only ever as good as its training data. With biased training data, it’s possible for artificial intelligence algorithms to produce results that discriminate against certain groups. Google experienced this when its Photos service was deployed.

The AI in Google’s Photos service can identify people, objects and scenes in images, but the underlying algorithm responsible for inferring these labels missed the mark on racial sensitivity a few years ago.

Amazon

Amazon is another tech giant that has experienced AI “gone wrong”: in an effort to automate part of its hiring process, it built an AI tool to screen and recommend new employees.

A team in Edinburgh, Scotland, was set up and built 500 computer models to mine past candidates’ CVs and surface around 50,000 key terms. The system would then crawl the web to recommend new candidates.


Part of the problem was that the algorithm was trained on resumes from a period of male-dominated hiring, which ultimately skewed how decisions were made. In the end, Amazon had to shut the project down in early 2017 because it was discriminating against women.
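To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch of how a keyword-based resume scorer can absorb bias from skewed historical hiring data. The training data, term weights and scoring logic are invented for illustration and are not how Amazon’s actual system worked.

```python
# Hypothetical sketch: a naive keyword-based resume scorer trained on
# historical hiring decisions. If past hires skew heavily toward one group,
# terms correlated with that group pick up positive weights, and terms
# correlated with other groups (e.g. "women's chess club") get penalized.
from collections import Counter

def train_weights(resumes, hired):
    """Weight each term by how much more often it appears in hired resumes."""
    hired_terms, rejected_terms = Counter(), Counter()
    for text, was_hired in zip(resumes, hired):
        terms = set(text.lower().split())
        (hired_terms if was_hired else rejected_terms).update(terms)
    vocab = set(hired_terms) | set(rejected_terms)
    # Positive weight = associated with past hires; negative = with rejections.
    return {t: hired_terms[t] - rejected_terms[t] for t in vocab}

def score(resume, weights):
    return sum(weights.get(t, 0) for t in resume.lower().split())

# Toy, made-up training data: historical hires skew toward male candidates.
past_resumes = [
    "rugby captain java developer", "java developer chess club",
    "women's chess club java developer", "women's rugby java developer",
]
past_hired = [True, True, False, False]   # skewed historical decisions

weights = train_weights(past_resumes, past_hired)
print(score("women's chess club java developer", weights))  # scores lower
print(score("rugby captain java developer", weights))       # scores higher
```

Nothing in the code mentions gender explicitly, yet the scorer still learns to penalize terms that correlate with it; that, in essence, is the trap the Amazon project fell into.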

How Can Humans Stay in Control?

The reason humans are on top of the food chain is not due to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.


Robot Rights

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
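As an illustration of that analogy, here is a minimal sketch of reward-driven learning, assuming a toy setup with only two possible actions, one of which earns a “virtual reward”. It is not any particular production reinforcement-learning algorithm, just the reinforce-what-gets-rewarded loop described above.

```python
# Minimal sketch: an agent learns to prefer the action that earns a virtual
# reward, much like reinforcing a dog with treats. Assumed setup: two
# actions, and only action 1 (the "sit" command, say) is ever rewarded.
import random

values = [0.0, 0.0]      # the agent's current estimate of each action's value
learning_rate = 0.1

def reward(action):
    """Environment: only action 1 is rewarded."""
    return 1.0 if action == 1 else 0.0

for step in range(1000):
    # Mostly exploit what has been learned so far, occasionally explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    # Nudge the value estimate toward the observed reward (the reinforcement).
    values[action] += learning_rate * (reward(action) - values[action])

print(values)  # action 1's estimate approaches 1.0; action 0 stays near 0.0
```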

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider this mass deletion of instances a form of mass murder?
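For readers unfamiliar with the mechanics, the following is a minimal, illustrative genetic-algorithm loop. The fitness function (counting 1-bits) and all parameters are arbitrary choices made for this sketch; the point is the survive, combine and delete cycle described above.

```python
# Minimal sketch of a genetic algorithm: many candidate "individuals" are
# created, the fittest survive and combine, and the rest are simply deleted
# each generation. Fitness here is just the number of 1-bits in a genome.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(genome):
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep only the fittest half; the unsuccessful instances are discarded.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Survivors combine (crossover) to repopulate the next generation.
    population = [mutate(crossover(random.choice(survivors),
                                   random.choice(survivors)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # approaches GENOME_LEN over time
```

Every generation, half the population is simply discarded; whether that deletion ever carries moral weight is exactly the question raised above.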

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?


Summary

In this article, we’ve looked at some of the ethical issues you might want to consider if you’re working on a new artificial intelligence initiative or planning to deploy artificial intelligence in your business.

We’ve looked at some of the ethics worth considering when it comes to AI decision making and how AI can even discriminate against job seekers. We’ve also explored how humans can remain in control and considered whether robots should have rights!

Hopefully, by reading this article you’ve gained an appreciation of why it can be worth pausing to consider the underlying ethics so often associated with AI technology.

Here at Growth Acceleration Partners, we have extensive expertise in many verticals, including DevOps automation and AI. Our nearshore business model can keep costs down while maintaining the same level of quality and professionalism you’d experience from a domestic team.

Our Centers of Engineering Excellence in Latin America focus on combining business acumen with development expertise to help your business. We can provide your organization with resources in the following areas:

  • Software development for cloud and mobile applications
  • Data analytics and data science
  • Information systems
  • Machine learning and artificial intelligence
  • Predictive modeling
  • QA and QA automation
  • DevOps automation

If you’d like to find out more, click here to arrange a call with us.