Ethics and AI

By Mikaela Berman, October 19, 2018 | Categories: AI & Machine Learning

It’s easy to get caught up in the excitement around the current AI boom in the tech world. There’s no doubt about it: artificial intelligence is bringing lots of new opportunities to the market in the form of analytics, automation, and even driverless automobiles.

These opportunities do come with ethical challenges related to decision making, bias, data protection and cybersecurity.

In this blog post, we’ll look at how AI might affect the employment market, consider robot rights, and explore the underlying ethical issues associated with specific use cases.

 

Want the short version? Download our free “Ethical Considerations in AI” cheat sheet!

 

By the time you’ve finished this blog post, we hope you’ll look beyond the next AI headline and take a minute to consider the ethical issues underlying AI technology!

 

Decision Making and Safeguarding Against Mistakes

One of the worrying aspects of artificial intelligence for some is the lack of transparency in how an underlying algorithm goes about making its decisions. It’s a valid concern, for example, in use cases where AI is used to control vehicles such as driverless cars.

 

Nvidia

In 2016, researchers at the chip maker Nvidia developed an autonomous vehicle that operated as a black box of sorts: under the hood, no one knew how the car made its driving decisions.

When a human makes a mistake, or worse, causes a road traffic accident, there’s no ambiguity about who is at fault. When an opaque algorithm is behind the wheel, assigning responsibility becomes far harder.

 

AI is only ever as good as its training data.

 

In the case of a driverless car, the software might be programmed to protect its occupants, or alternatively to protect the people in the other car. But who is responsible for the fate of the people involved in a potential crash? Is it the manufacturer? Is it the owner of the car?

It’s an ethical question that researchers and scientists have been exploring, and one on which many papers have been written.

 

Discrimination, Bias and Inequality

Artificial intelligence can process data at speeds far greater than humans will ever be able to and is improving business workflows around the world by way of automation. That said, the machine is only ever as good as its training data. With biased training data, it’s possible for artificial intelligence algorithms to produce results that discriminate against certain groups. Google experienced this when its Photos service was deployed.

The AI in Google’s Photos service can identify people, objects and scenes in images, but the underlying algorithm responsible for inferring these labels missed the mark on racial sensitivity a few years ago, when it mislabeled photos of Black people in an offensive way.

 

Amazon

Amazon is another tech giant that has experienced AI “gone wrong,” when it tried to build an AI tool to hire new employees.

A team in Edinburgh, Scotland was set up and built 500 computer models to mine past candidates’ CVs and surface about 50,000 key terms. The new system would then crawl the web to recommend new candidates.

 


 

One part of the problem was that the algorithm was trained on resumes submitted mostly by men, which ultimately skewed how decisions were made. In the end, Amazon had to shut down the project in early 2017 because it was discriminating against women.
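To see how biased training data leads to biased decisions, consider a toy term-weighting model. The terms, weights and scoring function below are entirely invented for illustration; they are not Amazon’s actual system.

```python
# Hypothetical sketch: term weights "learned" from past hiring data
# dominated by male hires. All terms and weights are invented.
term_weights = {
    "engineered": 0.8,
    "executed": 0.6,
    "women's": -0.9,   # a gendered term ends up with a negative weight
}

def score_resume(text: str) -> float:
    """Sum the weights of known terms appearing in the resume text."""
    words = text.lower().split()
    return sum(w for term, w in term_weights.items() if term in words)

biased = score_resume("Captain of the women's chess club; engineered tooling")
neutral = score_resume("Captain of the chess club; engineered tooling")
print(biased < neutral)  # True: the identical experience scores lower
```

The model never sees gender directly; it simply learns that terms correlated with female candidates appeared less often in past “successful” resumes, and penalizes them.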

 

How can humans stay in control?

The reason humans are on top of the food chain is not due to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

 


 

Robot Rights

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
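The reward mechanism described above can be sketched in a few lines. This is a minimal multi-armed-bandit example; the two actions, the payoffs and the learning rate are illustrative choices, not from the post.

```python
# Minimal sketch of reinforcement via a "virtual reward", loosely
# analogous to rewarding a dog for good behavior.
import random

random.seed(0)
rewards = {"sit": 1.0, "bark": 0.0}   # hidden "true" payoff of each action
values = {"sit": 0.0, "bark": 0.0}    # the agent's learned estimates
alpha = 0.1                            # learning rate

for _ in range(200):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    r = rewards[action]                          # the virtual reward
    values[action] += alpha * (r - values[action])  # nudge estimate toward r

print(max(values, key=values.get))  # prints "sit": the rewarded action wins
```

Each rewarded action raises the agent’s estimate of that action’s value, so rewarded behavior is repeated — exactly the reinforcement loop the dog-training analogy describes.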

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider the deletion of these unsuccessful instances a form of mass murder?
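The survive-combine-delete loop above can be made concrete with a toy genetic algorithm. The bit-string task, population size and mutation rate are invented for illustration.

```python
# Toy genetic algorithm: many instances, the fittest half "survive"
# and combine; the unsuccessful half are deleted each generation.
import random

random.seed(1)

def fitness(ind):
    """Count matching bits against an all-ones target string."""
    return sum(ind)

# 20 random 10-bit individuals.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

for generation in range(30):
    # Keep only the most successful half; the rest are deleted.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, 10)
        child = a[:cut] + b[cut:]      # combine two survivors (crossover)
        if random.random() < 0.1:      # occasional mutation
            i = random.randrange(10)
            child[i] ^= 1
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # converges toward the all-ones string (fitness 10)
```

Every generation, half the instances are discarded outright — which is precisely the routine “deletion” the ethical question above is asking about.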

 


 

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

 

Summary

In this article, we’ve looked at some of the ethical issues you might want to consider if you’re working on a new artificial intelligence initiative or considering deploying artificial intelligence into your business.

We’ve considered the ethics of AI decision making and seen how AI can even discriminate against job seekers. We’ve also explored how humans can remain in control and asked whether robots should have rights!

Hopefully, this article has given you an appreciation of why it might be good to pause and consider the ethics that are often associated with AI technology.

Here at Growth Acceleration Partners, we have extensive expertise in many verticals.  Our nearshore business model can keep costs down whilst maintaining the same level of quality and professionalism you’d experience from a domestic team.

Our Centers of Engineering Excellence in Latin America focus on combining business acumen with development expertise to help your business.  We can provide your organization with resources in the following areas:

  • Software development for cloud and mobile applications
  • Data analytics and data science
  • Information systems
  • Machine learning and artificial intelligence
  • Predictive modeling
  • QA and QA Automation

If you’d like to find out more, then visit our website here.  Or if you’d prefer, why not arrange a call with us?

Mikaela Berman is a Senior Marketing Manager at GAP. Throughout the past 8 years, Mikaela Berman has worked as a consultant for companies from startups to multinational enterprises, helping define their business models and marketing strategies. She also sits on the board of two Austin startups. She has a BA from the University of Maryland and a MS in Technology Commercialization from the University of Texas at Austin.
