
Businesses that use and sell technology increasingly encounter complex ethical issues. Managed badly, these can cause immense reputational and financial damage. Here, we look at three ethical issues – data privacy, data bias and misinformation – then give businesses three ways to address them.

Privacy and bias are businesses' top ethics considerations

Q. Which of the following ethical issues do you consider most important when developing and deploying technology?


Data privacy

When we asked which ethical issues are most important in the development and deployment of technology, survey respondents mentioned privacy concerns most frequently.

This reflects three things:

  • Many jurisdictions have tightened their data privacy regulations in recent years.
  • Consumers are increasingly focused on their privacy rights.
  • Failing to comply with applicable laws comes with significant reputational and financial consequences.

It also highlights businesses’ concerns about consumer trust – that it will be eroded if businesses use consumers’ data in ways the consumer does not anticipate or benefit from, even if they comply with data regulations.

“The way we use data is not just determined by the law, but also by ethical considerations.”

Matthew Owens | Global Head of Legal, Digital, Novartis

Data bias

45%

of businesses do not vet technology supplied to them for bias

Bias in data and programming is survey respondents’ second-most important ethical issue, and it is easy to see why.

Created by humans, technology can reflect the biases – conscious or subconscious – of its creators, and sometimes biases only become apparent after the technology is deployed.

Discrimination often comes up in relation to the use of algorithms and AI technology to scan and review CVs in the recruitment process. There are concerns that the algorithms underpinning this software incorporate biased logic and discriminate against people who live in particular areas or have certain names.

If you purchase technology rather than develop it yourself, you may not know whether it contains biases. At the very least, you should seek warranties and assurances that procured software does not contain biases, and conduct due diligence to verify this.

“Businesses purchasing software should, if relevant, ask the provider what they have done to eliminate bias against certain population groups. These conversations happen a lot in the U.S., and it’s starting to pick up in Europe.”

Desmond Hogan | Head of Global Litigation, Arbitration and Employment, Hogan Lovells

Another problem is a lack of representative data, which can cause technology-enabled products to perform badly for some sections of the population.

For instance, studies in the U.S. have found the error rates for facial recognition software developed by multiple companies to be much higher for African American and Asian faces than for Caucasian faces. In another example, consumer reviews and media reports say that certain brands of wearable health devices monitor the heart rates of people of color far less accurately. That does not just create an inferior product; it could also entrench bias further if data from these wearable devices is used to inform the development of other products.


Misinformation

9%

of businesses identify the spread of misinformation as an important ethical issue to address when investing in technology

The spread of misinformation is the ethical challenge that businesses consider least important when developing and deploying technology.

Why? Maybe because only companies in the media and technology sectors feel directly affected by misinformation and responsible for addressing it.

One such company is Snap Inc., which kept misinformation front of mind as it developed its multimedia messaging platform, Snapchat. “Fighting the spread of misinformation is important to us,” says Dominic Perella, Snap’s Deputy General Counsel and Chief Compliance Officer. “Our platform design doesn’t allow misinformation to spread because much of the interaction on our platform is on a one-to-one or small group communication basis, and because of the way we designed our content platform. You can't forward things – there's no virality.”

Dominic Perella | Deputy General Counsel and Chief Compliance Officer, Snap

Companies outside technology and media might not be directly responsible for the spread of misinformation, but they can take steps to halt it. In June 2020, for example, a number of well-known brands paused advertising on all social media platforms because of concerns that they were propagating misinformation and hate speech.

Three ways to address tech’s ethical challenges

Addressing the ethics of technology should be a top priority for management. If they fail to act, the financial, reputational and litigation-related costs could be considerable.

Three simple steps you can take:

Establish ethical principles that govern technology use

When investing in technology that raises ethical challenges, establish and publish principles that govern how it will be used. This increases customers’, employees’, and other stakeholders’ trust that innovative technology will be deployed within a clear framework.

“The company is currently putting together our position on the ethical use of AI, and will likely publish it internally and externally. It reinforces how committed we are to being transparent about how we use the technology, how we are limiting or mitigating bias, and how we are building in safety, security and privacy by design.”

Matthew Owens | Global Head of Legal, Digital, Novartis

Ensure that the entire business discusses ethical issues

Establishing an ethical position on the use of technology cannot be left to one team within your business. It must be directed by management and involve a variety of business functions, including legal and product teams.

Hold suppliers to the same ethical standards

In the context of AI bias, this means seeking assurances that AI technology does not contain biases. Once the technology has been deployed, make sure it continues to be used in a way that adheres to the company’s ethical principles.

© 2021 Hogan Lovells. All rights reserved. "Hogan Lovells" or the “firm” refers to the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses, each of which is a separate legal entity. Attorney advertising. Prior results do not guarantee a similar outcome.