
Artificial Intelligence: Ethical Concerns and Sustainability Issues

We unpack the ethical, legal and environmental challenges of artificial intelligence in the modern world.

By Jake Hense
11/17/2023

Key Takeaways

  • Artificial intelligence (AI) presents great opportunities but also involves serious reputational and cybersecurity risks and creates human capital management minefields.

  • AI poses societal risks, including data privacy violations, reduced access to financial services, malicious misrepresentations and intellectual property abuses.

  • AI’s enormous appetite for energy and its reliance on water to cool servers have significant environmental consequences.

Our series on the opportunities presented by artificial intelligence (AI) has explored the transformational potential of this technology and how it is being integrated into the global economy. Exciting and valuable as AI is, we must evaluate its benefits and potential drawbacks thoughtfully. AI’s power and wide-ranging applicability mean its impacts on society, both good and bad, are likely to be far-reaching. The adage “With great power comes great responsibility” is especially true for AI.

AI, Sustainability and Risk Management

At American Century Investments, we integrate sustainability analyses based on financial materiality into our risk management framework. This perspective is crucial to understanding the risks associated with AI. By considering the technology’s societal, corporate governance and environmental aspects, we can identify ways to support its responsible, sustainable use. Ignoring these aspects could open the door to a range of abuses that cause real harm to the economy, individuals and the environment.

In this article, we evaluate AI’s potential reputational, ethical, legal and societal impacts and the new cybersecurity risks and environmental concerns it creates.

Human Capital and Brand Reputation: An AI Minefield

In a 2022 survey, MIT Technology Review Insights asked participants to identify the tangible business benefits of adopting responsible technology practices. The top three responses were:

  • Better customer acquisition and retention.

  • Improved brand perception.

  • Preventing negative unintended consequences and associated brand risk.1

While AI can support these efforts, without proper oversight, it could also seriously damage a company’s reputation, harm profits and even threaten its long-term sustainability.

These risks aren’t just hypothetical. In 2018, Amazon scrapped an AI recruiting tool after determining that the algorithm, trained on data about prior job applicants who were primarily male, created a clear bias against female applicants.2 Any company that uses a potentially biased algorithm in hiring faces similar risks, potentially tarnishing its reputation and hurting its ability to attract and retain talent, which could reduce productivity and innovation.

AI’s human capital management risks go beyond biased hiring. Training an AI model often depends on human workers, who both shape the training process and are affected by it. For example, OpenAI, the company behind ChatGPT, needed to train its algorithm to recognize and avoid toxic speech.

This work was performed by a firm that hired workers in Kenya, Uganda and India to view and label graphic descriptions and images of sexual abuse, violence and other deeply disturbing content over nine-hour shifts. Many of these workers experienced nightmares, depression and other mental health problems.3 Making matters worse, they were paid between roughly $1.32 and $2 per hour,4 below the World Bank’s lowest-income poverty line.5

Companies that use AI will face increased scrutiny as it becomes more widespread. C-suite executives, managers and investors must be aware of the human capital element of the technology and understand that it could have unintended, harmful effects on employees and a company’s reputation.

Intersection of Artificial Intelligence Ethics with Legal Concerns

Companies face various ethical and legal questions arising from the impact of AI on individuals. For example, as AI becomes more common in health care, its use in medical diagnoses, treatments and patient care will be largely invisible to patients. The Food and Drug Administration has approved more than 130 AI-based clinical decision-support tools in different therapeutic areas over the past five years.6

While AI algorithms “learn” from their mistakes and AI-based diagnostic tools have shown a remarkable ability to support human judgment and free up more time for patients, doctors shouldn’t just hand over their stethoscopes to these algorithms.

Who Is Liable for AI’s Mistakes?

A recent study found that clinicians’ answers to medical questions contained inappropriate or incorrect information only 1.4% of the time. In comparison, 18.7% of the responses from Google’s medicine-specific large language model (the type of model used in ChatGPT, Bard and others) were inappropriate, and 16.1% were incorrect.7 Furthermore, the model’s answers displayed evidence of incorrect information retrieval and reasoning, which could have life-threatening consequences in practice.8 Of course, human physicians aren’t infallible, but there could be a tendency to trust technology in a way that hasn’t been earned.

Given the potential for AI to make mistakes, who would be liable if an algorithm generated an incorrect diagnosis or prescribed a harmful treatment? The company that developed the algorithm or the entity (e.g., hospital, medical professional) that chose to apply it? Similarly, if a driverless car causes an accident, who is responsible — the AI software developer or the owner of the fleet of cars?

Denied Claims, Privacy Violations and Intellectual Property Rights

The uncertainties and potential liabilities arising from AI-based decision-making will pressure businesses to use AI transparently and ethically. However, AI models are typically black boxes — even the people who build them don’t know how the models reach their conclusions. This could have far-reaching and troubling consequences.

For example, insurance providers’ use of AI to review claims more efficiently has left many patients with unexpected medical bills. During a two-month period in 2022, health insurer Cigna denied more than 300,000 requests for payment using an AI tool that spent an average of 1.2 seconds on each case.9 Such practices may invite regulatory scrutiny because insurance regulations in many states require doctors to review claims before health insurers can reject them.
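
Those figures imply strikingly little human attention per claim. Using the numbers above:

$$300{,}000 \times 1.2\ \text{seconds} = 360{,}000\ \text{seconds} \approx 100\ \text{hours}$$

In other words, all 300,000 denials together represent roughly 100 hours of total review time.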

AI could lead financial services firms to violate consumers’ privacy. The U.K.’s Financial Conduct Authority found that credit card data had been mined to detect when cardholders sought marriage counseling; an AI algorithm then reduced those cardholders’ credit limits based on a correlation between marital difficulties and credit card defaults.10

AI algorithms may cause a bank to charge different loan rates or deny credit based on personal characteristics, including religious or political affiliations and shopping habits. These biases, which may result from inaccurate information, could lead to greater financial exclusion and distrust in the technology, especially among society’s most vulnerable.11

Deploying AI also introduces a range of new legal and regulatory issues, including lawsuits over intellectual property rights and questions about the fair use of content in creative industries. In one case, a group of artists filed a federal class-action lawsuit against Stability AI, Midjourney and DeviantArt, alleging that the companies’ use of AI violated the Digital Millennium Copyright Act and raising claims involving rights of publicity and unlawful competition.12 Companies that use third-party content to train AI algorithms must address the legal and regulatory risks surrounding intellectual property, particularly when using generative AI that creates new images and written content.

Potential Threats of Artificial Intelligence in Cyber Security

The proliferation of AI is part of the digitalization wave in every industry. While a robust digital infrastructure supports economic growth and provides access to essential services, rapid growth in the digital economy makes cybersecurity a sustainability issue. Cybersecurity firm Barracuda found that ransomware attacks on municipal, health care and educational entities quadrupled from August 2021 to July 2023.13

The risk that AI will be used in cyber threats is real and growing. Generative AI used in chatbots and deepfake technologies helps bad actors create more accurate and convincing scams. One phishing attack used Facebook Messenger chatbots to impersonate Facebook’s support team and steal the credentials used to manage Facebook pages.14

Given the increased use of customer service chatbots, cybersecurity measures will have to address these risks, or companies will likely face embarrassing and financially costly AI-based attacks. In short, while AI’s increasing sophistication represents new opportunities, it also creates financially material cyber risks to almost every business.

Generative AI also creates serious concerns about the legitimacy, accuracy and reliability of the information that we access online. Deepfake technology uses AI to manipulate videos, images and audio to show people saying or doing things they didn’t say or do. The implications, both financial and societal, are frightening. Suppose you saw a photo or video showing a politician, celebrity or CEO doing or saying something offensive or criminal. Would you stop to think that it could be a fake?

In 2020, Panda Security estimated the total costs associated with deepfake scams to be at least $250 million. With the technology still in its early stages, the damages will likely keep growing.15 While some industry groups and governments are considering requiring AI-generated content to be labeled “generated by AI,” enforcing such rules would probably be difficult, and perpetrators with malicious intent would likely ignore them.

AI’s Environmental Footprint

Although AI software doesn’t have physical properties, it can significantly impact the physical world. It requires servers to run the algorithms that produce analyses and create content. All these servers and the physical infrastructure to support them require electricity, which generates carbon emissions unless it comes from green energy sources.

One estimate suggests that satisfying requests over a typical 24-hour period requires ChatGPT to consume 260.42 megawatt-hours (MWh) of electricity.16 By comparison, the average three-bedroom house in the U.S. consumes 11.7 MWh per year.17

ChatGPT consumes more than 20 times more power in a single day than a three-bedroom home uses in an entire year.
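
A quick check of that comparison, using the figures above:

$$\frac{260.42\ \text{MWh per day}}{11.7\ \text{MWh per year}} \approx 22.3$$

That is, one day of ChatGPT usage is roughly equivalent to 22 years of a typical three-bedroom home’s electricity consumption.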

OpenAI found that the computing power needed to train the largest AI models doubled every 3.4 months between 2012 and 2018. Compare that to the period between 1959 and 2012, when it took roughly two years for the compute used by AI models to double. In other words, the compute needs of today’s largest AI models are doubling roughly seven times faster than before.18
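
The “seven times faster” figure follows directly from the two doubling periods:

$$\frac{24\ \text{months}}{3.4\ \text{months}} \approx 7.1$$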

According to University of Massachusetts researchers, training several common large AI models can generate more than 626,000 pounds of carbon dioxide-equivalents — nearly five times the lifetime emissions of the average American car, including the emissions from manufacturing the car itself.19
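
Working backward from that comparison shows the per-car baseline: dividing 626,000 pounds by five implies roughly 125,000 pounds of carbon dioxide-equivalents over an average American car’s lifetime, consistent with the approximately 126,000-pound figure cited in the underlying research.

$$\frac{626{,}000\ \text{lbs CO}_2\text{e}}{5} \approx 125{,}000\ \text{lbs per car}$$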

And that’s not all. This massive use of computer servers and electricity generates heat that must be removed to keep the servers functioning. Currently, most data centers use cooling towers that require significant amounts of fresh water. For example, a training cycle for GPT-3 in Microsoft’s state-of-the-art U.S. data centers uses an estimated 700,000 liters of water; it would take three times that amount if the training were done in the company’s data centers in Asia.20 Microsoft recently disclosed that its global water consumption increased by 34% from 2021 to 2022,21 and third-party researchers tie the increase to Microsoft’s AI development. And it’s not just Microsoft; tech giant Alphabet consumed 5.2 billion gallons of water at its data centers in 2022.22

Consider AI’s energy consumption in the context of the growing demand for electricity as economies electrify to reduce their use of fossil fuels. AI’s water use also worsens water insecurity in areas already stressed by droughts that climate change is intensifying. In August 2022, over 37% of the U.S. was in severe drought or worse, and summer 2023 was the hottest on record.23, 24

Combined with the pressure on electric grids to power homes and the growing fleets of electric vehicles, power consumption associated with AI poses a material risk to the world’s scarce resources. Businesses and communities seeking to operate sustainably must consider these costs and the trade-offs they involve.

Addressing AI Risks and Integrating Sustainability

Artificial intelligence represents transformational opportunities for the world economy, health care, education and other aspects of society. However, as the technology becomes more widespread, the potential risks increase. Managing these risks will require effective governance, oversight, standards and best practices to advance AI responsibly and sustainably. In some cases, the AI industry is already taking steps to address these issues itself.

Capgemini, a multinational information technology services and consulting company, created an internal Code of Ethics for AI that guides its approach and commitment to developing trustworthy and ethical AI tools.25 Employees are trained to apply these principles in their work, and the company uses the framework as a compliance mechanism in its third-party relationships.

Adobe created an Ethics Advisory Board to oversee how AI development requirements are implemented and respond to ethics concerns and whistleblowers.26

Microsoft offers resources such as its AI Business School to help companies create an effective AI strategy, enable an AI-ready culture and innovate and scale the technology responsibly.27 IBM is developing a cybersecurity solution using AI to improve monitoring capabilities. A third-party study showed these potential benefits for a company utilizing IBM’s tool: a return on investment of 239%, a net present value of $4.31 million, a 90% reduction in analyst time spent investigating incidents, and an overall 60% reduction in the risk of experiencing a significant security breach.28
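
For readers curious how figures like a 239% return on investment and a $4.31 million net present value are typically derived in such studies, here is a minimal sketch of the standard ROI and NPV formulas in Python. All cash flows and the discount rate below are hypothetical placeholders, not figures from the IBM study.

```python
# A minimal sketch of the ROI and NPV math used in total-economic-impact
# studies. Every input below is a hypothetical placeholder.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(benefits_pv: float, costs_pv: float) -> float:
    """Return on investment: net present benefits divided by present costs."""
    return (benefits_pv - costs_pv) / costs_pv

# Hypothetical three-year scenario, discounted at 10%: a $1.0M upfront
# cost plus $300K per year in ongoing costs, against $2.2M per year in
# risk-reduction and productivity benefits.
rate = 0.10
costs_pv = npv(rate, [1_000_000, 300_000, 300_000, 300_000])
benefits_pv = npv(rate, [0, 2_200_000, 2_200_000, 2_200_000])

print(f"NPV of net benefits: ${benefits_pv - costs_pv:,.0f}")  # roughly $3.7M
print(f"ROI: {roi(benefits_pv, costs_pv):.0%}")                # roughly 213%
```

As the placeholder scenario shows, reported figures depend heavily on the assumed cash flows and discount rate, which is why such study results are best read as illustrative rather than guaranteed.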

Responding to the environmental impact of AI development, Internet Initiative Japan uses outside-air cooling techniques to reduce the energy and water required to operate its data centers. It is also developing an AI tool that optimizes its air conditioning settings based on fluctuating weather conditions.29 Separately, semiconductor company AMD offers a product that reduces customer costs by half and cuts power consumption by about 30% to 40%.30

These efforts suggest that the tech industry realizes the importance of managing AI’s risks and potential liabilities. Still, governments aren’t leaving it to the industry to self-regulate. Legislative bodies worldwide are moving to address concerns about AI’s potential impacts on society.

In June 2023, the European Parliament approved its negotiating position on the EU AI Act, intended to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.” The Parliament said AI systems should be “overseen by people, rather than by automation, to prevent harmful outcomes.”31

In September 2023, the U.S. Senate convened tech leaders, including Bill Gates, Mark Zuckerberg and Elon Musk, to discuss AI risks. Participants acknowledged the tremendous benefits AI could bring to the world but also its potential to create new forms of discrimination, threats to national security and other risks.

As our series of articles on AI demonstrates, this technology is transforming how people work and businesses operate. But while the potential benefits are immense, we must also address its legal, societal and environmental risks. In our view, this calls for robust governance and oversight measures, a commitment to transparency, and regulations designed to manage these risks and societal impacts.

Authors
Jake Hense

Sustainable Research Analyst


References

1. MIT Technology Review Insights, “The State of Responsible Technology,” January 2023.

2. Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018.

3. Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time, January 18, 2023.

4. Perrigo, “OpenAI Used Kenyan Workers.”

5. World Bank, “Understanding Poverty,” accessed October 13, 2023.

6. Simon P. Rowland, J. Edward Fitzgerald, Matthew Lungren, et al., “Digital Health Technology-Specific Risks for Medical Malpractice Liability,” NPJ Digital Medicine 5, 157 (2022).

7. Emily Harris, “Large Language Models Answer Medical Questions Accurately, but Can’t Match Clinicians’ Knowledge,” JAMA 330, no. 9 (2023): 792-794.

8. Harris, “Large Language Models.”

9. Patrick Rucker, Maya Miller, and David Armstrong, “How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them,” ProPublica, March 25, 2023.

10. Artificial Intelligence/Machine Learning Risk & Security Working Group, “Artificial Intelligence Risk & Governance,” Wharton AI & Analytics for Business, accessed October 12, 2023.

11. El Bachir Boukherouaa, Ghiath Shabsigh, Khaled Alajmi, et al., “Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance,” International Monetary Fund Departmental Paper No. 2021/024, October 22, 2021.

12. Benj Edwards, “Artists File Class-Action Lawsuit Against AI Image Generator Companies,” Ars Technica, January 16, 2023.

13. Fleming Shi, “Threat Spotlight: Reported Ransomware Attacks Double as AI Tactics Take Hold,” Barracuda, August 2, 2023.

14. Bill Toulas, “Malicious Messenger Chatbots Used to Steal Facebook Accounts,” Bleeping Computer, June 28, 2022.

15. Panda Security, “Deepfake Fraud: Security Threats Behind Artificial Faces,” August 10, 2021.

16. Zodhya Tech, “How Much Energy Does ChatGPT Consume?” Medium, May 19, 2023.

17. Sam Wigness, “What’s the Average Electric Bill for a 3-Bedroom House?” Solar.com, October 3, 2023.

18. Karen Hao, “The Computing Power Needed to Train AI Is Now Rising Seven Times Faster Than Ever Before,” MIT Technology Review, November 11, 2019.

19. Karen Hao, “Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes,” MIT Technology Review, June 6, 2019.

20. Pengfei Li, Jianyi Yang, Mohammad A. Islam, et al., “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” arXiv, October 25, 2023.

21. Microsoft, “2022 Environmental Sustainability Report,” May 10, 2023.

22. Google, “2023 Environmental Report,” July 24, 2023.

23. Li et al., “Making AI Less ‘Thirsty.’”

24. NASA, “NASA Announces Summer 2023 Hottest on Record,” News Release, September 14, 2023.

25. Capgemini, “Code of Ethics for AI,” accessed October 16, 2023.

26. Adobe, “AI Ethics,” accessed October 16, 2023.

27. Microsoft, “AI Business School,” accessed October 17, 2023.

28. IBM, “IBM Security QRadar SIEM,” accessed October 17, 2023.

29. Internet Initiative Japan, “Data Centers: Societal Role and Challenges,” accessed October 18, 2023.

30. AMD, “Northern Data Takes HPC to New Affordability Levels with AMD,” 2021.

31. European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” June 14, 2023.

References to specific securities are for illustrative purposes only and are not intended as recommendations to purchase or sell securities. Opinions and estimates offered constitute our judgment and, along with other portfolio data, are subject to change without notice.

The opinions expressed are those of American Century Investments (or the portfolio manager) and are no guarantee of the future performance of any American Century Investments' portfolio. This material has been prepared for educational purposes only. It is not intended to provide, and should not be relied upon for, investment, accounting, legal or tax advice.