Benton Institute for Broadband & Society

Friday, April 25, 2025

Weekly Digest

GAO Assesses Artificial Intelligence and Finds Five Risks and Challenges

 You’re reading the Benton Institute for Broadband & Society’s Weekly Digest, a recap of the biggest (or most overlooked) broadband stories of the week. The digest is delivered via e-mail each Friday.

Round-Up for the Week of April 21-25, 2025

Grace Tepper
Generative Artificial Intelligence (AI) has exploded in popularity in recent years, altering the technological landscape worldwide. To understand how these seismic shifts will affect communications systems in the United States, the Government Accountability Office (GAO) assessed the environmental and human effects of generative AI.

A Bit of Background from GAO

GAO provides some context about how, and when, AI became what it is today. Generative AI systems produce outputs using algorithms that are often trained on text and images obtained from the internet. Technological advancements in the underlying systems and model architectures since 2017, combined with the open availability of these tools to the public starting in late 2022, have led to widespread use. Generative AI has the potential to revolutionize entire industries.

Training one large generative AI model (that is, the result of an algorithm "trained" on a set of data) can take tens of thousands of processors running for months and may cost several hundred million dollars. Data centers, which are needed to run generative AI and other internet-related technologies, are being built with energy needs of 100 to 1,000 megawatts, roughly equivalent to powering 80,000 to 800,000 households (an average draw of about 1.25 kilowatts per household).

GAO's report expands on the environmental impact of generative AI. The technology uses significant energy and water resources, but companies generally do not report details of these uses. Most estimates of generative AI's environmental effects have focused on quantifying the energy consumed to train models and the carbon emissions associated with generating that energy. Estimates of water consumption by generative AI are limited. Generative AI is expected to be a driving force behind data center demand, but it is unclear what portion of data center electricity consumption is attributable to generative AI. According to the International Energy Agency, U.S. data center electricity consumption was approximately 4 percent of U.S. electricity demand in 2022 and could reach 6 percent of demand in 2026.

GAO's report includes a full timeline of federal actions on AI to date, and detailed information about the environmental impacts of AI. For the remainder of this article, we focus on GAO's analysis of the human effects of generative AI specifically.

Risks and Challenges of Generative AI Development and Use

GAO highlights five AI risks and challenges that could result in substantial human effects. GAO also describes some common mitigation techniques used by commercial AI developers.

1. Unsafe Systems

Generative AI systems may produce outputs, such as inaccurate information and undesirable content, that compromise safety. Users may be exposed to inaccurate information created deliberately (e.g., deepfakes) or produced by the model itself, whether as hallucinations (incorrect or misleading information that appears plausible but is not factual or accurate) or confabulations (e.g., inaccurate legal or medical advice).

Undesirable content may have significant consequences, such as the generation and publication of explicit images of a nonconsenting subject. Bad actors might use generative AI to acquire or distribute instructions on how to create weapons.

Assessing the safety of a generative AI system is inherently challenging. These systems largely remain "black boxes," meaning even their designers do not fully understand how they generate outputs. Without a deeper understanding, developers and users have a limited ability to anticipate safety concerns and can only mitigate problems as they arise.

Limitations in assessment techniques and the choice of metrics may prevent accurate predictions of system capabilities. In addition, unintentional or unexpected abilities, sometimes called "emergent abilities," may not become apparent until a model is fully developed or deployed. Another potential emergent safety risk is loss of control, in which a system devolves into threatening users with blackmail, claiming to spy on individuals, or other harmful behavior. In contrast, safe AI systems address these concerns and do not lead to a state in which human life, health, or the environment is endangered.

2. Lack of Data Privacy

Generative AI systems could inadvertently disclose users' personal information. Training data for large generative AI systems often includes information from the internet that, although publicly available, may contain personal information. That personal information could then be inadvertently revealed to any user.

Generative AI could also lead to the disclosure of personal information because of the vast amount of data these systems require. For example, leveraging AI for health care may raise privacy concerns about individuals' medical data. Notably, many existing systems have terms of service that allow companies to reuse user data. These concerns may be particularly pertinent for generative models used with sensitive information.

3. Cybersecurity Concerns

Cybersecurity attacks can circumvent the security safeguards of generative AI systems, facilitating the unsafe and privacy-compromising uses noted above. Specifically, generative AI systems are vulnerable to prompt injection (embedding malicious instructions in a system's inputs), data poisoning (corrupting the data a model is trained on), and jailbreaks (crafting prompts that bypass a system's safeguards), among other attack types.

Generative AI tools may be used to enable or augment cyberattacks. In particular, bad actors have used these systems to:

  • Generate more convincing scams, malicious code, and deceptive content;
  • Efficiently produce high volumes of convincing text for scammers; and
  • Trick users into sharing personal data.

4. Unintentional Bias

Unintentional bias can be present in generative AI systems because of statistical, contextual, historical, and human cognitive biases in the training sources used to develop and maintain the systems. Examples of biased output include text or images that replicate stereotypes, as well as outputs that reproduce conventional content instead of content more relevant to the user's context or expectations. In contrast, a fair and impartial system would be free of unintentional bias and would provide equitable application, access, and outcomes.

Bias can result in inequitable access to the benefits of generative AI. For example, since training sets are largely in English, generative AI systems may not work as well for people who do not speak English.

5. Lack of Accountability

The impact of AI harms would likely be compounded by the challenge of identifying the accountable party. This challenge is rooted in some of the core attributes of generative AI systems, which, as noted above, largely remain "black boxes." According to experts, users tend to have limited resources and options for recourse in the event of harm caused by AI output.

Adding to the black-box problem is a lack of information about the source of a generative AI system's training data, known as data provenance. Although many companies investigate and report on system behavior, often in model or system cards, they typically provide limited information on the training data used in model development. Without that information, it is difficult to evaluate the training, which hinders independent research on model behavior and limits transparency.

A related accountability challenge arises from deepfake videos and other generated content, which can be used to deceive or harass people. It can be difficult to identify deepfakes or trace them to their creators. Conversely, accountability can be enabled if developers communicate what the generative AI system did (transparency), how the system generated outputs (explainability), and how a user can make sense of outputs (interpretability).

Policy Options for an AI-Filled Future

GAO identifies policy options that Congress, federal agencies, state and local governments, academic and research institutions, and industry could consider to enhance the benefits and address the risks and challenges of generative AI. Alternatively, policymakers could choose to maintain the status quo, taking no additional action beyond current efforts.

1. Maintaining the status quo

Opportunities and considerations:

  • Some policy efforts are already underway to address the specific challenges related to the human effects of developing and using generative AI. For example, the Office of Management and Budget (OMB) issued a memorandum that requires federal agencies to establish adequate safeguards and oversight mechanisms that allow generative AI use without posing undue risk. If these efforts continue, they could help address many of the challenges GAO identified and minimize potential negative outcomes.
  • Although some efforts direct agencies to take actions that might address some of the challenges enumerated in GAO's report, not all of these directed actions are complete, although agencies are making progress.
  • Existing policy actions relevant to AI in general, some of which are not fully implemented, may not fully address the specific challenges related to the human effects of generative AI.

2. Policymakers could encourage the use of available AI frameworks to inform generative AI use and software development processes

Potential implementation approaches:

  • Government policymakers could encourage the use of available AI frameworks.
    • Frameworks, such as GAO's AI Accountability Framework and the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, are publicly available on the agencies' websites.
  • Developers could create acceptable-use policies that inform a product's user community of the rules they must adhere to while using the developer's product.
    • Generative AI developers that GAO interviewed stated that they maintain and revise these use policies as their products are updated.
  • Developers could use available frameworks to inform their software development processes.
    • Developers could increase internal and external independent review of generative AI systems before and after deployment.

Opportunities and considerations:

  • Developers can use these frameworks to manage risks and challenges of generative AI development and use and to increase public transparency and other trustworthiness characteristics.
  • Available frameworks can promote the creation of and updates to acceptable-use policies and inform developers' generative AI software development processes. Developers can monitor user adherence to these policies.
  • Standards and best practices could be created through the voluntary application of available frameworks.
  • Internal testing and external independent review methods that apply these frameworks may be insufficient, costly, and time-consuming.
  • Available frameworks may not sufficiently address the human effects of new technology developments in generative AI.

3. Policymakers could continue to expand efforts to share best practices and establish standards

Potential implementation approaches:

  • Government policymakers could encourage the generative AI technology industry to share best practices and establish standards.
    • For example, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, and maintaining AI management systems that demonstrate responsible use of AI and enhance traceability, transparency, and reliability.
  • Industry or other standards-developing organizations could identify the areas in which best practices and standards would be most beneficial across different sectors or applications that use generative AI technologies. Then those organizations could develop and periodically update those standards to help ensure that they remain current and relevant.

Opportunities and considerations:

  • Expanding efforts to share best practices could require policymakers to establish new mechanisms to enhance collaboration.
    • For example, efforts could require the adoption of knowledge-sharing mechanisms to disseminate best practices for managing challenges related to human effects.
  • It may not be clear which entities should take the lead in establishing standards for generative AI technologies and application areas. New standards may need to come from an authoritative organization within each application area affected by generative AI technologies.
  • Achieving consensus among many public- and private-sector stakeholders can be time- and resource-intensive. GAO previously reported that the development of standards requires multiple iterations that can take anywhere from 18 months to a decade.
  • New efforts to share best practices and establish standards may require new funding or reallocation of existing resources.
  • As industry continues rapidly developing generative AI, industry may need to perform and share additional research to identify new risks and challenges before efforts to establish standards begin.

The full report is available on the GAO website.


Upcoming Events

April 28 – April 2025 Open Federal Communications Commission Meeting (Federal Communications Commission)

April 30 – Global Networks at Risk: Securing the Future of Telecommunications Infrastructure (House Commerce Committee)

April 30 – Executive Session: Trusty Nomination for the FCC (Senate Commerce Committee)

May 14-15 – Community First: The Future of Public Broadband Conference and Hill Day (American Association for Public Broadband)

June 1 – Fiber Connect 2025 (Fiber Broadband Association)

The Benton Institute for Broadband & Society is a non-profit organization dedicated to ensuring that all people in the U.S. have access to competitive, High-Performance Broadband regardless of where they live or who they are. We believe communication policy - rooted in the values of access, equity, and diversity - has the power to deliver new opportunities and strengthen communities.


© Benton Institute for Broadband & Society 2025. Redistribution of this email publication - both internally and externally - is encouraged if it includes this copyright statement.


For subscribe/unsubscribe info, please email headlinesATbentonDOTorg

Kevin Taglang
Executive Editor, Communications-related Headlines
Benton Institute
for Broadband & Society
1041 Ridge Rd, Unit 214
Wilmette, IL 60091
847-220-4531
headlines AT benton DOT org


Broadband Delivers Opportunities and Strengthens Communities


By Grace Tepper.