Video summary of the White House executive order on AI

I read the executive order on AI from the White House, wrote a summary and used an AI-powered video generator to create the appearance of me presenting it. In accordance with suggestions in the order, the video is clearly labeled.

🗣️ Interested in trying out the HeyGen tool? Sign up using this link to give me a bump in credit. 😊

🗒️ Note: Everyone calls this an executive order on AI, but it's worth noting that it really addresses all types of automated systems. I don't say this clearly in the video, but you can infer as much from some of the wording.

Here is the full script with added subheadings to assist readability:

Introduction

Here is my brief summary of the White House executive order on AI.

The White House recognizes that AI holds potential for both good and harmful outcomes.

The executive order is a Federal Government-wide approach to governing the development and use of AI safely and responsibly.

The intent is to establish guidelines and best practices, and promote consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems.

Developer responsibilities

Developers of AI will be expected to perform red-teaming tests and regularly report back to the government on details such as large-scale computing clusters, connections with foreign personnel and suppliers, and specific capabilities, such as those relevant to biological weapons.

Addressing synthetic content

To reduce risks posed by synthetic content, it should be subjected to labeling, including watermarking.

Methods should also be put in place for detecting and establishing the authenticity and original source of digital content.

Specifically, generative AI should be prevented from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals.

Private sector consultation

To inform and encourage well-founded decision-making around dual-use foundation models, input should be solicited from the private sector, academia, civil society, and other stakeholders.

The consultation process should be public and collect input on potential risks, benefits, other implications, and appropriate policy and regulatory approaches.

Talent acquisition

To promote innovation and competition, efforts will be put into attracting AI talent to the United States, including streamlining the processing of visa petitions and applications.

A guide for experts in AI will be published, in multiple languages, on AI dot gov.

Innovation and intellectual property

With regard to innovation, steps will be taken to update guidance on patent eligibility in AI and critical and emerging technologies, and to address copyright issues raised by AI.

There should be dedicated personnel for collecting and analyzing reports of AI-related IP theft.

Efforts in safety, risk mitigation, fairness, and preparedness will also apply to healthcare, electricity provision, and climate-related threats.

Competition

To ensure fair competition in the AI marketplace, and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI, the Federal Trade Commission (FTC) is called upon to exercise its existing authorities.

Special attention is given to competition and innovation in the semiconductor industry, given that semiconductors are critical to AI.

Worker rights

Within the area of worker rights, ways of supporting workers displaced by the adoption of AI will be investigated.

To support wellbeing in the workplace, principles and best practices will be developed to help employers mitigate potential harms to employees, and maximize potential benefits.

This will address job displacement risks, employers' AI-related collection and use of data about workers, as well as compensation when work is monitored or augmented by AI.

Justice system and law enforcement

In order to advance equity and civil rights, technical training and assistance should be provided to State, local, Tribal, and territorial investigators and prosecutors on the topic of civil rights violations and discrimination related to automated systems, including AI.

With respect to the use of AI in the criminal justice system, a report will address usage within sentencing, parole, probation, risk assessments, police surveillance, predictive policing, prison-management tools and more.

This will also identify areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy and civil rights.

Best practice recommendations will be provided to law enforcement agencies on safeguards and use-limits.

Addressing discrimination and bias

To strengthen AI and civil rights in the broader economy, there will be guidance on non-discriminatory hiring when tech- and AI-enabled hiring systems are involved.

This will also address discrimination and biases against protected groups in housing markets and consumer financial markets.

Specifically, it must be ensured that people with disabilities benefit from the promise of AI and are protected from its risks.

The order gives examples of unequal treatment resulting from the use of biometric data, such as gaze direction, eye tracking, gait analysis, and hand motions.

Protecting consumers

Consumers in general must be protected from fraud, discrimination, and threats to privacy or financial stability.

Hence it should be clarified how existing regulations apply to AI, including requirements related to transparency and the ability to explain the workings and usage of AI models.

This should contribute to ensuring safe and responsible deployment in the healthcare, public health, and human services sectors.

Overall, equity principles should be embedded when AI is used in the health and human services sector, including the monitoring of algorithmic performance to prevent discrimination and bias.

This also presumes the incorporation of safety, privacy, and security standards into the software-development lifecycle.

Users should be able to determine appropriate and safe uses of AI in local settings based on the availability of documentation.

Strategies and guidance are likewise expected when AI is present in drug-development processes, transportation, and education, the latter taking into consideration the impact on vulnerable and underserved communities.

Robocalls and robotexts are called out specifically; it should become easier to block them so that the problem is not exacerbated by AI.

Government agency practices

To advance federal government use of AI, agencies will be provided guidance on effective and appropriate use, including innovation and risk management.

All agencies will be required to appoint a Chief Artificial Intelligence Officer, responsible for risk assessment, promotion, and coordination of AI usage.

A process for external testing, including AI red-teaming for generative AI, will be developed in coordination with the Cybersecurity and Infrastructure Security Agency.

General blocks on agency use of AI are discouraged; instead, agencies should establish guidelines and limitations on the appropriate use of generative AI, encouraging personnel to safely learn and experiment within routine tasks that carry a low risk of harmful impact.

Taking the lead

Finally, the White House wants the United States to lead efforts outside of military and intelligence areas to expand engagements with international allies and partners. The goal is to establish a strong international framework for managing the risks and harnessing the benefits of AI.

A plan for global engagement on AI standards may include terminology, best practices for data capture and privacy, trustworthiness and verification processes, as well as risk management strategies.

The White House executive order is far-reaching and comprehensive. With this summary I hope to have provided insights into the many nuances of regulating a technology as elusive as AI.

Learn more

If you're interested in more coverage of AI and tech, related to human rights and ethics, you are welcome to follow me on LinkedIn, read my blog or listen to my podcasts. I would love to hear your impressions and reflections after watching and listening to a summary where my likeness is synthetic.

You will find all my details and links on axbom.info.

Also read

Benefits and risks of synthetic video and audio
In the one-minute video below I am speaking seven languages. In truth, I can speak two of those. None of the audio is actually me speaking, even if it sounds very much like my voice. And the video? Despite what it looks like, that’s not me moving my mouth.
