AI Regulation at a Crossroads

March 25, 2024

Written by: Jenn Markey

Ever since ChatGPT debuted in November 2022, the hype and hysteria surrounding artificial intelligence (AI) have continued to accelerate. Indeed, rarely can you read an article or watch a news clip without AI being inserted into the conversation. With AI-enabled deepfakes, AI-displaced workers, alleged AI theft of intellectual property, and AI-fueled cyberattacks, the raging debate is not only whether and how to regulate AI, but also when and by whom.

Global Legislative Developments and Directives

Early calls for AI governance and industry self-regulation seem to be giving way to more rigid and enforceable legislative efforts. After all, world leaders are loath to repeat the unregulated social media experiment of the past 20 years that led to such unforeseen consequences as the rampant dissemination of misinformation and disinformation, fueling political and social upheaval.

To wit, the European Union is on the verge of passing the first comprehensive piece of AI legislation, the AI Act, which promises to set the global benchmark for AI much as the General Data Protection Regulation (GDPR) did for data privacy. The AI Act lays out prescriptive, risk-based rules for when AI can and cannot be employed, with severe penalties for non-compliance, including fines of up to 7 percent of an enterprise’s global annual revenue.

Meanwhile, the White House issued the Safe, Secure, and Trustworthy Artificial Intelligence Executive Order this past fall, which is broader in scope than the EU AI Act, contemplating everything from consumer fraud to weapons of mass destruction. The order demands more transparency from AI companies about how their models work and establishes labeling standards for AI-generated content. However, an executive order is not a legislative act, and the U.S. has already started down a decentralized path, with individual states proposing their own legislation, including California’s Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, which aims to protect consumer privacy and promote ethical standards in the use of AI. Among its provisions for transparency, accountability, and public engagement, the draft rules would require companies to conduct regular assessments of their AI systems to ensure compliance.

Perspectives and Considerations on AI Regulation

Proponents of AI legislation cite the protection of health, safety, fundamental human rights, democracy, the rule of law, and the environment as paramount. Others, however, worry that legislation will hobble their domestic industry, ceding profits and AI supremacy to rivals, especially bad actors. On this note, the UK has taken a decidedly contrarian position with a pro-innovation approach to AI built on non-binding principles. Still others feel that AI does not warrant regulation that is new or different from that applied to traditional software, arguing that the only difference between the two is the ratio of data-driven to rule-driven outcomes, which makes AI behavior less transparent but not less deterministic.

Then there is the conversation around AI ethics and empathy, or rather the lack thereof. Those favoring a more laissez-faire approach to regulation assert that the ethics and empathy deficit is not really an AI problem per se, but one embedded in the historical data on which large language models (LLMs) are trained, and that it will take time to resolve with or without AI, and with or without regulation.

It seems regulators are damned if they do and damned if they don’t. No one wants to be the hapless bureaucrat that inadvertently enabled Skynet in the Terminator movie series, or the overeager regulator that quashed domestic innovation on the eve of the Fourth Industrial Revolution, ceding AI global leadership and economic prosperity for generations to come.

The path forward will be a balancing act, but the guiding star should be AI that benefits all of humanity, regardless of country, status, or any other factor. A popular framework in this regard is Helpful, Honest, and Harmless AI. Initially proposed by the team at Anthropic, this approach uses supervised fine-tuning and reinforcement learning from human feedback to align a model so that it roots out inaccuracy, bias, and toxicity. This more curated approach can also make the AI more secure, since the model learns to avoid producing harmful or untrue outputs and to flag vulnerabilities.
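For readers curious what that alignment step looks like under the hood, the sketch below illustrates the reward-modeling piece of reinforcement learning from human feedback in PyTorch: a pairwise preference (Bradley-Terry) loss nudges a scalar reward head to score human-preferred responses above rejected ones. The RewardHead class, the preference_loss function, and the random embeddings are hypothetical stand-ins for illustration; this is a minimal sketch of the general technique, not Anthropic's actual implementation.

import torch
import torch.nn as nn

# Toy reward model: maps a response embedding to a scalar score.
# In practice this head sits on top of a pretrained language model.
class RewardHead(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.score(hidden).squeeze(-1)  # one reward per example

def preference_loss(chosen_reward: torch.Tensor,
                    rejected_reward: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: push the reward of the human-preferred
    # response above that of the rejected one.
    return -torch.nn.functional.logsigmoid(chosen_reward - rejected_reward).mean()

# Hypothetical stand-in embeddings for a batch of four preference pairs.
torch.manual_seed(0)
head = RewardHead(hidden_dim=16)
chosen = torch.randn(4, 16)    # responses human labelers preferred
rejected = torch.randn(4, 16)  # responses human labelers rejected

loss = preference_loss(head(chosen), head(rejected))
loss.backward()  # gradients would feed an optimizer step in a real training loop
print(f"preference loss: {loss.item():.4f}")

Once trained on enough human preference pairs, a reward model like this scores candidate outputs so that a subsequent policy-optimization step can steer the language model toward helpful, honest, and harmless behavior.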

Jenn Markey
Advisor, Entrust Cybersecurity Institute
Jenn Markey is a content advisor and thought leader with the Entrust Cybersecurity Institute. Her previous roles with Entrust include VP Product Marketing for the Payments and Identity portfolio and Director Product Marketing for the company’s Identity and Access Management (IAM) business. Jenn brings 25+ years of high-tech product management, business development, and marketing experience to the Entrust Cybersecurity Institute, with significant expertise in content development and curation.