Guardrails or Leash: Two Approaches to Regulating AI


In the world of artificial intelligence (AI), the metaphor of guardrails is commonly used to illustrate how the risks of this technology should be contained: if well-designed guardrails, or acceptable limits, are in place, AI will not go off track. The expression is widely recognized among both designers of AI systems and regulators, who find it appropriate to legislate ex ante to prevent potential harms from a technology capable of both great and terrible outcomes. An article published today in the journal Risk Analysis proposes replacing the guardrail with a different image: the leash used for walking a dog. The metaphor conveys flexible regulation supervised by a human, much as pet owners ensure their animals behave appropriately.

“The issue with the traditional guardrails approach to conceptualizing AI regulation is that, given this technology’s dynamic and variable nature, the path of any metaphorical road—where such regulatory guardrails could be installed—cannot be easily predetermined,” states Cary Coglianese, professor of law at the University of Pennsylvania and lead author of the study. “AI is highly heterogeneous, advancing along more paths than even the best-informed regulators could delineate in advance.”

On roads, guardrails establish boundaries and mark the possible paths allowed for travel. They reduce risk by keeping behavior (in this case, vehicle control) on a predetermined path. When discussing AI guardrails, the authors of the article say, “we refer to a set of immutable rules that keep technology on an acceptable course, ensuring that users remain safe and that society does not plummet toward a metaphorical cliff.” Guardrails impose prohibitions and mandatory standards on developers to prevent specific outcomes.

The European Union's AI Act, for example, prohibits several AI applications outright, such as social scoring systems and real-time remote biometric identification. However, it also incorporates elements of the approach proposed in the article, requiring constant audits.

Some AI tools are designed for specific purposes, such as detecting skin cancer or making movie recommendations. Others, like large language models (LLMs), serve as the basis for generative or general-purpose AI tools that can be used for a wide range of tasks, including programming, writing, and text summarization. AI systems can also have very different architectures, ranging from neural networks to simpler linear models, and factors such as the data sources used in training, as well as the chosen training method, can significantly affect the results an AI produces. According to the authors, subjecting such different tools to the same fixed rules amounts, almost by definition, to misstating the problem.

Management-based regulation of AI, the article argues, offers several key advantages over the guardrail approach, as it "better responds to the novel uses and problems of AI and allows for greater exploration, discovery, and technological change." At the same time, the management systems such regulation requires "still provide a controlled framework that, like a leash, can help prevent AI from going 'off the rails.' This model emphasizes the need for continued human oversight throughout the AI lifecycle (training, validation, and testing) and reinforces the importance of ongoing human monitoring and accountability."

Pros and Cons of Each Model

The risk management model proposed in the article, however, depends on continuous human oversight. "Management-based regulation seeks to ensure that companies developing and using AI in ways that may pose risks to consumers and society exercise constant supervision, holding firm control of the leash, even as its flexibility allows space for exploration and innovation."

This implies that AI developers would be subject to audits, impact assessments, and continuous updates to prevent their risk management from deteriorating. In other words, more transparency. "This would eliminate the need for regulators to have the same level of knowledge about AI as each company and avoid setting unrealistic barriers to technological advancement," says Coglianese.

In the case of the self-driving car that struck and killed a pedestrian in Arizona in 2018, the article suggests that an external audit might have recommended more nighttime testing, potentially preventing the accident. And in the case of the British girl whose suicide, as authorities established, was partly driven by social media, the leash approach might have led Pinterest and Instagram, the platforms where she consumed content promoting self-harm, to map their risks and analyze their contribution to mental health problems.

So, are leashes better than guardrails? “In my opinion, none of the ways we have to regulate AI are good,” says Lorena Jaume-Palasí, an expert in the philosophy of law applied to technology. “To understand the proposal of the article, one must consider that its author comes from the U.S., where there is a very different culture and legal architecture than in Europe. American professors generally do not understand that the way they regulate does not work with the legal structures we have,” she adds.

For instance, across the Atlantic, cost-benefit analysis prevails: if the benefits outweigh the costs, it seems legitimate to, say, release a medication even if it has adverse effects. Europe is stricter. "Here you must understand and identify the risks your product entails and demonstrate that you have already attempted to mitigate them. If you haven't done so, you cannot market your product." Without guardrails, no leash will suffice.
