People are asking how we should regulate AI, not why we should do it




Regulators face distinct challenges when dealing with the emergence of advanced artificial intelligence (AI) systems like ChatGPT. They must address two contrasting risks: harm caused when the technology malfunctions, and harm caused when it works exactly as intended.

The rise of generative AI, which autonomously produces text or images, has introduced a unique set of concerns due to its remarkable power and uncontrollable nature. This potent combination poses significant challenges for those aiming to mitigate potential harm.

Before the advent of ChatGPT, AI regulation primarily focused on controlling the application of the technology, particularly in high-risk domains like healthcare. However, the emergence of a versatile chatbot capable of disrupting various aspects of human activity has prompted a different question: Should AI models themselves be subject to regulation?

General-purpose technologies, such as AI, present a formidable dilemma for regulators. Distinguishing between benign and malicious uses of AI proves challenging. Moreover, AI developers acknowledge their inability to fully explain the inner workings of the technology or predict how specific prompts will lead to particular outputs.

Encouragingly, substantial efforts are underway to address the unique challenges posed by this technology. Governments worldwide now face the decision of whether to support these endeavors through formal regulation.

The large language models that underpin generative AI services like ChatGPT present a fundamental problem for anyone trying to assess their effectiveness: it is hard to specify their intended objectives clearly, and harder still to measure whether those objectives are being met. Their outputs are not repeatable, and assessments of their quality remain highly subjective.

In the United States, the National Institute of Standards and Technology collaborates with experts to establish standards for the design, testing, and deployment of these systems.

Transparency also emerges as a crucial factor, potentially enabling external scrutiny of the models. However, comprehending the workings of a learning system like a large language model is not as straightforward as exposing the code in traditional software.

Companies such as OpenAI and Google, concerned about the apprehension surrounding such powerful yet opaque technology, strive to enhance openness. Following a recent visit by the CEOs of four leading AI companies to Washington, the White House announced their commitment to subjecting their models to external evaluation at the annual DEF CON cyber security conference in August.

Setting safety standards, increasing transparency about the models’ inner workings, and allowing external experts to assess them can all enhance trust in large language models. However, the question remains: What form should formal regulation take, and how should it restrict models considered threatening?

One potential approach, suggested by Alexandr Wang, CEO of Scale AI, involves treating AI the way GPS is treated, restricting the most powerful versions to specific applications. Yet imposing such constraints in a competitive technology market proves challenging. Another recommendation, proposed by OpenAI CEO Sam Altman, involves subjecting large language models to direct regulatory oversight: a licensing system could ensure adherence to safety standards and appropriate vetting.

However, this approach has evident drawbacks. It risks creating a separate market of regulated models controlled by a handful of companies with the necessary resources to operate in a highly regulated environment.

Furthermore, the rapidly evolving nature of AI development poses additional difficulties. The most advanced models today inevitably become commonplace software tomorrow. Simultaneously, some capabilities currently exclusive to large all-purpose models like ChatGPT may soon be present in much smaller systems designed for narrower tasks.

Unfortunately, no easy solutions present themselves. Nevertheless, with technologists themselves advocating for oversight of intelligent bots, some form of direct regulation appears inevitable.



Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
