Paper clips, parrots and safety vs. ethics

Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing on Tuesday, May 16, 2023, in Washington, DC. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of the creative industries and the ability to tell fact from fiction.

Eric Lee | Bloomberg | Getty Images

Last week, OpenAI CEO Sam Altman charmed a roomful of politicians over dinner in Washington, DC, and then spent nearly three hours testifying about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summarized his stance on AI regulation, using terms not commonly known to the general public.

“AGI safety is really important and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it means a far more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way of talking about the AI systems that are the most expensive to build and that analyze the most data. Large language models like OpenAI’s GPT-4 are frontier models, as compared with smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that as the pace of development accelerates, there must be laws governing AI.

“Machine learning, deep learning, has developed very rapidly in the last decade or so. When ChatGPT came out, it evolved in ways we never imagined could happen so quickly,” said My Thai, a computer science professor at the University of Florida. “We worry that we’re getting into a more powerful system that we don’t fully understand and can’t anticipate what it can do.”

But the language of this debate reveals two major camps among academics, politicians and the tech industry. Some are more concerned about what they call “AI safety.” The other camp is concerned about what they call “AI ethics.”

Speaking before Congress, Altman mostly avoided technical jargon, but his tweet indicated that his primary concern is AI safety, a stance shared by many industry leaders at companies like Altman-led OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes that we urgently need government attention to regulate development and prevent an untimely end of humanity, an effort akin to nuclear nonproliferation.

“It’s good to hear that so many people are taking AGI safety seriously,” Mustafa Suleyman, co-founder of DeepMind and current CEO of Inflection AI, tweeted on Friday. “We have to be very ambitious. The Manhattan Project cost 0.4% of US GDP. Imagine what an equivalent program for safety could achieve today.”

But much of the discussion in Congress and the White House about regulation is conducted from the perspective of AI ethics, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict their use in areas subject to anti-discrimination laws, such as housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year covered many of these concerns.

This camp was represented at the congressional hearing by IBM chief privacy officer Christina Montgomery, who told lawmakers that every company working on these technologies should have an “AI ethics” point of contact.

“There needs to be clear guidance on AI end uses or categories of AI-powered activities that are inherently high risk,” Montgomery told Congress.

How to understand AI jargon like an insider


Not surprisingly, the AI debate has developed its own jargon. It started as a technical academic field.

Much of the software discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically probable sentences, images or music, a process called “inference.” Of course, AI models must first be created in a data analysis process called “training.”
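For a rough, hands-on sense of the difference, the sketch below runs inference with a small, already-trained open-source model through the Hugging Face transformers library. The model and prompt are chosen purely for illustration and are nowhere near frontier scale.

```python
# A rough illustration of "inference": asking an already-trained language model
# to predict a statistically probable continuation of a prompt.
# Uses the open-source Hugging Face `transformers` library and the small GPT-2
# model for illustration -- a far cry from a frontier model like GPT-4.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # weights were "trained" beforehand

result = generator("The Senate hearing on artificial intelligence", max_new_tokens=20)
print(result[0]["generated_text"])  # statistically probable, not necessarily true
```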

But other terms, particularly those used by AI safety advocates, are more cultural in nature, often referring to shared references and inside jokes.

For example, AI safety advocates might say they are worried about turning into a paper clip. This refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI, a “superintelligence,” could be given the task of making as many paper clips as possible and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this story, and the company even made paperclips in the shape of its logo.

Another AI safety concept is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds in building an AGI, it will already be too late to save humanity.

Sometimes this idea is described with an onomatopoeia, “foom,” especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” Yann LeCun, Meta’s AI chief, who is skeptical of the AGI claims, tweeted in a recent debate on social media.

AI ethics also has its own jargon.

When describing the limitations of current LLM systems, which do not understand meaning but merely produce human-sounding language, AI ethicists often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language, much like a parrot.
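The “stochastic” part of the phrase can be illustrated with a toy sketch (not the authors’ method): a few lines of Python that count which word tends to follow which in a snippet of text and then generate by random sampling, producing fluent-looking output with nothing behind it.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn which word tends to follow which from a tiny
# sample text, then generate by sampling -- pure statistics, zero understanding.
sample = "the model predicts the next word the model repeats the next word"
words = sample.split()

followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(followers.get(word, words))  # pick a statistically plausible next word
    output.append(word)

print(" ".join(output))  # fluent-looking, but there is no meaning behind it
```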

When these LLMs make up incorrect facts in their answers, they are said to be “hallucinating.”

One issue IBM’s Montgomery stressed during the hearing was “explainability” in AI results. It means that when researchers and practitioners cannot point to the exact numbers and paths of operations that larger AI models use to derive their output, this can hide inherent biases in the LLMs.

“The algorithm has to be explainable,” said Adnan Masood, AI architect at UST-Global. “Previously, if you looked at the classical algorithms, they told you, ‘Why am I making this decision?’ Now, with a bigger model, they’re becoming this huge model, they’re a black box.”
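The contrast Masood describes can be made concrete with a small, illustrative example: a classic model such as a shallow decision tree can print out exactly which inputs drove its decision, something a large neural model cannot do in any comparably simple way. The scikit-learn library and iris dataset below are chosen only for illustration.

```python
# A classic, "explainable" model: a small decision tree reports which input
# features drove its decisions. Large neural models offer no such simple account.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The full decision logic fits on a few lines -- you can read off the "why".
print(export_text(tree, feature_names=list(data.feature_names)))
print(dict(zip(data.feature_names, tree.feature_importances_)))
```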

Another important term is “guardrails,” which encompasses the software and policies that big tech companies are currently building around AI models to ensure they don’t leak data or produce disturbing content, often referred to as “going off the rails.”

It can also refer to certain applications that protect AI software from going off-topic, such as Nvidia’s “NeMo Guardrails” product.
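The basic idea of a guardrail can be sketched in a few lines. The toy filter below is not how NeMo Guardrails or any production system works; the blocklist and function names are hypothetical, and it exists only to show the pattern of screening prompts and answers around a model call.

```python
# Toy guardrail: screen prompts before they reach the model and screen answers
# before they reach the user. Real products are far more sophisticated.
BLOCKED_TOPICS = ("password", "social security number", "medical diagnosis")

def guarded_reply(prompt: str, model_call) -> str:
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    answer = model_call(prompt)              # hand off to the underlying model
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that."  # keep the model from "going off the rails"
    return answer

# Example with a stand-in for a real model call:
print(guarded_reply("What's my password?", lambda p: "..."))
```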

“Our AI Ethics Committee plays a critical role in overseeing internal AI governance processes and creating appropriate guardrails to ensure we bring technology to the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent Microsoft Research paper titled “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for diagrams.

But it can also describe what happens when simple changes are made at very large scale, like the patterns birds make when flying in flocks or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.
