AI researchers call on regulators not to slow progress

LONDON – Artificial intelligence researchers argue that there is little point in regulating AI development right now, because the technology is still in its infancy and bureaucracy would only slow progress in the field.

AI systems are currently able to perform relatively “narrow” tasks – such as playing games, translating languages, and recommending content.

But they are in no way “general,” and some argue that experts are no closer to the holy grail of AGI (artificial general intelligence) – the hypothetical ability of an AI to understand or learn any intellectual task a human can – than they were in the 1960s, when the so-called “godfathers of AI” made some early breakthroughs.

Computer scientists in the field have told CNBC that the capabilities of AI have been grossly overstated by some. Neil Lawrence, a professor at the University of Cambridge, told CNBC that the term AI has been turned into something it is not.

“Nobody has created anything that matches the capabilities of human intelligence,” said Lawrence, formerly Amazon’s director of machine learning in Cambridge. “These are simple algorithmic decision-making things.”

Lawrence said regulators don’t need to impose tough new rules on AI development right now.

“People say, ‘What if we create a conscious AI and it has some kind of free will?’” said Lawrence. “I think we are a long way from that even being a relevant discussion.”

The question is: how far away are we? A few years? A couple of decades? A couple of centuries? Nobody really knows, but some governments want to make sure they are ready.

AI fears

In 2014, Elon Musk warned that AI “could potentially be more dangerous than nuclear weapons,” and the late physicist Stephen Hawking said that same year that AI could end humanity. In 2017, Musk reiterated the dangers of AI, saying it could lead to a third world war. He called for regulation of AI development.

“AI is a fundamental existential risk to human civilization, and I don’t think humans fully appreciate that,” Musk said. However, many AI researchers question Musk’s views on AI.

In 2017, Demis Hassabis, the polymath founder and CEO of DeepMind, agreed at a conference with AI researchers and business leaders (including Musk) that “superintelligence” will one day exist.

Superintelligence is defined by Oxford professor Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” He and others have speculated that superintelligent machines might one day turn against humans.

A number of research institutions around the world focus on AI safety, including the Future of Humanity Institute at Oxford and the Centre for the Study of Existential Risk at Cambridge.

Bostrom, the founding director of the Future of Humanity Institute, told CNBC last year that there are three main ways AI could do harm if it somehow became much more powerful. They are:

  1. AI could do something bad to humans.
  2. Humans could do something bad to each other using AI.
  3. Humans could do bad things to the AI (in this scenario, the AI would have some kind of moral status).

“Each of these categories is a plausible place where something could go wrong,” said the Swedish philosopher.

Skype co-founder Jaan Tallinn sees AI as one of the most likely existential threats to humanity. He spends millions of dollars trying to make sure the technology is developed safely. That includes investing early in AI labs like DeepMind (partly so he can keep an eye on their activities) and funding AI safety research at universities.

Tallinn told CNBC last November that it was important to look at how much, and how significantly, AI development will feed back into AI development itself.

“If one day people are developing AI and the next day people are out of the loop, I think there is very good reason to be concerned about what is happening,” he said.

But Joshua Feast, MIT graduate and founder of Boston-based AI software company Cogito, told CNBC, “There’s nothing in (AI) technology today that implies we’ll ever get to AGI.”

Feast added that the path is not linear, and that the world is not gradually heading towards AGI.

He conceded that at some point there could be a “giant leap” that puts us on the path to AGI, but he does not see us on that path today.

Feast said policymakers would do better to focus on AI bias, which is a big problem with many of today’s algorithms. That’s because, in some cases, algorithms have learned to identify people in photos from human-generated records that have racist or sexist views built into them.

New laws

The regulation of AI is an emerging issue around the world, and policymakers face the difficult task of striking the right balance between promoting its development and managing the risks involved.

They also need to decide if they want to try to regulate “AI as a whole” or if they want to try and introduce AI laws for specific areas like facial recognition and self-driving cars.

Tesla’s self-driving technology is considered among the most advanced in the world. But the company’s vehicles still crash into things – earlier this month, for example, a Tesla collided with a police car in the United States.

“For it (legislation) to be useful in practice, it has to be discussed in context,” said Lawrence, adding that policymakers should identify what “new” things AI can do that were not possible before, and then consider whether regulation is necessary.

Politicians in Europe are arguably trying more than anyone to regulate AI.

In February 2020, the EU published its draft strategy paper on promoting and regulating AI, while the European Parliament made recommendations in October on which AI rules should apply in relation to ethics, liability and intellectual property rights.

The European Parliament said that “high-risk AI technologies, such as those with self-learning capabilities, should be designed to allow for human oversight at all times.” It added that AI’s self-learning capabilities should be able to be “disabled” if they turn out to be dangerous.

Regulatory efforts in the U.S. have largely focused on how to make self-driving cars safe and whether AI should be used in warfare. In a 2016 report, the National Science and Technology Council set a precedent allowing researchers to continue developing new AI software with few restrictions.

The National Security Commission on AI, led by former Google CEO Eric Schmidt, released a 756-page report this month declaring the U.S. ill-prepared to defend or compete in the AI era. The report warns that AI systems will be used in the “pursuit of power” and that “AI will not stay in the domain of superpowers or the realm of science fiction.”

The commission called on President Joe Biden to reject calls for a global ban on autonomous weapons, saying that China and Russia are unlikely to abide by any treaty they sign. “We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities and new warfighting paradigms,” wrote Schmidt.

There are now global initiatives to regulate AI.

In 2018, Canada and France announced plans for a G7-backed international body to study the global effects of AI on people and economies while also guiding AI development. The panel would be similar to the international panel on climate change. It was renamed the Global Partnership on AI in 2019. The U.S. has yet to join it.
