AI Risks

The Alarm Bell Is Ringing! A US State Department Report Warns That Artificial Intelligence May Pose An "Extinction-Level" Threat To Humanity And That Action Must Be Taken Immediately!

The warnings in the report are once again reminding people that while the potential of AI continues to attract investors and the public, it also poses real dangers.

"Artificial intelligence is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable," Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

"But it can also pose serious risks, including catastrophic risks, and we need to be aware of that," Harris said. "The growing evidence — including empirical research and analysis presented at the world's top AI conferences — suggests that AI may become uncontrollable beyond a certain capability threshold."

White House spokesperson Robyn Patterson said President Biden's executive order on artificial intelligence is "the most significant action any government in the world has taken to seize the promise of artificial intelligence and manage its risks."

“The president and vice president will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

Time magazine first reported on the Gladstone AI report.

01

Intervention "clearly and urgently needed"

The researchers warned at length about two central dangers posed by artificial intelligence.

First, Gladstone AI said, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Second, the report said there are private concerns within AI labs that at some point they could "lose control of the very systems they are developing," with "potentially devastating consequences for global security."

"The rise of artificial intelligence (AI) and artificial general intelligence (AGI) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons," the report said, adding that there is a risk of an AI "arms race," conflict, and "fatal accidents at the scale of weapons of mass destruction."

Gladstone AI's report calls for dramatic new measures to address the threat, including launching a new AI agency, imposing "emergency" regulatory safeguards, and limiting the computing power that may be used to train AI models.

"The U.S. government clearly needs to intervene," the authors wrote in the report.

02

Safety concerns

Harris, the Gladstone AI executive, said his team's "unprecedented level of access" to officials in the public and private sectors led to these startling conclusions. Gladstone AI said it spoke with technical and leadership teams at major AI developers, including Google and Facebook parent company Meta.

"Along the way, we learned some sobering things," Harris said in a video posted on Gladstone AI's website. "Behind the scenes, the safety and security measures around advanced AI appear quite inadequate relative to the national security risks AI may introduce fairly soon."

Gladstone AI's report said competitive pressure is pushing companies to accelerate AI development "at the expense of safety and security," raising the prospect that the most advanced AI systems could be "stolen" and "weaponized" against the United States.

Warnings about the existential risks posed by artificial intelligence are mounting, including from some of the industry's most influential figures.

About a year ago, Geoffrey Hinton, known as the "Godfather of AI," quit his job at Google to warn about the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next 30 years.

Last June, Hinton and dozens of other AI industry leaders, academics and others signed a statement saying that "mitigating the risk of extinction from AI should be a global priority."

Business leaders are increasingly worried about these dangers, even as they pour billions of dollars into AI. Last year, 42% of CEOs surveyed at the Yale CEO Summit said AI has the potential to destroy humanity within five to ten years.

03

Human-like learning ability

Gladstone AI noted in its report that well-known figures have warned of the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan, and former executives at AI companies.

According to Gladstone AI, some employees at AI companies share similar concerns in private.

"One individual at a well-known AI lab said it would be 'very bad' if a particular next-generation AI model were ever released on an open-access basis," the report said, "because the model's potential persuasive capabilities could 'undermine democracy' if used to interfere in elections or manipulate voters."

Gladstone said it asked AI experts at frontier labs to share their personal estimates of the chance that an AI incident in 2024 could lead to "global and irreversible effects." The report said these estimates ranged from 4% to 20%, noting that they were informal and likely subject to significant bias.

One of the biggest uncertainties is how fast AI will develop, especially AGI, a hypothetical form of AI with human-like or even superhuman ability to learn.

The report said AGI is viewed as "the primary driver of catastrophic risk from loss of control," noting that Google and Nvidia have both publicly stated that AGI could be reached by 2028, though others argue it is much further away.

Gladstone AI notes that disagreement over the AGI timeline makes it difficult to craft policies and safeguards, and that regulation could "prove harmful" if the technology develops more slowly than expected.

04

How AI could turn against humans

A companion document released by Gladstone AI warns that the development of AGI, and of AI capabilities approaching AGI, "would introduce catastrophic risks unlike any the United States has ever faced," and that once weaponized, they would bring "risks similar to those of weapons of mass destruction."

For example, the report says, AI systems could be used to design and execute "high-impact cyberattacks capable of crippling critical infrastructure."

"A simple verbal or typed command like 'execute an untraceable cyberattack to crash the North American electric grid' could yield a response of such high quality that it proves catastrophic," the report said.

Other examples that worry the authors include "massive" AI-powered disinformation campaigns that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and materials science; and power-seeking AI systems that are impossible to control and adversarial to humans.

"Researchers expect sufficiently advanced AI systems to act to prevent themselves from being switched off," the report said, "because if an AI system is switched off, it cannot achieve its goals."
