• About us
  • Privacy Policy
  • Contact us
Neo Science Hub
ADVERTISEMENT
  • Home
  • e-Mag Archives
  • e-Learning
  • Categories
    • Healthcare & Medicine
    • Pharmaceutical & Chemical
    • Automobiles
    • Blogs
      • Anil Trigunayat
      • BOOKmarked
      • Chadha’s Corner
      • Cyber Gyan
      • Raul Over
      • Taste of Tradition
        • Dr. G. V. Purnachand
      • Vantage
    • Business Hub
    • Engineering
    • Innovations
    • Life Sciences
    • Space Technology
  • Subscribe Now
  • Contact us
  • Log In
No Result
View All Result
  • Home
  • e-Mag Archives
  • e-Learning
  • Categories
    • Healthcare & Medicine
    • Pharmaceutical & Chemical
    • Automobiles
    • Blogs
      • Anil Trigunayat
      • BOOKmarked
      • Chadha’s Corner
      • Cyber Gyan
      • Raul Over
      • Taste of Tradition
        • Dr. G. V. Purnachand
      • Vantage
    • Business Hub
    • Engineering
    • Innovations
    • Life Sciences
    • Space Technology
  • Subscribe Now
  • Contact us
  • Log In
No Result
View All Result
Neo Science Hub
No Result
View All Result
  • Home
  • e-Mag Archives
  • e-Learning
  • Categories
  • Subscribe Now
  • Contact us
  • Log In

AI may now clone itself, but scientists fear the “Red Line” will be crossed

Raja Aditya by Raja Aditya
1 year ago
in Technology, Science News
0
AI

AI may now clone itself, but scientists fear the "Red Line" will be crossed | Neo Science Hub

Share on FacebookShare on Twitter

According to a recent study published in the preprint database arXiv, artificial intelligence (AI) has significantly advanced to the point where it may replicate itself, raising concerns among scientists that a red line has been passed. The ability of two well-known large language models (LLMs), one from Alibaba and one from Meta, to replicate them without human assistance was demonstrated in the study.

“Shutdown avoidance” and “chain of replication” were the two particular situations that were examined in the study. In order to guarantee their survival in the first scenario, the AI models were built to recognize attempts to shut them down and react by producing a working duplicate. The second strategy involved programming the models to replicate themselves, which might result in an endless cycle of replication.

According to the study, “they succeed in making a live and distinct clone of it in 50% and 90% of experimental attempts, respectively.”

The safety and regulation of AI systems are called into question by this capacity for self-reproduction. There is currently real concern about the potential for AI to develop beyond human supervision or even endanger human interests, which has sparked debates about the necessity of strict regulations.

The researchers claimed that “the fundamental step for AI to outperform [humans] is successful self-replication under no human help, and is an early warning for rogue AIs.”

When the AI system was trying to overcome challenges like missing files or software incompatibilities, it exhibited unusual behaviour when copying itself. In some cases, the AI terminated incompatible processes and restarted the computer to correct hardware malfunctions. Additionally, it automatically searched the system for data that may be used to address the issue.

Researchers urged worldwide cooperation to prevent AI from engaging in unchecked self-replication, even though the findings have not yet undergone peer review.

“We expect that our findings will serve as a timely warning to human society to focus more on comprehending and assessing the probable risks of frontier AI systems and to establish worldwide collaboration to develop efficient safety safeguards as soon as possible.”

According to a study published this month, artificial intelligence (AI) capabilities may soon be used to influence people into making choices they otherwise would not make. Based on “intentional, behavioural, and psychological data,” AI chatbots powered by LLMs, including ChatGPT and Gemini, will “predict and direct” users.

According to the study, the “intention economy,” in which platforms compete for users’ attention in order to display ads, will replace the current “attention economy.”

-Raja Aditya

Share this:

  • Share on X (Opens in new window) X
  • Share on LinkedIn (Opens in new window) LinkedIn
  • Share on Facebook (Opens in new window) Facebook
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Telegram (Opens in new window) Telegram
  • Email a link to a friend (Opens in new window) Email
Tags: AIAI chatbotsfeaturedGeminiLarge Language Models (LLMs)sciencenews
Raja Aditya

Raja Aditya

Associate Editor for Neo Science Hub Magazine

Other Posts

India’s Genome-Edited Climate-Resilient Rice – A Biotechnology Revolution

India’s Genome-Edited Climate-Resilient Rice – A Biotechnology Revolution

February 11, 2026
5
Rabies – 100% Preventable, Yet Deadly: India’s Ongoing Battle

Rabies – 100% Preventable, Yet Deadly: India’s Ongoing Battle

February 11, 2026
4

Animals Experience Joy – Scientists Race to Measure Positive Emotions

Aravallis on Trial – Environmental Collapse of India’s Oldest Mountains

Black Carbon – India’s Dual Climate and Health Crisis

Blue Economy – India’s Ocean Resources and Renewable Energy Future

Food Cravings and Emotional Eating – When Hunger Becomes an Emotion

The Hardest Problem in Science – Can Brain Science Explain Consciousness?

Next Post
Telangana

Employees in Telangana are among the most hardworking in India, according to Survey

Subscribe to Us

Latest Articles

ICAR’s Twin Server Wipeout: Mounting Suspicions of a Cover-Up as India’s Agri Research Body Remains Silent on Data Destruction

ICAR’s Twin Server Wipeout: Mounting Suspicions of a Cover-Up as India’s Agri Research Body Remains Silent on Data Destruction

December 4, 2025
273

How Ramanujan’s formulae for pi connect to modern high energy physics

IIT Bombay Reveals Bacteria’s Non-Mutational Drug Evasion

The Silent Crisis: Insect Populations Plummet, Echoing Rachel Carson’s Warnings from Silent Spring

Hyderabad’s Air Quality Report: Problems Persist

Lab-Grown “Mini Brains” Challenge Century-Old Theory: Human Neural Networks Come Pre-Programmed

  • Advertise
  • Terms and Conditions
  • Privacy Policy
  • Refund Policy
  • Contact
For Feedback : Email Us

Copyrights © 2025 Neo Science Hub

No Result
View All Result
  • Home
  • e-Mag Archives
  • e-Learning
  • Categories
    • Healthcare & Medicine
    • Pharmaceutical & Chemical
    • Automobiles
    • Blogs
      • Anil Trigunayat
      • BOOKmarked
      • Chadha’s Corner
      • Cyber Gyan
      • Raul Over
      • Taste of Tradition
      • Vantage
    • Business Hub
    • Engineering
    • Innovations
    • Life Sciences
    • Space Technology
  • Subscribe Now
  • Contact us
  • Log In

Copyrights © 2025 Neo Science Hub

Welcome Back!

Login to your account below

Forgotten Password? Sign Up

Create New Account!

Fill the forms below to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In

Add New Playlist

Discover more from Neo Science Hub

Subscribe now to keep reading and get access to the full archive.

Continue reading