Concern surrounding artificial intelligence (AI) has recently reached a new high. Just a few days ago, a collective of over 300 industry leaders signed a cautionary statement equating the potential dangers of AI to those of pandemics and nuclear war, with the ominous warning that AI could lead to human extinction.

The term “AI doomsday” undoubtedly fuels visions of a dystopian sci-fi world where robots rule. But what would a genuine threat scenario entail? According to experts, the reality might be less sudden and dramatic: a gradual decline eroding the core structures of society.

Jessica Newman, director of the Artificial Intelligence Security Initiative at the University of California, Berkeley, offers her perspective. The worry, she suggests, isn’t that AI will morph into a malicious entity of its own accord. Rather, it’s that AI will be deliberately programmed to perform harmful tasks, or that inherently flawed AI systems will be woven into society’s crucial domains.

In essence, the threat of AI lies not in an instantaneous, cataclysmic event but in the slow, insidious harm done by poorly designed or misused AI systems. Vigilance, responsibility, and ethics should therefore form the cornerstones of AI development and integration moving forward.