In a recent announcement, Cisco unveiled its new “AI Defense” initiative, spotlighting a growing concern in the AI realm: security and privacy challenges. As AI technology races forward, the fear of it spiraling out of control is not lost on anyone, including regulators and government bodies. With regulations still lagging behind, the risk of AI going rogue remains distressingly high.
Roman Yampolskiy, a noted AI safety researcher and head of the Cyber Security Lab at the University of Louisville, puts his p(doom), the estimated probability that AI leads to humanity's end, at a startling 99.999999%. His stark viewpoint suggests that perhaps not building AI at all could be the safest approach. Yet there's still hope on the horizon.
AI Defense represents an advanced security measure tailored to protect the creation and application of AI-driven software, providing companies a secure way to harness AI potential.
In an intimate discussion with The Rundown AI’s Rowan Cheung, Jeetu Patel, Cisco’s Executive Vice President and Chief Product Officer, delved into the rapid advancements of AI and the related security concerns, which spurred the release of AI Defense:
“Looking ahead, companies will fall into two categories: those who thrive with AI and those that fall into obscurity. Every enterprise will engage with thousands of AI applications, and with innovation moving at breakneck speed, existing protection measures are falling short,” Patel explained. “With AI Defense, we’ve focused on safeguarding both the development and deployment of AI apps, clamping down on potential AI misuse, data breaches, and advanced threats. It’s a bold move to tackle issues existing security solutions aren’t fit to address.”
AI Defense seems to be the closest thing we have to a counter against the existential threats AI poses. More alarming still, Cisco's 2024 AI Readiness report reveals that only 29% of respondents feel equipped to detect and thwart unauthorized tampering with AI systems.
The complexity of the AI landscape may explain this unpreparedness: because AI applications are typically multi-model and multi-cloud, they expose a broader attack surface at both the application and model levels.
[For more insights, check out: Here’s what AGI means to Microsoft and OpenAI]
The big question is, will Cisco keep an eye on the progress of Artificial General Intelligence (AGI)?
As top AI labs, like Anthropic and OpenAI, sprint towards achieving the AGI milestone, the timing of AI Defense’s introduction couldn’t be more crucial. OpenAI’s CEO, Sam Altman, shared that they’re confident they know the path to AGI and anticipate reaching that goal sooner than expected, as their focus shifts towards superintelligence.
In related news, some have claimed that OpenAI already achieved AGI with the release of its o1 reasoning model.
Though some worry about the implications, Altman has brushed off concerns, suggesting that society might sail past the AGI benchmark with minimal upheaval. He's confident that security issues won't manifest dramatically during this pivotal moment. However, recent discussions have raised doubts about AI's progress hitting a bottleneck due to a shortage of high-quality data for training models. Industry figures like Altman and former Google CEO Eric Schmidt have pushed back on these claims, maintaining that scaling laws haven't yet hindered AI's growth. "There's no wall," Altman asserts.
Although AI Defense is a significant leap forward, its widespread adoption among organizations and leading AI labs remains uncertain. Curiously, even as OpenAI's CEO acknowledges AI's potential threat, he trusts that AI will eventually become robust enough to avert any existential crises it might create.