At a recent Google Cloud roundtable in Singapore, the blunt takeaway was stark: after decades of defensive advances, many organisations are still being breached without even realising it. A large share of incidents in Asia-Pacific are discovered not by the victims themselves but by external parties — a damning sign of how detection gaps persist despite billions spent on cybersecurity.

The persistent weak spots
Threat intelligence data consistently shows that most successful attacks still begin with the basics: misconfigured systems, weak credentials, and preventable errors. Exotic zero-day exploits capture headlines, but it is these fundamental oversights that allow attackers to slip in and operate undetected. The persistence of such flaws underscores a troubling reality: technology alone has not fixed decades-old security shortcomings.
The AI arms race: two-edged swords
Artificial intelligence is transforming this landscape, but in ways that cut both ways. Security teams are deploying AI to process vast telemetry, detect anomalies, and automate repetitive triage tasks. At the same time, attackers are using the very same technologies to scale phishing operations, craft tailored malware, and scan networks at unprecedented speed. This dual-use reality has been dubbed the “Defender’s Dilemma”: the tools that strengthen defence also expand offensive capacity.
Google Cloud and other providers argue that AI has the potential to finally tilt the balance in favour of defenders. Generative models can already support vulnerability discovery, strengthen threat intelligence, produce more secure code, and accelerate incident response. Yet, if over-relied upon, these tools can create new risks that adversaries may exploit.
Big Sleep: AI finding what humans miss
One of Google’s most promising examples is Project Zero’s Big Sleep, an initiative using large language models to uncover real-world vulnerabilities in open-source libraries. The program has identified dozens of flaws, including ones that might otherwise have gone unnoticed for years. This marks a significant evolution: AI is not just reacting to incidents but proactively uncovering the weaknesses that fuel them. While human oversight remains critical, Big Sleep illustrates how automation can extend defensive reach.
Automation roadmap — promise and peril
Google Cloud frames security operations along a path from manual to autonomous. In the assisted and semi-autonomous stages, AI handles routine analysis while humans focus on higher-level judgement. The end goal, autonomous operations, raises profound concerns: if defenders cede too much to AI, they risk creating blind spots, new attack vectors, and even dependency on systems that may themselves be compromised. The challenge is to design AI systems that reduce toil without sidelining human expertise.
Practical safeguards: Model Armor and shadow AI detection
One safeguard gaining traction is Model Armor, a filtering layer that ensures AI responses remain safe, relevant, and compliant. This system blocks personally identifiable information from leaking, screens out irrelevant or off-brand content, and ensures business-specific constraints are respected. Such filters are vital where AI interacts with customers, as even a single misaligned response could create reputational or legal fallout.
Equally important is the management of shadow AI — unauthorised tools that quietly proliferate inside enterprise networks. These unvetted systems expose sensitive data and create unmonitored risks. Proactive scanning and governance are becoming essential to close these hidden gaps.
The scale challenge: budgets, noise and workforce limits
Cyber leaders in Asia-Pacific repeatedly highlight a painful paradox: attack volumes are rising rapidly, but budgets and staff resources are not keeping pace. Even when attacks lack sophistication, sheer frequency creates an operational burden that drains teams. Organisations are increasingly seeking partners and platforms that can deliver efficiency gains without requiring unsustainable hiring increases. In this environment, automation is less about replacing humans and more about helping limited teams cope with mounting pressure.
Preparing for tomorrow: post-quantum and beyond
Looking further ahead, Google has already begun rolling out post-quantum cryptography across its infrastructure. This anticipatory step addresses the looming risk that quantum computing could one day break current encryption methods. By acting early, cloud providers reduce the long-term danger of data harvested today being decrypted in the future. It is a reminder that cybersecurity strategy must always account for threats on both present and future horizons.
What organisations should do next
To succeed in this evolving battlefield, organisations must align technology with fundamentals. Key priorities include:
- Fixing core vulnerabilities first: strong identity management, patching, and strict configuration hygiene.
- Using AI as an amplifier rather than a replacement: automating repetitive work while leaving judgement to human operators.
- Deploying runtime safeguards like filtering systems and monitoring for shadow AI.
- Stress-testing AI agents with red-teaming and adversarial simulations.
- Preparing for cryptographic transitions and securing the software supply chain.

Verdict: cautious optimism
The integration of AI into cybersecurity offers unprecedented opportunities, from automated vulnerability discovery to real-time anomaly detection. But the risks are equally real: attackers can exploit the same tools, automation can introduce new blind spots, and over-reliance can weaken human judgement. The future of cyber defence will depend on balance — adopting AI in carefully controlled ways, maintaining transparency, and doubling down on security basics.
The AI revolution in cybersecurity has begun, but victory will belong to those who use these tools thoughtfully, balancing innovation with prudence. The lesson is clear: technology alone will not save us, but technology combined with disciplined strategy just might.