From Geek to Star #45 - Enabling AI in your organisation as a Tech Leader (3/3)

How to leverage AI in cybersecurity

If you know the enemy and know yourself, you need not fear the result of a hundred battles.

Sun Tzu, The Art of War. 

If you missed the previous episodes, you can access them online here.

🗓️ This Week – Episode 45: Cybersecurity as a key element to enable AI in your organisation as a Tech Leader

This newsletter is the last of three sharing my thoughts on the key dimensions for an organisation to turn into an AI-enabled organisation, and on our role as tech leaders. After writing about the augmentation of teams across functions and the transformation into AI-driven software engineering, I believe the third key dimension is AI-augmented cybersecurity.

It is becoming crucial for organisations to integrate AI into their cybersecurity operating model, both to face the exponential growth of AI-powered threats on the attacker side and to manage the acceleration of digital assets that AI brings inside the company.

For me, this is where AI enablement becomes very real.

Because if you unleash AI in your organisation without rethinking cybersecurity, you may indeed unleash productivity, innovation and creativity… but you may also unleash risks at a speed and scale that your current operating model is not ready to absorb.

The tsunami is not only external

When we think about AI and cybersecurity, we often first think about external threats: more convincing phishing emails, deepfakes, automated vulnerability discovery, AI-generated malware.

And yes, this is clearly happening.

Google Threat Intelligence Group reported in November 2025 that adversaries are no longer just using AI for productivity, but deploying AI-enabled malware in active operations, with tools such as PROMPTFLUX and PROMPTSTEAL dynamically generating malicious scripts, obfuscating code, and creating malicious functions on demand.

And there was Mythos, which raised a lot of concerns just a few weeks ago. This is a major shift.

But the tsunami is not only external. It is also internal.

If AI-driven engineering allows teams to generate much more code, much faster, then the question becomes: who reviews it? Who validates it? Who checks that the architecture remains coherent, that vulnerabilities are not introduced, that dependencies are managed properly, that access rights are under control? Especially if business teams start to automate their own processes through MCP connectors to multiple SaaS solutions?

Yesterday, many cybersecurity teams already struggled to review what humans produced. Tomorrow, they may have to review what humans plus AI agents produce.

Adding more people will not be enough. And even if we wanted to, there is already a structural shortage of cybersecurity professionals. In a cybersecurity report I wrote a few months ago, I highlighted the ISC2 estimate of a global cybersecurity workforce gap of 4.76 million people in 2024, with 67% of responding organisations reporting a staffing shortage.

So the answer cannot be only: “hire more cybersecurity people”. The answer must be: make cybersecurity itself AI-augmented.

From cybersecurity as control to cybersecurity as a system

In many organisations, cybersecurity is still perceived as a control function.

The team that says no. The team that slows things down. The team that comes late in the process.

In the AI age, this model will break. If software engineering accelerates, if business users start building tools with AI, if autonomous agents start interacting with systems and data, cybersecurity cannot remain a final checkpoint at the end of the chain.

It needs to become part of the system itself.

This means:

  • security embedded in the software development lifecycle

  • automated scanning and review of AI-generated code

  • continuous visibility on digital assets

  • real-time monitoring of identities, including non-human identities

  • AI-assisted detection, response and remediation

IBM’s 2025 Cost of a Data Breach report reinforces this direction: ungoverned AI systems are more likely to be breached and more costly when breached, while extensive use of AI in security is associated with significant cost savings compared with organisations that do not use such solutions. The message is clear: attackers are using AI.

Defenders cannot remain manual.

Identity becomes even more critical

One point I believe deserves much more attention is identity. In the past, identity mostly meant human users: employees, contractors, partners, admins.

But with APIs, service accounts, bots, SaaS integrations, automation scripts and now AI agents, the number of non-human identities can explode.

And each of these identities can have access rights. Each can become a door. Each can become a risk. Identity is becoming the control plane of the modern enterprise and therefore a primary attack surface. The challenge is no longer only to manage employees joining, moving or leaving. It is also to govern API keys, bots, agents and machine accounts with proper lifecycle controls.
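
To make this concrete, here is a minimal sketch, in Python, of what basic lifecycle checks on non-human identities could look like. The inventory, field names and thresholds are purely illustrative assumptions; in practice the data would come from your IAM platform, secrets manager or cloud provider.

```python
from datetime import datetime, timedelta, timezone

# Illustrative inventory of non-human identities (service accounts, API keys,
# bots, agents). Hypothetical schema: in reality this would be exported from
# your IAM platform, secrets manager or cloud provider.
IDENTITIES = [
    {"name": "ci-deploy-key", "type": "api_key", "owner": "platform-team",
     "last_used": "2025-01-10", "expires": "2026-01-10"},
    {"name": "sales-report-bot", "type": "bot", "owner": None,
     "last_used": "2024-03-02", "expires": None},
]

MAX_IDLE_DAYS = 90  # example policy: flag identities unused for 90+ days


def lifecycle_findings(identities, now=None):
    """Return basic hygiene findings: no owner, no expiry, long idle time."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for ident in identities:
        if not ident.get("owner"):
            findings.append((ident["name"], "no accountable owner"))
        if not ident.get("expires"):
            findings.append((ident["name"], "credential never expires"))
        last_used = datetime.fromisoformat(ident["last_used"]).replace(tzinfo=timezone.utc)
        if now - last_used > timedelta(days=MAX_IDLE_DAYS):
            findings.append((ident["name"], f"unused for more than {MAX_IDLE_DAYS} days"))
    return findings


if __name__ == "__main__":
    for name, issue in lifecycle_findings(IDENTITIES):
        print(f"[REVIEW] {name}: {issue}")
```

Even such a simple inventory-plus-policy loop already surfaces the questions that matter: who owns this identity, when does it expire, and is it still used at all.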

This is where many organisations are not ready.

If we struggled already to remove access rights from humans leaving the company, how confident are we that we can properly govern thousands or hundreds of thousands of non-human identities?

AI enablement without identity governance is like opening more and more doors in a building without knowing who has the keys.

Cybersecurity agents against cyberattack agents

One avenue I find very interesting is the emergence of AI-driven cybersecurity agents.

If attackers can use AI agents, defenders also need to build and use their own.

For example:

  • blue team agents helping detect anomalies

  • red team agents testing internal systems continuously

  • agents reviewing code and configurations

  • agents checking access rights and suspicious behaviours

  • agents generating and updating security awareness content

This is not science fiction anymore. EY’s 2026 Cybersecurity Roadmap Study found that senior security leaders expect major cybersecurity areas to be increasingly run with agentic AI within two years, including advanced persistent threat detection, fraud detection, IAM, third-party risk management and deepfake defence.

Of course, this does not mean replacing cybersecurity professionals. It means changing their leverage. The cybersecurity engineer of the AI age may spend less time manually checking everything, and more time designing, supervising and improving cybersecurity agents and systems.
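
As a toy illustration of what a "blue team agent" building block could look like, here is a small Python sketch that flags unusual login events. The events, rules and thresholds are invented for the example; a real set-up would plug into your SIEM or identity provider and could hand the alerts to an LLM-based triage step.

```python
from collections import defaultdict

# Toy login events; in reality these would come from your SIEM or identity provider.
EVENTS = [
    {"user": "alice", "country": "FR", "hour": 9},
    {"user": "alice", "country": "FR", "hour": 14},
    {"user": "alice", "country": "BR", "hour": 3},   # unusual country and hour
    {"user": "svc-backup", "country": "FR", "hour": 2},
]

WORKING_HOURS = range(6, 22)  # example policy, to be adapted per organisation


def detect_anomalies(events):
    """Flag logins from a country not seen before for that user, or outside working hours."""
    seen_countries = defaultdict(set)
    alerts = []
    for event in events:
        user, country, hour = event["user"], event["country"], event["hour"]
        if seen_countries[user] and country not in seen_countries[user]:
            alerts.append((user, f"login from new country {country}"))
        if hour not in WORKING_HOURS:
            alerts.append((user, f"login at unusual hour {hour}h"))
        seen_countries[user].add(country)
    return alerts


if __name__ == "__main__":
    for user, reason in detect_anomalies(EVENTS):
        # A next step could be to hand these alerts to an AI triage agent that
        # enriches them with context before a human analyst decides.
        print(f"[ALERT] {user}: {reason}")
```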

Cybersecurity inside the AI-driven SDLC

In the previous newsletter, I wrote about AI-driven software engineering and how tech teams may need to redesign how they build software. Cybersecurity must be part of that redesign.

If AI-generated code becomes common, then cyber cannot be a late-stage review activity. It must be integrated into the AI-driven SDLC itself.

Concretely, this could mean (see the sketch after this list):

  • secure coding instructions embedded in AI development tools

  • automatic vulnerability scanning on generated code

  • AI-assisted code review before merge

  • automated dependency and secret scanning

  • risk-based review depending on the criticality of the system

  • red team agents testing new applications before release
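
To illustrate, here is a minimal sketch of a pre-merge security gate. The specific scanners named (bandit, pip-audit, gitleaks) are common open-source examples rather than recommendations, and a real pipeline would typically express this directly in CI configuration with risk-based rules per repository.

```python
import subprocess
import sys

# Example pre-merge gate for AI-generated (and human-written) code.
# Each entry is a check name and the command that runs it.
CHECKS = [
    ("static analysis", ["bandit", "-r", "src"]),
    ("dependency audit", ["pip-audit", "-r", "requirements.txt"]),
    ("secret scanning", ["gitleaks", "detect", "--source", "."]),
]


def run_checks(checks):
    """Run each scanner and collect the names of those that fail."""
    failed = []
    for name, cmd in checks:
        print(f"--- running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed.append(name)
    return failed


if __name__ == "__main__":
    failures = run_checks(CHECKS)
    if failures:
        print(f"Merge blocked, failed checks: {', '.join(failures)}")
        sys.exit(1)
    print("All security checks passed.")
```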

The key idea is simple:

👉 If AI accelerates software delivery, cybersecurity must accelerate with it. Otherwise, the organisation creates digital assets faster than it can secure them.

Visibility for leadership

Another important role of tech leaders is to create visibility.

Many executive teams do not need all the technical details. But they do need to understand:

  • where the attack surface is expanding

  • which systems are most exposed

  • where identity risks sit

  • where AI is used without governance

  • where critical gaps require investment

This is where modern cyber solutions, dashboards, exposure management tools and AI-assisted risk analysis can help. It is critical to make cybersecurity understandable as a business risk. Cybersecurity should not only be discussed after an incident. It should be part of regular leadership discussions on resilience, continuity, trust and growth.

Continuous education: humans remain the first and last line

Even with the best tools, humans remain central. AI will make phishing more convincing. Deepfakes will become harder to spot. Fake emails, fake voices, fake videos, fake documents will become easier and cheaper to generate.

So education cannot be a once-a-year compliance module anymore. It needs to become continuous, contextual, and easy to consume. This is where AI can help too.

Why not use AI to generate short internal videos or micro-learning content on the latest risks? Why not create realistic phishing simulations adapted to current threats? Why not make cybersecurity awareness more regular, more practical, and less boring?
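
As a small example of that last point, here is a sketch that drafts a phishing simulation from a "current threat" theme, assuming the OpenAI Python SDK; the model name, prompt and theme are illustrative, and any LLM provider could play this role.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM provider could be used

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical "current threat" theme provided by the security team each week.
THEME = "invoice fraud attempts impersonating a known supplier"

prompt = (
    "Write a short, realistic phishing simulation email for internal security "
    f"awareness training, based on this current threat: {THEME}. "
    "Include the subtle red flags employees should learn to spot, then list them "
    "separately at the end for the debrief."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, to be adapted
    messages=[{"role": "user", "content": prompt}],
)

# The generated draft should always be reviewed by the security team before use.
print(response.choices[0].message.content)
```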

Fortinet’s 2025 Cybersecurity Skills Gap report notes that 80% of organisations say AI tools are helping IT and security teams be more effective, while 97% are already using or planning to implement AI-enabled cybersecurity solutions. The same report also stresses that skilled and aware employees remain crucial to cyber risk management.

So again, it is not AI instead of people. It is AI to scale what people need to know and do.

What kind of cybersecurity profile rises in the AI age?

If I try to connect this back to From Geek to Star, I believe the cybersecurity professional who will rise in the AI age will need three qualities.

First, systems thinking. Cybersecurity can no longer be seen as a list of tools or controls. It is a system involving people, processes, technology, vendors, identities, data, software delivery, business priorities and regulation.

Second, builder mindset. Cybersecurity professionals will need to know how to use AI themselves: to build small tools, automate repetitive tasks, create cyber agents, analyse logs, generate awareness material, test systems and improve their own productivity.

Third, business communication. Because cybersecurity will require more investment, more discipline and more behaviour change. And this cannot be achieved only with technical language. Cybersecurity leaders and engineers need to explain risks in a way that business leaders and colleagues can understand and act upon.

In SHINE language:

  • Hard skills remain critical

  • Soft skills become essential

  • Industry knowledge helps prioritise risks

  • Network helps influence behaviours

  • Experience helps detect patterns before others see them

From targets of automation to defenders using automation

To close this three-part series, I would summarise it this way.

AI enablement is not about giving everyone access to AI tools. It is about redesigning how the organisation works:

  • how people and teams are augmented

  • how software engineering is transformed

  • how cybersecurity becomes AI-augmented

The companies that will succeed will not be those that move the fastest without control. They will be those able to move fast and build the right guardrails. Because in the AI age, cybersecurity is not only protection. It is an enabler of trust.

And without trust, AI transformation will not scale.

🙏 I’d Love to Hear From You

How do you see cybersecurity evolving in your organisation, and how do you think it should evolve?

Reply to this email; I read every note.

Follow me on LinkedIn for more reflections and "behind-the-scenes" thinking between newsletters. Don't hesitate to comment or reshare; it's one of the best ways to grow your SHINE 🌟. If you want to know more about how I can support you or your teams to thrive in a tech career in this AI age, have a look at my offerings here.

P.S. Referral Pilot 🚀

Forward this email to one engineer or tech friend who could also benefit from this newsletter: sharing is caring - a little gesture can go a long way to strengthen bonds.

✨ May the SHINE be with you!

From Geek to Star by Khang | The Way Forward
