Preparing your Organisation for AI Cyber Warfare


Allow me to sound a little paranoid for a moment, and remember, these are my thoughts intended to spark a conversation.

I was reminded this week of Thomas Rid’s book, The Rise of the Machines. An excellent read on the origins of cybernetics.

Thomas tells the story of Vannevar Bush, vice president of MIT and dean of the School of Engineering, who in 1939 became concerned about the “antiaircraft problem”.

Bush saw that aircraft would grow bigger, faster, and capable of flying at higher altitudes. This evolution, he understood, made it difficult, if not impossible, to bring down the machines with run-of-the-mill gunnery.

To shoot down the new bombers, ground-based artillery needed complex ballistic calculations completed faster and more accurately than humans could perform or even read off a recalculated range table. Machines needed to be invented for the task. And soon those “mechanical brains” would start to “think”. The rise of the machines had begun.

Bush’s concerns were valid. The Battle of Britain began as German bombers attacked London in the summer of 1940, and cybernetics played a pivotal role in both sides’ offensive and defensive capabilities.

Fast forward 80-odd years to today, and mankind appears to be at another pivotal moment. AI has become prevalent and is on the verge of great power.

I run several personal labs and projects. It satisfies my technical itch and allows me to stay current. Like any human, I’ve misdesigned things, neglected to patch, and misconfigured my servers. When I’ve made the more critical blunders, the typical time I’ve seen it take for a server to become compromised in 2023 is between 20 minutes and 2 hours.

It’s not human hackers compromising the server, but instead, an autonomous crypto-mining operation which scans the internet for common weaknesses and deploys malware to harvest your electricity.

I’ve noticed similar exploitation and timelines in the corporate world. The primary difference is scale. Organisations have more moving parts. More services and more employees with various technologies, skills and experience.

Mistakes happen. When they occur, do you file a Jira ticket for an employee to pick up on Monday morning? Or do you automatically repair your organisation?

AI adversaries will accelerate the exploitation timeline. Your ability to respond must match or beat those new timelines.

My theory is:

Today’s organisations are not able to respond promptly or proportionately to an AI adversary. The speed and scale at which AI attacks will be able to deploy resources and attack techniques will overwhelm traditional technology operating models.

The threat, unfortunately, is real. We’ve seen how ransomware groups are well-funded, innovative and ruthless in their approach to revenue generation. Their adoption of AI tooling is inevitable.

My proposed solution:

An Adversarial-AI requires a Counter-AI response. (Yes, I know this is obvious)

However, the technology estates of organisations which can support a Counter-AI response will need to behave very differently.

Organisations need to start thinking about their technology estate as an organism nurtured to achieve an outcome rather than a piece of machinery directed to perform a task.

Future technology estates need to have three fundamental characteristics:

  • The environment will have a Described-Purpose
  • The environment will be Self-Aware
  • The environment will be Self-Healing

The implementation:

The technology needed to achieve this does not exist today. Even if it did, most organisations would struggle to adopt such an approach. There are significant hurdles in both the culture and processes within most operating models. Humans generally want to feel like they have more control.

Again, I’m reminded of a passage from Rise of the Machines: “Machines are about control. Machines give more control to humans: control over their environment, control over their own lives, and control over others. But gaining control through machines also means delegating it to the machines. Using the tools means trusting the tools!”

Future target operating models will leverage an extreme amount of automation. We must become comfortable with this!

Today we can help prepare our organisations for this transformation by challenging some potential culture and process pushbacks: introducing the ‘Safe Autonomous Automation’ concept and having it become a trusted part of how we work.

An example: Imagine some digital ‘Roomba Robots’. A little army of them, each with objectives centred around cleaning our environment: stale files, old permissions, unowned servers… Each robot autonomously wanders the environment and highlights areas for cleaning. Each week, I would present the findings to a crowd of decision-makers and request permission to implement the Roomba Robots’ suggestions. Each week, our environment would become cleaner, quicker, and cheaper to run, all without causing an incident. It would only be a matter of time before that board of decision-makers got frustrated with me asking for permission each week and asked the robots to “do their thing”. At that point, we would have earned the organisation’s trust and banked a few points for our autonomous future. A simple test from a technological perspective, but we’d leverage it to introduce cultural and process change.
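To make the idea concrete, here is a minimal sketch of one such digital Roomba in Python. The stale-file criterion (a year without modification), the function names, and the report-then-approve flow are all my own illustrative assumptions, not a description of any real product:

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 365  # assumption: a year untouched means "stale"

def find_stale_files(root: str, stale_after_days: int = STALE_AFTER_DAYS):
    """Wander the filesystem and flag files not modified within the window."""
    cutoff = time.time() - stale_after_days * 86400
    return [str(p) for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

def present_findings(findings, auto_approve=False):
    """Report first; only act once a human (or earned trust) approves."""
    print(f"Roomba report: {len(findings)} stale file(s) found.")
    if not auto_approve:
        return []       # leave the decision with the weekly review board
    return findings     # with trust banked, the robot may act on its own
```

The key design choice is that the robot defaults to reporting rather than acting; `auto_approve` is the trust the board eventually grants.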

Automation will be key, but it’s only one critical element in this complex transformation. If I were sketching out a roadmap on a napkin to move organisations in the right direction pragmatically, it would look like this…

Roadmap to Becoming a Self-Aware, Self-Healing Organisation.

Today

  • Do not f*&k up today by focusing all your energy on tomorrow. Silver bullets don’t exist, so don’t neglect Cyber Security fundamentals.
  • Organisations with the lowest-hanging fruit will be the first to be affected by early AI adversarial techniques. Organisations should apply secure-by-design methodologies, attack surface reduction, layered defence models, environment hardening, patching, user education, cyber defence and playbooks. These are your bread and butter and should never be neglected.
  • I’m still a big believer that truly exceptional Cyber Security comes from being consistent at the basics. If you can’t be trusted to do the basics, the board will never trust your robots. It’s not sexy, but it’s true!

Step 1 — Increase visibility and trust.

  • Develop an accurate inventory of your organisation. Yes, I know you’ve probably heard this before, but seriously, do it!
  • Describe your organisation. Set against your inventory, how should systems, domains, hosts, and users be able to interact with one another? What is important within your organisation? What are the trust boundaries? What is your expected exposure, and to whom?
  • Record and understand your organisation’s behaviour.
  • Create a program to build credibility for autonomous automation within your organisation.
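The “describe your organisation” step above could start as something as simple as a machine-readable description kept alongside the inventory. The schema below is a hypothetical sketch; the domain names, fields, and the `is_interaction_allowed` helper are illustrative assumptions, not a standard:

```python
# A minimal, hypothetical machine-readable "description" of an estate.
ESTATE_DESCRIPTION = {
    "domains": {
        "corp": {"criticality": "high", "may_talk_to": ["dmz"]},
        "dmz":  {"criticality": "medium", "may_talk_to": ["internet"]},
    },
    # assumption: only the DMZ is expected to face the internet
    "expected_exposure": {"internet": ["dmz"]},
}

def is_interaction_allowed(src: str, dst: str,
                           description=ESTATE_DESCRIPTION) -> bool:
    """Check an observed interaction against the described trust boundaries."""
    domain = description["domains"].get(src)
    return domain is not None and dst in domain["may_talk_to"]
```

Once behaviour is described this way, observed traffic can be diffed against expectation rather than judged by intuition.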

Step 2 — Challenge your assumptions, master the basics and don’t forget about your people.

  • Develop an autonomous way of running unit tests against the assumptions made within your “Describe your organisation”. Have the environment consistently challenge itself and flag where it does not meet expectations. This could be because of design flaws, misconfiguration, vulnerabilities or excessive permissions.
  • Master patching, configuration management and entitlements. Microsoft recently published a paper reiterating that 92% of breaches in 2022 were rooted in failed basics.
  • When mastering the basics, use a descriptive approach, not a directive approach. Describe what criteria a system must meet before connecting to the network, and place it in purgatory until that bar has been met. Avoid a directive approach where hosts are requested to apply patches within a time window.
  • Generate alerts for abnormal behaviours within your organisation. Fine-tune these to flag issues which require a response.
  • Understand that social engineering will take on new approaches with the accessibility of deep fakes. User awareness training must adapt based on what’s happening in the threat landscape.
  • Remember, in a resource-constrained world, we should make decisions based on actionable intelligence first, then educated guesses. Everything else should be left to the world of fiction.
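The “unit tests against your description” and the descriptive admission gate above could be sketched roughly as follows. The criteria names and thresholds are invented for illustration:

```python
# Assumed admission criteria; a real estate would describe many more.
ADMISSION_CRITERIA = {
    "patched_within_days": 30,
    "disk_encrypted": True,
}

def check_host(host: dict, criteria=ADMISSION_CRITERIA):
    """Return the list of failed criteria; an empty list means admission."""
    failures = []
    if host.get("days_since_patch", 10**6) > criteria["patched_within_days"]:
        failures.append("patching")
    if host.get("disk_encrypted") is not criteria["disk_encrypted"]:
        failures.append("encryption")
    return failures

def admit_or_quarantine(host: dict) -> str:
    """Descriptive approach: the host joins only once it meets the bar."""
    return "network" if not check_host(host) else "purgatory"
```

Run against every host on a schedule, these checks become the environment consistently challenging itself and flagging where it falls short of its own description.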

Step 3 — Develop a community response, increasing automation, automation & automation.

  • Collaboration and information sharing. As cyber threats become more advanced and AI-driven, collaboration and information sharing among organisations, governments, and industry partners will be critical to staying ahead. This trend will likely continue to grow, with increased efforts to establish platforms, alliances, and standards for sharing threat intelligence and best practices.
  • Identify what alerts could have been automatically identified and contained with the assistance of AI.
  • Increase your automation beyond patching and host configuration basics to trust boundaries and domain communication models. Have the environment identify weaknesses, suggest remediations and test services against the proposed change to ensure no disruptions.
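The identify–suggest–test–apply loop in the last bullet could be sketched as a small pipeline. The function names are my assumptions; in practice `suggest`, `test`, and `apply` would be backed by real scanning, staging, and deployment tooling:

```python
def remediate(weakness: str, suggest, test, apply) -> str:
    """Identify -> suggest -> test -> apply, in that order.

    `suggest` maps a weakness to a proposed change, `test` verifies the
    change against the affected services, `apply` enacts it. A change is
    only applied if the test passes, so a bad proposal causes no disruption.
    """
    proposal = suggest(weakness)
    if not test(proposal):
        return "rejected"   # failing pre-checks: no change, no disruption
    apply(proposal)
    return "applied"
```

The point of the shape is that the test gate sits between suggestion and application, which is what lets the automation run without human sign-off on every change.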

Step 4 — Something smart that I’ve not thought of yet

  • I know I’ve missed something, and a “5-step plan” sounds better than a 4-step one.

Step 5 — A Self-Aware, Self-Healing technology environment to support your organisation.

  • A Self-Aware environment understands how it’s expected to operate: its purpose, behaviours, performance, criticalities, and more are all described.
  • The environment will consistently iterate to meet the described purpose per the SLAs.
  • A central orchestration brain will have an army of AI Roombas directed to complete bespoke tasks while collaborating together.
  • The environment will learn to self-identify failures, malicious behaviour and attacks against systems. It will have the knowledge required to realign to the description, repair itself and adapt automatically, while also developing a memory of the event so it can respond quickly if it occurs again in the future.
  • Knowledge of an evolving threat and how to protect the environment would be automatically shared between collaborating parties such as industry forums and critical supply chains. Each environment would automatically apply remediations to protect both the technology and users within the organisation.
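A rough sketch of the central orchestration brain described above: it answers repeat events from memory, learns from first-time ones, and shares learned remediations with collaborating parties. Everything here, from the class name to the event signatures, is a hypothetical illustration, not a real system:

```python
class Orchestrator:
    """Hypothetical central 'brain' directing task-specific agents."""

    def __init__(self):
        self.memory = {}  # event signature -> known remediation

    def handle_event(self, signature: str, remediate):
        """Realign fast on repeat events; learn from first-time ones."""
        if signature in self.memory:
            return self.memory[signature]  # respond instantly from memory
        fix = remediate(signature)         # first sighting: work it out
        self.memory[signature] = fix       # remember for next time
        return fix

    def share_with(self, other: "Orchestrator"):
        """Share learned remediations with a collaborating party."""
        other.memory.update(self.memory)
```

An industry forum or supply chain partner is then just another `Orchestrator` receiving `share_with`, so a threat learned once is defended against everywhere.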

Simples.



