SAP Logo LeanIX is now part of SAP

AI Security: The New Frontline In The War On Cyber Crime

Posted by Neil Sheppard on January 22, 2024
AI Security: The New Frontline In The War On Cyber Crime

AI security is vital for safely leveraging the incredible potential of artificial intelligence technology. Let's explore how enterprise architecture can map your IT landscape and show you how to protect your organization.

Artificial intelligence (AI) is an incredible innovation that is already revolutionizing the way we do business. However, the opportunities AI represents also bring threats.

AI empowers and accelerates our work, but it can also level-up the risk cyber criminals pose to your organization. Not only can hackers use AI to improve their attacks, but the improper use of generative AI tools can also open unexpected gaps in your cyber security.

The key to protecting your IT landscape from these new threats is to map the usage of AI across your IT estate so you can see threats coming and put protections in place. The best way to keep track of your digital infrastructure is with an enterprise architecture tool like LeanIX:

Request demo

Let's look more closely at the rise of AI, the dangers it presents, and how LeanIX can help.

The Rise Of The Robots

According to McKinsey, the number of companies utilizing artificial intelligence (AI) tools is 2.5 times higher than it was in 2017, now approaching 60% of businesses. With the dawn of generative AI tools like ChatpGPT, this is set to surge once again.

Despite the buzz, AI is not a new concept. It's simply an expansion of the number and type of tasks that can be entirely entrusted to computers with minimal human oversight.

AI tools began with robotic process automation (RPA) where simple tasks were carried out by a programmed computer process. As these processes became intelligent and started to be trusted to make their own decisions, they have been leveraged to assess huge amounts of data and make relevant decisions instantly in fields such as agriculture, disaster recovery, and re-insurance.

Over the last 12 months, however, generative AI tools have become available that can carry out creative work, such as image and video generation, or writing white papers. Workers across the world are now using tools like ChatGPT to create emails and documentation on a daily basis, even as their organization uses advanced AI tools across their infrastructure.

While some pundits continue to talk of an AI uprising against humanity, the real dangers of AI are far more insidious and subtle. Let's look more closely at the hidden risks that AI could pose for your business.

The Dangers Of AI

Adopting any new technology increases your cyber risk, simply by increasing your attack surface. The more types of software you use, the more routes there are to access your confidential data.

Artificial intelligence (AI) specifically, however, brings new threats to your organization. Initially, of course, this is a brand-new field of technology that we have yet to fully understand, but there are two particular features of AI that make it a greater risk:

1 AI-Powered Human Error

As ChatGPT appeared and offered to create content seemingly by magic, millions of workers rushed to try the new tool out, inputting huge amounts of data for the platform to work with. Entering that information into the ChatGPT large language model (LLM), however, taught it to the ChatGPT algorithm.

This led to a variety of high-profile cases where employees input sensitive data into ChatGPT, which could potentially then output that information to other users. More concerningly, Group IB recently tracked over 100,000 ChatGPT accounts that had compromised credentials available on the dark web.

Yet, even without users inputting data, there is potential for generative AI tools to inadvertently allow access to confidential information. As Robust Intelligence's William Zhang told Wired:

“If people build applications to have the LLM read your emails and take some action based on the contents of those emails—make purchases, summarize content—an attacker may send emails that contain prompt-injection attacks.”

If, for example, you connected ChatGPT to your email and asked it to auto-reply for you, a hacker could potentially send an email with a prompt for ChatGPT to BCC forward every email to them from that point on. Exploiting your use of ChatGPT to attack your organization is one thing, however, but far more worrying is the potential for generative AI to be used as a weapon itself.

2 AI-Powered Cyber Attack

Concerningly, among the range of generative AI tools that have entered the market since ChatGPT first introduced the world to LLMs are a group of platforms designed for cyber criminals. WormGPT, for example, is a copy of ChatGPT without any of the ethical safeguards.

This is just the beginning, however. Combining a selection of illicit AI platforms, cyber criminals can rapidly evaluate your cyber security for weaknesses, generate phishing content, collate discovered information, and even create malicious code based on the discoveries, all without lifting a finger themselves.

Key to success in a cyber attack is getting away with it before your victim can take any preventative action. As such, faster, stronger, AI-powered cyber attacks are a huge concern for IT security professionals worldwide.

LLMOps: Knowledge Is Key

So, what can be done to both protect your usage of artificial intelligence (AI) tools and also defend against cyber criminals using AI against you? To begin with, you must understand how and where AI tools are in use across your organization's IT landscape.

This has given rise to the new discipline of large language model operations (LLMOps). This new business function is responsible for detecting, monitoring, and securing the use of LLM AI tools in your company.

Once you have oversight of all the AI tools in use across your organization, you can ensure that each application and all of the data that flows through your landscape is protected. That's why Google has established its Secure AI Framework (SAIF), setting down six key principles:

  1. Expand strong security foundations to the AI ecosystem
  2. Extend detection and response to bring AI into an organization’s threat universe
  3. Automate defenses to keep pace with existing and new threats
  4. Harmonize platform level controls to ensure consistent security across the organization
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
  6. Contextualize AI system risks in surrounding business processes

This, however, is just the beginning. Since AI is empowering cyber criminals to attack your organization faster than a human could, we must use AI to empower our defenses to react at the same speed.

That's why major technology companies are developing generative AI security solutions to combat threats like WormGPT. These include Google's Duet AI, powered by the Sec-PaLM 2 large language model (LLM).

Building these new security tools into your IT landscape and securing the other AI tools in your application portfolio is vital for LLMOps teams. Both activities require an enterprise architecture management tool that allows you to store detailed information on your applications.

Mapping The AI In Your IT Landscape

The LeanIX platform empowers your enterprise architects and large language model operations (LLMOps) teams to track and monitor the artificial intelligence (AI) tools in use across your application portfolio in real time. Using LeanIX, you can:

  • importing existing AI application data from Excel or through open APIs
  • leverage automated SaaS Discovery to find all the cloud-based AI applications in use across your company
  • identify stakeholders across the business to take ownership of AI applications via subscriptions
  • develop an AI business capability map to ensure your AI tools are generating value

To find out more about how LeanIX can support your LLMOps teams and secure your AI landscape, book a demo:

Request demo

Subscribe to the LeanIX Blog and never miss a post again!

Related Posts