
APIs: The Key to Generative AI Success and Security


No one builds a house without first laying a foundation. At least, no one who expects that house to stand for very long. When enterprises decide to invest—long-term—in some capability, they approach that capability the same way: they build a foundation. Or at least they should. Increasingly, application programming interfaces (APIs) are playing an important role in that foundation.

Why? It’s clear that enterprises are planning to invest in generative AI—and AI in general, for that matter—for the long term. Most aren't rushing to grab a piece of the AI pie just yet; they're approaching it with the eyes of an investor: as a strategic decision that requires a foundation, one that may or may not yet be laid.

Given that, it’s becoming clear that any such foundation must support two things: a hybrid approach to deployment and API security.

Our research shows both LLMs and the apps built to take advantage of them—whether advisors, agents, or assistants—will be deployed both on-premises and in the public cloud. While there appears to be a preference for the public cloud, both locations are likely destinations for all the components that make up what is emerging as "AI Apps."

That's problematic for many companies because they already struggle with the complexity of managing just "regular" apps. With too many tools and APIs spread across multiple public clouds and on-premises environments, leaders understandably look at their hybrid estate and wonder how they’re going to add AI to that mix, because the consensus is that APIs are going to be an integral part of how AI Apps are built.

That isn't just consensus; the data backs it up. A report from Menlo shows that nearly one-third (31%) of AI adopters are using retrieval-augmented generation (RAG), a technique that leverages APIs to access custom data sources and tools, and a Sequoia Capital survey found that 94% of respondents were using foundation model APIs.
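
To make the RAG pattern concrete, here is a minimal sketch: retrieve relevant snippets from a custom data source through one API, then pass them as context to a foundation model API. The endpoint URLs, model name, and response shapes below are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal RAG sketch: one API call to retrieve context, one to generate.
# The retrieval service, endpoint URLs, and model name are hypothetical.
import requests

SEARCH_URL = "https://internal-search.example.com/query"   # hypothetical retrieval API
LLM_URL = "https://llm.example.com/v1/chat/completions"    # hypothetical model API

def answer_with_rag(question: str, api_key: str) -> str:
    # 1. Retrieve: ask the internal data-source API for relevant documents.
    docs = requests.post(
        SEARCH_URL, json={"q": question, "top_k": 3}, timeout=10
    ).json()["results"]
    context = "\n\n".join(d["text"] for d in docs)

    # 2. Augment + generate: send the retrieved context plus the question
    #    to the foundation model API.
    resp = requests.post(
        LLM_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "example-model",  # placeholder model name
            "messages": [
                {"role": "system", "content": f"Answer using only this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Note that every step in this flow is an API call: one to the data source, one to the model. That is why RAG adoption and API usage rise together.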

The Role of APIs in AI

APIs connect us to LLMs. They’re the way our chat interfaces—whether apps on our phones or in the browser—connect to the complex set of microservices that makes up “AI” on the back end. They’re the way agents are built and the way flows across multiple LLMs are orchestrated.
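
As an illustration of that connection, here is a minimal sketch of a chat front end talking to an LLM back end over an API. It assumes an OpenAI-style chat-completions endpoint; the URL, model name, and environment variable are placeholders rather than any specific product's interface.

```python
# Minimal sketch: a chat client calling an LLM back end over HTTP.
# The endpoint, model name, and API key handling are placeholders.
import os
import requests

API_URL = "https://llm.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = os.environ.get("LLM_API_KEY", "")

def ask_llm(prompt: str) -> str:
    """Send one user prompt to the model API and return its reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_llm("Summarize today's open incidents."))
```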

APIs are a key enabler of AI. Without them, we'd likely continue to see AI mentioned in the background to improve the efficacy of security services but little else. See, APIs make complex computing accessible, and that has unlocked the power of LLMs and our imaginations.

But they need security. Oh, they really do.

Industry research documents an increasing number of organizations experiencing incidents and data breaches via APIs:

  • 74% report at least three API-related data breaches in the past two years (Traceable AI)
  • 31% experienced sensitive data exposure, and 17% suffered a data breach resulting from API security gaps (Salt Security)
  • 78% of businesses reported an API security incident in 2023 (Noname Security)

And that’s before we add AI and all the APIs it relies on and generates to the mix.

It should be no surprise that we asked a lot of questions about AI and APIs in our annual research, and we were not disappointed with the results. Nor were we all that surprised to find that the top security service respondents use or plan to use with AI is…API security.

That security spans development and production with respect to AI; in today's lingua franca, that means training and inference. API security today isn't quite ready to handle both. It needs augmentation to deal with what have traditionally been compliance capabilities, namely data masking and PII scrubbing. While these capabilities are prevalent in development environments, they aren't as pervasive in production, and they typically don't run at the scale they'll need to protect companies and customers from information leaking out into the wild.
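
To illustrate what that augmentation might look like at inference time, here is a minimal sketch of scrubbing PII from a prompt before it leaves the perimeter. The regex patterns and sample prompt are illustrative placeholders; a production API security gateway would rely on far more robust detection and would have to run it at line rate on every request and response.

```python
# Minimal sketch of inference-time PII scrubbing. The patterns below are
# illustrative only; real deployments need more robust detection.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII with typed placeholders before the prompt is sent to the model API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-123-4567 about SSN 123-45-6789."
print(scrub(prompt))
# -> "Email [EMAIL] or call [PHONE] about SSN [SN ...]" style output:
#    "Email [EMAIL] or call [PHONE] about SSN [SSN]."
```

The same scrubbing would need to run on responses as well as requests, which is where the scale problem noted above comes in.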

Still, API security is going to be a foundational security capability that every organization will need if it's going to leverage AI in any capacity, because APIs are how AI models are integrated with applications, from chatbots to agents to autonomous IT systems.

So, if you haven't seriously considered your API security strategy lately, it's time, because you're going to need it.
