by Mike Ichiriu

Artificial intelligence (AI) exploded onto the scene last summer with the launch of text-to-image models like Stable Diffusion and large language models like GPT-3 from OpenAI. GPT-3 (which has already been succeeded by GPT-4) came into its own in November, when OpenAI released a more natural, chat-style interface for it called, you guessed it, ChatGPT. ChatGPT made AI mainstream, capturing the imagination of the public and business executives alike with its ability to quickly parse and respond to just about any question asked of it.

This development did not go unnoticed by businesses. Overworked employees started unofficially augmenting their work using the public version of ChatGPT - one eager executive even uploaded his company’s 2023 strategy document into ChatGPT, hoping to offload an important PowerPoint deck to AI. The potential benefits of this technology are tantalizing - but the clear risks of using the public version of ChatGPT have led companies to explore using private AI instances.


A privately hosted AI instance has many benefits. First of all, companies don't need to worry about the risk of leaking sensitive information through queries to a third-party AI. All of the learning and results can be kept within the walls of the enterprise.

Secondly, a private AI instance can be trained with documents and material that are specific to the company’s business - plans, financial results and statements, design information, etc. This allows the AI to generate results that are more specific and actionable, based on having knowledge of the specific company and its historical performance. This is some of the most sensitive data and intellectual property that the company has - all collected in one place - and must be protected with the utmost care.

Finally, it is much easier to control access to an on-premises private AI instance than to an instance running in a multi-tenant environment behind some public API gateway. Given the sensitivity of the information contained in the models, as well as the investment that goes into setting up, tuning, and operating the system, this is not a toy to be left open for anyone in the company to play with. An insider threat to this type of asset can do significant damage.

Securing a private AI instance in an on-premises environment can be tricky due to conflicting requirements. First, the AI nodes need to be secured and segmented away from the shared corporate network. However, the AI instance needs continuous access to documents and material to support training. Those documents may live on far-flung repositories and file servers, and are often continuously generated by users or servers as the normal output of business and operations processes. Making the network infrastructure changes to support such connectivity and security can require a great deal of tedious, custom infrastructure design effort.

Additionally, access to the training environment needs to be restricted to authorized users, whether they are working remotely or on-premises. It can be difficult to associate user identities with IP packets on the corporate network, and different user roles may require different access levels - for example, test users may only need access to a web interface, while the developers responsible for the instance will need to be able to log into any node. Protecting such an environment against data leaks from insiders can be a challenging task.
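
To make those differing access levels concrete, here is a minimal, purely hypothetical sketch of a role-to-interface mapping; the role names, interface names, and logic are illustrative placeholders, not a description of any particular product:

```python
# Hypothetical sketch of role-based access levels for a private AI instance.
# The role names, interface names, and mapping below are illustrative only.
ROLE_INTERFACES = {
    "test-user": {"web-ui"},         # test users only need the web front end
    "developer": {"web-ui", "ssh"},  # developers may log into any node in the instance
}


def authorized_interfaces(role: str) -> set[str]:
    """Return the interfaces a user with this role may reach; unknown roles get nothing."""
    return ROLE_INTERFACES.get(role, set())


print(authorized_interfaces("developer"))   # web-ui and ssh
print(authorized_interfaces("test-user"))   # web-ui only
print(authorized_interfaces("contractor"))  # empty set - denied by default
```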

Zentera solves these challenges using a Software-Defined Network Security (SDNS) technology powered by CoIP® Platform. Using a NIST SP800-207 Zero Trust Architecture to segment and protect the nodes of the AI instance, CoIP Platform enables access to or from the outside based on policy, blocking everything else by default. With software-defined technology, servers and file systems with training data can be securely and flexibly connected to the AI chamber to support training without requiring any re-engineering or customization of the physical network infrastructure. All user access to the AI chamber is authenticated and controlled, with each user authorized to access only specific interfaces based on user attributes including roles, location, and client software.
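
As a rough sketch of what default-deny, policy-driven connectivity looks like in principle (the rule structure, labels, and evaluation logic here are simplified placeholders, not CoIP Platform's actual policy model), every flow into the AI chamber must match an explicit allow rule:

```python
# Hypothetical sketch of default-deny connectivity into an "AI chamber".
# The rule fields, node labels, and evaluation logic are simplified placeholders;
# they do not describe CoIP Platform's actual policy model.
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowRule:
    source: str       # label of the node initiating the flow
    destination: str  # label of the protected AI node
    port: int         # service port the flow may use


# Explicit allow list: only the flows needed to feed training data
# into the chamber are permitted.
ALLOWED_FLOWS = {
    FlowRule("finance-file-server", "ai-training-node", 445),  # document share
    FlowRule("design-doc-repo",     "ai-training-node", 443),  # HTTPS pulls
}


def is_flow_permitted(source: str, destination: str, port: int) -> bool:
    """Default deny: a flow is dropped unless it matches an explicit allow rule."""
    return FlowRule(source, destination, port) in ALLOWED_FLOWS


# Training data sources reach the chamber; everything else, including lateral
# traffic from the shared corporate network, is blocked by default.
print(is_flow_permitted("design-doc-repo", "ai-training-node", 443))  # True
print(is_flow_permitted("guest-laptop",    "ai-training-node", 22))   # False
```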

The combination of these techniques provides bulletproof protection for the AI nodes and the sensitive data they contain. Essentially, the AI nodes are rendered invisible to anyone but authorized users, and only for authorized purposes and operations. And most importantly, software-defined network security can be implemented in minutes to support dynamic business needs. This powerful approach is being validated and used by customers such as Delta Electronics to secure internal AI development projects against ransomware and data leaks.

If you’re serious about applying AI to unlock the power of your business data, it’s worth taking the time to understand best practices for securing it. Reach out to our team to learn more about Zentera CoIP Platform, or to speak to our architects about how to create a NIST SP800-207 Zero Trust Architecture to secure your AI project.

How to Implement NIST SP800-207 Zero Trust with Zentera >