blog

Ackuity's Role in GenAI Security

Written by Rajat Mohanty | August 27, 2024

Where Ackuity Fits Into GenAI Security

GenAI security is complex and evolving, and so are the solutions that address it. While some GenAI security solutions position themselves as “end-to-end”, none meet that definition. Here is a simple framework to better understand the GenAI security scenarios you will likely need to address and how to do so.

The solutions covered here fall into three scenarios, and six categories.
  • Prompt level control
  • Training data control
  • RAG & Agent control
  • Application security
  • Response testing
  • GRC

Scenario 1

Using foundational models (public or private) without any grounding.

These solutions are sometimes called prompt filtering, prompt firewalls, or prompt guardrails. They either sit between the user and the GenAI application, or between the application and its foundational models. There, they filter content either in prompts (input) or in responses (output). They work on words and sentences as all communication is in natural language.

These solutions can block prompts, remove sensitive or private data, or tokenize these data. Some can also meter prompts and route queries to either a specific LLM or a specific pool of LLM, and log prompts for compliance. They can also be deployed as a proxy, as an SDK integration into applications, or as a plugin within user browsers.

The primary issue with these solutions — they can be bypassed by cleverly crafted prompts or through prompt injections.

Scenario 2

Using foundation models with training or fine tuning.

Enterprises can train or finetune LLMs with their own data to improve accuracy or results for their specific business context. These solutions monitor and filter the data sent to LLMs to train them. They can apply privacy techniques like redacting or tokenizing sensitive data. They can also check for data bias and data poisoning and generate synthetic data and drive federated learning.

However, you will still need a prompt filtering solution from scenario 1 when using one of these solutions.

Scenario 3

Grounding with RAG and adding functionalities with agents.

Enterprises are building RAG and agent pipelines to ground their GenAI deployments and add automated functions. In this scenario, Ackuity type solutions are needed. These solutions control the interactions and data retrievals between these GenAI deployments and enterprise systems. They enforce access rights and permissions, filter inputs and outputs for threats, compliance, and business rules, and log all activities for forensics and monitoring.

You will still need a training and filtering solution from scenario 2, but prompt filtering solutions from scenario 1 may no longer be needed (though are still useful to build defense in depth).

Solutions that span all scenarios

Three GenAI security solutions are needed within all scenarios.

1. Application security scanning

GenAI applications need to be scanned for vulnerabilities. These include standard vulnerabilities, and threats that target GenAI specifically, including prompt injections, jailbreaks, training data leakage, and model manipulation. These solutions include both SAST and DAST tools for AI models, applications, and data flows.

2. GenAI response testing

These solutions (sometimes a red teaming service) test for various privacy issues, biases, hallucinations, and prompt injections by crafting a wide range of prompts and evaluating their responses. The evaluation can use both LLMs and humans.

3. GRC for GenAI

GenAI applications must comply with various privacy standards, regulations, and best practices — including both generic frameworks and AI-specific frameworks. These solutions provide ready repositories, templates, risk scoring, and automation workflows.