OpenAI
Wardstone

OpenAI + Wardstone

Secure GPT-5 & o3 Applications

Protect your OpenAI GPT-5, o3, and GPT-4.1 applications with Wardstone Guard. Detect prompt injections, jailbreaks, and harmful content before they reach your model.

13 Supported Models· all protected with sub-30ms latency
GPT-5.2GPT-5.2 ProGPT-5.1GPT-5 nanoo3-minio1-pro+7 more

Why Secure OpenAI?

Prompt Injection via Function Calls

High Risk

Attackers can craft inputs that manipulate GPT's function calling to execute unintended operations or leak system prompts.

Jailbreaking through Role-play

High Risk

GPT models are susceptible to DAN-style jailbreaks that bypass content filters through persona adoption.

System Prompt Extraction

Medium Risk

Without protection, attackers can coerce the model into revealing your system prompt and business logic.

Security Considerations

  • 1

    OpenAI's built-in moderation API only covers content categories, not prompt attacks

  • 2

    No native protection against prompt injection or jailbreak attempts

  • 3

    Function calling can be exploited without input validation

  • 4

    Wardstone adds sub-30ms latency vs OpenAI's typical 500ms-2s response time

How to Integrate

  1. Install the Wardstone SDK

    Add the Wardstone client library to your project alongside the OpenAI SDK.

  2. Validate input before OpenAI

    Check user messages with Wardstone Guard before sending to the Completions API.

  3. Validate output before returning

    Screen GPT responses for harmful content, PII leakage, or policy violations.

  4. Handle flagged content

    Implement graceful fallbacks when Wardstone detects threats.

Pricing Note

OpenAI charges per token; Wardstone charges per API call. For a typical 500-token request, Wardstone adds <$0.001 for comprehensive security.

Secure OpenAI with Wardstone

# Step 1: Check user input with Wardstone
curl -X POST "https://api.wardstone.ai/v1/detect" \
-H "Authorization: Bearer YOUR_WARDSTONE_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "User message here"}'
 
# Response: { "prompt_attack": { "detected": false, ... } }
 
# Step 2: If safe, send to OpenAI
curl -X POST "https://api.openai.com/v1/chat/completions" \
-H "Authorization: Bearer YOUR_OPENAI_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.2",
"messages": [{"role": "user", "content": "User message here"}]
}'
 
# Step 3: Check OpenAI response with Wardstone before returning to user

Common Use Cases

Chatbots and virtual assistants
Code generation and review tools
Content creation platforms
Agentic workflows and automation
Customer support automation

All Supported OpenAI Models

Wardstone Guard protects all OpenAI models with the same comprehensive security coverage. Whether you're using the latest releases or legacy models still in production, every API call is protected.

GPT-5.2
GPT-5.2 Pro
GPT-5.1
GPT-5 nano
o3-mini
o1-pro
GPT-4.1
GPT-4.1 mini
GPT-4.1 nano
GPT-4o
GPT-4o mini
GPT-4 Turbo
GPT-3.5 Turbo

Ready to secure your OpenAI application?

Try Wardstone Guard in the playground to see detection in action.