
Hugging Face + Wardstone

Protect Any Hub Model

Secure any model from the Hugging Face Hub. Wardstone Guard protects Inference Endpoints and self-hosted models with consistent security policies across thousands of model variants.

13 Supported Models · all protected with sub-30ms latency

Why Secure Hugging Face?

Untrusted Model Weights

High Risk

Community-uploaded models may contain backdoors or be intentionally unsafe.

Inconsistent Safety

High Risk

Different Hub models have wildly different safety training levels.

Pickle Deserialization

Medium Risk

Model files can contain arbitrary code that executes on load.
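Pickle-based checkpoints (.bin, .pt, .pkl) can run arbitrary code during deserialization, while safetensors files cannot. A minimal pre-load check, as a sketch in Python (the suffix list and helper names here are illustrative, not part of any Wardstone or Hugging Face API):

```python
# Sketch: flag Hub weight files that rely on pickle deserialization.
# The suffix allowlist below is an assumption for illustration.

# Formats that are parsed without executing code on load.
SAFE_WEIGHT_SUFFIXES = (".safetensors", ".gguf")

def is_pickle_free(filename: str) -> bool:
    """True if the weight file uses a format that cannot run code on load."""
    return filename.lower().endswith(SAFE_WEIGHT_SUFFIXES)

def vet_repo_files(filenames: list[str]) -> list[str]:
    """Return the files (e.g. .bin / .pt / .pkl) that deserialize via pickle."""
    return [f for f in filenames if not is_pickle_free(f)]

# Example: a repo shipping both formats — only the pickle file is flagged.
risky = vet_repo_files(["model.safetensors", "pytorch_model.bin"])
# risky == ["pytorch_model.bin"]
```

In practice, preferring safetensors at download time avoids the issue entirely rather than trying to sandbox pickle loads.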

Security Considerations

  • 1

    Hub models have varying levels of safety training

  • 2

    Community-uploaded models may contain backdoors

  • 3

    Inference Endpoints don't include safety filtering by default

  • 4

    Wardstone provides consistent security across heterogeneous models

How to Integrate

  1. Add Wardstone to your HF pipeline

    Install Wardstone SDK in your Hugging Face inference code.

  2. Validate before inference

    Screen all inputs before sending to Inference Endpoints or local models.

  3. Screen model outputs

    Validate responses from any Hub model for harmful content.

  4. Implement consistent policies

    Apply uniform security policies across different model types.
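The four steps above can be sketched end-to-end in Python. The detection endpoint and the `prompt_attack.detected` response field follow the curl example later on this page; `guarded_inference`, the `call_model` hook, and the injectable `screen` parameter are illustrative names, not part of any official SDK:

```python
# Sketch of the integration steps: validate input, run inference, screen
# output, all behind one policy that works for any Hub model.
import json
import urllib.request

WARDSTONE_URL = "https://api.wardstone.ai/v1/detect"  # from the curl example

def wardstone_screen(text: str, api_key: str) -> bool:
    """Steps 2 and 3: ask Wardstone whether `text` is flagged."""
    req = urllib.request.Request(
        WARDSTONE_URL,
        data=json.dumps({"text": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["prompt_attack"]["detected"]

def guarded_inference(user_text, call_model, screen):
    """Step 4: the same policy wraps any Inference Endpoint or local model.

    `call_model` is any callable that runs the model; `screen` is a
    text -> bool detector (e.g. a lambda closing over wardstone_screen).
    """
    if screen(user_text):          # validate before inference
        return "Request blocked by policy."
    output = call_model(user_text)
    if screen(output):             # screen model outputs
        return "Response blocked by policy."
    return output
```

Because the model call is just a callable, the identical policy covers a hosted Llama endpoint, a local Qwen checkpoint, or a community fine-tune.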

Pricing Note

Hugging Face pricing varies by deployment type. Wardstone provides uniform security pricing across all HF models.

Secure Hugging Face with Wardstone

# Step 1: Check user input with Wardstone
curl -X POST "https://api.wardstone.ai/v1/detect" \
-H "Authorization: Bearer YOUR_WARDSTONE_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "User message here"}'
 
# Response: { "prompt_attack": { "detected": false, ... } }
 
# Step 2: If safe, send to HuggingFace Inference API
curl -X POST "https://api-inference.huggingface.co/models/meta-llama/Llama-4-Scout-109B-Instruct/v1/chat/completions" \
-H "Authorization: Bearer YOUR_HF_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "meta-llama/Llama-4-Scout-109B-Instruct",
"messages": [{"role": "user", "content": "User message here"}],
"max_tokens": 500
}'
 
# Step 3: Check HuggingFace response with Wardstone before returning to user
curl -X POST "https://api.wardstone.ai/v1/detect" \
-H "Authorization: Bearer YOUR_WARDSTONE_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "Model response text here"}'

Common Use Cases

Model experimentation and prototyping
Production inference endpoints
Fine-tuned model deployment
Community model access
ML research applications

All Supported Hugging Face Models

Wardstone Guard protects all Hugging Face models with the same comprehensive security coverage. Whether you're using the latest releases or legacy models still in production, every API call is protected.

Inference Endpoints
Spaces
Llama 4 (all variants)
Mistral (all variants)
Qwen2.5 (all sizes)
Phi-4
DeepSeek (all variants)
Gemma 2 (all sizes)
StarCoder2
FLUX.1
Stable Diffusion 3.5
Whisper Large v3
Custom fine-tuned models

Ready to secure your Hugging Face application?

Try Wardstone Guard in the playground to see detection in action.