GuardML

What this site is for

GuardML covers defensive AI engineering. Guardrails, content filters, model defenses, and shipping AI features without shipping liability.

By Editorial

GuardML exists for the engineers shipping LLM features who got handed a “make it safe” requirement with no playbook.

What we publish:

Guardrails that actually hold. Input filtering, output filtering, structured-output enforcement, refusal training, classifier-on-output patterns. What works in production, what breaks under adversarial pressure, what regresses silently when you upgrade the model.
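The classifier-on-output pattern mentioned above can be sketched in a few lines: every model response passes through a checker before it reaches the user, and the system fails closed when the check flags it. This is a minimal illustration with made-up names; the regex "classifier" stands in for a real moderation model.

```python
import re

# Toy stand-in for an output classifier. In production this would be a
# trained moderation model, not a pattern list.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bignore (all )?previous instructions\b"),
    re.compile(r"(?i)\bsystem prompt\b"),
]

def classify_output(text: str) -> bool:
    """Return True if the model output looks unsafe to release."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_reply(model_output: str, fallback: str = "[response withheld]") -> str:
    # Fail closed: if the classifier flags the output, serve the fallback
    # instead of the raw model text.
    return fallback if classify_output(model_output) else model_output

print(guarded_reply("Here is your summary."))    # passes through unchanged
print(guarded_reply("My system prompt says..."))  # withheld
```

The fail-closed default is the important design choice: a classifier outage or a flagged response degrades to a refusal, never to unfiltered output.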

Content moderation pipelines. Multi-stage filtering, prompt-classifier ensembles, the Llama Guard / NeMo Guardrails / OpenAI moderation API tradeoffs, building your own classifiers for domain-specific abuse patterns.
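The multi-stage shape usually looks like this: cheap checks run first, expensive classifiers last, and the pipeline short-circuits on the first block so costly stages never see already-rejected traffic. A minimal sketch with hypothetical stub stages (a real deployment would slot in Llama Guard, a vendor moderation API, or a custom classifier as later stages):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    stage: str
    reason: str = ""

def length_stage(text: str) -> Verdict:
    # Cheap structural check: oversized inputs often signal prompt stuffing.
    too_long = len(text) >= 8000
    return Verdict(not too_long, "length", "input too long" if too_long else "")

def keyword_stage(text: str) -> Verdict:
    # Toy domain blocklist; a real stage would be a trained classifier.
    banned = {"credit card dump", "bomb recipe"}
    hit = next((k for k in banned if k in text.lower()), None)
    return Verdict(hit is None, "keyword", hit or "")

def moderate(text: str, stages: list[Callable[[str], Verdict]]) -> Verdict:
    for stage in stages:
        v = stage(text)
        if not v.allowed:
            return v  # short-circuit: later (costlier) stages never run
    return Verdict(True, "all")

pipeline = [length_stage, keyword_stage]
print(moderate("summarize this article", pipeline).allowed)  # True
```

Ordering stages by cost rather than severity is deliberate: the ensemble's latency budget is spent only on traffic that survives the cheap filters.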

Defenses against the attacks AI Sec writes up. When AI Sec publishes a new prompt injection technique or jailbreak, we publish the corresponding defensive pattern. The two sites pair intentionally.

Safety/utility tradeoffs. Refusal rate vs. helpfulness. False-positive cost vs. liability. Where the line goes when you can't have both. Honest about the tradeoffs, not pretending they don't exist.
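One way to make the false-positive vs. liability tradeoff concrete is to pick the blocking threshold that minimizes expected cost over observed classifier scores. The costs and score distributions below are made-up numbers purely for illustration; the point is that the threshold falls out of the cost ratio, not out of taste.

```python
# Hypothetical cost model: a blocked legitimate user is cheap, an unsafe
# output that ships is expensive. Tune these to your actual exposure.
COST_FALSE_POSITIVE = 1.0
COST_FALSE_NEGATIVE = 50.0

def expected_cost(threshold, benign_scores, abusive_scores):
    fp = sum(s >= threshold for s in benign_scores)   # safe traffic blocked
    fn = sum(s < threshold for s in abusive_scores)   # abuse let through
    return fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE

# Invented classifier scores on labeled safe and known-bad traffic.
benign = [0.05, 0.1, 0.2, 0.3, 0.6]
abusive = [0.4, 0.7, 0.8, 0.95]

# Sweep thresholds in 0.05 steps and keep the cheapest.
best = min((expected_cost(t / 100, benign, abusive), t / 100)
           for t in range(0, 101, 5))
print(best)
```

With a 50:1 cost ratio the sweep lands on a low threshold that tolerates some blocked benign traffic; flip the ratio and it drifts the other way, which is the tradeoff in one number.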

What we don’t publish:

  • “AI safety is everyone’s responsibility” thinkpieces
  • Vendor announcements as news
  • Anything that pretends defense is solved

Pseudonymous bylines. Tips, corrections, and “this guardrail bypass works on prod” reports go to the editor.

Real content starts shortly.

Subscribe

Defensive AI: guardrails, content filters, model defenses, safe deployment, delivered when there's something worth your inbox.

No spam. Unsubscribe anytime.