Aura AI · Defense Engine

The AI brain behind
ProtectionGrid.

Aura is not a generic LLM with a security wrapper. It's a purpose-built model trained on European attack data, optimized for sub-60-second response, and deployed entirely within EU jurisdiction.

< 60s
Threat-to-action response
6
ProtectionGrid modules orchestrated
100%
EU-jurisdiction hosting
24/7
Continuous training pipeline
The thesis

Not one model.
The right model for the job.

Most AI security products take a generic foundation model and bolt security prompts onto it. That works for chat and basic classification — but it doesn't catch what hasn't been seen before, and it certainly doesn't run inside European jurisdiction.

Aura is built differently. It's a specialized detection model trained from the ground up on European attack patterns — DNS abuse, web application exploitation, AI-generated phishing, credential theft kits, ransomware C2 traffic. It runs on our own GPUs in EU data centres. It never calls out to OpenAI, Anthropic, or Google APIs. The intelligence stays where the data stays.

System architecture

Six surfaces.
One brain.

Every module in ProtectionGrid feeds telemetry into Aura. Cross-module correlation is what surfaces attacks that any single module would miss.

Detection Surface · ProtectionGrid Modules
Edge · Customer-facing
SecureDomain
WebShield
AppShield
MailShield
HybridMail
WebAuth
Telemetry · ~10ms latency
AI Engine
Aura Core

Continuous inference pipeline. Ingests events from all six modules, enriches them with global threat intelligence, scores risk in real time, and pushes mitigation actions back to the relevant module within seconds.

01
Ingest
~10ms
02
Enrich
~50ms
03
Score
~100ms
04
Act
< 60s
Mitigation actions
Foundation · Sovereign Infrastructure
EU-jurisdiction · Own hardware
EU Data Centres
Anycast Network
Post-Quantum TLS
Threat Intel Feeds
Audit & Logs
SIEM Export
From signal to action

How Aura responds in under a minute.

Latency budget is the most important number in defense. Every step here has a measured target.

1
~10ms
Ingest

Every event from every module — every blocked request, login, email, DNS query — streams into Aura via a low-latency event bus. No batching, no waiting.

2
~50ms
Enrich

Context is added: user identity history, device posture, IP reputation, current threat intelligence feeds, related events from sibling modules within the customer's tenant.

3
~100ms
Score

The model evaluates the enriched event against learned attack patterns — producing a risk score, an attack-type classification, and a confidence level.

4
< 60s
Act

High-confidence threats trigger automatic mitigation in the relevant module — block, quarantine, step-up auth, or notify. Low-confidence events route to human review.
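The four stages above can be sketched as a minimal event loop. This is an illustrative sketch only — every name, field, and threshold below is an assumption for the example, not Aura's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    module: str            # originating ProtectionGrid module, e.g. "WebShield"
    payload: dict          # raw telemetry from the module
    context: dict = field(default_factory=dict)

def enrich(event: Event, threat_intel: dict) -> Event:
    """Stage 2 (~50ms budget): attach IP reputation and tenant context."""
    event.context["ip_reputation"] = threat_intel.get(
        event.payload.get("src_ip"), "unknown")
    return event

def score(event: Event) -> tuple[float, str]:
    """Stage 3 (~100ms budget): stand-in for the model's risk scoring."""
    if event.context.get("ip_reputation") == "malicious":
        return 0.97, "credential_theft"
    return 0.05, "benign"

def act(event: Event, risk: float, label: str) -> str:
    """Stage 4: high confidence -> automatic mitigation, low -> human review."""
    if risk >= 0.9:
        return f"block:{event.module}:{label}"
    return "route:human_review"

intel = {"203.0.113.7": "malicious"}
evt = enrich(Event("WebShield", {"src_ip": "203.0.113.7"}), intel)
risk, label = score(evt)
decision = act(evt, risk, label)
print(decision)  # block:WebShield:credential_theft
```

The key property the sketch preserves: low-confidence events never trigger automatic mitigation; they fall through to human review.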

Training methodology

What we trained on. And what we didn't.

The most important question for any security AI is "where does the training data come from?" Here's our answer.

European attack data, by design.

Aura is trained on threat data observed across our European customer base — DNS abuse seen in DACH, phishing campaigns targeting EU financial services, ransomware C2 patterns from regional incident response. Attacks on European infrastructure look different from those aimed at US targets, and the model reflects that.

Critically: customer data is never used as training input without explicit opt-in. Threat patterns are extracted in aggregate, anonymized, and never linked back to individual tenants.

No customer data without opt-in

Continuous, not one-shot.

Generic foundation models are trained once and then locked. Aura runs a continuous training pipeline: new threat data is incorporated daily, new attack patterns trigger model updates within hours, and weekly evaluations measure detection rates against held-out validation sets.

When attackers shift tactics, Aura adapts before the next customer is hit. That's the entire point of running our own model in our own infrastructure.
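One way to picture the weekly evaluation gate: a candidate model replaces the serving model only if it does at least as well on the held-out validation set without breaching the false-positive SLO. The metric names and the promotion logic below are illustrative assumptions, not the actual pipeline; only the 0.5% SLO comes from this page.

```python
# Illustrative promotion gate for a continuous training pipeline.
# Metric names and the decision rule are assumptions; the 0.5%
# false-positive SLO mirrors the figure published on this page.

def should_promote(candidate: dict, current: dict,
                   max_fp_rate: float = 0.005) -> bool:
    """Promote only if detection does not regress and FP stays in SLO."""
    return (candidate["detection_rate"] >= current["detection_rate"]
            and candidate["false_positive_rate"] <= max_fp_rate)

current = {"detection_rate": 0.988, "false_positive_rate": 0.004}
candidate = {"detection_rate": 0.991, "false_positive_rate": 0.003}
print(should_promote(candidate, current))  # True
```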

24/7 training pipeline
Performance

Numbers that actually matter.

We don't publish detection-rate vanity stats. These are the operational metrics CISOs ask about.

Median time-to-action
< 60s

From event ingestion to executed mitigation in the relevant module — across all six surfaces. The P99 latency target is 90 seconds; we typically run well below it.

measured continuously across all customer tenants
False positive rate
< 0.5%

Aggressive blocking with low false positives is what separates production-ready security AI from research demos. Under 0.5% on legitimate traffic is our internal SLO.

measured on customer-validated production traffic
Service availability
99.99%

Aura inference runs in active-active mode across multiple EU regions. If one region degrades, traffic shifts within seconds — no waiting for failover, no operator intervention.

measured 12-month rolling, EU regions only
Hosting & data sovereignty

Where Aura actually runs.

Aura is not a hosted SaaS layer over OpenAI or Anthropic. We run our own inference infrastructure on dedicated GPUs in colocation facilities across Germany, Ireland, Portugal, and Cyprus. No data ever leaves European jurisdiction.

For customers with strict sovereignty requirements (financial services, public sector, KRITIS), single-tenant deployments are available — your traffic, your model instance, your dedicated hardware.

  • No US infrastructure dependencies. Not AWS, not Azure, not GCP. Not for inference, not for training, not for storage.
  • Only EU personnel have access to production systems. Background-checked, employed under European law, GDPR-compliant.
  • Audit trails for every inference. Every action Aura takes is logged with input, output, confidence, and reasoning artifacts.
  • SIEM-ready exports in standard formats (CEF, LEEF, raw JSON) for customers' own compliance and detection-engineering pipelines.
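To make the export formats concrete, here is a sketch of one detection event rendered as raw JSON and as a CEF line. The event fields are hypothetical; the CEF header follows the standard layout CEF:Version|Vendor|Product|Version|EventClassID|Name|Severity|Extension.

```python
import json

# Hypothetical Aura detection event; field names are illustrative,
# not a documented schema.
event = {
    "id": "evt-1234",
    "module": "MailShield",
    "classification": "ai_generated_phishing",
    "risk": 0.97,
    "action": "quarantine",
    "src_ip": "203.0.113.7",
}

def to_cef(e: dict) -> str:
    severity = round(e["risk"] * 10)          # CEF severity scale is 0-10
    ext = f"src={e['src_ip']} act={e['action']} cs1={e['module']}"
    return (f"CEF:0|ExampleVendor|Aura|1.0|{e['classification']}"
            f"|{e['classification'].replace('_', ' ')}|{severity}|{ext}")

print(to_cef(event))
print(json.dumps(event))  # raw JSON export line
```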
Frankfurt, DE
Primary
Dublin, IE
Primary
Lisbon, PT
Active
Nicosia, CY
Active
4 EU regions · active-active inference · low-latency intra-EU routing
For technical evaluators

Questions CISOs ask first.

Is Aura a foundation model or a wrapper around one?
Aura is a purpose-built specialist model — not a wrapper around GPT, Claude, or Gemini. The architecture combines transformer-based event encoding with task-specific classification heads trained on cybersecurity-specific data. We don't depend on third-party AI APIs for inference. That's both a sovereignty decision (no data leaving EU) and a latency decision (sub-60s response isn't possible if every inference call has to round-trip to a US-hosted API).
How do you prevent customer data from leaking into training?
Customer telemetry is processed for inference only — never used for model training without explicit, separately signed opt-in. Threat patterns we observe across customers are extracted in aggregate and anonymized: no IP addresses, no usernames, no domain names that could identify tenants. Customers who opt in to threat-sharing benefit from cross-tenant pattern detection while remaining individually unidentifiable. This is documented in our DPA and audited annually.
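A minimal sketch of the anonymization step described above, assuming a simple block-list of identifying fields. The field names are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: identifying fields are stripped before a threat pattern can
# enter any training corpus. Field names here are hypothetical.

IDENTIFYING = {"src_ip", "username", "domain", "tenant_id"}

def anonymize(pattern: dict) -> dict:
    """Drop every field that could link a pattern back to a tenant."""
    return {k: v for k, v in pattern.items() if k not in IDENTIFYING}

raw = {"tenant_id": "t-42", "src_ip": "203.0.113.7",
       "technique": "dns_tunneling", "payload_entropy": 7.8}
print(anonymize(raw))  # {'technique': 'dns_tunneling', 'payload_entropy': 7.8}
```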
What happens when Aura is wrong? Explainability and override.
Every Aura decision is logged with the input event, the output classification, the confidence score, and the top contributing signals (feature attribution). Administrators can review, override, or whitelist specific patterns through the CyberHub portal. False positives feed back into the training pipeline as labeled examples — improving accuracy over time. Critical mitigation actions can be configured to require human approval for high-stakes systems.
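The logged decision described above might look like the record below. This is an assumed schema for illustration, not Aura's documented log format; every field name is hypothetical.

```python
import json
from datetime import datetime, timezone

# Illustrative shape of a logged decision: event reference, output
# classification, confidence, and top contributing signals.

def decision_record(event_id: str, classification: str,
                    confidence: float, attributions: dict) -> str:
    record = {
        "event_id": event_id,
        "classification": classification,
        "confidence": confidence,
        # top-3 contributing signals by attribution weight
        "top_signals": sorted(attributions.items(),
                              key=lambda kv: kv[1], reverse=True)[:3],
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "override": None,  # filled in if an administrator overrides
    }
    return json.dumps(record)

line = decision_record(
    "evt-1234", "credential_theft", 0.97,
    {"ip_reputation": 0.62, "login_velocity": 0.21,
     "device_posture": 0.09, "geo_anomaly": 0.05})
```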
How do you handle adversarial AI attacks against Aura itself?
AI security is a real concern. We mitigate adversarial attacks through multiple mechanisms: input validation and rate limiting at the ingestion layer, ensemble scoring where high-stakes decisions require agreement from multiple model variants, continuous adversarial training using known evasion techniques, and strict separation between inference serving and any customer-controllable input that could influence training. We also run regular red-team exercises specifically targeting Aura.
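The ensemble-scoring idea — automatic action only on agreement between variants — can be sketched as a simple quorum check. The threshold and quorum values are illustrative assumptions, not production parameters.

```python
# Quorum check over independent model variants: act automatically only
# when enough variants agree; on disagreement, escalate to a human.

def ensemble_decision(scores: list[float],
                      threshold: float = 0.9, quorum: int = 2) -> str:
    votes = sum(s >= threshold for s in scores)
    if votes >= quorum:
        return "auto_mitigate"
    if votes:                      # disagreement: escalate, don't act
        return "human_review"
    return "allow"

print(ensemble_decision([0.95, 0.93, 0.40]))  # auto_mitigate
print(ensemble_decision([0.95, 0.10, 0.20]))  # human_review
```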
Can we deploy Aura in our own infrastructure?
For most customers, our managed multi-tenant deployment is the right fit — same model, continuously updated, fully managed. For customers with strict regulatory or sovereignty requirements, we offer single-tenant deployments on dedicated hardware in our EU regions. For very specific cases (defense, intelligence services), fully on-premise deployments are possible with a separate licensing agreement. Talk to us about your specific requirements.
How does Aura compare to CrowdStrike Falcon, Darktrace, or SentinelOne?
The fair answer: those products are excellent at endpoint and network detection — that's not our scope. Aura is the AI behind ProtectionGrid, which covers a different set of surfaces: domains, web applications, internal apps, email, hybrid mail architectures, and identity. We complement rather than replace endpoint-centric tools. Many of our customers run Aura alongside CrowdStrike or similar EDR. The differentiator is European sovereignty and cross-module correlation across the surfaces we own — not raw detection rate on a single layer.
What about NIS2, DORA and other compliance evidence?
Every Aura inference is logged in audit-ready format and exportable in standard SIEM formats. Detection events, mitigation actions, and human overrides all carry timestamps, input artifacts, and reasoning attributions. This serves directly as evidence for NIS2 (cyber hygiene measures), DORA (third-party risk), ISO 27001 (anomaly detection controls), and SOC 2 (continuous monitoring). Audit reports are exportable per tenant on demand or scheduled monthly to compliance teams.

Want a deeper technical session?

Our engineering team runs technical deep-dives for prospective customers — architecture review, threat modeling against your specific environment, latency walkthroughs, and live Q&A. Typically 60 minutes, signed NDA, no marketing.