Last reviewed: 8 May 2026. Public version 2.0. Aligned with EU AI Act (Regulation (EU) 2024/1689), NIST AI RMF 1.0, ISO/IEC 42001:2023.
AI Policy
We sell EU AI Act compliance training. The fastest way to lose credibility on that would be to use AI ourselves in ways we tell customers not to. This policy is the public commitment that we don't: every concrete AI use, every limit, and every right you have, in plain English, with every claim verifiable against production code, configuration, or contract.
If anything in here drifts from how we actually behave, that’s a bug. Email hello@agenticfluxus.com and we’ll fix it or update the policy.
1. Self-classification under the EU AI Act
Operator-relevant question: which roles do we play under the Act, and what risk tier do those roles trigger? We classify ourselves explicitly so customers and regulators don’t have to.
Provider · Article 3(3)
- Scope:
- Ask Flux — our public RAG chat assistant
- Risk tier:
- Limited risk (Article 50)
- Obligations:
- Transparency disclosure to users (implemented). Outputs marked as AI-generated where applicable. No GPAI obligations: we do not place a foundation model on the EU market.
Deployer · Article 3(4)
- Scope:
- Internal AI tools used by Agentic Fluxus staff (coding assistants, document drafting, transcription, scheduling)
- Risk tier:
- Minimal risk for everyday operational use
- Obligations:
- Article 4 staff literacy. Internal AI System Register. Approval workflow before any new tool is added.
Not a GPAI provider · Chapter V (Articles 51–55)
- Scope:
- We do not train, develop, or place general-purpose AI models on the EU market.
- Risk tier:
- n/a
- Obligations:
- GPAI obligations under Article 53 do not apply to us.
2. The AI we use, with system cards
Every customer-affecting AI system we run, with model, region, risk tier, transparency, retention, and human-oversight posture. New entries land here before a feature launches; that’s a hard commitment under Section 4.
Ask Flux (public RAG chat)
- Purpose:
- Answer EU AI Act + compliance questions, grounded in the official Act text and our Knowledge Library.
- Model:
- Anthropic Claude Sonnet 4.6 (claude-sonnet-4-6)
- Region:
- EU region (Anthropic Frankfurt)
- Risk tier:
- Limited risk — Article 50 transparency obligation applies
- Transparency:
- Article 50 disclosure on every conversation start + per-message 'Informational, not legal advice' footer. Citations link back to source on every grounded answer.
- Retention:
- Conversation transcripts retained for 90 days for product improvement and incident triage, then automatically purged. Users can opt out and use the platform without any chat history persistence.
- Human oversight:
- No automated decisions about users. Outputs are informational; any action a user takes based on Ask Flux is the user's decision.
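The per-message footer described above reduces to one rule: every answer carries the notice, and it never renders twice. An illustrative sketch (function and constant names are assumptions, not our production code):

```typescript
// Illustrative sketch only — names are hypothetical, not production code.
const DISCLOSURE = "Informational, not legal advice";

// Append the per-message footer to a model answer, exactly once.
// Idempotent, so a retried render never doubles the notice.
export function withDisclosure(answer: string): string {
  if (answer.includes(DISCLOSURE)) return answer;
  return `${answer}\n\n${DISCLOSURE}`;
}
```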
Embedding generation (RAG ingestion)
- Purpose:
- Convert our published content (EU AI Act text + Knowledge Library + course outlines) into vector embeddings so Ask Flux can retrieve grounded context.
- Model:
- OpenAI text-embedding-3-small
- Region:
- OpenAI US data centres via API. Only our published content is sent — never customer data.
- Risk tier:
- Minimal risk — no personal data, no decisions
- Transparency:
- We embed content we have authored or content already public (the Act). The corpus is documented and reproducible.
- Retention:
- Embeddings persist as long as the underlying content is published. When content is removed, embeddings are deleted within 24 hours.
- Human oversight:
- Authoring is human-led; embedding is a deterministic transformation, not a generative step.
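The 24-hour purge rule above is a set difference between stored embeddings and currently published content. A minimal sketch, with assumed type and field names:

```typescript
// Hypothetical shape of an embedding row — field names are assumptions.
interface EmbeddingRow {
  id: string;
  sourceSlug: string; // slug of the published page the chunk came from
}

// Embeddings whose source content is no longer published must be deleted
// (within 24 hours of the content being removed).
export function embeddingsToPurge(
  rows: EmbeddingRow[],
  publishedSlugs: Set<string>,
): EmbeddingRow[] {
  return rows.filter((r) => !publishedSlugs.has(r.sourceSlug));
}
```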
Course content authoring
- Purpose:
- Course modules and Knowledge Library articles are written by domain experts. AI tools assist editorially (caption transcription, accessibility checks, link verification).
- Model:
- Various (editorial assistants only — not in production runtime)
- Region:
- Editorial workstations in the EU
- Risk tier:
- Minimal risk
- Transparency:
- Every course module is human-authored and has a named subject-matter reviewer. Where AI assisted (e.g. captions on video), a human reviews before publish.
- Retention:
- n/a — outputs become published content, not retained AI artefacts.
- Human oversight:
- Editorial veto on all AI-assisted output before publication.
Internal productivity tools
- Purpose:
- Coding assistants, calendar scheduling, document drafting, analytics queries — for staff operations.
- Model:
- Approved tools listed in our internal AI System Register (Fluxus OS dashboard)
- Region:
- EU region by default. US-region exceptions documented per tool.
- Risk tier:
- Minimal risk for everyday operational use
- Transparency:
- Internal use only, never with customer personal data. Same Article 4 literacy and Article 50 disclosure rules we sell to customers apply to us first.
- Retention:
- Per-tool retention defined in the AI System Register; aggregate maximum 12 months.
- Human oversight:
- Every commit, message, and decision is reviewed by a human before publish or send.
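The register's two mechanical rules above (documented US-region exceptions, 12-month retention ceiling) can be expressed as a validation check. A sketch with hypothetical field names, not the actual register schema:

```typescript
// Hypothetical register entry — the real schema lives in Fluxus OS.
interface RegisterEntry {
  tool: string;
  region: "EU" | "US";
  usException?: string; // justification, required whenever region is "US"
  retentionMonths: number;
}

// Enforce the register rules: retention capped at 12 months, and any
// US-region tool must carry a documented exception.
export function entryIsValid(e: RegisterEntry): boolean {
  if (e.retentionMonths > 12) return false;
  if (e.region === "US" && !e.usException) return false;
  return true;
}
```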
3. AI sub-processors
The third parties that perform inference or other AI processing on our behalf. Each has a signed DPA and a contractual exclusion of API traffic from model training.
Anthropic PBC
- Role:
- LLM inference for Ask Flux
- Region:
- EU region (Frankfurt) when available; otherwise US with EU SCCs
- DPA:
- Anthropic Standard DPA, signed
- Training opt-out:
- Customer prompts and outputs are explicitly excluded from training under Anthropic's commercial terms.
- Security cert:
- SOC 2 Type II
OpenAI OpCo, LLC
- Role:
- Embedding generation only (no chat / no completion / no PII)
- Region:
- US (Standard Contractual Clauses)
- DPA:
- OpenAI Data Processing Addendum, signed
- Training opt-out:
- API traffic excluded from model training under OpenAI's commercial terms (api.openai.com endpoint).
- Security cert:
- SOC 2 Type II
Non-AI sub-processors (hosting, payments, email) are listed separately on /trust. Both lists are canonical and updated together when we add a sub-processor.
4. Hard commitments — what we will not do
It is easier to commit to specific things we will never do than to make vague promises about what we will do. Each item has a verifiable test or evidence trail.
We will not train AI models on customer data
Customer training-progress data, certificate metadata, support chats, account information, and Ask Flux conversations are stored only for the contracted purposes. They are not used to train any model — ours or any third party's.
Verifiable: Sub-processor DPAs (Anthropic, OpenAI) explicitly exclude API traffic from training. Available on request under NDA.
We will not use AI to make eligibility decisions about users without human review
Whether you pass a course, qualify for a refund, or have access to a feature is determined by criteria a human can audit, not by an opaque model output. Where AI assists (e.g. flagging suspicious activity), a human makes the final call.
Verifiable: All such workflows are documented in our internal Decision Log. We will produce that log to a regulator on request.
We will not deploy covert AI
If you are talking to AI on our site, you see a disclosure. Article 50 is implemented and tested on every deploy. Hidden AI in customer-facing flows is prohibited internally.
Verifiable: Automated CI test (tests/audit/a11y.spec.ts + manual review) runs on every push, asserting the disclosure renders.
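The real spec runs against the rendered page; as a simplified, hypothetical stand-in for the assertion it makes (the `data-ai-disclosure` attribute is illustrative, not our actual markup):

```typescript
// Simplified stand-in for the CI check — attribute name is hypothetical.
// Passes only when a disclosure element is present and not trivially hidden
// via an inline display:none style.
export function hasAiDisclosure(html: string): boolean {
  const present = /data-ai-disclosure/.test(html);
  const hidden = /data-ai-disclosure[^>]*display:\s*none/.test(html);
  return present && !hidden;
}
```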
We will not give customer data to non-EU AI sub-processors without notice
Sub-processors that touch customer data are listed publicly. We give 14 days written notice before adding any new one. Customers can object and have a right of termination if a new sub-processor is unacceptable.
Verifiable: Public sub-processor list at /trust + this page's section 3. Material changes are emailed to active customers and noted in this page's changelog.
We will not represent AI output as legal advice
Ask Flux outputs, course summaries, and any AI-generated text on this site are informational only. We point to qualified counsel for situation-specific decisions.
Verifiable: Disclaimer rendered on every Ask Flux response. Tested in CI.
We will not deploy AI features without a system card entry
Each AI feature added to our product gets a public entry above (Section 2) before it launches: which model, which data flow, which transparency notice, which retention period.
Verifiable: This page is the registry. New entries land before the feature launches, not after.
We will not collect or process personal data of children for AI features
Our products are not directed at users under 18. We do not knowingly collect data from minors and do not feed any such data into AI systems.
Verifiable: Account creation requires age confirmation. Reports of underage accounts trigger immediate deletion.
We will not silently change AI models
When we change the underlying model for any AI feature listed in Section 2, we publish the change in the changelog (Section 10) before or at the moment of switchover. Outputs already issued (e.g. a generated audit-pack section) remain attributed to the model that produced them.
Verifiable: Version log at the bottom of this page. Audit-pack outputs include the model identifier.
5. Customer data + AI separation
The hard rule: customer data does not flow into model training, ours or any third party’s.
- Tenant isolation via Postgres Row Level Security makes cross-tenant data access structurally impossible at the database layer.
- Sub-processor agreements with Anthropic and OpenAI explicitly forbid training on our API traffic; we use the enterprise/business endpoints that carry that contractual baseline.
- No customer data in embeddings. We embed our own published content (Act text + Knowledge Library + course outlines). We do not embed customer documents, chats, or certificates.
- No PII in Ask Flux prompts. The system prompt does not include account-level identifiers. Before a user question is forwarded to the model, we heuristically strip basic PII patterns, and the interface asks users not to include personal data in the first place.
- Audit logs of any AI system that produces a customer-facing output are kept for 12 months per Article 12.
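The heuristic PII stripping mentioned above is best-effort pattern matching, not a guarantee. A sketch of the idea — the patterns shown are illustrative and far narrower than anything production-grade:

```typescript
// Illustrative heuristics only — a best-effort sketch, not a complete PII filter.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
const PHONE = /\+?\d[\d\s().-]{7,}\d/g;

// Redact obvious email addresses and phone numbers before a prompt leaves us.
export function stripBasicPii(text: string): string {
  return text.replace(EMAIL, "[redacted]").replace(PHONE, "[redacted]");
}
```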
6. Your rights
Six rights you can exercise at any time. None require justifying yourself.
Opt out of Ask Flux
You can use Agentic Fluxus without ever interacting with Ask Flux. Course content, certificates, audit pack, dashboards — none require AI usage. Ask Flux is a tool, not a gate.
Access + export your data
Request the complete dataset we hold about you, including any AI-derived metadata (e.g. retrieval logs from Ask Flux conversations you initiated). We respond within 30 days. GDPR Article 15.
Erasure
Request deletion of your account and all associated data. Certificates remain publicly verifiable per the audit-trail commitment, but personal account data and AI-derived artefacts are purged within 30 days. GDPR Article 17.
Object to AI processing
If we use AI in any flow that affects you, you can object to that specific use. We document objections on a per-account basis and stop the AI processing for that account where legally permitted.
Request a human review of an AI-affected decision
If you believe an AI system contributed to a decision affecting you (and we believe none currently does, but should that change), you can request human review under GDPR Article 22 and EU AI Act Article 86.
Lodge a complaint with the supervisory authority
Section 8 of this page lists the relevant authorities. You retain the right to lodge a complaint at any time without exhausting our internal complaints process first.
7. Our own EU AI Act compliance posture — eight commitments
The same Article 4 (literacy), Article 12 (logging), Article 26 (deployer obligations), and Article 50 (transparency) obligations we help customers meet, we apply to ourselves. Each commitment has a named evidence source.
- 01
Article 4 staff literacy — we hold ourselves to it first
Every Agentic Fluxus team member completes the Staff course (the same one we sell) plus the Manager or Director course where the role requires it. Per-learner certificates are kept on file and would be the first thing we hand to a regulator.
Evidence: Internal certificate log mirrors the public verify URL pattern. Available to customers under NDA in due-diligence packs.
- 02
Article 50 transparency — implemented and audited
Every interaction with Ask Flux shows the AI-system disclosure. We test the disclosure rendering on every deploy via automated checks.
Evidence: tests/audit/a11y.spec.ts + manual review per release.
- 03
Article 12 logging — kept long enough to be useful
Inference logs for AI features that affect customer-visible outputs are kept for 12 months, in line with Article 12 logging and beyond the six-month minimum in Article 19(1). System-level logs (DB queries, requests) follow our standard infra retention policy.
Evidence: Sentry + Vercel + Supabase logs, retention policy documented in /trust.
- 04
AI inventory + risk classification — public
Section 2 of this page is the inventory. Each entry lists model, region, risk tier, transparency posture, retention, human oversight. New entries are added here before the feature launches.
Evidence: This page. Versioned in Section 10.
- 05
Human-in-the-loop for material outputs
Audit pack content, certificate issuance, refund decisions, account deletions — all reviewed or initiated by a human. AI assists; it does not act unilaterally.
Evidence: Audit pack workflow includes a manual review checkpoint before customer download.
- 06
Framework alignment — NIST AI RMF + ISO/IEC 42001
We align our internal AI governance practice with the NIST AI Risk Management Framework 1.0 (Govern / Map / Measure / Manage) and ISO/IEC 42001:2023 (AI management systems). We are not certified to ISO/IEC 42001 — that's a 2027 target — but our practice maps to it.
Evidence: Internal AI governance handbook. Available to enterprise customers under NDA.
- 07
Testing methodology — evals + red-team where it matters
Ask Flux is evaluated against a fixed evaluation set covering Article-by-Article questions. Red-team testing is performed before every model upgrade. Failures are documented in the changelog.
Evidence: Eval set + red-team report on file. Material results published in the changelog.
- 08
Versioning + customer notice on changes
Material changes (new AI sub-processor, new model, new feature, new restriction) are published with a date + summary in Section 10. Active customers are emailed before the change goes live where the change affects their use.
Evidence: Section 10 of this page. Customer notification log on file.
8. Supervisory authorities + complaints
If you believe we are not meeting an obligation, please tell us first (hello@agenticfluxus.com) and we’ll respond within 1 business day. You retain the unconditional right to escalate to a supervisory authority at any time:
- GDPR matters (Netherlands): Autoriteit Persoonsgegevens (AP) — autoriteitpersoonsgegevens.nl
- EU AI Act matters: The national competent authority for AI in your EU member state. Designations under Article 70 are still being finalised in 2026; in the Netherlands the lead role sits with the Rijksinspectie Digitale Infrastructuur. For cross-border issues, the European AI Office coordinates.
- Accessibility (EAA 2025): See our accessibility statement for the relevant authority.
- Consumer disputes: EU Online Dispute Resolution platform — ec.europa.eu/consumers/odr
9. What this policy does not cover
- Customer-built AI systems running inside Fluxus OS or built during a Build With Us engagement are governed by the customer’s own AI policy. We provide a template (the AI Usage Policy in Fluxus OS) but the customer owns it.
- This page is a public commitment, not legal advice. For situation-specific decisions, consult counsel and refer to our terms of service.
- We do not make policy claims about partners’ AI behaviour beyond our own. If a partner’s AI usage is relevant to your engagement, we put their relevant DPA + AI policy into the engagement contract.
- The EU AI Act is phasing in; some implementing regulations are still being adopted. We update this policy as the regulatory picture sharpens.
10. Changelog
Material changes (new AI sub-processor, new model, new feature, new customer right, new restriction) get an entry here with a date and a one-line summary. Active customers are emailed.
- 8 May 2026: Version 2.0. Adds self-classification under Article 25, full system cards per AI feature (model + region + risk + retention + oversight), sub-processor section with DPA + training-opt-out details, verifiable evidence per never-do commitment, framework alignment (NIST AI RMF + ISO/IEC 42001), supervisory-authority complaints route, GPAI clarification, children’s data clause, model-change transparency commitment.
- 9 May 2025: Version 1.0. Initial publication.
Questions or concerns? hello@agenticfluxus.com. Security disclosures: /.well-known/security.txt. See also Privacy Policy, Terms of Service, Refund Policy, Accessibility Statement.

