Trust / Security & Compliance

Banking-grade controls for agentic AI infrastructure.

CoreFi runs the same European banking-control playbook — data protection, AML, audit, access, residency, resilience — across every workflow an AI agent can touch. Models propose; CoreFi enforces; humans approve where the rule says so.

See how governed agents work
Why it matters

Agentic AI raises the bar on banking controls. CoreFi is built to that bar.

A model that can draft a credit memo, prepare a payment or close a customer ticket only adds value to a bank if every action stays inside the bank's permission model, leaves an audit record, and waits for a human when policy says so. Most "AI in banking" stacks bolt a chat interface onto unsupervised tooling. CoreFi treats the agent like any other operator — scoped, gated, logged and reviewed.

CoreFi's control surface is the same whether the action originated from a customer, an operator, an external API consumer or an AI agent. The role-based access model, the policy engine, the audit log and the human-approval workflow do not change when the actor changes. That is what lets a bank, a digital lender or an embedded-finance program adopt agentic workflows without rewriting its risk framework.

This page summarises the control surfaces buyers, auditors and regulators ask about most often: data protection, AML, encryption, access, audit, residency, AI governance, human-in-the-loop and operational resilience. The certification status section below is deliberately exact about what is in place today versus what is in progress or designed to plug into the deploying institution's environment.

Certifications & attestations

What is in place, what is in progress, what is plug-in ready.

CoreFi reports certification status in three explicit buckets so buyers, partners and supervisors can size their own assurance work. We do not claim certifications we do not hold.

ISO 27001

In progress. Information-security management system aligned with ISO/IEC 27001 controls; certification activity is currently under way. Status updates are available on request to active buyers under NDA.

SOC 2 Type II

In progress. Trust-services criteria controls are operating; the Type II observation period and independent attestation are currently under way. Evidence of controls in operation is available on request to enterprise buyers under NDA.

FIPS 140-2 Level 3

Designed to be compatible / plug-in ready. CoreFi is not itself FIPS 140-2 Level 3 certified. HSM-backed key-management deployments are designed so that institutions that require Level 3 hardware can plug in a validated HSM module from their preferred vendor.

CoreFi will update this page as each item moves from in progress to attested. We will not change a status here without supporting documentation.

Data protection

GDPR by design across every agent and every workflow.

CoreFi is operated from the European Union and built around GDPR's principles of lawfulness, purpose limitation, data minimisation, integrity and confidentiality, and accountability. The same controls apply when the actor consuming personal data is an AI agent.

Lawful basis & consent

Each workflow declares the lawful basis under which personal data is processed; consent capture and withdrawal are first-class operations, logged per data subject.

Data subject rights

Access, rectification, erasure, restriction, portability and objection requests route through a structured workflow with reviewer approval and an exportable evidence pack.

Data minimisation for AI

The control plane records which model received which fields, when, and under which legal basis. Workflows can mask, tokenise or drop fields before they reach a model.
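As a minimal sketch of how field-level minimisation can work before a model call (the field names, policy actions and `minimise` helper are illustrative, not CoreFi's actual schema):

```python
import hashlib

# Hypothetical per-field policy for one workflow: "mask" redacts the value,
# "tokenise" replaces it with a stable pseudonym, "drop" removes the field
# before the payload reaches the model, "allow" passes it through.
FIELD_POLICY = {
    "iban":        "tokenise",
    "national_id": "drop",
    "full_name":   "mask",
    "balance":     "allow",
}

def tokenise(value: str) -> str:
    # Stable, non-reversible pseudonym; a production system would use a
    # vault-backed token service rather than a bare hash.
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def minimise(record: dict) -> dict:
    """Apply the field policy before any model sees the record."""
    out = {}
    for field, value in record.items():
        action = FIELD_POLICY.get(field, "drop")  # default-deny unknown fields
        if action == "allow":
            out[field] = value
        elif action == "mask":
            out[field] = "***"
        elif action == "tokenise":
            out[field] = tokenise(str(value))
        # "drop": omit the field entirely
    return out

safe = minimise({"iban": "DE89370400440532013000",
                 "national_id": "X1234567",
                 "full_name": "A. N. Other",
                 "balance": 41250})
```

Defaulting unknown fields to "drop" means a newly added column never reaches a model until someone explicitly allows it.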

DPA & sub-processors

A standard Data Processing Agreement is available to every customer, with the current sub-processor list and notification process for changes.

Financial-crime controls

AML and KYC built into the workflow, not bolted on.

CoreFi's onboarding, payments and case-management workflows pass through AML and KYC controls before they reach the ledger. Agent-prepared cases inherit the same checks as human-prepared cases.

Onboarding (KYC / KYB)

Identity verification, beneficial-owner extraction, sanctions and PEP screening, adverse-media checks and risk classification are wired into the Onboarding Agent and the human reviewer dashboard.

Transaction monitoring

Configurable rules and behavioural patterns generate alerts that the Compliance Agent triages with full customer history; filing decisions stay with the institution's MLRO.

Reporting

SAR/STR templates and audit-ready evidence packs are produced as structured artefacts, not free-text exports, so they round-trip into supervisory channels cleanly.

Encryption

Encrypted in transit, encrypted at rest, keys under your control.

In transit

TLS for all customer- and partner-facing traffic; mutual TLS available for institution-to-CoreFi and partner-to-CoreFi integrations on request.

At rest

Tenant data, document store and audit log are encrypted at rest using industry-standard symmetric encryption; database backups inherit the same encryption envelope.

Key management

HSM-backed key management with multi-party computation for digital-asset signing. Deployments are designed to support FIPS 140-2 Level 3 HSMs where the deploying institution requires it (CoreFi itself is not FIPS 140-2 Level 3 certified — see Certifications above).

Access control

Same permission model for humans, APIs and agents.

Every actor — operator, customer, external API consumer, AI agent — calls CoreFi through the same role-based access layer. An agent cannot invoke an API it has not been scoped to.

Roles & scopes

Granular, configurable roles per workflow, per market and per customer tier. Scopes are token-bound; agent identities carry their own scoped tokens distinct from human operators.
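A sketch of the default-deny scope check described above; `ActorToken`, the actor types and the scope names are hypothetical, not CoreFi's real API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ActorToken:
    """One token shape for every actor; humans and agents differ only
    in which scopes were granted, not in how they are checked."""
    actor_id: str
    actor_type: str              # "operator" | "customer" | "api" | "agent"
    scopes: frozenset = field(default_factory=frozenset)

def authorise(token: ActorToken, required_scope: str) -> bool:
    # Default-deny: the call proceeds only if the scope was explicitly granted.
    return required_scope in token.scopes

agent = ActorToken("agent-7", "agent",
                   frozenset({"payments.read", "cases.write"}))
```

Because the check is the same function for every actor type, swapping a human operator for an agent changes nothing about enforcement, only about which scopes were issued.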

Authentication

Single sign-on via SAML or OIDC, multi-factor authentication for operator and admin access, short-lived API credentials with rotation.

Privileged operations

Admin actions (key rotation, policy changes, role assignments) require step-up authentication and are logged to the same audit trail as workflow operations.

Audit

One append-only record per workflow. Exportable for regulators.

Every CoreFi workflow — whether human-, API- or agent-driven — writes a single append-only record covering trigger, retrieved data, model context (where applicable), policy decisions, API calls, side effects on the core, escalations and human approvals.

Append-only & tamper-evident

Audit entries are append-only with cryptographic chaining so any modification is detectable; the log is exportable for internal review, external audit and supervisory requests.
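The chaining idea can be sketched as a hash-linked log where each entry commits to its predecessor, so editing any entry invalidates every hash after it. This illustrates the technique in general, not CoreFi's implementation:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Canonical JSON plus the previous hash makes each entry commit
    # to everything before it.
    body = json.dumps(payload, sort_keys=True)
    return hashlib.sha256((prev_hash + body).encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []  # append-only in this sketch

    def append(self, payload: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        self.entries.append({"payload": payload,
                             "hash": entry_hash(prev, payload)})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            if e["hash"] != entry_hash(prev, e["payload"]):
                return False  # chain broken: tampering detected
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"actor": "agent-7", "action": "plan.proposed"})
log.append({"actor": "reviewer-3", "action": "plan.approved"})
assert log.verify()
log.entries[0]["payload"]["action"] = "plan.rejected"  # tamper with history
```

After the tampering on the last line, `verify()` fails, which is exactly the property an exported log needs for external audit.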

AI-aware fields

When an agent participated, the record includes the model identifier, prompt and context window references, the structured plan it proposed, and the policy decision applied to that plan.

Human decisions captured

Approve, reject and edit actions by human reviewers are logged as first-class events alongside the workflow they belong to, with the reviewer identity and timestamp.

Residency

EU-resident by default. LATAM residency available on request.

CoreFi is operated from the European Union, with tenant data and processing kept inside an EU data boundary by default. For institutions operating in Latin America, deployments designed to keep tenant data in-region are available on request.

EU default

Production environments and the primary data store run in EU regions; tenant data, document store and audit log stay inside the EU boundary unless the institution opts otherwise in writing.

LATAM option

Deployments designed to land tenant data and processing in a Latin American region are available on request, scoped to the institution's regulatory and licensing footprint. Available regions are confirmed during the security review.

Cross-border transfers

Where transfers outside the chosen residency boundary are required (for example, vendor sub-processors), Standard Contractual Clauses or equivalent transfer mechanisms apply, and the transfer is logged.

AI governance

Model-agnostic, policy-gated, fully logged.

CoreFi runs governed AI workflows on ChatGPT, Claude, Gemini, your own hosted models or a mix. The control plane, audit log, policy gates and approval flows do not change when you swap the underlying model.

Model registry

Every model used in a workflow is registered with version, provider and intended scope; the audit log records which model produced each agent action.

Policy gates before any side effect

The agent's structured plan passes through role-permission, customer-consent, transaction-limit, AML, sanctions and model-output guardrails before any API is called. Failed checks stop the workflow.
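A simplified sketch of a gate pipeline in this style, where every check must pass before any side effect is permitted (the gate names and plan shape are invented for illustration):

```python
SANCTIONED = {"acct-bad"}  # stand-in for a real screening service

# Each gate inspects the agent's structured plan and returns pass/fail.
def role_gate(plan):      return plan["scope"] in plan["actor_scopes"]
def limit_gate(plan):     return plan["amount"] <= plan["limit"]
def sanctions_gate(plan): return plan["counterparty"] not in SANCTIONED

GATES = [("role", role_gate), ("limit", limit_gate),
         ("sanctions", sanctions_gate)]

def evaluate(plan: dict) -> dict:
    """Run every gate in order; the first failure stops the workflow."""
    for name, gate in GATES:
        if not gate(plan):
            return {"allowed": False, "failed_gate": name}
    return {"allowed": True, "failed_gate": None}

plan = {"scope": "payments.execute",
        "actor_scopes": {"payments.execute"},
        "amount": 900, "limit": 1000,
        "counterparty": "acct-ok"}
```

The point of the ordering is that no API call ever happens inside a gate; gates only decide, and the side effect is issued afterwards if and only if every gate passed.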

Prompt and context isolation

Workflows declare which data classes a model is permitted to receive; data the model is not scoped to never enters its context window.

Override and feedback loop

Reviewer overrides feed back into model-quality reporting without giving the model uncontrolled write access to its own behaviour.

Human-in-the-loop

The human stays accountable for decisions that move money, change risk or close a customer.

Workflows declare which steps require a human. CoreFi prepares the case with full evidence — model context, structured plan, policy result, recommended action — and routes it to the right reviewer dashboard. Approve, reject and edit decisions resume the workflow and are logged.

Default-on for monetary actions

Most banks set monetary actions (payments, refunds, fee waivers, limit changes, credit decisions above policy thresholds) to require human approval by default.

Configurable per workflow

Each workflow can declare its own approval shape: single approver, dual control, MLRO sign-off, treasurer authorisation. The shape is configuration, not custom code.
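As a sketch of what declarative approval shapes might look like (the workflow names, roles and schema are hypothetical, not CoreFi's configuration format):

```python
# Approval shape per workflow: declared as data, not written as custom code.
APPROVAL_SHAPES = {
    "payment.outbound": {"approvers": ["treasury"], "mode": "dual_control"},
    "sar.filing":       {"approvers": ["mlro"],     "mode": "single"},
    "fee.waiver":       {"approvers": ["ops_lead"], "mode": "single"},
}

def required_approvals(workflow: str) -> int:
    """How many distinct human sign-offs the workflow needs before resuming."""
    shape = APPROVAL_SHAPES[workflow]
    return 2 if shape["mode"] == "dual_control" else 1
```

Changing a workflow from single approver to dual control is then a one-line configuration change rather than a code deployment.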

Reviewer evidence pack

The reviewer sees the same context the agent saw — retrieved data, structured plan, policy decision, recommended action — so the decision is informed, not blind.

Operational resilience

Designed for the same continuity expectations as a regulated bank.

CoreFi's resilience posture is built around the operational-resilience expectations European supervisors apply to banking infrastructure: known critical functions, tested recovery, transparent incidents.

Business continuity & disaster recovery

Documented BCP and DR procedures with periodic restore exercises; recovery objectives are agreed per institution as part of onboarding.

Backups

Encrypted backups with retention aligned to the institution's regulatory profile; restore drills are part of the resilience programme.

Incident response

Defined severity classification and an on-call rotation, with a customer-notification commitment for incidents affecting the confidentiality, integrity or availability of tenant data. Notification follows GDPR Article 33 where personal data is affected, plus any incident-reporting obligations agreed in the Data Processing Agreement.

Vendor & sub-processor risk

Sub-processors are assessed on security and resilience before onboarding and re-reviewed on a defined cadence; the current list is published to customers.

Shared responsibility

What CoreFi runs. What the institution runs. Where the line sits.

Banking infrastructure is a shared-responsibility model. CoreFi is responsible for the control plane, the audit layer, the policy engine, the encryption-in-transit/at-rest envelope and the operational resilience of the platform. The institution remains accountable for its own regulatory permissions, its risk-appetite configuration, its human-approval staffing, its customer disclosures and any AI models it elects to plug into CoreFi. As the data controller, the institution remains accountable for handling and timely response to data-subject access requests (GDPR Arts. 12 and 15) — CoreFi provides the structured workflow and evidence pack; the institution communicates the outcome to the data subject.

Want the full security pack and a walkthrough?

We will share the current sub-processor list, the DPA, the encryption and key-management architecture, the audit-log specification, and the in-progress status on ISO 27001 and SOC 2 Type II — under NDA where appropriate.

Read about governed AI workflows →