PAiD Security Overview
Process AI Pty Ltd — ABN: 70 678 449 271
Last Updated: 15 April 2026
Status: Current-state disclosure — market-validation phase
1. Purpose
This document describes how PAiD handles client financial data during an engagement: what we collect, where it lives, who can see it, and what the AI agent is and is not allowed to do.
It is designed to sit alongside the Mutual NDA (in preparation with our legal team) and to inform the Data Processing Agreement (DPA) we will attach to the engagement SOW.
If anything in this document is unclear, or if your security team needs a response to a specific control framework, raise it with your Process AI engagement contact and we will address it before engagement start.
2. Executive Summary
- Each case gets its own database. For IAN (insolvency / forensic) work, every claim or case is isolated as its own pair of PostgreSQL schemas — not just at the firm or client level. If your firm runs ten liquidations with us, you get ten independent schema pairs, and no skill or query can read across them. For FAN (financial accounting) work, the same isolation applies at the client level — one ongoing engagement, one schema pair.
- Your data never leaves the database. Once data is in the PAiD database environment (Supabase + linked private object storage), it is not copied out, not emailed, not synced to operator laptops, and not sent to any third party other than the inference provider. It stays where it is until you sign off and we purge it.
- Documents come in via secure SFTP, not email. Source documents (bank statements, credit-card statements, supplier invoices, source ledgers) are uploaded directly to a private S3 bucket via a secure SFTP endpoint provisioned per case. The file is parsed server-side into your case staging schema. At no point is a source document copied to a Process AI staff member's local storage. No email attachments. No share links to public services.
- The AI agent runs against the database, not against documents in the wild. PAiD's skills query a relational database via MCP (Model Context Protocol). The agent never browses the internet with your data, and it never trains on it.
- Source-system access is one-off and read-only. Xero access (where applicable) is performed via a separate upstream mapper, OAuth read-only, single-shot, tokens revoked at case close. Other source systems follow the same one-off extract-and-stage pattern. PAiD never holds live credentials past the extraction window and cannot write back.
- We use Anthropic's Claude API today. We are moving to Amazon Bedrock (Sydney region) for AU data residency and a zero-retention inference path before we onboard clients outside the validation cohort. Note that this only affects the inference path — the database and document storage are already in AU.
3. What PAiD Is (and Is Not)
PAiD is an AI-directed accounting and investigation platform. It validates two operating models:
- IAN — Investigative Accounting Navigator. Forensic / insolvency work: classifying business vs personal spend, aged A/R reconstruction, related-party tracing, estate asset identification, debtor recovery correspondence, court-exhibit-ready outputs.
- FAN — Financial Accounting Navigator. Day-to-day accounting automation: bills, purchase orders, sales invoices, bank reconciliation, COGS, P&L — with a human review gate at every material decision.
PAiD is not a SaaS product with a multi-tenant web application. Every engagement is spun up explicitly, scoped to a named schema, and operated by a human practitioner directing agentic AI workflows against that schema, with human review at every material decision.
4. Data Flow and Trust Boundary
For IAN, the unit of isolation is a single case (one liquidation, one administration, one investigation). FAN is identical except the unit of isolation is a single client engagement. A case typically has more than one source of data — a Xero tenant if one exists, PDF statements (bank, credit card, broker, super), and any other source system the practitioner wants to bring in (CSV exports, bank export files, lender portals). They all land via the same pattern: one-off extract, into a case-scoped staging schema, then mapped into the case production schema.
A new case = a new schema pair, provisioned explicitly. There is no “firm-wide” schema, no “client-wide” schema, and no view that joins across cases. If a practitioner needs to look at case B while working on case A, they explicitly switch context — the agent session is re-scoped to the other schema, and the AI agent carries no recollection of the case it just left.
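The schema-pair-per-case convention above can be sketched as an explicit provisioning step. The following Python sketch is illustrative only: the naming convention (case_<id>_staging / case_<id>_production), the per-case role, and the function name are hypothetical, not PAiD's internal implementation.

```python
# Hypothetical sketch of per-case schema-pair provisioning. The naming
# convention and role model here are illustrative assumptions, not
# PAiD's actual internals.
import re

def provision_ddl(case_id: str) -> list:
    """Return DDL statements that create an isolated schema pair for
    one case, owned by a role scoped to that case only."""
    if not re.fullmatch(r"[a-z0-9_]+", case_id):
        raise ValueError("case_id must be a safe identifier")
    role = f"case_{case_id}_role"
    return [
        f"CREATE ROLE {role} NOLOGIN;",
        f"CREATE SCHEMA case_{case_id}_staging AUTHORIZATION {role};",
        f"CREATE SCHEMA case_{case_id}_production AUTHORIZATION {role};",
        # The role is never granted anything outside its own pair, so
        # no query issued under it can read another case's schemas.
        f"REVOKE ALL ON SCHEMA public FROM {role};",
    ]

for stmt in provision_ddl("liq_0042"):
    print(stmt)
```

Because provisioning is explicit and per-case, "switching context" to another case means re-scoping the session to a different pair, never widening the current one.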
The document intake path matters. Documents (PDFs, CSVs, exports) are uploaded by the client to a secure SFTP endpoint provisioned per case. The SFTP is backed by a private S3 bucket — the file lands directly there. A server-side parser reads the file out of S3 and writes structured rows into the case staging schema. The document is never copied to a Process AI staff laptop, never emailed, and never staged on a public service. The same is true for any extracted source data: it lives in the case's database environment and stays there.
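The server-side parse step can be sketched as follows. This is a minimal illustration assuming a simple three-column CSV statement layout; in production the bytes are read from the case's private S3 object and the rows are written into the case staging schema, rather than parsed from a local string and returned in memory.

```python
# Illustrative sketch of the server-side parse step. In production the
# raw bytes come from the case's private S3 bucket, never from a staff
# laptop, and the parsed rows land in the case staging schema.
import csv
import io
from decimal import Decimal

def parse_statement(raw: str) -> list:
    """Parse a simple CSV bank statement into structured staging rows."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw)):
        rows.append({
            "txn_date": rec["date"],
            "description": rec["description"].strip(),
            "amount": Decimal(rec["amount"]),  # exact decimal, never float
        })
    return rows

sample = "date,description,amount\n2026-01-05,OFFICE RENT,-1500.00\n"
print(parse_statement(sample))
```

The point of the pattern is that the document itself never moves: only the structured rows derived from it enter the database.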
4.1 What Crosses the Trust Boundary
- Source systems → case staging schema. For each source, a one-off authenticated extract deposits raw data into the case's staging schema and disconnects. Xero is the most common source (read-only OAuth via the upstream mapper, tokens revoked at case close); PDFs come in via SFTP → S3 → server-side parse; other sources follow the same one-off pattern. PAiD does not hold live source-system credentials past the extraction window and cannot write back to any source system.
- Case staging schema → case production schema. A deterministic mapper projects the raw rows into the case's production schema — a data-warehouse-style relational model designed for query, reporting, and AI access. No external network calls. Staging and production for a case belong together and can never be joined to another case's schemas.
- Case production schema → agentic AI workflow. A named human practitioner directs an agentic AI workflow that reaches Supabase over TLS via MCP, scoped to the case's schema. The agent sees only the rows that MCP hands back from that one schema. Source documents themselves are never downloaded to a local machine; the practitioner reviews them via signed, time-limited URLs in a browser when needed.
- Agentic workflow → inference backend. Prompts and tool results are sent to Anthropic's Claude API for inference. Today this is the commercial API (US region); migration to Amazon Bedrock in the Sydney region is planned before the first non-validation engagement. The database itself never moves — only the prompt context for the specific question being asked transits to the model.
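The deterministic staging-to-production mapping described above can be sketched as a pure function: same input row, same output row, no network calls and no model involvement. Column names here are hypothetical, chosen only to make the projection concrete.

```python
# Minimal sketch of a deterministic staging -> production projection.
# Field names are illustrative assumptions, not PAiD's actual model.
from decimal import Decimal

def map_to_production(staging_row: dict) -> dict:
    """Project one raw staging row into the production relational model."""
    amount = Decimal(staging_row["amount"])
    return {
        "txn_date": staging_row["txn_date"],
        "narrative": staging_row["description"].title(),
        # Signed amount split into a debit/credit pair for reporting.
        "debit": amount if amount > 0 else Decimal("0"),
        "credit": -amount if amount < 0 else Decimal("0"),
        "source": staging_row["source_file"],
    }

row = {"txn_date": "2026-01-05", "description": "OFFICE RENT",
       "amount": "-1500.00", "source_file": "stmt_jan.csv"}
print(map_to_production(row))
```

Determinism is what keeps this boundary auditable: any production row can be traced back to the staging rows that produced it.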
4.2 What Does Not Cross the Boundary
- Your data is not used to train any model. This is guaranteed by Anthropic's commercial terms (API data is not used for training) and will be reinforced contractually in the Bedrock path.
- Your data is not sent to any third party beyond the inference provider and Supabase.
- The AI agent does not have unrestricted internet access during an engagement. It cannot POST your data to arbitrary URLs. Network tool-use is allow-listed per skill and audited.
- The AI agent cannot write back to Xero, send emails on your behalf, or make payments.
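The shape of the per-skill network allow-list can be sketched as below. Skill names, hosts, and the audit line are all hypothetical; the real mechanism sits in the workflow runtime, but the rule is the same: a host not explicitly listed for the running skill is refused, and the attempt is logged.

```python
# Sketch of a per-skill outbound allow-list check. Skill names and
# hosts are illustrative assumptions only.
from urllib.parse import urlparse

ALLOW_LIST = {
    "abn_lookup": {"abr.business.gov.au"},   # hypothetical lookup skill
    "classify_spend": set(),                 # database-local: no network
}

def outbound_permitted(skill: str, url: str) -> bool:
    """Allow a request only if the URL's host is listed for this skill."""
    host = urlparse(url).hostname or ""
    allowed = host in ALLOW_LIST.get(skill, set())
    print(f"audit: skill={skill} host={host} allowed={allowed}")
    return allowed

outbound_permitted("abn_lookup", "https://abr.business.gov.au/lookup")
outbound_permitted("classify_spend", "https://evil.example/exfil")
```

Default-deny is the key property: an unknown skill, or a skill with an empty list, can reach nothing.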
5. Controls in Place Today
5.1 Infrastructure and Data Residency
- Database: Supabase (managed PostgreSQL 15). Encryption at rest (AES-256). TLS 1.2+ in transit. Daily automated backups with point-in-time recovery.
- Region: Case schemas live in the Supabase AU region (ap-southeast-2, Sydney). This is verified per project at case start.
- Document intake (SFTP → S3): Source documents are uploaded to a secure SFTP endpoint (key-based authentication, scoped credentials per case) backed by a private S3 bucket in the AU region. Files are parsed server-side directly from S3 into the case staging schema. The bucket is private, blocks public access, encrypted at rest (SSE-KMS), and access is logged.
- No document transfer via email or shared drives. This is a hard rule, not a preference. Process AI staff will refuse to accept source documents via email, public file-sharing services, or operator local storage. The SFTP endpoint is the only intake path.
- In-database document references: Once parsed, transaction-linked document references live in the case's private object storage. Practitioners view them via signed, time-limited URLs in the browser — the file does not leave the storage tier.
- Source control: Application code lives in a private GitHub repository. 2FA is enforced on all maintainers. Branch protection is enabled on main. GitHub secret scanning is enabled. Client data is never committed to source control.
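The signed, time-limited URL pattern used for in-database document references works roughly as follows. In production this is provided by Supabase Storage / S3 presigned URLs; the HMAC scheme, key, and URL layout below are a hypothetical stand-in to show why such a link expires and cannot be forged.

```python
# Illustrative HMAC-signed, expiring URL scheme. The secret, paths,
# and layout are hypothetical; production uses the storage provider's
# presigned-URL mechanism.
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"per-case-signing-key"  # illustrative only

def sign(path: str, exp: int) -> str:
    return hmac.new(SECRET, f"{path}:{exp}".encode(), hashlib.sha256).hexdigest()

def signed_url(path: str, exp: int) -> str:
    return f"{path}?{urlencode({'exp': exp, 'sig': sign(path, exp)})}"

def verify(path: str, exp: int, sig: str, now: int) -> bool:
    if now > exp:
        return False  # expired link: access refused
    # Constant-time comparison against the recomputed signature.
    return hmac.compare_digest(sign(path, exp), sig)

print(signed_url("/docs/stmt_jan.pdf", exp=1_800_000_000))
```

Because the signature binds the path to an expiry, a leaked link stops working after the window closes and cannot be rewritten to point at a different document.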
5.2 Case Isolation
- Per-case schema pair (IAN). Every IAN case — every claim, liquidation, administration, or investigation — is provisioned with its own staging schema and production schema. Multiple cases for the same insolvency firm are entirely separate datasets from PAiD's perspective: separate schema pairs that share nothing. There is no firm-level join, no firm-level view, and no skill that operates above the case level.
- Per-client schema pair (FAN). FAN engagements use the same isolation model at the client level — one client, one ongoing engagement, one staging + production pair.
- No shared tenant data. There is no cross-tenant table or shared row store. The schema-per-case / schema-per-client convention is the only way data is segregated, and it is enforced at the database level.
- No cross-schema skills. Skills declare their target schema and cannot be invoked against another schema without an explicit, reviewable change. There is no skill in the codebase that joins across schemas.
- The operator is in the loop. Every case is operated by a named human practitioner. All activity is attributable.
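The "skills declare their target schema" rule can be sketched as a registry check at invocation time. The registry, skill name, and schema names below are hypothetical; the point is that re-targeting a skill is a code change, not a runtime parameter the agent can vary.

```python
# Sketch of schema-scoped skill invocation. Names are illustrative
# assumptions, not PAiD's actual registry.
REGISTRY = {
    "aged_ar_rebuild": "case_liq_0042_production",  # declared at registration
}

def invoke(skill: str, schema: str) -> str:
    """Refuse to run a skill against any schema it was not declared for."""
    declared = REGISTRY.get(skill)
    if declared != schema:
        raise PermissionError(
            f"{skill} is scoped to {declared!r}, not {schema!r}")
    return f"running {skill} against {schema}"

print(invoke("aged_ar_rebuild", "case_liq_0042_production"))
```

Changing a skill's target means changing the registry entry, which is a reviewable change in source control rather than something improvised mid-engagement.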
5.3 Data Integrity and Audit Trail
- Double-entry enforcement at the database layer. Where PAiD posts financial entries, validation and immutability are enforced by the database engine itself: debits must equal credits, and posted lines cannot be silently mutated after the fact.
- AI decisions are persisted, not hidden. Every autonomous AI decision is written into the production schema as a structured audit record with a severity, a confidence score, a timestamp, and a human-readable reason. This is a complete audit trail of what the agent decided and why — practitioners can inspect any decision, override it, or roll it back and re-derive the result from the source data.
- Immutable engagement metadata. For insolvency engagements, appointment date, practitioner details, and locked baselines (e.g. aged A/R at appointment date) are recorded in immutable rows that cannot be silently modified.
5.4 AI Agent Controls
- Agents query the database via SQL — they do not see raw row dumps. PAiD's AI agents issue scoped SQL queries through MCP and receive back summarised, aggregated, or filtered results for the question at hand. The LLM context is built from those results, not from a bulk extract of the case data. This pattern is consistent with Anthropic's published guidance on safe tool use: minimise the data the model touches, and summarise upstream of the prompt. Raw source documents (PDFs, CSV exports) are never passed into the model context — only the structured rows that resulted from them.
- Some agent actions write back to the database — these writes are the audit trail. Certain workflows (for example, business-vs-personal classification of expenditure) record their conclusions back into the case production schema so that downstream reasoning can build on what earlier steps decided. These writes are deliberate, scoped to the case, and recorded as structured audit rows with a confidence score and a human-readable reason. Nothing is hidden — every write can be inspected, overridden, or rolled back by the practitioner.
- The agent's surface area is bounded. AI agents can only perform the actions that PAiD's published workflows are designed to perform. Adding a new capability is a deliberate, reviewable change — not something the agent improvises mid-engagement.
- Every material decision has a human checkpoint. Decisions that affect a finding, a posting, an outbound communication, or an irreversible state change require explicit practitioner confirmation. Workflows that take an irreversible step are designed so the practitioner has reviewed every input row before signing off.
- No silent writes to external systems. The agent cannot send email, post to third-party APIs, move money, or write back to a source system. Outbound network access is allow-listed per workflow and audited.
- Model hosting — Anthropic API today, Bedrock tomorrow. Today inference runs against Anthropic's commercial Claude API. Migration to Amazon Bedrock in the Sydney region is planned before the first non-validation engagement. On the Bedrock path we have the option to run Claude and best-in-class open-source models (e.g. Llama, Mistral) entirely inside our AWS account in the Sydney region — in our VPC, under our IAM controls, with no transit to a model vendor outside Australia. Open-source model use is opt-in per case where the practitioner wants it; Claude on Bedrock will be the default once the migration lands.
5.5 Access Control
- Access to a case (or client) schema is granted only to the named practitioner assigned to it.
- Access to the Supabase project dashboard is limited to Process AI engineering staff, MFA-enforced through GitHub SSO.
- Access to the SFTP intake endpoint is per-case: the client receives credentials scoped only to their case's S3 prefix.
- Operator laptops never hold a copy of client source documents. The agent and the practitioner work against database rows and signed URLs; the underlying PDFs and exports stay in S3 and Supabase Storage.
- .env files and local agent configuration live outside the git tree and are globally gitignored.
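The per-case scoping of SFTP credentials to a single S3 prefix can be sketched as a policy generator. The bucket name, prefix layout, and action set below are hypothetical; the real policy is managed in AWS, but the principle is the same: the client's credentials can touch only their own case's prefix, and can never delete or list outside it.

```python
# Illustrative generator for a per-case IAM policy scoping SFTP
# credentials to one S3 prefix. Bucket and prefix names are
# hypothetical assumptions.
import json

def case_policy(bucket: str, case_id: str) -> dict:
    """Build a least-privilege policy for one case's intake prefix."""
    prefix = f"cases/{case_id}/"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Upload and read-back only -- no delete, no bucket listing.
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
        }],
    }

print(json.dumps(case_policy("paid-intake", "liq_0042"), indent=2))
```

Rotating or revoking a case's intake credentials at case close then cuts off the only path into that prefix.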
6. Incident Response
- Detection: Supabase logs, GitHub secret scanning alerts, endpoint detection on operator workstations, and a monthly manual review of the agentic workflow allow-list and local configuration files.
- Escalation: A confirmed or suspected incident triggers a standing incident channel with the Process AI leadership team. The named practitioner on the affected engagement is the first point of contact.
- Notification SLA: Initial notification to affected clients within 24 hours of confirmation. Written incident report within 5 business days.
- Forensic support: On request, and at no additional charge for incidents originating on our side, we provide forensic logs and any artefacts required to support your own incident response.
7. Sub-Processors
- Supabase — managed PostgreSQL, Storage, Auth. Case schemas and document references live here. Region: ap-southeast-2 (Sydney). Attestations: SOC 2 Type II, HIPAA.
- Amazon Web Services (S3 + AWS Transfer Family / SFTP) — document intake. Per-case private S3 bucket holds source PDFs and exports; SFTP endpoint is the only intake path. Region: ap-southeast-2 (Sydney). Attestations: SOC 1/2/3, ISO 27001, IRAP.
- Anthropic PBC — Claude model inference (current path). Per-call prompt context only — no persistent storage of client data. Region: US. Attestations: SOC 2 Type II. Commercial API does not train on inputs.
- Amazon Web Services (Bedrock) — Claude model inference (target path — replaces Anthropic direct). Region: ap-southeast-2 (Sydney). Attestations: SOC 1/2/3, ISO 27001, IRAP.
- GitHub (Microsoft) — source control for PAiD application code only — not client data. Region: US. Attestations: SOC 1/2, ISO 27001.
- Vercel — static hosting for the capability-demo website only. Not used for client data. Region: Global. Attestations: SOC 2 Type II.
PAiD does not process client data through any sub-processor not listed above. Any addition requires a 30-day notice period and a DPA amendment.
8. Contact
All security, DPA, incident, and commercial enquiries should be directed to your named Process AI engagement contact — typically the practitioner you have already been speaking with, or the Process AI representative who provided this document. They will route the question to the right person internally and respond on the record.
For incident reporting during a live engagement, contact the named engagement practitioner first; they will escalate within Process AI immediately.
- Email: support@process-ai.com.au
- Website: processai.com.au
- Entity: Process AI Pty Ltd (ABN 70 678 449 271)
This document is issued in good faith and reflects the state of the PAiD platform on the date above. It supersedes any earlier version. If you are reviewing PAiD for an engagement, request the current version — we will reissue whenever the posture materially changes.