
AI Data Privacy & Security

TurboPentest uses Claude (by Anthropic) as the AI backbone for our agentic pentesting system. This document explains exactly how your data is handled and why it is never used for AI training.

Anthropic API - Zero Training Guarantee

TurboPentest uses the Anthropic commercial API, not the consumer Claude product. Under Anthropic's commercial API terms:

  • API inputs and outputs are NOT used to train models. Anthropic explicitly states that data sent through the API is not used for model training or improvement.
  • No human review. Your data is not reviewed by Anthropic employees unless you explicitly report an issue and provide consent.
  • Zero-day retention. By default, Anthropic does not retain API inputs or outputs beyond the duration of request processing.

Source: Anthropic's API Data Usage Policy

Data Flow Architecture

Your Targets/Domains
        |
        v
  TurboPentest Platform (our infrastructure)
        |
        |-- Findings, reports --> Our PostgreSQL database (encrypted at rest)
        |-- Pentest task context --> Anthropic API (ephemeral, not retained)
        |
        v
  Claude API processes request
        |
        v
  Response returned to TurboPentest
        |
        v
  Anthropic deletes all request data

What IS sent to the Anthropic API:

  • Task-specific context: target URL patterns, discovered endpoints, HTTP response snippets
  • Agent reasoning prompts: instructions for what to test and how to analyze
  • Validation context: evidence for confirming or ruling out vulnerabilities
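The scoped payload described above can be sketched as a small data structure. This is an illustrative model only — the field names (`target_pattern`, `endpoints`, `response_snippet`, `instructions`) and the truncation limits are assumptions, not TurboPentest's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-task context object. Only the minimum
# evidence for one vulnerability check is assembled; nothing else leaves
# the platform. Field names and limits are illustrative assumptions.

@dataclass
class TaskContext:
    target_pattern: str     # target URL pattern under test
    endpoints: list         # discovered endpoints relevant to this check
    response_snippet: str   # trimmed HTTP response evidence
    instructions: str       # agent reasoning prompt for this single check

def build_task_context(target_pattern, endpoints, response_snippet, instructions):
    """Assemble only the minimum context needed for one vulnerability check."""
    return TaskContext(
        target_pattern=target_pattern,
        endpoints=endpoints[:10],                   # cap list size (minimization)
        response_snippet=response_snippet[:2000],   # snippet, never a full body
        instructions=instructions,
    )
```

Each API call then carries one such context and nothing more, which keeps the blast radius of any single request small.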

What is NEVER sent to the Anthropic API:

  • Full authentication credentials or API keys
  • Complete database dumps or source code repositories
  • Personal identifying information of your users or customers
  • Historical pentest data from other clients
  • Your payment information or account credentials
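One way to enforce a "never sent" list like the one above is a denylist check on every outbound payload. The sketch below is illustrative — the three patterns are examples (an AWS access key ID, PEM private-key material, an email address), not an exhaustive or production filter:

```python
import re

# Illustrative guard: refuse to send a payload that appears to contain
# material from the "never sent" list. Patterns are example assumptions.
DENYLIST = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                 # email address (PII)
]

def assert_safe_to_send(payload: str) -> str:
    """Raise if the payload matches any denylist pattern; otherwise pass it through."""
    for pattern in DENYLIST:
        if pattern.search(payload):
            raise ValueError(f"payload blocked by denylist pattern: {pattern.pattern}")
    return payload
```

A guard like this fails closed: a payload that trips a pattern never reaches the API call at all.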

Technical Safeguards

API Configuration

  • We use the Anthropic API with standard commercial terms
  • All API calls use HTTPS/TLS 1.3 encryption in transit
  • API keys are stored in environment variables, never in code
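Loading the key from the environment rather than source code can be as simple as the sketch below. `ANTHROPIC_API_KEY` is the variable the official `anthropic` SDK reads by default; the helper function itself is an illustrative assumption:

```python
import os

# Minimal sketch: the key lives in the environment, never in code or config
# files committed to a repository. Fail fast if it is missing.

def load_api_key(env=os.environ):
    """Return the API key from the environment, refusing to start without it."""
    key = env.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError("ANTHROPIC_API_KEY is not set; refusing to start")
    return key

# The official anthropic SDK picks up the same variable automatically:
#   client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY
```

Failing fast at startup beats discovering a missing credential mid-pentest.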

Data Minimization

  • Each API call receives only the minimum context needed for the specific pentest task
  • Agent prompts are scoped to individual vulnerability checks
  • Raw target data is preprocessed before any API call to strip sensitive information that the task does not need
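A preprocessing pass of this kind might look like the sketch below: keep the status line, drop headers that commonly carry session material, and truncate the body to a snippet. The header list and snippet length are illustrative assumptions:

```python
# Illustrative preprocessing pass: reduce a raw HTTP response to the
# evidence an agent needs. Header names and the 500-byte cap are examples.
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

def minimize_response(status_line, headers, body, snippet_len=500):
    """Keep the status line, non-sensitive headers, and a short body snippet."""
    kept = {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
    return {
        "status": status_line,
        "headers": kept,
        "body_snippet": body[:snippet_len],
    }
```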

Our Infrastructure

  • All pentest data stored in Azure-hosted PostgreSQL with encryption at rest
  • Database backups are encrypted and access-controlled
  • Application logs are scrubbed of sensitive target data
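Log scrubbing can be wired into the standard `logging` pipeline with a filter that masks sensitive values before any record reaches a sink. The sketch below uses Python's real `logging.Filter` API; the two masking patterns (bearer tokens, query-string secrets) are illustrative assumptions:

```python
import logging
import re

# Sketch of a logging filter that masks bearer tokens and query-string
# secrets before a record is emitted. Patterns are example assumptions.
MASKS = [
    (re.compile(r"(Bearer )[A-Za-z0-9._-]+"), r"\1[REDACTED]"),
    (re.compile(r"((?:token|key|password)=)[^&\s]+"), r"\1[REDACTED]"),
]

class ScrubFilter(logging.Filter):
    def filter(self, record):
        msg = record.getMessage()
        for pattern, repl in MASKS:
            msg = pattern.sub(repl, msg)
        record.msg, record.args = msg, None
        return True
```

Attaching the filter to the root logger (`logging.getLogger().addFilter(ScrubFilter())`) scrubs every record regardless of which handler ultimately writes it.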
  • SOC 2 aligned security controls

Frequently Asked Questions

Q: Could Anthropic change their policy and start training on API data?
A: Anthropic's commercial API terms contractually prohibit training on customer data. Any policy change would require notice and would not apply retroactively to data already processed.

Q: What about the AI agents' "memory" between pentests?
A: Each pentest session starts fresh. Agents do not retain memory between separate pentest runs. Finding continuity (tracking the same vulnerability across pentests) is handled entirely within our platform database, not via AI memory.

Q: Can I request deletion of my data?
A: Yes. You can delete your account and all associated pentest data at any time. Since Anthropic does not retain API data, there is nothing to delete on their side.

Q: Is this built to meet compliance standards like SOC 2 / ISO 27001 / HIPAA / PCI DSS?
A: Our data handling practices are built to meet the requirements of these frameworks. The zero-training, zero-retention API usage model aligns with the data minimization and confidentiality controls required by these standards.
