Security & Compliance at GAP — SOC 2 Type I Certified
Growth Acceleration Partners is committed to security, with established controls tailored to its environment. This security posture addresses common risk categories, including access controls, user authentication, intrusion detection, legal and fraud prevention, and more.
Compliance and Certification
As part of this commitment to continuous improvement, GAP has undergone an independent audit and received a SOC 2 Type I report for its Engineering Business Unit, demonstrating that all relevant controls are in place and supported by appropriate evidence.
Managing Risk, Empowering People, Securing Partnerships

Risk Management
A risk assessment is conducted annually, or whenever significant changes occur to the organization and/or its information systems, to identify, assess, mitigate, report, and monitor security, fraud, legal and regulatory, and vendor risks.

Employee Training
Every GAP employee completes mandatory annual security awareness training. The training covers how to report security incidents or concerns, as well as topics specific to each employee's role and the sensitive data they can access.

Vendor & Third-Party Security
Critical vendors and service providers are reviewed annually, or more frequently depending on risk level. The review includes updating each vendor's risk profile; vendors' risk assessments and SOC 2 reports, where available, are reviewed and registered by the company.
Software Development & Engineering
We treat AI as a high-velocity collaborator, not a final authority. Every line of code generated via AI undergoes a rigorous "Human-in-the-Loop" review process.
Client code is never used to train AI models. As a strategic partner, we enforce a strict "Zero-Data Leakage" policy, and all of our coding tools are used under licensed plans that do not train models on our data, ensuring your proprietary intellectual property remains secure.
We follow a strict, multi-layered "secrets management" policy (see the sketch after this list):
- Strict Prohibition: Our developers are trained to never paste secrets, API keys, or raw PII directly into any AI prompt.
- Local Environment: Secrets are managed exclusively in local, git-ignored environment files (.env) or through our secure corporate vault. These files are not indexed or read by the AI tools.
- Tool-Specific Exclusions: For tools like GitHub Copilot, we configure the editor to explicitly exclude sensitive files and directories from being used as context for suggestions.
- Abstraction: When developers need help with a piece of logic, they are trained to use generic placeholders (e.g., process_data(API_KEY, USER_EMAIL)) rather than the actual sensitive data.
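To make the policy concrete, here is a minimal Python sketch of the pattern described above: real values are resolved from the environment at runtime, while anything shared with an AI assistant uses generic placeholders. The variable names and snippet below are illustrative assumptions, not our production tooling.

```python
import os

# Real secrets are resolved from the environment at runtime. Locally they are
# loaded from a git-ignored .env file or the corporate vault; they never
# appear in source control or in AI prompts. These variable names are
# hypothetical.
api_key = os.environ.get("SERVICE_API_KEY", "")
user_email = os.environ.get("SERVICE_USER_EMAIL", "")


def process_data(api_key: str, user_email: str) -> None:
    """Business logic that needs the real values at runtime."""
    ...


# What a developer may share with an AI assistant: the shape of the logic
# with generic placeholders, never the resolved values above.
PROMPT_SAFE_SNIPPET = "def process_data(API_KEY, USER_EMAIL): ..."
```

The same pattern applies regardless of language or vault provider: the AI tool sees the shape of the code, never the resolved secrets.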
You retain full ownership. The intellectual property for all work product created during our engagement, including any code generated by AI tools, belongs entirely to you.
The AI tools are treated as productivity enhancers, similar to a spell-checker or a code linter. The output is part of the work we deliver, and all rights are transferred to you upon completion and payment.
Engineering training is a mandatory part of our AI adoption strategy. Our policy is built around established industry frameworks for responsible and secure AI.
Our training program is continuous and covers:
- Data Security: A strict "never-paste-secrets" rule. Developers are trained to use placeholders and environment variables, never raw PII, API keys, or proprietary data in prompts (illustrated in the sketch after this list).
- IP & Licensing: Understanding how to use AI-generated code, check for open-source license compliance, and avoid "plagiarism" by always reviewing, refactoring, and testing suggestions.
- Critical Review: Treating AI as a "pair-programmer," not an infallible expert. All AI-generated code must be critically reviewed for security vulnerabilities, bugs, and performance issues before being committed.
- Prompt Engineering: How to write effective, secure, and context-aware prompts that produce high-quality, maintainable code.
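As one concrete illustration of the "never-paste-secrets" rule, the following Python sketch shows the kind of lightweight redaction step this training encourages before any text reaches an AI tool. The patterns and function name here are deliberately simplistic and hypothetical; real scanners are considerably more thorough.

```python
import re

# Deliberately simplistic, illustrative patterns -- production scanners also
# use entropy checks, known-credential formats, and vault-aware allowlists.
REDACTIONS = [
    (re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"), "<EMAIL>"),
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]


def sanitize_prompt(text: str) -> str:
    """Replace likely PII and credentials with placeholders before the text
    is sent to any AI assistant."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text


print(sanitize_prompt("api_key = sk-12345 and email jane@example.com"))
# -> api_key=<REDACTED> and email <EMAIL>
```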
Cross-client data leakage is a critical concern, and we manage it through strict data tenancy and ephemeral session isolation.
- No Training: First and foremost, the plans we use contractually forbid the AI vendor from training their public models on our data. This is the primary safeguard. Your proprietary algorithm from Client A can never be incorporated into the base model that serves Client B.
- Siloed Context: When our developers use an AI tool, the context they provide (your code, their prompts) is handled in an isolated, ephemeral session. Think of it like a secure, private browsing window. The AI only uses that code to generate an immediate response for that specific developer in that specific session.
In short, the AI doesn't "learn" from your code; it only "reads" it temporarily in a secure silo to provide an in-the-moment suggestion, and then that context is purged.
To request the SOC 2 report, please fill in the following form:
Request the SOC 2 Report
"*" indicates required fields