Generative AI (GenAI) transforms industries by enhancing productivity, automating tasks, and generating insights from complex data. Yet for highly regulated organisations — such as financial institutions and government agencies — the promise of GenAI comes with serious concerns. How do providers like OpenAI and Anthropic address data privacy, security, and regulatory compliance? And when might it be better to build your solution on-premises?
In this post, we explore how external GenAI services can be safely integrated into regulated environments, illustrate these points with real-world examples, and discuss when organisations might consider private deployments.
Securing sensitive data: Privacy & security measures
Data security is essential when handling confidential financial records, classified documents, or proprietary business intelligence. Both OpenAI and Anthropic have implemented robust security measures:
- Encryption at every step: Data is encrypted in transit (using protocols such as TLS 1.2+) and at rest (with algorithms like AES-256). This ensures that data travelling to the cloud or stored on disk remains protected from prying eyes (a sketch of the at-rest half follows this list).
- Data privacy policies: Providers explicitly commit that customer data is not used to train AI models without permission. For instance, Morgan Stanley negotiated a zero data retention agreement with OpenAI to ensure that sensitive financial prompts and responses were never stored on the provider’s servers. This is a powerful example of how clear contractual terms can protect intellectual property.
- Access controls & audit trails: Enterprise versions include fine-grained access controls with Single Sign-On (SSO) and role-based permissions, ensuring only authorised personnel have access. Detailed logging and auditing make it possible to track data access for compliance. These measures help mitigate risks, as seen in organisations that require strict audit trails to satisfy internal and external reviews.
- Data retention & ownership: Options to control data retention (even allowing for zero retention) mean that sensitive prompts and outputs aren’t stored unnecessarily once a session ends. This practice minimises exposure and aligns with privacy best practices.
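To make the at-rest half of the encryption story concrete, here is a minimal sketch using Python's `cryptography` library. It is illustrative only: a real provider would fetch keys from a managed KMS with rotation, and the exact scheme each vendor uses is not public.

```python
# Minimal sketch: AES-256-GCM encryption at rest, as a provider might
# apply it to stored prompts. Key management (KMS, rotation) is omitted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in production, fetched from a KMS
aesgcm = AESGCM(key)

def encrypt_at_rest(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_at_rest(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

stored = encrypt_at_rest(b"Q3 earnings draft: confidential")
assert decrypt_at_rest(stored) == b"Q3 earnings draft: confidential"
```

AES-GCM is shown here because it provides both confidentiality and integrity checking; the one rule that matters in practice is that a nonce must never be reused with the same key.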
Meeting regulatory standards
For financial institutions and government agencies, adhering to a broad spectrum of regulatory requirements is essential:
- Compliance with data protection laws: OpenAI and Anthropic support regulations like the GDPR and CCPA through contractual assurances and Data Processing Addenda (DPAs). These agreements help ensure that data is handled in line with data minimisation, deletion protocols, and other privacy principles.
- Certifications that matter: External GenAI services have achieved key certifications such as SOC 2 Type II. Moreover, when integrated via cloud platforms like Microsoft Azure, providers can tap into additional compliance frameworks such as FedRAMP and ISO 27001 — vital for U.S. federal agencies and government bodies.
- Financial industry regulations: Although frameworks like PSD2 and SOX weren’t written with AI in mind, they require strict controls over data access and auditability. Banks can, for example, use AI to support internal knowledge work or customer service without risking the exposure of sensitive financial data. Several financial institutions have experimented with AI for internal tasks once they verified that proper controls were in place.
- Public sector requirements: Some government agencies have adopted GenAI via cloud solutions. For example, U.S. federal agencies now use Microsoft Azure’s OpenAI Service, which operates within FedRAMP High-compliant environments. This setup ensures that sensitive government data stays within designated, secure geographic boundaries.
Data locality and cloud integration
Data locality — the requirement that data be stored and processed within specific jurisdictions — is a common mandate, especially in Europe. Modern GenAI services address these concerns as follows:
- Regional data residency options: OpenAI offers data residency features for enterprise users. A European bank, for example, can opt to have its data processed in EU data centers. This commitment helps meet GDPR requirements and mitigates legal risks associated with cross-border data transfers.
- Cloud partnerships: OpenAI and Anthropic integrate with major cloud providers such as Microsoft Azure, Google Cloud, and AWS. These platforms offer built-in compliance tools, data residency options, and secure network integrations.
  - Azure OpenAI Service: Provides regional deployment options for U.S. Government and EU regions.
  - AWS Bedrock: Supports Anthropic’s Claude within a customer’s AWS environment, allowing control over encryption and key management.
  - Google Cloud’s Vertex AI: Offers similar capabilities for hosting GenAI models in geographically specific regions.
By leveraging these cloud ecosystems, organisations maintain control over where and how their data is processed — a critical factor for meeting internal policies and external regulatory demands.
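As a concrete illustration of region pinning, the sketch below calls Anthropic's Claude through AWS Bedrock from a Frankfurt (`eu-central-1`) endpoint using boto3's Converse API. The model ID is an example; available models vary by account and region.

```python
# Sketch: invoking Claude via AWS Bedrock, pinned to an EU region so
# requests are processed in Frankfurt rather than crossing borders.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user",
               "content": [{"text": "Summarise our data residency policy."}]}],
    inferenceConfig={"maxTokens": 300},
)
print(response["output"]["message"]["content"][0]["text"])
```

Constructing the client against a specific region keeps the network endpoint, and, for standard in-region model IDs, the inference itself, within that region; cross-region inference profiles, where offered, would need separate review.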
When to embrace external GenAI services
There are many scenarios where using a cloud-based GenAI service is not only acceptable but advantageous:
- Non-sensitive and anonymised data: Tasks such as generating marketing content, drafting generic code, or summarising public documents involve low risk. For instance, if a bank’s marketing department uses AI for brainstorming campaigns, the information involved is typically not highly confidential.
- Internal productivity tools: Companies increasingly use AI to power internal assistants for knowledge management. Morgan Stanley’s zero data retention agreement, for example, allowed it to use OpenAI’s tools for internal research without compromising sensitive data.
- Software development and IT support: Developers can leverage AI to speed up coding or generate documentation. Abstracting away sensitive details (credentials, proprietary identifiers) ensures that proprietary code isn’t exposed to external services.
- Customer-facing applications with guardrails: External GenAI can drive chatbots for routine customer inquiries — as long as these bots are carefully designed not to process personal data. For example, a bank might use a chatbot for general questions like branch hours, ensuring that the system escalates requests involving sensitive data to a secure internal system (a routing sketch follows below).
The key is implementing robust internal policies and technical controls that ensure only appropriate data is sent to these services.
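One minimal way to implement such a guardrail is a pre-send router that pattern-matches incoming queries and escalates anything that looks personal. The patterns and handler names below are illustrative placeholders, not a production-grade PII detector.

```python
# Sketch of a pre-send guardrail: queries containing personal or account
# data go to an internal system; only generic queries reach the external
# GenAI service. Patterns and handlers are illustrative only.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{8,12}\b"),                      # account-like numbers
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
    re.compile(r"balance|transaction|my account", re.IGNORECASE),
]

def escalate_to_internal(query: str) -> str:
    return "Transferring you to a secure internal assistant."

def ask_external_genai(query: str) -> str:
    return f"[external GenAI answer to: {query!r}]"   # placeholder API call

def route_query(query: str) -> str:
    if any(p.search(query) for p in SENSITIVE_PATTERNS):
        return escalate_to_internal(query)   # sensitive: stays in-house
    return ask_external_genai(query)         # low-risk, generic question

print(route_query("What are your branch hours?"))
print(route_query("What is the balance on account 12345678?"))
```

In a real deployment the pattern list would be replaced by a dedicated PII-detection service, but the routing shape stays the same.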
When to consider on-premises or private deployments
In some cases, the risk profile or regulatory requirements may dictate that data never leaves your organisation’s secure environment:
- Handling highly sensitive or classified information: External cloud processing may be too risky when data is extremely sensitive — such as classified government documents or proprietary financial data. Samsung offers a cautionary tale: after engineers inadvertently uploaded sensitive source code to ChatGPT, the company banned external AI tools and accelerated development of an in-house solution to maintain strict data control.
- Strict data sovereignty requirements: If regulations demand data remain within national borders, hosting a GenAI model on-premises ensures complete control. Some banks, like Wells Fargo, have explored hosting open-source models (such as Meta’s LLaMA 2) internally to meet their stringent data sovereignty and security requirements.
- Custom security and audit needs: External models can be “black boxes” with limited transparency. When complete visibility and control over the AI’s operation are necessary — for example, in critical risk management or decision-support systems — on-premises solutions can tailor logging, auditing, and model behaviour to internal needs.
While self-hosting requires significant infrastructure and investment in technical expertise, it may be the only viable option when absolute data control is non-negotiable.
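For a sense of what the simplest self-hosted setup looks like, the sketch below loads an open-weight model with Hugging Face Transformers so that prompts never leave local hardware. The model name is an example (Meta's Llama family is gated behind a licence acceptance), and a production deployment would add a serving layer, quantisation, and capacity planning.

```python
# Sketch: running an open-weight model entirely on in-house hardware.
# Prompts and outputs never touch an external API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # example open-weight model
    device_map="auto",                      # spread layers across local GPUs
)

prompt = "Summarise the key controls in our credit risk policy."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```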
Best practices for adopting GenAI in regulated environments
Whether using an external service or a private deployment, consider these recommendations to balance innovation with compliance:
- Conduct a thorough risk assessment: Evaluate the sensitivity of the data and the potential impact of a breach. Use this analysis to determine which use cases are appropriate for external services and which require in-house handling.
- Choose the right deployment model: For many tasks, enterprise-grade offerings that support regional data residency and hold key certifications (SOC 2, FedRAMP, ISO 27001) are sufficient. For highly sensitive applications, a hybrid approach might work best — leveraging external AI for low-risk tasks while processing sensitive data internally.
- Implement robust access controls: Limit access to the AI and carefully control what data is shared. Use role-based access, SSO, and regular audits to ensure compliance.
- Use data minimisation techniques: Send only the minimum necessary data to the GenAI service. Anonymise or tokenise data where possible, so that the most critical details remain protected even if a breach occurs (a redaction-and-logging sketch follows this list).
- Establish clear usage policies: Develop and enforce guidelines on how GenAI should be used, and train employees accordingly. Real-world missteps — like the Samsung case — highlight the need for strict policies to prevent accidental exposure of sensitive data.
- Monitor and audit regularly: Continuously log AI interactions and review usage patterns to detect anomalies. Regular audits help maintain compliance and provide evidence of due diligence for regulatory reviews.
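The minimisation and monitoring recommendations above can be combined in a single pipeline step: redact obvious identifiers before a prompt leaves the network, then write the redacted interaction to an audit log. The regex patterns and log format below are illustrative assumptions; production systems typically rely on dedicated PII-detection tooling.

```python
# Sketch: tokenise obvious PII before a prompt leaves the network, and
# keep an audit log of every interaction. Patterns are illustrative.
import hashlib
import logging
import re

logging.basicConfig(filename="genai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),
}

def minimise(prompt: str) -> str:
    """Replace detected PII with pseudonymous hash-based tokens."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(
            lambda m: f"<{label}:{hashlib.sha256(m.group().encode()).hexdigest()[:8]}>",
            prompt,
        )
    return prompt

def send_to_genai(user: str, prompt: str) -> str:
    redacted = minimise(prompt)
    logging.info("user=%s prompt=%r", user, redacted)  # audit trail entry
    return redacted  # placeholder for the actual API call

print(send_to_genai("analyst1", "Email john.doe@bank.com about account 12345678"))
```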
Conclusion
Generative AI is a game changer — even for highly regulated organisations. With robust security measures, firm privacy commitments, and integration with trusted cloud platforms, providers like OpenAI and Anthropic are increasingly capable of meeting the demands of financial institutions and government agencies. At the same time, for the most sensitive data or under the strictest regulatory mandates, on-premises or private cloud deployments remain the safest option.
Real-world examples — from Morgan Stanley’s zero data retention agreement and government agencies using FedRAMP-compliant cloud deployments to Samsung’s pivot to internal AI and Wells Fargo’s on-premises approach — illustrate that there is no one-size-fits-all solution. The future of AI-enabled innovation is bright, but it must be pursued with both enthusiasm and caution.
By carefully assessing risks, choosing the appropriate deployment model, and implementing best practices, organisations can harness the power of GenAI without compromising security or compliance.