At the Crossroads of Cloud and AI: AJ, why are you writing a blog about this?

In my role as a Field CISO, I regularly speak with leaders who are beginning to sense that their established cloud security practices are straining under the weight of AI. This has become a constant refrain in our conversations across industries. Most are still trying to untangle which risks are already addressed by existing governance or operational processes and which elements represent a genuine departure from the traditional cloud model.

This is a broad, complex landscape, but I have found the best way to tackle a complex problem is to break it down into smaller, more approachable problems. A foundational step in architecting cloud AI security is defining who is responsible for each element of the AI pipeline or architecture. This approach should feel familiar because it rhymes with the Shared Responsibility Model of the cloud. Teams must begin by disambiguating ownership: identifying who owns the model, who secures the surrounding environment, and which risks fall under standard third-party management versus internal architecture.

By leaning into this familiar rhythm of shared responsibility, we can begin to map the specific contours of an AI strategy. Let’s dive in!

Executive Brief: AI Security Strategy in the Cloud

For over a decade, CISOs have navigated the Shared Responsibility Model, distinguishing between security OF the cloud (the provider’s burden) and security IN the cloud (the customer’s burden). As AI integration becomes a business imperative, we find ourselves at a similar crossroads. Just as cloud security required separating the underlying infrastructure from the applications built upon it, AI security demands we distinguish the core model’s integrity from the environment surrounding it.

This is not a departure from established cloud governance, but an extension of it. For Oracle Cloud Infrastructure (OCI) users, securing this new frontier requires a nuanced strategy organized into three distinct layers: security of, in, and with the AI. This is not a formal industry standard, but it serves as a practical lens for applying the Shared Responsibility Model to AI deployments. In this context, the “who” depends entirely on your consumption method. When we look at the Security OF the AI, responsibility shifts based on whether you are building on infrastructure (OCI) or consuming a finished product (SaaS).

Security OF the AI (Infrastructure & Model Protection)

This layer covers the foundational assets, including the physical GPUs, the hypervisors, and the integrity of model artifacts such as weights and checkpoints.

In this layer, the Shared Responsibility Model depends on how the model is sourced and deployed. It is vital to recognize that the cloud infrastructure provider is not necessarily the party that built the AI model being used.

  • In IaaS (Infrastructure as a Service):
    • The Cloud Provider: Responsible for the physical security of the GPUs/CPUs, the virtualization layer, and the high-speed networking that allows the AI to function. They ensure the hardware is secure.
    • The Model Provider: When using third-party models, organizations must understand whether the provider maintains responsibility for model integrity (for example via hosted inference APIs) or whether that responsibility shifts to the customer once the model artifact is deployed.
    • The Customer: Responsible for the security of the specific instance they deploy. For customer-managed models deployed on IaaS, the customer secures the model’s weights and the API wrapper around it, and verifies the model hasn’t been tampered with during deployment (see the first sketch after this list). Beyond its obligation to secure the underlying platform, isolation, and baseline controls, the cloud provider is generally not responsible for the security flaws of a model a customer chooses to run on their servers.
  • In SaaS (Software as a Service):
    • The SaaS Vendor: Responsible for the stack up to and including the application layer. In most SaaS environments, the AI is effectively black-boxed inside the application, which means the vendor owns responsibility for the security of both the infrastructure and the model.
    • The Customer: Responsible for Data Governance and Advanced Access Control. The customer acts as a critical gatekeeper, deciding precisely which corporate data is permitted to enter the SaaS environment. Beyond basic user access, the customer must implement rigorous identity controls for Non-Human Identities (NHIs) and agentic workflows. This includes maintaining strict visibility and observability over API integrations and emerging agent frameworks such as Model Context Protocol (MCP) connections, to ensure the AI doesn’t become a back door for unauthorized data exfiltration or unmonitored Shadow AI activity (see the second sketch after this list).
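
To make the IaaS side concrete, here is a minimal sketch of the kind of integrity gate a deployment pipeline could run before loading a customer-managed model. It assumes a JSON manifest of known-good SHA-256 hashes produced at build time; the paths, manifest format, and file layout are illustrative assumptions, not a specific OCI or vendor feature.

```python
# Minimal sketch: verify model artifacts against a pinned manifest before
# loading. File names and manifest layout are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large weight shards don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(model_dir: Path, manifest_path: Path) -> None:
    """Fail closed if any artifact is missing or its hash has drifted."""
    manifest = json.loads(manifest_path.read_text())  # {"file": "sha256", ...}
    for name, expected in manifest.items():
        artifact = model_dir / name
        if not artifact.exists():
            raise RuntimeError(f"Missing artifact: {name}")
        actual = sha256_of(artifact)
        if actual != expected:
            raise RuntimeError(f"Hash mismatch for {name}: {actual}")

# Usage (paths are hypothetical; substitute your own artifact store):
# verify_artifacts(Path("/models/finetune-v3"), Path("/models/finetune-v3/manifest.json"))
```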
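
On the SaaS and agentic side, one way to gate Non-Human Identities is an explicit allowlist in front of every tool call, with an audit trail of both allowed and denied attempts. The identity names, tool names, and logging sink below are hypothetical; this is a sketch of the pattern, not any particular MCP SDK.

```python
# Minimal sketch: a policy gate in front of agent tool calls. The point is
# an explicit allowlist plus an audit trail, not a specific framework.
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

# Hypothetical mapping of non-human identities to permitted tools.
ALLOWED_TOOLS = {
    "crm-agent": {"search_accounts", "read_ticket"},  # read-only by design
    "reporting-agent": {"run_report"},
}

def gate_tool_call(identity: str, tool: str, call: Callable[..., Any], **kwargs: Any) -> Any:
    """Enforce the allowlist and record every attempt, allowed or not."""
    permitted = tool in ALLOWED_TOOLS.get(identity, set())
    audit.info("identity=%s tool=%s args=%s allowed=%s", identity, tool, sorted(kwargs), permitted)
    if not permitted:
        raise PermissionError(f"{identity} is not authorized to call {tool}")
    return call(**kwargs)

# Usage: wrap the real tool function at the integration boundary.
def search_accounts(query: str) -> list[str]:
    return [f"account matching {query}"]  # stand-in for a real CRM call

print(gate_tool_call("crm-agent", "search_accounts", search_accounts, query="Acme"))
```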

Security IN the AI (Governance and Safety)

This layer focuses on the “mind” of the AI: how the model interacts with users and handles sensitive information. Unlike infrastructure security, this is primarily about the content and logic flowing through the system.

  • In IaaS:
    • The Customer: Carries most of the responsibility. In most architectures, the customer is responsible for the security of the vector database and the data retrieval pipeline. A common architectural oversight is focusing exclusively on cleaning training or reference data while neglecting the real-time flow of PII or regulated data through the retrieval-augmented generation (RAG) stack (see the sketch after this list).
    • The Cloud Provider: Responsible only for providing the tools (such as WAFs, API gateways, or specialized security services) that the customer can use to build those protections. However, traditional web application security controls may not fully address AI-specific risks such as prompt injection or data exfiltration through model outputs.
  • In SaaS:
    • The SaaS Vendor: Responsible for the default safety filters, system prompts, and other guardrails designed to prevent unsafe or unintended model behavior. Some enterprise SaaS AI offerings let customers define custom instructions or tenant-level prompts; if your team opts to leverage those capabilities, you must secure them.
    • The Customer: Responsible for Policy and Data Input. The customer must define what internal data is appropriate to share with the SaaS AI and monitor for shadow AI usage within the organization.
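
To illustrate the RAG point above, here is a minimal sketch of scrubbing PII on the live retrieval path, not just at indexing time. The two regexes (email and US SSN) are deliberately simple placeholders; a production pipeline should rely on a dedicated DLP or PII-detection service rather than hand-rolled patterns.

```python
# Minimal sketch: scrub obvious PII from retrieved chunks before they are
# placed into the model's context window. The patterns are illustrative.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(chunk: str) -> str:
    """Replace matched PII with a typed placeholder so context stays readable."""
    for label, pattern in PII_PATTERNS.items():
        chunk = pattern.sub(f"[REDACTED-{label}]", chunk)
    return chunk

def build_context(retrieved_chunks: list[str]) -> str:
    """Scrub every chunk on the live retrieval path, not just at indexing
    time, so PII added to sources after ingestion is still caught."""
    return "\n\n".join(scrub(c) for c in retrieved_chunks)

print(build_context(["Contact jane.doe@example.com about SSN 123-45-6789."]))
```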

The Goal: Ensuring the AI behaves according to corporate policy, developer expectations, and even common sense. We want to make sure it works predictably and does not become a legal or operational liability.

Security WITH the AI (Empowering the Defender)

This is the “sword and shield.” It involves using AI to augment your existing security operations center (SOC) and outpace increasingly automated threats.

  • In IaaS:
    • The Customer: Responsible for integrating AI-driven security tools (like automated log analyzers or anomaly detection) into their specific environment to monitor the workloads they own (see the sketch after this list).
    • The Cloud Provider: Responsible for providing AI-enhanced platform security (e.g., machine-learning assisted threat detection and DDoS mitigation) that protects the underlying cloud ecosystem.
  • In SaaS:
    • The SaaS Vendor: Responsible for using AI behind the scenes to detect unauthorized access or account takeovers within the application.
    • The Customer: Responsible for acting on the AI-generated alerts and insights provided by the vendor’s security dashboard.
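
As a taste of what security with AI can look like at the smallest scale, here is a sketch that flags anomalous spikes in failed logins using a z-score over an hourly baseline. The event source, window, and threshold are assumptions; real SOC tooling layers on far more context, but the triage idea is the same.

```python
# Minimal sketch: flag hours whose failed-login count sits several standard
# deviations above the series mean, so simple statistics triage before a
# human looks. Threshold and data are illustrative.
from statistics import mean, stdev

def anomalous_hours(failed_logins_per_hour: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of hours exceeding `threshold` z-score above the mean."""
    mu = mean(failed_logins_per_hour)
    sigma = stdev(failed_logins_per_hour)
    if sigma == 0:
        return []  # a flat series has no spikes to flag
    return [i for i, count in enumerate(failed_logins_per_hour)
            if (count - mu) / sigma > threshold]

# 24 hours of counts with one obvious spike at hour 13.
counts = [4, 3, 5, 2, 4, 3, 6, 5, 4, 3, 2, 5, 4, 90, 3, 4, 5, 2, 3, 4, 6, 5, 3, 4]
print(anomalous_hours(counts))  # -> [13]
```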

The Goal: Reducing Mean Time to Respond (MTTR) by using machine learning to automate the heavy lifting of security analysis, while ensuring that qualified personnel continuously oversee these automated processes.

AI Governance and Risk Management: Recommendations

Operationalizing this framework requires a dual-track approach to accountability. From a vendor management perspective, organizations must update their due diligence processes to verify where a provider’s responsibility ends and the customer’s begins, particularly when third-party models are hosted on external infrastructure. This includes auditing model provenance and ensuring contractual clarity on data privacy and model integrity.

Internally, governance works best as a cross-functional effort. While the CISO’s office manages the technical security architecture, organizations need a way to oversee the ethical and behavioral guardrails of the AI. Some may choose a formal AI Risk Committee with legal, data science, and business leads, but more dynamic teams might prefer integrating these reviews directly into existing agile workflows or DevOps pipelines to avoid slowing down innovation. By aligning these efforts with established standards like the NIST AI Risk Management Framework, leadership can ensure that defense-in-depth strategies are used to harden the entire AI ecosystem and protect the integrity of the production environment.

Conclusion

The transition to AI in the cloud mirrors the early days of cloud adoption because it requires a shift in mindset from owning the entire stack to managing a complex web of shared responsibilities intentionally. By categorizing your efforts into the security of, in, and with AI, you move away from a haphazard, patchwork-style defense toward a disciplined, structured framework that feels more intuitive. Ultimately, the strategic challenge in this landscape is defined by leadership’s ability to untangle which risks are a true departure and which are perennial, transforming a chaotic tapestry of seemingly novel challenges into a reasoned strategy informed by our collective experience.