Securing generative AI: Applying relevant security controls

The first step in securing an application is to understand its scope. The first post in this series introduced the Generative AI Scoping Matrix, which classifies an application into one of five scopes. After you determine the scope of your application, you can focus on the controls that apply to that scope, as summarized in Figure 1. The rest of this post details those controls and the considerations to keep in mind as you implement them.

Where applicable, we map controls to the mitigations listed in the MITRE ATLAS knowledge base, which appear with mitigation IDs of the form AML.Mxxxx. We selected MITRE ATLAS as an example, not as prescriptive guidance, because of its broad use across industry segments, geographies, and business use cases. Other recently published industry resources, such as the OWASP AI Security and Privacy Guide and the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), are also excellent references; we cite them in the other posts in this series that focus on threats and vulnerabilities and on governance, risk, and compliance (GRC).