Generative AI offers endless opportunities and applications for businesses and public organisations alike. It can create new content, from text to images, videos, code and more.

At its core, however, GenAI identifies and replicates patterns from massive amounts of data. Using it therefore involves a great deal of uncertainty, which can cause plenty of grey hairs.

If you implement and run GenAI responsibly, you can create your own safe space for GenAI, avoid legal and reputational harm, model good and ethical practices, and be a force for good in the world.

By reading this article, you’ll learn the four phases of getting started with GenAI governance (based on the NIST AI Risk Management Framework):

  1. Establish context, foundation model, policy and related risks for your GenAI app.
  2. Evaluate the performance and risks for the foundation model and use case.
  3. Identify, prioritise and implement risk mitigation methods for your GenAI app.
  4. Operate, communicate and iterate for continual improvement.

Phase 1: Establish context, foundation model, policy and risks for your GenAI app

Before getting on with any responsible GenAI implementation, it’s essential to identify and understand the foundation model for your app, select a policy to follow and map the potential risks for your specific use case.

In Phase 1, you should:

  • Describe your use case.
  • Select a GenAI policy to follow (e.g. EU AI Act or your internal AI policy).
  • Select a foundation model for your use case.

You should also familiarise yourself with the foundation model and document:

  • Capabilities and limitations of the model;
  • Data the model has been trained on;
  • Copyright and licence terms of the foundation model;
  • Risks and governance of the model.
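To make the documentation step above concrete, here is a minimal sketch of how you might capture these details in a structured record. The class and field names are illustrative assumptions, not part of any standard or product:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """Hypothetical record for documenting a foundation model; fields mirror
    the checklist above (capabilities, training data, licence, risks)."""
    name: str
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    training_data: str = "unknown"
    licence: str = "unknown"
    known_risks: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # The record counts as complete once every checklist item is filled in.
        return bool(self.capabilities and self.limitations and self.known_risks
                    and self.training_data != "unknown"
                    and self.licence != "unknown")

record = ModelRecord(
    name="example-model",  # placeholder model name
    capabilities=["text generation"],
    limitations=["may hallucinate facts"],
    training_data="public web text (per vendor documentation)",
    licence="vendor proprietary terms",
    known_risks=["training-data leakage"],
)
print(record.is_complete())
```

Keeping this documentation machine-readable makes it easier to check completeness before moving on to Phase 2.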

Looking for more information about the capabilities, limitations and risks of your selected foundation model? Check out our model library with all the details you need in one place.

Phase 2: Evaluate your GenAI app’s performance and risks

Once you’ve familiarised yourself with the model and policy you’ve selected, it’s time to assess the model’s performance and risks for your use case.

In our risk catalogue, you'll find these categories of risks relevant to your model:

  • Legal risks
  • Cyber security risks
  • Environmental risks
  • Technical risks
  • Trust-related risks
  • Fundamental rights-related risks
  • Privacy and data risks
  • Third-party related risks
  • Business-related risks

In Phase 2, you should:

  • Analyse the risks and their impact on you, individuals, groups, organisations and society.
  • Assess the requirements for the performance and functionality of your system.
  • Identify methods and metrics for risk assessment.
  • Evaluate the AI system for trustworthy characteristics, incl. model performance, robustness, security and fairness.
  • Collect and assess feedback on evaluations.
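As a simple illustration of the evaluation step above, the sketch below scores a model's answers against expected references with an exact-match metric. This is an assumption-laden toy: real evaluations would also cover robustness, security and fairness, as the list notes, and the test cases are invented:

```python
# Hypothetical evaluation sketch: exact-match accuracy on hand-written cases.
# Prompts, expected answers and model outputs are all illustrative.
test_cases = [
    {"prompt": "Capital of France?", "expected": "Paris",   "model_output": "Paris"},
    {"prompt": "2 + 2?",             "expected": "4",       "model_output": "4"},
    {"prompt": "Largest planet?",    "expected": "Jupiter", "model_output": "Saturn"},
]

correct = sum(tc["model_output"] == tc["expected"] for tc in test_cases)
accuracy = correct / len(test_cases)
print(f"exact-match accuracy: {accuracy:.2f}")  # 0.67
```

Tracking even a basic metric like this per release gives you a baseline to compare against when you collect feedback on evaluations.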

Phase 3: Identify, prioritise and implement risk mitigations

After looking over the risks related to your GenAI system, the next step for you is to think about how you’ll deal with them.

In Phase 3, you should:

  • Prioritise the risks based on your assessment.
  • Identify and implement mitigations to minimise negative impacts across the model, safety system, app and communications layers.
  • Manage risks from third-party entities.
  • Identify and document risk treatments, including response and recovery.
  • Document your use case and make sure it aligns with the requirements of the policy you've selected, along with any others that apply.
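The prioritisation step above is often done with a likelihood-times-impact score. Here is a minimal sketch under that assumption; the risk names, scales and scores are invented for illustration:

```python
# Hypothetical risk-scoring sketch: rank risks by likelihood x impact,
# each rated on a 1-5 scale by the assessment team.
risks = [
    {"name": "privacy: training-data leakage", "likelihood": 2, "impact": 5},
    {"name": "technical: hallucinated output", "likelihood": 4, "impact": 3},
    {"name": "legal: licence non-compliance",  "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks get mitigated first.
prioritised = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritised:
    print(f'{r["score"]:>2}  {r["name"]}')
```

A transparent scoring rule like this also makes it easier to document why certain risk treatments were chosen over others.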

Phase 4: Operate, communicate and iterate for continual improvement

Once you've got the documentation and plans ready, you can take your AI system to production. From then on, knowing how you’ll improve your GenAI system over time is particularly important.

In Phase 4, you should:

  • Set up and implement monitoring and oversight mechanisms.
  • Define and implement practices for transparency and sharing information about your GenAI system (including reporting, communications and oversight mechanisms).
  • Implement the staged deployment of your GenAI system.
  • Prepare mechanisms for incident response.
  • Design and implement mechanisms for collecting and evaluating user feedback.
  • Improve your GenAI system based on feedback from users.
  • Learn from the best GenAI governance practices and iterate for improvement.
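The staged-deployment step above can be sketched as a traffic-percentage rollout. The stage fractions and bucketing rule below are illustrative assumptions, not a prescribed rollout plan:

```python
# Hypothetical staged-rollout sketch: serve the new GenAI system to a growing
# fraction of traffic, falling back to the existing behaviour otherwise.
ROLLOUT_STAGES = [0.05, 0.25, 1.0]  # 5% canary, 25% pilot, full rollout

def use_new_system(user_id: int, stage: int) -> bool:
    """Deterministically bucket users so each user gets a stable experience."""
    fraction = ROLLOUT_STAGES[stage]
    return (user_id % 100) < fraction * 100

# During the 5% canary stage, only a small, fixed slice of users sees the
# new system, so incidents surface before a full rollout.
canary_users = [u for u in range(1000) if use_new_system(u, stage=0)]
print(len(canary_users))  # 50 of 1000 users
```

Pairing a staged rollout like this with your monitoring and incident-response mechanisms lets you expand exposure only when each stage looks healthy.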

How Saidot makes your GenAI governance journey smooth sailing

Want to make sure your GenAI solution is ethical, responsible and reliable from the get-go? Here’s how Saidot’s AI governance tool makes it simple to keep your GenAI systems under control:

  • GenAI model catalogue: Helps you choose the foundation model best suited for your use case and stay up to date on its capabilities and limitations.
  • Out-of-the-box policy templates: Set requirements, controls and governance tasks for your GenAI system with minimal effort — and make sure you’re ready for the upcoming regulations.
  • GenAI risk catalogue: Explore possible risks and best-practice mitigations for your GenAI system, and monitor them over time. Our team proactively helps customers stay up to date on possible risks.
  • GenAI evaluations: Identify metrics and tools for your evaluations; conduct internal and external reviews and assessments.
  • Use case inventory: Register and manage your generative AI use case inventory and responsible teams across your organisation.
  • Transparency reports and end-user guidance: Make your GenAI compliance crystal clear to all your stakeholders and equip them with the right information and feedback mechanisms.