We follow overarching principles for the responsible use and development of AI, supported by practical steps and methods that ensure we meet these standards in practice.
The Saidot AI Policy sets out the high-level principles for the responsible use and development of AI at Saidot. Alongside these principles, the Policy provides practical guidance on the measures and practices through which responsible use and development of AI is achieved at Saidot. Because responsible AI is at the heart of our business, it is both fundamental to our internal AI activities and a top priority for us.
The Saidot AI Policy applies to the development and use of AI at Saidot, including the AI tools, systems, and products we use and develop.
The term ‘AI system’ in the Saidot AI Policy aligns with the definition of an AI system provided in the EU AI Act.
An ‘AI system’ is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. Such a system, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.
At Saidot, our mission is to empower AI product and business teams to achieve high-quality AI governance efficiently, unlocking the promise of AI responsibly. We believe AI can be a force for good if used responsibly, and this belief drives both the technology we build and the standards we uphold internally.
Our five core values – Progress, Collaboration, Transparency, Accountability, and Diversity – shape the way we operate as an organisation. They are fundamental to both our internal culture and external relationships, and they are rooted in our approach to AI governance.
Together, our mission and values ensure that we not only help our customers achieve high-quality, responsible AI governance but also embody the same standards and values in our operations.
Our responsible AI principles translate our mission and values into practice, guiding the way we develop and use AI across the organisation. Saidot’s principles for responsible use of AI are: Human agency and oversight, Transparency, explainability and accountability, Privacy, safety, and security, Lawful AI, and Sustainability and societal well-being.
Our rationale for applying these principles is twofold. First, Saidot’s responsible AI principles are derived from well-recognised and widely applied principles within the field of responsible AI. They resonate with the best practices and standard principles established by key actors in this space, such as the High-Level Expert Group on Artificial Intelligence (AI HLEG), the OECD, and UNESCO. Second, these principles are closely aligned with Saidot’s core values. We believe that the use of data and AI within Saidot must be rooted in our mission and values. This means using technology not only to advance the business but also to benefit both internal and external stakeholders.
Below, we explain Saidot’s responsible AI principles and pair each with concrete measures for operationalising it in practice.
Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions. Humans should leverage the power of this technology to create value and benefits for people, businesses, and society, and to help solve problems and tackle challenges.
Transparency, explainability and accountability: AI systems should be transparent and understandable, enabling humans to make informed decisions. They should clearly communicate their purpose, capabilities, and limitations, and provide meaningful explanations for their outputs. Responsibility for AI systems must be clearly assigned and defined to ensure accountability throughout the AI lifecycle.
Privacy, safety, and security: AI systems should be designed and used in ways that maintain security, resilience, and privacy throughout their lifecycle. They must be equipped with safeguards to prevent unintended harm or misuse. Data must be handled responsibly and lawfully, with appropriate governance mechanisms in place to protect personal and confidential information.
Lawful AI: AI systems must be developed and used in compliance with applicable laws and policies. Lawfulness means following legal requirements, considering industry best practices, and meeting contractual obligations; it enables responsible innovation and protects fundamental rights.
Sustainability and societal well-being: AI systems should be developed and used in ways that contribute to the well-being of people, society, and the environment. AI-related activities should support inclusion and contribute to sustainable and socially beneficial outcomes.
The AI Policy's Responsible AI Principles and practical measures are operationalised through Saidot’s AI Governance Framework, which is documented in the Saidot Knowledge Base. Governance activities for all AI systems used and developed at Saidot are managed on the Saidot platform.
Our Head of Services and Customer Success, Iiris, and our AI governance expert team can share more information.