Responsible AI Principles¶
Frameworks for building AI that benefits, rather than harms, society.
Core Pillars of Responsible AI¶
Many large tech companies (Microsoft, Google, IBM) have published Responsible AI guidelines. They generally centre on the following concepts:
- Fairness: AI systems should treat all people fairly. They should not allocate opportunities or resources differently based on protected characteristics like race, gender, or age.
- Reliability & Safety: AI systems should perform reliably and safely. They should have clear operational boundaries and gracefully handle situations outside their training data.
- Privacy & Security: AI systems should respect user privacy and adhere to data protection laws (like GDPR). Models should not "leak" the personal data they were trained on.
- Inclusiveness: AI systems should empower everyone and engage people. The teams building the models should be diverse to spot blind spots in the data.
- Transparency: AI systems should be understandable. Stakeholders should understand how and why models make their decisions.
- Accountability: People must remain accountable for AI systems. An algorithm cannot be fired; the person or company who deployed it holds the ultimate responsibility.
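The fairness pillar above is often made measurable with group-level metrics. As a minimal sketch (the function and data here are illustrative, not from any specific guideline), demographic parity compares the rate of favourable outcomes across groups; a ratio far below 1.0 flags a potential disparity worth investigating:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Favourable-outcome rate per group.

    decisions: list of (group, outcome) pairs, where outcome 1 = favourable.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_ratio(decisions):
    """Ratio of the lowest to the highest selection rate (1.0 = perfect parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions for two groups:
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% approved
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% approved
print(round(demographic_parity_ratio(decisions), 3))   # 0.25 / 0.75 ≈ 0.333
```

In practice, teams typically use a dedicated library (e.g. Fairlearn) rather than hand-rolled metrics, and combine several fairness definitions, since demographic parity alone can conflict with other notions such as equalised odds.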
KSB Mapping¶
| KSB | Description | How This Addresses It |
|---|---|---|
| S5 | Deployment, value assessment, and ROI | Translating model performance into business impact |
| S6 | Communicate through storytelling and visualisation | Presenting ML results to non-technical stakeholders |
| B4 | Consideration of organisational goals | Framing technical results in terms of business objectives |
| B1 | Inquisitive approach | Exploring creative ways to explain model behaviour |