ISO/IEC TR 24028:2020 (Artificial Intelligence — Overview of Trustworthiness)
Active | Industry | Global
ISO/IEC TR 24028:2020 offers a framework for assessing the trustworthiness of AI systems, focusing on critical factors such as explainability, resilience, fairness, and security. It provides recommendations for evaluating AI models and systems against these trustworthiness criteria.
ISO/IEC TR 24028:2020 recommends that organizations implement a comprehensive framework to evaluate and enhance the trustworthiness of AI systems. Key technical recommendations include:
Transparency:
- Explainable AI: Ensure that AI decisions are explainable and interpretable by end-users and stakeholders.
- Documentation: Maintain detailed documentation of AI system functionalities, decision-making processes, and underlying algorithms to promote transparency.
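One way to act on the documentation point above is to keep a machine-readable model fact sheet next to the model artifact. The sketch below is a minimal illustration only; the field names (model_name, intended_use, known_limitations, and so on) are assumptions, not terms defined by the technical report.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class ModelFactSheet:
    """Minimal, machine-readable documentation record for an AI system.

    Field names are illustrative; adapt them to your documentation policy.
    """
    model_name: str
    version: str
    intended_use: str
    training_data: str
    decision_logic_summary: str
    known_limitations: List[str] = field(default_factory=list)

def save_fact_sheet(sheet: ModelFactSheet, path: str) -> None:
    # Persist the documentation alongside the model artifact so it can be
    # reviewed and version-controlled together with the system it describes.
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(asdict(sheet), fh, indent=2)

if __name__ == "__main__":
    sheet = ModelFactSheet(
        model_name="credit_scoring_v2",
        version="2.1.0",
        intended_use="Pre-screening of consumer credit applications",
        training_data="Internal applications dataset, 2018-2023",
        decision_logic_summary="Gradient-boosted trees over 42 tabular features",
        known_limitations=["Not validated for applicants under 21"],
    )
    save_fact_sheet(sheet, "model_fact_sheet.json")
```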
Bias Detection and Mitigation:
- Bias Identification: Develop and employ methods to identify biases in AI models and training data.
- Bias Correction: Implement techniques to reduce or eliminate identified biases, ensuring fair and equitable outcomes across diverse populations.
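A minimal sketch of one common identification-plus-correction pair: the disparate impact ratio to flag unequal outcome rates, and reweighing-style sample weights (in the spirit of Kamiran and Calders) to counteract them during training. It assumes binary labels and a binary group attribute; a real audit would use a dedicated fairness toolkit.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates for the unprivileged (group == 0)
    versus privileged (group == 1) population. Values far below 1.0
    suggest the unprivileged group receives favorable outcomes less often."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def reweighing_weights(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    """Per-sample weights w(g, y) = P(g) * P(y) / P(g, y), so that
    under-represented (group, label) combinations are up-weighted."""
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            p_joint = mask.mean()
            if p_joint > 0:
                weights[mask] = (group == g).mean() * (y == label).mean() / p_joint
    return weights

# Toy data, used both as training labels (for reweighing) and as model
# outcomes (for the ratio); predictions here favor group 1.
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("disparate impact:", disparate_impact(y, group))   # ~0.33 for this toy data
print("sample weights:", reweighing_weights(y, group))
```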
Robustness:
- Reliability Measures: Design AI systems to function reliably under a wide range of conditions, including handling unexpected inputs and scenarios.
- Adversarial Resilience: Incorporate safeguards to protect AI systems from adversarial attacks that could compromise their integrity and performance.
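The sketch below is one inexpensive robustness probe rather than a full adversarial-testing regime: it measures how often a model's predictions change when inputs receive small random perturbations. The linear toy model and the noise scale are assumptions for illustration; gradient-based attacks such as FGSM would additionally require a differentiable framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x: np.ndarray) -> np.ndarray:
    """Stand-in classifier: a fixed linear decision rule over 3 features."""
    w = np.array([0.8, -0.5, 0.3])
    return (x @ w > 0.0).astype(int)

def perturbation_stability(predict, x: np.ndarray, eps: float = 0.05,
                           n_trials: int = 100) -> float:
    """Fraction of perturbed inputs whose prediction matches the clean one.

    Values well below 1.0 indicate decisions that flip under input noise of
    magnitude ~eps, i.e. brittle behavior near the decision boundary."""
    clean = predict(x)
    agree = 0.0
    for _ in range(n_trials):
        noisy = x + rng.normal(scale=eps, size=x.shape)
        agree += np.mean(predict(noisy) == clean)
    return agree / n_trials

x_batch = rng.normal(size=(200, 3))
print("stability under noise:", perturbation_stability(toy_model, x_batch))
```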
Fairness:
- Equitable Design: Ensure AI systems are designed to operate without unfairly favoring or disadvantaging any individual or group.
- Inclusive Testing: Conduct comprehensive testing across diverse datasets to validate the fairness of AI system outcomes.
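To make inclusive testing concrete, the sketch below computes one quality metric (true positive rate) separately for each subgroup of a labeled test set and flags groups that fall behind the best-performing one. The group labels and the 0.10 tolerance are illustrative assumptions.

```python
import numpy as np

def per_group_tpr(y_true, y_pred, groups):
    """True positive rate computed separately for each subgroup label."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)
        rates[g] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    return rates

def flag_gaps(rates: dict, tolerance: float = 0.10):
    """Report subgroups whose TPR falls more than `tolerance` below the best."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best - r > tolerance}

# Toy test set with a demographic attribute per example.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = per_group_tpr(y_true, y_pred, groups)
print("per-group TPR:", rates)
print("groups below tolerance:", flag_gaps(rates))
```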
Security:
- Cybersecurity Protocols: Implement robust security measures to protect AI systems from unauthorized access, manipulation, and other cybersecurity threats.
- Data Protection: Ensure that data used in AI systems is securely stored and transmitted, employing encryption and other data protection techniques.
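For the data-protection point, the sketch below encrypts a serialized record at rest using symmetric authenticated encryption (Fernet) from the third-party `cryptography` package. Key management is deliberately simplified here; in practice the key would come from a secrets manager or HSM, never from application code.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In production the key would be retrieved from a secrets manager.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"user_id": 123, "features": [0.2, 0.7, 0.1]}'

# Encrypt before writing training or inference data to disk or transit.
token = cipher.encrypt(record)
with open("record.enc", "wb") as fh:
    fh.write(token)

# Decrypt only inside the trusted processing boundary.
with open("record.enc", "rb") as fh:
    restored = cipher.decrypt(fh.read())
assert restored == record
```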
Accountability:
- Audit Trails: Maintain comprehensive audit logs to track AI system activities, enabling accountability and facilitating audits.
- Incident Response: Develop and implement protocols for responding to and recovering from AI-related incidents or failures, ensuring accountability for system performance and ethical compliance.
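One lightweight way to realize audit trails and the first step of incident response is sketched below: every model decision is appended to a structured JSON log, and failures are captured as incident records in the same stream. The field names and severity value are assumptions; only the standard-library `logging` module is used.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai.audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_event(event_type: str, payload: dict) -> None:
    """Append one structured, timestamped audit record per system event."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        **payload,
    }
    audit_logger.info(json.dumps(record))

def audited_predict(model, features, request_id: str):
    """Run a prediction and leave a trace; record an incident on failure."""
    try:
        prediction = model(features)
        log_event("prediction", {"request_id": request_id,
                                 "prediction": prediction})
        return prediction
    except Exception as exc:  # incident path
        log_event("incident", {"request_id": request_id,
                               "error": repr(exc),
                               "severity": "to_be_triaged"})
        raise

# Toy usage with a trivial model.
audited_predict(lambda x: int(sum(x) > 1.0), [0.4, 0.9], request_id="req-001")
```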
Fairness and Inclusivity:
- Diverse Data Utilization: Use diverse and representative datasets to train AI models, minimizing the risk of biased outcomes (see the sketch below).
- Accessibility Features: Incorporate accessibility features to ensure AI systems are usable by individuals with varying abilities and backgrounds.
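As flagged in the diverse-data item above, one simple safeguard is to report subgroup representation in the training data and to split it in a stratified way so every group appears in both the training and evaluation partitions. The sketch assumes scikit-learn is available and that a group attribute exists per example; the group names and proportions are toy values.

```python
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy dataset: features plus a demographic/group attribute per row.
X = rng.normal(size=(1000, 5))
groups = rng.choice(["a", "b", "c"], size=1000, p=[0.6, 0.3, 0.1])

# 1) Report representation so under-represented groups are visible early.
values, counts = np.unique(groups, return_counts=True)
for v, c in zip(values, counts):
    print(f"group {v}: {c / len(groups):.1%} of the data")

# 2) Stratify the split on the group attribute so each group keeps its
#    share in both partitions instead of vanishing from the test set.
X_train, X_test, g_train, g_test = train_test_split(
    X, groups, test_size=0.2, stratify=groups, random_state=0
)
print("test-set share of group c:", np.mean(g_test == "c"))
```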
Misinformation Mitigation:
- Content Verification: Implement verification processes to prevent the dissemination of false or misleading information through AI systems.
- Fact-Checking Tools: Utilize AI-driven fact-checking tools to enhance the accuracy and reliability of information presented to users.
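Production-grade content verification needs retrieval, provenance checks, and human review; the sketch below is only a minimal stand-in that compares a generated statement against a small store of verified statements with a fuzzy string match and routes low-similarity claims to review. The verified store, the 0.6 threshold, and the matching method are all illustrative assumptions.

```python
from difflib import SequenceMatcher

VERIFIED_STATEMENTS = [
    "ISO/IEC TR 24028:2020 is a technical report on AI trustworthiness.",
    "The report discusses transparency, robustness, and security of AI systems.",
]

def verification_score(claim: str, verified: list) -> float:
    """Best fuzzy-match ratio between the claim and any verified statement."""
    return max(SequenceMatcher(None, claim.lower(), v.lower()).ratio()
               for v in verified)

def review_gate(claim: str, threshold: float = 0.6) -> str:
    """Route unsupported claims to human review instead of publishing them."""
    score = verification_score(claim, VERIFIED_STATEMENTS)
    return "publish" if score >= threshold else "hold_for_review"

print(review_gate("ISO/IEC TR 24028:2020 is a technical report on AI trustworthiness."))
print(review_gate("The report mandates fines for non-compliant AI vendors."))
```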
Ethical AI Development:
- Stakeholder Collaboration: Engage with diverse stakeholders, including ethicists, policymakers, and affected communities, to inform AI development practices.
- Continuous Monitoring: Establish ongoing monitoring systems to evaluate the ethical implications of AI systems and make necessary adjustments.
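For the continuous-monitoring point, the sketch below implements one small building block of such a system: a population stability index (PSI) comparison between a baseline window of model scores and a recent window, with an alert threshold. The bin count and the 0.2 threshold follow common convention but are still assumptions, as is the toy score data.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray,
                               bins: int = 10, eps: float = 1e-6) -> float:
    """PSI between two score distributions; larger values mean more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p_base = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    p_recent = np.histogram(recent, bins=edges)[0] / len(recent) + eps
    return float(np.sum((p_recent - p_base) * np.log(p_recent / p_base)))

def drift_alert(baseline, recent, threshold: float = 0.2) -> bool:
    """Rule of thumb: PSI above ~0.2 warrants investigation or retraining review."""
    return population_stability_index(np.asarray(baseline),
                                      np.asarray(recent)) > threshold

rng = np.random.default_rng(2)
baseline_scores = rng.beta(2, 5, size=5000)   # scores captured at deployment
recent_scores = rng.beta(3, 3, size=1000)     # scores observed this week
print("drift alert:", drift_alert(baseline_scores, recent_scores))
```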
Additional Technical Measures:
- Algorithm Auditing: Regularly audit AI algorithms to assess and mitigate biases, ensuring fair and equitable outcomes.
- Privacy Preservation: Incorporate privacy-preserving techniques, such as differential privacy and data anonymization, to protect user data (see the sketch after this list).
- Sustainability Practices: Optimize computational resources to reduce the environmental impact of AI systems during development and deployment.
- Explainable AI (XAI): Utilize XAI techniques to enhance the transparency and interpretability of AI decisions, enabling stakeholders to understand and trust AI outputs.
- Documentation and Reporting: Maintain comprehensive documentation of AI system designs, risk assessments, and mitigation strategies to facilitate transparency and accountability.
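As referenced in the privacy-preservation item, the sketch below shows the classic Laplace mechanism for releasing a differentially private count: noise with scale sensitivity/epsilon is added to the true statistic. The epsilon value, the counting query, and the toy data are illustrative assumptions; production use would rely on an audited differential-privacy library.

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_count(values: np.ndarray, predicate, epsilon: float = 1.0) -> float:
    """Release a count satisfying epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
    provides the epsilon-DP guarantee."""
    true_count = float(np.sum(predicate(values)))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy dataset: ages of users whose records feed a model.
ages = np.array([23, 35, 41, 29, 52, 38, 27, 61, 45, 33])
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
print("noisy count of users over 40:", round(noisy, 2))
```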
Earliest Date: Aug 15, 2020
Full Force Date: Aug 15, 2020