
ISO/IEC TR 24028:2020 (Artificial Intelligence — Overview of Trustworthiness)

Active | Industry | Global
Technical Description

ISO/IEC TR 24028:2020 provides an overview of trustworthiness in artificial intelligence, examining factors such as transparency, explainability, robustness, resilience, fairness, and security. As a Technical Report it surveys approaches and offers guidance, rather than certifiable requirements, for evaluating AI models and systems against these trustworthiness criteria.

Explore Legal Details (external link)

Detailed Technical Description

ISO/IEC TR 24028:2020 is guidance rather than a requirements standard; organizations applying it typically establish a comprehensive framework to evaluate and improve the trustworthiness of their AI systems. Key technical recommendations include the following (illustrative code sketches for several of these items appear after the list):

 

  1. Transparency:

    • Explainable AI: Ensure that AI decisions are explainable and interpretable by end-users and stakeholders.
    • Documentation: Maintain detailed documentation of AI system functionalities, decision-making processes, and underlying algorithms to promote transparency.
  2. Bias Detection and Mitigation:

    • Bias Identification: Develop and employ methods to identify biases in AI models and training data.
    • Bias Correction: Implement techniques to reduce or eliminate identified biases, ensuring fair and equitable outcomes across diverse populations.
  3. Robustness:

    • Reliability Measures: Design AI systems to function reliably under a wide range of conditions, including handling unexpected inputs and scenarios.
    • Adversarial Resilience: Incorporate safeguards to protect AI systems from adversarial attacks that could compromise their integrity and performance.
  4. Fairness:

    • Equitable Design: Ensure AI systems are designed to operate without unfairly favoring or disadvantaging any individual or group.
    • Inclusive Testing: Conduct comprehensive testing across diverse datasets to validate the fairness of AI system outcomes.
  5. Security:

    • Cybersecurity Protocols: Implement robust security measures to protect AI systems from unauthorized access, manipulation, and other cybersecurity threats.
    • Data Protection: Ensure that data used in AI systems is securely stored and transmitted, employing encryption and other data protection techniques.
  6. Accountability:

    • Audit Trails: Maintain comprehensive audit logs to track AI system activities, enabling accountability and facilitating audits.
    • Incident Response: Develop and implement protocols for responding to and recovering from AI-related incidents or failures, ensuring accountability for system performance and ethical compliance.
  7. Fairness and Inclusivity:

    • Diverse Data Utilization: Use diverse and representative datasets to train AI models, minimizing the risk of biased outcomes.
    • Accessibility Features: Incorporate accessibility features to ensure AI systems are usable by individuals with varying abilities and backgrounds.
  8. Misinformation Mitigation:

    • Content Verification: Implement verification processes to prevent the dissemination of false or misleading information through AI systems.
    • Fact-Checking Tools: Utilize AI-driven fact-checking tools to enhance the accuracy and reliability of information presented to users.
  9. Ethical AI Development:

    • Stakeholder Collaboration: Engage with diverse stakeholders, including ethicists, policymakers, and affected communities, to inform AI development practices.
    • Continuous Monitoring: Establish ongoing monitoring systems to evaluate the ethical implications of AI systems and make necessary adjustments.
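
To make the explainability recommendation in item 1 concrete, the sketch below computes model-agnostic permutation importance with scikit-learn: each feature is shuffled in turn and the resulting drop in held-out accuracy indicates how much the model relies on it. The dataset, model choice, and feature indices are illustrative assumptions, not part of the TR.

```python
# Illustrative sketch only: permutation importance as a model-agnostic
# explainability measure (assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# larger drops indicate features the model depends on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx, importance in enumerate(result.importances_mean):
    print(f"feature_{idx}: {importance:.3f}")
```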
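
For the bias-identification step in item 2, a minimal starting point is to compare positive-prediction rates across groups defined by a protected attribute. The sketch below reports the ratio of the lowest to the highest group rate and flags it against the commonly cited four-fifths (0.8) rule of thumb; the data, group labels, and threshold are assumptions for illustration.

```python
import numpy as np

def demographic_parity_ratio(y_pred, groups):
    """Ratio of the lowest to the highest positive-prediction rate across groups."""
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values()), rates

# Toy predictions and (hypothetical) protected-group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio, rates = demographic_parity_ratio(y_pred, groups)
print(rates, f"ratio={ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; the TR does not prescribe a threshold
    print("Potential disparate impact; review training data and features.")
```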
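
A lightweight way to probe the robustness recommendations in item 3 is a perturbation test: re-score inputs under small random noise and measure how often predictions flip. This is a simple stability check under assumed noise parameters, not a full adversarial evaluation such as FGSM or PGD.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def flip_rate(model, X, epsilon=0.05, trials=20, seed=0):
    """Fraction of samples whose predicted class changes under small Gaussian noise."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(trials):
        noisy = X + rng.normal(scale=epsilon, size=X.shape)
        flipped |= model.predict(noisy) != baseline
    return float(flipped.mean())

print(f"Prediction flip rate under noise: {flip_rate(model, X):.2%}")
```

A high flip rate under small perturbations suggests the model's decisions are fragile and warrants deeper adversarial testing.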
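
The inclusive-testing bullet in item 4 can be operationalized as slice-based evaluation, reporting the same metric separately for each subgroup so that disparities are not hidden by an aggregate score. A minimal sketch, assuming labels, predictions, and a subgroup column are available:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-subgroup accuracy, so disparities are not masked by the aggregate score."""
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Illustrative data only; subgroup labels are hypothetical.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["urban", "urban", "urban", "urban",
                   "rural", "rural", "rural", "rural"])
print(accuracy_by_group(y_true, y_pred, groups))
```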
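
For the data-protection bullet in item 5, the sketch below encrypts a record at rest using the Fernet recipe from the Python `cryptography` package. Key management is out of scope here; the key is generated inline purely for illustration and would normally come from a key-management service.

```python
# Illustrative sketch only: symmetric encryption of a record before storage.
# Assumes the 'cryptography' package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, load from a key-management service
fernet = Fernet(key)

record = b'{"user_id": 42, "score": 0.87}'
token = fernet.encrypt(record)   # ciphertext safe to persist or transmit
restored = fernet.decrypt(token) # authenticated decryption; raises if tampered with

assert restored == record
print("ciphertext:", token[:24], b"...")
```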
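
One way to implement the audit-trail recommendation in item 6 is an append-only log in which each entry carries a hash of the previous entry, making later tampering detectable. The sketch below uses only the Python standard library; the event fields and model names are hypothetical.

```python
import hashlib
import json
import time

def append_audit_event(log, event):
    """Append an event whose hash is chained to the previous entry (tamper-evident)."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; returns False if any entry was altered or reordered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

log = []
append_audit_event(log, {"action": "prediction", "model": "credit-v3", "decision": "deny"})
append_audit_event(log, {"action": "override", "user": "analyst_7", "decision": "approve"})
print(verify_chain(log))  # True unless an entry has been modified
```

Hash chaining does not replace access controls on the log itself, but it makes silent modification of past entries detectable during audits.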
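
The continuous-monitoring bullet in item 9 can begin with simple drift detection: compare the distribution of live model scores against a reference window. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the score distributions and the 0.05 alert threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_scores = rng.beta(2.0, 5.0, size=2000)   # scores captured at validation time
production_scores = rng.beta(2.6, 5.0, size=2000)  # scores observed in production

statistic, p_value = ks_2samp(reference_scores, production_scores)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")
if p_value < 0.05:  # illustrative alert threshold, not prescribed by the TR
    print("Score distribution has shifted; review inputs and retraining triggers.")
```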

 

Additional Technical Measures:

  • Algorithm Auditing: Regularly audit AI algorithms to assess and mitigate biases, ensuring fair and equitable outcomes.
  • Privacy Preservation: Incorporate privacy-preserving techniques, such as differential privacy and data anonymization, to protect user data (a minimal differential-privacy sketch follows this list).
  • Sustainability Practices: Optimize computational resources to reduce the environmental impact of AI systems during development and deployment.
  • Explainable AI (XAI): Utilize XAI techniques to enhance the transparency and interpretability of AI decisions, enabling stakeholders to understand and trust AI outputs.
  • Documentation and Reporting: Maintain comprehensive documentation of AI system designs, risk assessments, and mitigation strategies to facilitate transparency and accountability.
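
To make the privacy-preservation bullet above concrete, the following is a minimal sketch of the Laplace mechanism for a counting query: noise scaled to sensitivity/epsilon bounds the influence any single individual has on the released value. The epsilon value and the query itself are illustrative assumptions.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0, seed=None):
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1 (one person changes the count by at most 1),
    so the noise scale is 1 / epsilon."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 52, 38, 27, 45, 61, 33, 48]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
print(f"Noisy count of records with age > 40: {noisy:.1f}")
```

Smaller epsilon values give stronger privacy at the cost of noisier released statistics; choosing epsilon is a policy decision outside the scope of this sketch.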

Key Dates

Earliest Date: Aug 15, 2020

Full Force Date: Aug 15, 2020

Links and Documents