Rfwel Engr AI Automation

AI Compliance

Stay informed with the latest updates on AI regulations, compliance requirements, industry standards, and guidelines compiled by Rfwel Engineering's AI Automation team.

Because Rfwel Engineering is a registered engineering firm (Arizona Reg# 17227, Electrical), the AI Automation team is able to manage AI compliance issues more efficiently in complex electrical, control, and communication AI applications.


Statute


European Union

EU AI Act: Partially Active

United States

The Algorithmic Accountability Act (AAA) requires organizations to perform algorithmic impact assessments (AIAs) to evaluate the design, bias, and potential impacts of automated decision-making systems. Developers must ensure AI systems are designed to minimize harm and comply with data protection principles.

Executive Order


United States

Executive Order 14110: Active (earliest date Oct 30, 2023; full force date Oct 30, 2023)

EO 14110 requires organizations to report data on the training and deployment of high-impact AI models. It includes provisions for risk assessment, content watermarking, and cybersecurity measures to ensure AI systems align with national security and ethical standards.
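As an illustration of the reporting trigger described above, the sketch below checks a model's training compute against the 10^26-operations threshold that EO 14110 sets for dual-use foundation models. The function and variable names are our own, not part of any official tooling.

```python
# Illustrative sketch: flag models whose training compute meets the
# EO 14110 reporting threshold (10**26 integer or floating-point operations).
# Names and structure are hypothetical, for illustration only.

REPORTING_THRESHOLD_OPS = 10**26  # dual-use foundation model threshold in EO 14110

def requires_compute_report(training_ops: float) -> bool:
    """Return True if training compute meets or exceeds the reporting threshold."""
    return training_ops >= REPORTING_THRESHOLD_OPS

models = {
    "small-lm": 3e22,      # well below the threshold
    "frontier-lm": 2e26,   # above the threshold
}

for name, ops in models.items():
    status = "report required" if requires_compute_report(ops) else "below threshold"
    print(f"{name}: {ops:.0e} ops -> {status}")
```

A real compliance workflow would also track red-team results and cybersecurity measures, which the order covers separately.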

Industry


United States

NIST AI Risk Management Framework: Active (earliest date Jan 26, 2023; full force date Jan 26, 2023)

The NIST AI RMF outlines technical processes to identify, measure, and mitigate risks associated with AI systems. It emphasizes transparency, robustness, fairness, and security in AI model development and deployment, while promoting accountability through continuous risk monitoring.
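The continuous risk monitoring the RMF promotes can be sketched as a simple risk register. The likelihood-times-impact scoring scheme and the treatment threshold below are our own illustrative conventions, not something NIST prescribes.

```python
# Minimal risk-register sketch in the spirit of the RMF's "measure" and
# "manage" functions. Scoring (likelihood x impact, 1-5 scales) is an
# illustrative convention, not part of the framework itself.

from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def risks_needing_mitigation(risks, threshold=12):
    """Return risks at or above the treatment threshold, highest score first."""
    return sorted((r for r in risks if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

register = [
    AIRisk("training-data bias", likelihood=4, impact=4),   # score 16
    AIRisk("prompt injection", likelihood=3, impact=5),     # score 15
    AIRisk("model drift", likelihood=2, impact=3),          # score 6
]

for risk in risks_needing_mitigation(register):
    print(f"{risk.name}: score {risk.score}")
```

Re-scoring the register on a schedule, and after every model or data change, is one lightweight way to implement the "continuous risk monitoring" the framework calls for.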

Global

ISO/IEC TR 24028:2020 (Artificial Intelligence — Overview of Trustworthiness): Active (earliest date Aug 15, 2020; full force date Aug 15, 2020)

ISO/IEC TR 24028:2020 offers a framework for assessing the trustworthiness of AI systems, focusing on critical factors such as explainability, resilience, fairness, and security. It provides recommendations for evaluating AI models and systems against these trustworthiness criteria.

SOC Type 1 assesses the design of an AI service provider's controls related to data security, privacy, and system integrity at a specific point in time. It reviews the existence of controls for handling data securely but does not test their effectiveness over a prolonged period. This report is useful for organizations that need to establish baseline controls for their AI systems.

SOC Type 2 evaluates the operational effectiveness of an AI service provider's controls related to data security, privacy, and system processing over time. The audit reviews how these controls perform over a period (e.g., 6-12 months), verifying that they consistently meet compliance and performance standards. It is ideal for organizations that need to demonstrate ongoing control reliability.

Regulation


European Union

Digital Services Act: Partially Active (earliest date Nov 16, 2022; full force date Feb 17, 2024)

The DSA mandates online platforms to implement robust content moderation systems, conduct thorough risk assessments, and ensure algorithmic transparency. Additionally, platforms must provide users with the ability to appeal content moderation decisions and offer clear, accessible terms of service.

General Data Protection Regulation: Active (earliest date May 25, 2018; full force date May 25, 2018)

GDPR impacts AI systems that process personal data, requiring technical measures like encryption, anonymization, and data minimization. It emphasizes data protection by design and default, ensuring AI systems comply with privacy standards and mitigate risks associated with automated decision-making.
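The pseudonymization and data-minimization measures mentioned above can be sketched in a few lines. The field names and the HMAC-based pseudonym scheme below are illustrative assumptions; a real deployment needs proper key management and a documented lawful basis for processing.

```python
# Sketch: pseudonymize direct identifiers and minimize fields before records
# reach an AI pipeline. Field names and key handling are illustrative only.

import hashlib
import hmac

SECRET_KEY = b"placeholder-store-in-a-vault"  # assumption: keyed, rotatable secret

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (linkable only via the key)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, allowed_fields: set) -> dict:
    """Drop every field the AI task does not strictly need (data minimization)."""
    return {k: v for k, v in record.items() if k in allowed_fields}

record = {"email": "jane@example.com", "age": 34, "postcode": "85281", "notes": "..."}
clean = minimize(record, allowed_fields={"email", "age"})
clean["email"] = pseudonymize(clean["email"])
print(clean)
```

Keyed hashing keeps pseudonyms stable across records (so joins still work) while ensuring re-identification requires access to the key, which aligns with the "data protection by design" principle.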

Medical Devices Regulation: Active (earliest date May 25, 2017; full force date May 26, 2021)

The MDR mandates that medical devices using AI undergo rigorous clinical evaluation and risk assessment. Developers must ensure AI systems meet high standards for accuracy, reliability, and safety. Additional requirements include data protection, post-market surveillance, and traceability.

United States

Autonomous Vehicle Regulations require manufacturers to comply with safety standards, implement secure data handling, and provide transparency in AI decision-making processes. Technical requirements often include collision avoidance, fail-safe systems, and rigorous testing protocols.


HIPAA requires safeguards for ePHI (electronic protected health information), including access control, data integrity, and transmission security. AI systems that use ePHI must restrict access and rely on de-identified data wherever possible.
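One common de-identification approach is a Safe Harbor-style field scrub before data is used for model training. The identifier list below is a partial illustration of the 18 HIPAA Safe Harbor categories, not a complete or compliant implementation.

```python
# Sketch: strip direct identifiers from a record before AI use.
# The field set is a partial, illustrative subset of HIPAA's Safe Harbor
# identifier categories; real pipelines need the full list plus expert review.

SAFE_HARBOR_FIELDS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "ip_address",
}

def deidentify(record: dict) -> dict:
    """Remove direct identifiers; keep only non-identifying clinical fields."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

patient = {
    "name": "J. Doe",
    "ssn": "123-45-6789",
    "diagnosis": "hypertension",
    "age_band": "30-39",
}
print(deidentify(patient))
```

Note that field removal alone does not guarantee de-identification; quasi-identifiers such as rare diagnoses combined with demographics can still re-identify individuals.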

Ethical Framework


Global

UNESCO Recommendation on AI Ethics: Partially Active (earliest date Nov 25, 2021)

The UNESCO AI Ethics framework provides guidelines for responsible AI development, emphasizing transparency, accountability, and fairness. It advocates for AI systems to align with universal human rights and requires mechanisms to mitigate potential harms such as bias or misinformation.

OECD AI Principles: Active (earliest date May 22, 2019; full force date May 22, 2019)

The OECD AI Principles recommend technical and operational measures to ensure AI systems are transparent, robust, and fair. They call for mechanisms to assess risks, ensure explainability, and align AI models with ethical standards and societal values.

The Trustworthy AI Assurance Framework outlines technical measures for assessing the trustworthiness of AI systems, including risk evaluations, explainability protocols, and bias audits. It provides organizations with tools to certify their AI systems as trustworthy.
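As one concrete form the bias audits mentioned above can take, the sketch below computes a demographic parity gap between two groups' positive-outcome rates. The metric is a standard fairness measure, but the group data and any pass/fail threshold are purely illustrative.

```python
# Bias-audit sketch: demographic parity compares positive-prediction rates
# across groups. Data below is illustrative; no threshold here is a legal
# or regulatory standard.

def positive_rate(predictions):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = approved, 0 = denied
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 0.625 approval rate
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 0.250 approval rate

gap = demographic_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")
```

In practice an audit would track this gap (and complementary metrics such as equalized odds) over time, since a single snapshot can miss drift introduced by retraining.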

Additional Compliance Information

In addition to the AI technical compliance information above, see the AI legal compliance and wireless compliance resources linked below.

Legal compliance information is provided by Kama Thuo, PLLC AI Law Firm (external link).