AI Compliance
Stay informed with the latest updates on AI regulations, compliance requirements, industry standards, and guidelines compiled by Rfwel Engineering's AI Automation team.
Because Rfwel Engineering is a registered engineering firm (Arizona Reg# 17227, Electrical), the AI Automation team can more efficiently manage AI compliance issues in complex electrical, control, and communication AI applications.
Search and Filter Compliance Information
Statute
European Union
The EU AI Act establishes a risk-based framework for AI systems, prohibiting certain unacceptable-risk practices and imposing obligations on high-risk systems, including risk management, data governance, technical documentation, human oversight, and conformity assessment before market placement.
United States
The Algorithmic Accountability Act (AAA) requires organizations to perform algorithmic impact assessments (AIAs) that evaluate the design, bias, and potential impacts of automated decision-making systems. Developers must ensure AI systems are designed to minimize harm and comply with data protection principles.
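For illustration, the sketch below computes one bias metric, a demographic parity gap, that an algorithmic impact assessment might report. The sample data and the 0.10 review threshold are assumptions for the example, not requirements drawn from the Act.

```python
# Illustrative sketch of one bias metric an algorithmic impact
# assessment might report; the data and threshold are assumptions.

def demographic_parity_gap(outcomes, groups, positive=1):
    """Largest gap in positive-outcome rates across the groups
    (e.g., loan-approval rates by demographic group)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(o == positive for o in group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions for two groups of applicants.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.20 for the data above
# A gap above an assumed review threshold (e.g., 0.10) would be
# flagged for closer analysis in the assessment.
```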
Executive Order
United States
EO 14110 requires organizations to report data on the training and deployment of high-impact AI models. It includes provisions for risk assessment, content watermarking, and cybersecurity measures to ensure AI systems align with national security and ethical standards.
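EO 14110 directs the development of watermarking and content-authentication standards without prescribing a format. The sketch below shows one possible shape for a signed provenance record attached to AI-generated content; the schema, field names, and key handling are assumptions for the example.

```python
# Minimal sketch of a signed provenance record for AI-generated
# content. EO 14110 calls for content-authentication standards but
# does not prescribe a format; everything below is an assumption.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key

def provenance_record(content: bytes, model_id: str) -> dict:
    record = {
        "model_id": model_id,
        "generated_at": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the record so downstream consumers can detect tampering.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

rec = provenance_record(b"model output text", model_id="example-model-v1")
print(json.dumps(rec, indent=2))
```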
Industry
United States
The NIST AI RMF outlines technical processes to identify, measure, and mitigate risks associated with AI systems. It emphasizes transparency, robustness, fairness, and security in AI model development and deployment, while promoting accountability through continuous risk monitoring.
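As one concrete form continuous risk monitoring can take, the sketch below flags a shift in a model's live score distribution relative to a training baseline. The metric and the alert threshold are illustrative assumptions, not part of the RMF itself.

```python
# Sketch of a continuous-monitoring check in the spirit of the NIST
# AI RMF "measure" and "manage" functions; the metric and the 0.25
# threshold are illustrative assumptions, not framework requirements.
from statistics import mean, stdev

def mean_shift_alert(baseline, live, threshold=0.25):
    """Flag drift when the live mean moves more than `threshold`
    baseline standard deviations away from the baseline mean."""
    shift = abs(mean(live) - mean(baseline)) / stdev(baseline)
    return shift, shift > threshold

baseline_scores = [0.62, 0.58, 0.61, 0.64, 0.59, 0.60, 0.63]
live_scores     = [0.71, 0.69, 0.74, 0.70, 0.72]

shift, alert = mean_shift_alert(baseline_scores, live_scores)
print(f"shift={shift:.2f} sd, alert={alert}")
# An alert would feed the risk register and trigger the "manage"
# step: investigate, retrain, or restrict the system.
```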
Global
ISO/IEC TR 24028:2020 offers a framework for assessing the trustworthiness of AI systems, focusing on critical factors such as explainability, resilience, fairness, and security. It provides recommendations for evaluating AI models and systems against these trustworthiness criteria.
A SOC 2 Type 1 report assesses the design of an AI service provider's controls related to data security, privacy, and system integrity at a specific point in time. It reviews whether controls for handling data securely exist but does not test their effectiveness over a prolonged period, which makes it useful for organizations establishing baseline controls for their AI systems.
A SOC 2 Type 2 report evaluates the operational effectiveness of those same controls over a review period (e.g., 6-12 months), verifying that they consistently meet compliance and performance standards. It is the better fit for organizations that need to demonstrate ongoing control reliability.
Regulation
European Union
The DSA requires online platforms to implement robust content moderation systems, conduct thorough risk assessments, and ensure algorithmic transparency. Platforms must also give users the ability to appeal content moderation decisions and offer clear, accessible terms of service.
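As a sketch of what transparency and appealable decisions can look like in practice, the example below logs a machine-readable record for each automated moderation action. The schema is an assumption for illustration, not a format mandated by the DSA.

```python
# Sketch of a machine-readable record a platform might log for each
# automated moderation decision, in the spirit of the DSA's
# transparency and appeal requirements; the schema is assumed.
from dataclasses import dataclass, field, asdict
import uuid

@dataclass
class ModerationDecision:
    content_id: str
    action: str                 # e.g., "remove", "demote", "label"
    grounds: str                # rule or legal basis relied on
    automated: bool             # whether an algorithm made the call
    appeal_open: bool = True    # user can contest the decision
    decision_id: str = field(default_factory=lambda: uuid.uuid4().hex)

decision = ModerationDecision(
    content_id="post-8841",
    action="remove",
    grounds="terms-of-service section 4.2 (assumed example)",
    automated=True,
)
print(asdict(decision))
```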
The GDPR applies to AI systems that process personal data, requiring technical measures such as encryption, anonymization, and data minimization. It emphasizes data protection by design and by default, ensuring AI systems comply with privacy standards and mitigate risks associated with automated decision-making.
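The sketch below applies data minimization and pseudonymization to a record before it reaches an AI pipeline; the field allowlist and keyed-hash pseudonym are assumptions for the example. Pseudonymized data generally remains personal data under the GDPR, so surrounding safeguards still apply.

```python
# Sketch of data minimization and pseudonymization before records
# reach an AI pipeline; field names and the keyed hash are assumed
# for illustration, not a certified anonymization scheme.
import hashlib
import hmac

PSEUDONYM_KEY = b"managed-secret-key"            # hypothetical key
NEEDED_FIELDS = {"age_band", "region", "usage"}  # minimization allowlist

def minimize_and_pseudonymize(record: dict) -> dict:
    # Keep only the fields the AI task actually needs.
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    # Replace the direct identifier with a keyed pseudonym so records
    # can be linked without exposing the identity itself.
    out["subject_pseudonym"] = hmac.new(
        PSEUDONYM_KEY, record["email"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    return out

raw = {"email": "user@example.com", "name": "A. Person",
       "age_band": "30-39", "region": "AZ", "usage": 12.5}
print(minimize_and_pseudonymize(raw))
```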
The MDR mandates that medical devices using AI undergo rigorous clinical evaluation and risk assessment. Developers must ensure AI systems meet high standards for accuracy, reliability, and safety. Additional requirements include data protection, post-market surveillance, and traceability.
United States
Autonomous Vehicle Regulations require manufacturers to comply with safety standards, implement secure data handling, and provide transparency in AI decision-making processes. Technical requirements often include collision avoidance, fail-safe systems, and rigorous testing protocols.
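As a simple illustration of a fail-safe pattern, the sketch below wraps an AI perception step in a deadline check that falls back to a controlled stop when the step overruns its budget or returns nothing. The timing budget and fallback behavior are assumptions, not drawn from any specific standard.

```python
# Sketch of a software fail-safe around an AI perception stage; the
# 100 ms budget and the fallback action are illustrative assumptions.
# A real system would enforce the deadline preemptively (e.g., with a
# separate watchdog process), not just check it after the fact.
import time

DEADLINE_S = 0.10  # assumed per-cycle perception budget

def run_cycle(perceive, act, fail_safe):
    start = time.monotonic()
    result = perceive()
    if time.monotonic() - start > DEADLINE_S or result is None:
        fail_safe()        # e.g., decelerate to a controlled stop
    else:
        act(result)

run_cycle(
    perceive=lambda: None,                       # simulated sensor dropout
    act=lambda r: print("acting on", r),
    fail_safe=lambda: print("fail-safe: controlled stop"),
)
```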
HIPAA requires safeguards for ePHI (electronic protected health information), including access control, data integrity, and transmission security. AI systems handling ePHI must restrict access to the minimum necessary and use de-identified data where feasible.
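The sketch below strips direct identifiers before ePHI-derived records are used by an AI system, loosely modeled on the Safe Harbor idea of removing listed identifiers. The identifier list here is a small illustrative subset, not the full regulatory list.

```python
# Sketch of removing direct identifiers from a record before AI use;
# the field list is a small, illustrative subset of the identifiers
# the Safe Harbor method covers, not the full regulatory list.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address"}

def deidentify(record: dict) -> dict:
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS}
    # Generalize date of birth to year only, a common coarsening step.
    if "dob" in cleaned:
        cleaned["birth_year"] = cleaned.pop("dob")[:4]
    return cleaned

patient = {"name": "J. Doe", "mrn": "12345", "dob": "1980-07-04",
           "diagnosis_code": "E11.9", "phone": "555-0100"}
print(deidentify(patient))
# -> {'diagnosis_code': 'E11.9', 'birth_year': '1980'}
```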
Ethical Framework
Global
The UNESCO AI Ethics framework provides guidelines for responsible AI development, emphasizing transparency, accountability, and fairness. It advocates for AI systems to align with universal human rights and requires mechanisms to mitigate potential harms such as bias or misinformation.
The OECD AI Principles recommend technical and operational measures to ensure AI systems are transparent, robust, and fair. They call for mechanisms to assess risks, ensure explainability, and align AI models with ethical standards and societal values.
The Trustworthy AI Assurance Framework outlines technical measures for assessing the trustworthiness of AI systems, including risk evaluations, explainability protocols, and bias audits. It provides organizations with tools to certify their AI systems as trustworthy.
Additional Compliance Information
In addition to the AI technical compliance information above, see the links below for AI legal compliance and wireless compliance information.
Legal compliance information is provided by Kama Thuo, PLLC AI Law Firm (external link).