Executive Order 14110, signed by President Joe Biden on October 30, 2023, is a comprehensive directive aimed at governing the safe, secure, and trustworthy development and use of artificial intelligence (AI). This order has significant implications for private enterprises and vendors working with the federal government.
Implications for Private Enterprises
The cornerstone of Executive Order 14110 is a mandate for AI safety and security standards. The order tasks the National Institute of Standards and Technology (NIST) with developing guidelines for trustworthy AI systems.
Sticking to these guidelines is non-negotiable for private enterprises, particularly those in the supply chain for federal agencies. AI systems within these entities must undergo comprehensive safety testing and risk assessments before deployment. Specifically, it requires developers of powerful AI systems to share their safety test results with the U.S. government. In addition, companies developing foundation models that could pose serious risks to national security, economic security, or public health and safety must notify the federal government when training these models.
Promoting Responsible Innovations
While the executive order encourages AI innovation, it balances it out by ensuring that innovations do not exacerbate societal harms, such as discrimination or bias. Private enterprises must actively develop and deploy AI systems that promote equity and fairness. President Biden indicated his administration “cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.”
Companies should implement measures to avoid unintended consequences, such as AI tools that disproportionately impact particular demographic groups. A focus on responsible AI development will be essential for businesses to prevent damaging reputations or possible legal wrangles.
Consumer Protection and Privacy
Executive Order 14110 also emphasizes protecting consumer privacy and civil liberties. Businesses must ensure that their AI systems do not infringe on individual rights or enable AI-driven threats. This involves implementing robust data governance and privacy practices. As AI systems increasingly handle sensitive data, companies must ensure that privacy is built into their design and operation, aligning with the principles of data minimization and user consent.
Dual-Use Foundation Models
One of the order’s more nuanced aspects is its focus on dual-use AI models—those with both civilian and security applications like GPT-4, Gemini, or Claude 3. Companies developing these models must comply with specific reporting requirements, including disclosing information about the models’ capabilities, safety test results, and associated risk assessments.
This measure was put in place to prevent the misuse of AI technologies that could threaten national security. For companies operating in this space, transparency and collaboration with federal authorities will be at the heart of ensuring compliance.
The Impact on Employment Practices
The order and guidance from the Department of Labor highlight the need for fairness in AI-driven employment practices. Companies utilizing AI in hiring, promotions, or other HR functions must ensure these tools do not result in discriminatory outcomes.
Per the Department of Labor’s Department of Labor’s AI and Worker Wellbeing: Principles for Developers and Employees, regular audits of AI tools must be carried out, and detailed records must be maintained to show compliance with equal employment opportunity obligations. Failure to do this could lead to legal challenges and penalties.
Implications for Federal Government Vendors
Compliance with federal AI policies is paramount for vendors working with federal agencies. This includes aligning with NIST’s AI Risk Management Framework, which outlines the safety and security standards for AI systems used in government contexts. AI systems must meet these standards before being deployed in federal projects. Non-compliance could see vendors losing valuable contracts.
The executive order mandates that AI systems must meet these NIST standards before being deployed in federal projects. This requirement is not optional, making compliance essential for any vendor hoping to work on federal AI initiatives.
Additionally, the Office of Management and Budget (OMB) is directed to develop guidance for federal agencies regarding AI procurement. This guidance will likely include requirements for vendors to demonstrate compliance with NIST standards.
Record-Keeping and Transparency
Under the Order, federal contractors must also maintain detailed records of their AI systems to ensure transparency in their development and deployment. This is even more important for compliance evaluations and investigations by federal agencies.
Vendors must be prepared to provide documentation demonstrating that their AI systems meet safety, security, and ethical standards. Failure to maintain these records can lead to penalties or disqualification from federal tenders.
Liability Considerations
According to the Presidential order, federal contractors cannot delegate compliance obligations to third-party AI vendors. This means that even if a vendor procures AI technologies from another firm, the buck stops with them, and they remain 100% responsible for ensuring these systems comply with federal regulations.
Vendors must conduct thorough due diligence on all AI systems they use or procure to avoid liability issues. They must check that their AI tools do not have a disparate impact on protected groups and must be prepared to provide all AI-related records to federal agencies upon request.
Participation in AI Governance
Executive Order 14110 offers opportunities for private sector stakeholders to take part in shaping AI governance policies. Vendors can consult and have their say in developing AI standards and regulations. By being involved, companies stay ahead of regulatory changes and position themselves as leaders in the responsible development of AI technologies. Vendors, too, can build their reputation and credibility in the market by actively participating in AI governance.
Steps to Ensure Compliance with Executive Order 14110
To effectively comply with the AI guidelines outlined in Executive Order 14110, private enterprises and federal vendors must consider the implementing guidance by the Office of Management and Budget (OMB) and take the following steps:
Adopt AI Safety and Security Standards
They should implement the guidelines and best practices developed by NIST for trustworthy AI systems. This includes conducting rigorous safety testing and risk management practices to ensure AI systems are secure and reliable before they are deployed.
Conduct AI Impact Assessment
Regular AI impact assessments are essential for weighing up the potential risks associated with AI systems. This involves real-world testing, independent evaluations, and continuous monitoring to identify and mitigate risks, particularly those that could impact public safety or rights.
Ensure Transparency and Accountability
Companies must maintain transparency in their AI development processes, particularly for AI systems that pose significant risks (systems that could pose a high risk for personal health, security, or fundamental rights). Sharing safety test results and critical information with relevant authorities will be crucial for regulatory compliance.
Mitigate Bias and Promote Equity
Addressing bias in AI systems is another key priority under the Executive Order. Businesses need to implement measures to pinpoint and cut bias through fairness testing and align their practices with the Blueprint for an AI Bill of Rights or the OECD AI Principles.
Enhance in Continous Learning and Talent Development
Data privacy and security are central to the executive order’s goals. Companies must enforce rigid data governance and cybersecurity measures to protect sensitive data used in AI systems.
Engage in Continous Learning and Talent Development
Investing in employee training and upskilling in AI technologies, ethical practices, and risk management is essential. This confirms that the workforce is equipped to handle the complexities of AI governance and innovation.
The Role of the Chief AI Officer
Executive Order 14110 marks a significant shift in how AI is regulated in the United States. For all entities involved, compliance with this order is a legal requirement and a strategic step to facilitate safe and responsible AI development.
As companies consider all that is needed to comply with AI rules and regulations, a new member of the C-suite is joining the ranks—the Chief AI Officer (CAIO). These individuals are tasked with fueling innovation and growth and helping organizations of every size and industry gain a competitive edge.
The CAIO’s role will be steering entities through the complexities of AI governance and ensuring compliance with regulations like Executive Order 14110. In our next blog, we will explore the responsibilities and benefits of appointing a CAIO in more detail.
Book a consultation with Elevate to assist you built your AI Governance and AI Risk Management program and ensure compliance with contractual and regulatory requirements.