California issues procurement guidelines for state entities that are acquiring generative AI
Following the Governor’s executive order on Generative AI (GenAI) issued last year, California state agencies have worked to implement its provisions, publishing GenAI procurement guidelines and a risk assessment methodology for the ethical and responsible deployment of GenAI. These resources are aimed at helping state entities evaluate the risks associated with GenAI systems.
For the private sector, the guidelines offer insight into what California regulators view as critical components of GenAI compliance. They may signal the direction California is moving with regard to the regulation of GenAI generally, and they lay out the requirements that companies selling GenAI products to California state agencies will need to meet.
Last year, California Governor Gavin Newsom issued an executive order directing California agencies to study GenAI technologies and develop additional guidance on the procurement of these technologies. In March, the Newsom administration released GenAI Guidelines that state agencies must follow if they purchase or use AI to generate content, such as analyses of health care claims or tax filing data. These procurement guidelines follow a trend proliferating across states of setting high-level requirements focused on higher-risk activities. The California Department of Technology (CDT) Office of Information Security also released its GenAI Toolkit and Generative Artificial Intelligence Risk Assessment to aid state entities in evaluating GenAI systems.
Highlights from GenAI Resources
State entity responsibilities. The GenAI Guidelines highlight the responsibility of each state entity to deploy GenAI in an ethical, transparent, and trustworthy manner. State entities are directed to conduct an inventory of all GenAI uses and submit it to the CDT. Additionally, each state entity director and their executive leadership teams, including their Chief Information Officers (CIOs), are directed to:
- Assign a member of the executive team to monitor and evaluate GenAI use
- Attend mandatory GenAI trainings (executive & procurement teams)
- Review annual employee training to ensure staff understands GenAI use policies
- Prior to procurement, identify the business needs being addressed by using GenAI and understand the implications of using GenAI to address them
- Create an open and collaborative culture with employees regarding GenAI’s impact for intentional procurements
- Assess the risks and potential impacts of deploying the GenAI technology it intends to procure
- Prior to deployment, prepare data inputs and adequately test models to reduce bias and errors
- Establish a GenAI-focused team to continuously evaluate GenAI use and its implications
GenAI Procurement. The GenAI Guidelines set out requirements for state entities seeking to procure GenAI, which will affect companies that wish to sell GenAI to those entities. State entities will be required to assign a GenAI subject matter expert to assist with contract management functions and to report all contracts intended to purchase GenAI. Many of the requirements, however, will fall on the companies themselves:
- The companies’ products will need to pass a GenAI assessment (as described further below)
- Bidders for state contracts will be required to submit a GenAI fact sheet describing the models in fairly extensive detail
- The contracts will need to include special provisions, which have not yet been drafted by CDT, but will include at least a disclosure that GenAI is being used and a requirement to disclose the use of GenAI in any responses to state entities
GenAI risk assessment and management. The GenAI Guidelines require each state entity’s CIO to complete a risk assessment created by CDT. That risk assessment is based on NIST’s AI Risk Management Framework, regardless of the system’s intended use. Essential questions to address from the outset include:
- What are potential inequities in problem formulation?
- What are the data inputs?
- How and when will the solution be implemented and integrated into existing and future processes and delivery of services?
- Who will be the GenAI team responsible within the program area to monitor, validate, and evaluate the GenAI tool?
Similar to other regulatory frameworks (in particular, those in the EU), the assessment categorizes risks into separate levels (low, moderate, high) and identifies the criteria state entities can use to determine the appropriate level. Under Part 1 of the risk assessment, state CIOs complete a series of questions aimed at evaluating the implementation and implications of using GenAI solutions within the state organization. The topics covered include:
- an assessment of the project overview and organizational need,
- an analysis of alternative solutions,
- an examination of the type of GenAI system being sought, including whether the system will be shared with other state entities or third parties,
- the completion of relevant impact assessments, and
- an evaluation of financial considerations, such as the long-term viability and maintenance of integrated GenAI systems.
Once Part 1 is completed, and if the GenAI risk level is rated moderate or high, state entities must complete Part 2, which lists specific controls that state entities must incorporate before procuring and deploying higher-risk GenAI systems. These controls include:
- requiring human verification of the accuracy of outputs,
- business services not being contingent on the use of the GenAI system,
- implementing a data loss prevention system,
- using security controls compliant with NIST SP 800-53, and
- bringing the entire IT infrastructure into compliance with a zero-trust architecture.
For the full list, see here.
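To illustrate the two-part structure described above, the sketch below models the gating logic: a Part 1 risk rating of moderate or high triggers the Part 2 controls. This is purely illustrative and not an official CDT tool; the control descriptions are paraphrased from the bullets above, and the function and constant names are hypothetical.

```python
# Hypothetical sketch of the two-part risk assessment flow described in the
# GenAI Guidelines. Not CDT's actual tool; names and structure are illustrative.

RISK_LEVELS = ("low", "moderate", "high")

# Part 2 controls, paraphrased from the guidance summarized above.
PART_2_CONTROLS = [
    "Human verification of the accuracy of outputs",
    "Business services not contingent on the GenAI system",
    "Data loss prevention system",
    "Security controls compliant with NIST SP 800-53",
    "IT infrastructure compliant with a zero-trust architecture",
]

def required_controls(risk_level: str) -> list[str]:
    """Return the Part 2 controls triggered by a Part 1 risk rating."""
    if risk_level not in RISK_LEVELS:
        raise ValueError(f"unknown risk level: {risk_level!r}")
    # Part 2 applies only when the Part 1 rating is moderate or high.
    if risk_level in ("moderate", "high"):
        return list(PART_2_CONTROLS)
    return []

print(len(required_controls("low")))       # 0
print(len(required_controls("moderate")))  # 5
```

A low-rated system stops after Part 1, while moderate- and high-rated systems inherit the full control set before procurement and deployment.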
State workforce GenAI training. The GenAI Guidelines recommend state entities incorporate training on emerging technologies, such as GenAI, into mandatory Information Privacy and Security Training for all state employees. They also outline a phased approach to workforce training. Phase 1 focuses on broad educational and training programs for executive-level personnel, followed by specialized training for roles such as legal, labor, and privacy specialists. Phase 2 covers developing GenAI skills among program staff to enhance operational efficiency and support the delivery of safe, high-quality, equitable services (including training for technical and cybersecurity experts). Phase 3 involves foundational education for the general workforce before deployment of GenAI tools.
Other AI-related California Legislative Developments
The developments around AI governance are not limited to imposing requirements on the public sector. State legislators in California are also aiming to regulate the private sector’s use of AI, addressing many of the same concerns as the GenAI Guidelines and risk assessment. In March, California also continued to move forward on SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models (not Systems) Act), which would govern developers of frontier models. Additionally, other AI-related bills have moved forward and are currently in committee. These include:
- Senate Bill 896 – seeks to build upon recent AI directives from President Biden and Governor Newsom to encourage innovation while protecting the rights and opportunities of all Californians.
- Senate Bill 892 – would require the CDT to establish safety, privacy, and nondiscrimination standards relating to AI services and prevent related contracts unless they comply with such standards.
- Senate Bill 893 – would establish the California Artificial Intelligence Research Hub within the Government Operations Agency.
- Assembly Bill 2930 – aims to regulate Automated Decision Systems (ADS) by assessing and eliminating algorithmic bias.
Next Steps
These developments provide concrete examples of the types of questions and risk mitigations that state regulators consider when evaluating compliance activities associated with the procurement, deployment, and use of GenAI technologies. While these technologies may be new to many organizations, the California resources underscore that the diligence involved in integrating them is not, and they set out baseline expectations from potential regulators.
Organizations seeking to incorporate AI systems, including GenAI functionalities, within their business can adapt existing compliance procedures and policies to address the AI-specific issues and risks. Key principles such as oversight and accountability, needs assessment, performance monitoring, leadership and management training, due diligence and risk assessments, and proper contracting can be leveraged to promote and support appropriate internal governance and as evidence to regulators that the organization treats the use of AI systems with the appropriate amount of care and attention. As organizations build out and assess their own AI safety and compliance efforts, these resources can be a helpful starting point or reference to identify gaps or areas for improvement.
Authored by
Nathan Salminen, Alyssa Golay, and Pat Bruny.