Join Us in Accelerating the Skills First Movement

The Center for a Skills First Future helps employers put skills at the center of hiring and advancement—unlocking talent, fueling growth, and creating opportunity. We provide the tools, guidance, and shared language to make skills-first practices the standard.

Skills Action Planner

Evaluate your organization’s progress in adopting a skills-first approach. Our interactive tool helps you identify strengths, gaps, and actionable steps to implement a skills-based approach effectively.

Resource Library

Access research, tools, and employer examples to implement and sustain skills-first talent strategies across the employee lifecycle.

Skills First Credential

Demonstrate expertise in skills-based workforce strategies. Equip yourself with the tools to drive impact across hiring, development, and retention.

Vendor Database

Find vetted solutions and community support for skills-first implementations—from sourcing and assessment to upskilling and mobility.

19x

Removing degree requirements unlocks 19x more candidates (JFF, 2023)

30%

Skills-first hiring reduces cost-per-hire by 30% (SHRM, 2023)

42%

Companies adopting skills-first practices decrease turnover by 42% (McKinsey, 2023)

Featured Resource

Regulation/Policy in Artificial Intelligence (AI)

Overview

Artificial Intelligence (AI) is increasingly integrated into workplaces, learning environments, and credentialing systems, raising critical questions around ethics, safety, and accountability. Regulatory and policy frameworks are emerging to guide responsible AI development and use, balancing innovation with risk mitigation.

In the United States, AI oversight is decentralized: federal agencies provide guidance, while states enact sector- or technology-specific regulations. Employers also play a key role in shaping AI adoption and compliance in practice. Globally, governments are creating diverse frameworks reflecting local legal, social, and economic contexts.

Together, these layers form a multi-level governance system shaping how AI is deployed, monitored, and held accountable.

Drivers & Background

Key drivers of AI regulation and policy highlight the need for coordinated frameworks that can keep pace with rapid technological change:

  • Growing use of AI in high-impact decisions, such as hiring, financial eligibility, healthcare diagnostics, education technology, and public benefits administration.
  • Increasing concerns about civil rights, discrimination, and algorithmic bias, particularly for vulnerable populations.
  • Public-sector adoption of AI, which requires clear rules for procurement, transparency, and agency accountability.
  • Data privacy and cybersecurity challenges arising from AI models trained on large, sensitive datasets.
  • National competitiveness, emphasizing responsible innovation and workforce readiness.
  • Sector-specific risks, such as the use of AI in autonomous vehicles, medical devices, and financial technologies.
  • Global policy developments—notably the European Union’s AI Act and cross-national frameworks developed by the Organisation for Economic Co-operation and Development (OECD)—which create external pressure for coherent U.S. approaches.

AI regulation and policy have direct implications for the learn-and-work ecosystem:

  • Educational institutions must ensure AI-enabled tools used for teaching, learning, advising, and assessment comply with privacy, civil rights, and transparency requirements.
  • Employers and workforce agencies must adopt responsible AI practices in hiring, training, performance monitoring, and workplace decision-making.
  • Learners and workers need access to AI literacy and skills training to participate in an AI-shaped labor market.
  • Credentialing bodies and training providers may see increased demand for certifications, microcredentials, and degree programs focused on responsible AI, AI auditing, and AI system design.
  • Equity considerations are central: policies aim to reduce bias and ensure fair access to educational and economic opportunities influenced by AI systems.

AI Regulation/Policy — U.S. Government

At the federal level, AI regulation and policy encompass laws, executive orders, regulatory guidance, and national standards that promote the safe and ethical use of AI. Key focus areas include:

  • Civil Rights and Equity
    • Federal agencies enforce protections against discrimination when AI is used in employment, housing, credit, education, and public services. These protections establish baseline expectations for fairness and human rights.
  • AI Safety and Risk Management
    • Federal policy promotes risk-based approaches, including safety testing, red-teaming, continuous monitoring, and impact assessments—especially for high-risk or high-consequence systems.
  • Technical Standards and Benchmarks
    • Organizations issue frameworks and evaluation guidelines that help federal agencies and private actors build trustworthy AI aligned with national expectations.
  • Government Use and Procurement
    • Federal agencies are required to assess and document risks associated with AI tools they purchase or develop, ensuring transparency, human oversight, and accountability.
  • Sector-Specific Oversight
    • Federal law applies to AI used in areas such as healthcare, financial services, transportation, and national security—each governed by specialized regulatory bodies.
  • International Alignment
    • Federal policy increasingly considers global interoperability and cross-border data implications, aligning with efforts by other nations.

Examples of federal guidance include:

  • NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF 1.0) as a voluntary guide for organizations to identify, assess, and mitigate AI risks.
  • Federal Trade Commission (FTC): The FTC enforces laws against deceptive or unfair practices involving AI, providing guidance for transparency and accountability in algorithmic systems.
  • Executive Initiatives: The Biden administration issued executive orders and interagency guidance promoting ethical AI development, safety standards, and public-private collaboration to ensure responsible AI adoption.

State Role in AI Regulation/Policy

States play a critical and expanding role in shaping AI governance in targeted, localized ways, including:

  • AI-Specific Laws
    • Many states have enacted or proposed laws addressing automated decision systems, transparency requirements, data privacy protections, and limits on high-risk AI applications.
      • California: The California Consumer Privacy Act (CCPA) regulates collection, storage, and use of personal data, including AI-driven analytics.
      • Illinois: The Biometric Information Privacy Act (BIPA) sets strict rules for AI applications involving biometric data, such as facial recognition.
      • New York: Proposed legislation focuses on algorithmic accountability for high-risk AI systems, requiring transparency and bias mitigation.
    • These laws differ across states, creating a complex compliance landscape.
  • Governance Structures
    • States increasingly create AI task forces, advisory boards, or ethics committees to assess risks and recommend policy frameworks.
  • Public-Sector Standards
    • States regulate how AI can be used in state agencies, including requirements for audits, impact statements, procurement guidelines, and restrictions on certain technologies.
  • Context-Specific Regulations
    • States often focus on areas such as K–12 and higher education uses of AI, workforce and labor protections, consumer privacy, and public safety and law enforcement tools.

State-level policies vary widely, creating a complex compliance environment for organizations operating across multiple jurisdictions. While states retain authority to implement more stringent or specialized rules, they must do so within the bounds of federal civil rights, labor laws, and sectoral regulations. This creates a layered governance model in which federal policy establishes minimum expectations, and states refine, expand, or enforce context-specific requirements.

Employer & Industry Developments

Employers are adapting to evolving AI regulations while integrating AI into talent, credentialing, and operational processes:

  • Hiring/Talent Management
    • Organizations are evaluating AI-driven hiring platforms to ensure compliance with bias mitigation and transparency requirements. Tools that assess skills, credentials, and candidate fit must align with both state-level laws and industry best practices.
  • Workforce Credentialing/Learning
    • Companies increasingly use AI for workforce upskilling, personalized learning, and credential verification. Regulatory oversight ensures these AI systems operate fairly, accurately, and with appropriate privacy safeguards.
  • Risk Management and Ethics Programs
    • Many employers are establishing AI governance frameworks, ethics boards, and internal audits to mitigate risks related to algorithmic bias, privacy violations, and unintended decision-making consequences.

Employer engagement in AI regulation also shapes public policy. Industry groups, consortia, and partnerships often provide feedback on draft regulations, share best practices, and develop standards for responsible AI use. Employer and sector-based policies operate under federal and state regulatory frameworks rather than standing apart from them. These include:

  • Internal responsible AI guidelines, such as algorithmic audits, fairness checks, and data management standards.
  • Compliance with federal and state laws, including civil rights, labor protections, consumer privacy, and safety standards.
  • Industry-specific regulatory expectations, such as requirements in healthcare, finance, transportation, and defense sectors.
  • Voluntary codes of conduct or certification programs, often aligned with federal standards (e.g., NIST guidance) or international norms.

Employers do not create independent regulatory authority, but they develop operational policies to meet legal obligations and professional standards within their sectors.

Global Developments

AI regulation is evolving worldwide, reflecting different legal traditions, policy priorities, and ethical considerations:

  • European Union
    • Enacted the Artificial Intelligence Act, a risk-based regulatory framework that categorizes AI systems by level of risk (from minimal to “unacceptable risk”) and imposes stricter rules on high-risk systems.
    • Under this law, certain AI uses — such as social scoring, manipulative behavior, and some biometric systems — are prohibited, while “high-risk” systems face obligations for transparency, human oversight, and conformity assessments.
    • The EU’s governance architecture also includes a European AI Office, a Scientific Panel of independent experts, and a Board of member-state representatives to guide consistent implementation.
  • Intergovernmental Bodies
    • The Organisation for Economic Co-operation and Development (OECD) has established a set of AI Principles that many countries use as a foundation for policy. These principles emphasize values such as human rights, transparency, robustness, accountability, and the promotion of inclusive growth.
    • In June 2025, the OECD published a report on global AI governance that highlights how governments are defining roles, setting risk-based frameworks, and creating institutional structures to implement trustworthy AI.
    • There is also movement toward cross-border legal instruments: for example, the Council of Europe adopted the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law.
  • United Kingdom
    • The UK emphasizes sector-specific guidance and ethical AI standards, aligning regulation with innovation while promoting international compatibility.
  • Canada
    • Canada’s Directive on Automated Decision-Making sets standards for federal AI deployment, focusing on bias assessment, accuracy, and transparency.
  • Asia-Pacific
    • Japan, Singapore, and South Korea prioritize ethical, human-centered AI and data protection while encouraging innovation and adoption across sectors.
    • Korea’s “AI Basic Act” (effective January 2026) designates “high-impact” AI uses and requires enhanced transparency and trustworthiness measures.

These global developments reflect a broader trend: nations are not only applying ethical frameworks but also building enforceable laws, institutions, and policy mechanisms to coordinate AI governance across borders.

Challenges & Considerations

  • Regulatory fragmentation between federal and state approaches may create complexity for institutions, employers, and technology developers.
  • Rapid pace of technological change challenges the ability of laws and policies to remain relevant and effective.
  • Data governance complexity increases as AI systems rely on larger and more interconnected data sources.
  • Resource disparities may limit states, agencies, small employers, or educational institutions in implementing policy requirements.
  • International variation in AI rules may create challenges for cross-border data flows and global technology deployment.

Resources

European Commission. (2023). Proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

Council of Europe. (2024). Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. https://en.wikipedia.org/wiki/Framework_Convention_on_Artificial_Intelligence

Council of the European Union. (2024). Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI [Press release]. https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai

Government of Canada. (2020). Directive on automated decision-making. https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592

Learn & Work Ecosystem Library. Glossary: Artificial Intelligence (AI) Regulations/Policy - U.S. Government.

Learn & Work Ecosystem Library. Glossary: Artificial Intelligence (AI) Regulations/Policy - States.

National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://doi.org/10.6028/nist.ai.100-1

Federal Trade Commission. (2022). Using artificial intelligence and algorithms. https://www.ftc.gov/business-guidance/blog/2022/04/using-artificial-intelligence-and-algorithms

Organisation for Economic Co-operation and Development. (2019). OECD principles on artificial intelligence. OECD Publishing. https://doi.org/10.1787/eedfee77-en

State of California. (2020). California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa

UK Department for Digital, Culture, Media & Sport. (2023). National AI strategy: An overview. https://www.gov.uk/government/publications/national-ai-strategy

United States White House. (2022). Blueprint for an AI bill of rights: Making automated systems work for the American people. Office of Science and Technology Policy. https://www.whitehouse.gov/ai-bill-of-rights

United States White House. (2023). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. Office of the President. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Learn more
"I wish this had been available years ago—it would’ve saved time and made things much easier." -Adrienne Farthing
Are you using skills-first hiring, development, or advancement practices? Let's learn from each other.
Founding Investors

Founding Partners