
Today, I submitted a comment in response to the Office of Science and Technology Policy's (OSTP) request for information on identifying statutes, regulations, agency rules, guidance, forms, and administrative processes that unnecessarily hinder the adoption of artificial intelligence (AI) within the United States.
This comment commends OSTP’s broader interest in repealing or modifying regulations, processes, etc., that existing or emerging AI capabilities have rendered obsolete or outmoded. Reaping the enormous upside of AI will require accelerating the adoption of new AI-enabled processes and applications, not merely leading at the technical frontier. Diffusing existing AI capabilities into every corner of the economy is also relatively low risk, and likely essential for adapting to – and defending against – various forms of AI use and misuse.
As the Request alludes to, even the most beneficial regulations tend to codify practices for a particular technological paradigm, market structure, or mode of production. A comprehensive regulatory reset is thus justified to facilitate AI diffusion as a general-purpose technology, independent of whether one is “pro” or “anti” regulation in the abstract. Imagine a future where AI-powered medical clinics are permitted to provide low-cost medical services in underserved parts of the country. Such clinics would likely be prohibited under current law and regulation, or at a minimum be severely curtailed in how autonomously they can operate. Nevertheless, it is now only a matter of time before AI systems surpass medical doctors on all relevant benchmarks while being sufficiently robust to not require direct human oversight. At that point, the role of the human practitioner would become pro forma and thus an artificial barrier to realizing AI’s full potential for expanding the supply side of medical services.
The above is just an illustrative example. As frontier AI systems approach capabilities resembling Artificial General Intelligence (AGI), similar regulatory constraints on diffusion risk manifesting for everything, everywhere, all at once. The Request is thus correct to focus on how existing laws and regulations from the pre-AI era risk binding on diffusion in indirect and non-obvious ways, over and above laws that target AI companies or services directly.
However, the sheer complexity of U.S. law and regulation suggests it will not be sufficient to remove inhibiting regulations on an ad hoc basis or one at a time. Consider the earlier toy example of a fully autonomous AI medical clinic. Such a clinic may be technically feasible today but de facto prohibited by a thicket of interacting laws, standards, and regulations. Remove one and discover five more. A more streamlined approach would instead work backwards from the technology through regulatory sandboxes or bespoke waivers that define new and superseding performance-based benchmarks.
The scale of the needed reform is daunting. Updating our legal and regulatory systems for AGI will require close engagement with Congress and policymakers at every level of government. We hope the policy outputs downstream from this Request for Information help lead the way.
Responses to questions:
(i) What AI activities, innovations, or deployments are currently being inhibited, delayed, or otherwise constrained due to Federal statutes, regulations, or policies? Please describe the specific barrier and the AI capability or application that would be enabled if it was addressed. The barriers may directly hinder AI development or adoption, or indirectly hinder through incompatible policy frameworks.
(ii) What specific Federal statutes, regulations, or policies present barriers to AI development, deployment, or adoption in your sector? Please identify the relevant rules and authority with specificity, including a cite to the Code of Federal Regulations (CFR) or the U.S. Code (U.S.C.) where applicable.
Given the breadth of AI’s potential applications, below is a highly non-exhaustive list of specific statutes, regulations, or policies where legacy practices delay or inhibit the adoption of AI. Examples were selected for inclusion due to their clear societal or economic import.
Transportation
The clearest examples of legacy regulation inhibiting the deployment of AI-enabled innovation are in transportation, where autonomous vehicles (AVs) and drones clash with rules designed for human operators. Federal Motor Vehicle Safety Standards (FMVSS) (49 CFR Part 571) assume vehicles have human controls: steering wheels, pedals, mirrors, etc. In fact, prior to recent updates, standards for steering columns and driver’s seats effectively required a steering wheel and front-facing driver’s seat. A vehicle designed purely for an AI driver (with no steering wheel) could thus not comply without an exemption. NHTSA has begun updating some FMVSS terminology (e.g. replacing “steering wheel” with the more neutral “steering control” in regulations), but many legacy provisions still constrain AV design. The result is that truly driverless cars are legal only in small pilot fleets, and innovations like novel cabin layouts or safer seating configurations are foreclosed by outdated requirements.
Beyond vehicle design, operational rules assume a human driver present at all times. For instance, under Federal Motor Carrier Safety Regulations for trucking, a “driver” is defined as “any person who operates any commercial motor vehicle”. By definition, this excludes an AI system. Many other key rules – hours-of-service limits, commercial driver’s licensing, roadside inspection duties – also hinge on a human driver. A Level-4 autonomous semi-truck that operates itself thus has no way to satisfy rules like 49 CFR §392.22(b), which requires the driver to manually place warning triangles on the road if the truck breaks down. This task is “straightforward for a driver, but as currently designed, driverless commercial motor vehicles are unable to handle the same task”. Waymo and others thus had to petition FMCSA just to use an automated beacon in lieu of reflective triangles when no human is in the cab. These types of requirements are direct barriers to the rollout of autonomous freight trucks.
For drones (unmanned aircraft), the Federal Aviation Administration’s rules similarly assume a human pilot. Under 14 CFR §107.31, a remote pilot must keep visual line-of-sight (VLOS) to the drone at all times. This blanket restriction makes advanced AI-driven drone operations – like long-range infrastructure inspection or package delivery – unlawful unless a case-by-case waiver is obtained, even though the technology for safe beyond-visual-line-of-sight (BVLOS) flight exists. The default VLOS rule means AI-powered drones cannot be widely deployed for precision agriculture, surveying, or delivery services, significantly delaying adoption.
Autonomous trains and ships face analogous barriers. Freight railroads increasingly have the technical ability for driverless or remotely supervised train operations, but Federal Railroad Administration rules (49 CFR Parts 240–242) mandate certified human engineers and conductors on board. Maritime law (46 U.S.C. and Coast Guard regulations) similarly requires minimum crew manning on vessels. There is currently no clear path to operate an unmanned commercial ship, because 46 CFR Part 15 “Manning Requirements” presumes a crewed vessel. In sum, across transportation modes, regulatory mismatch and structural incompatibility (rules built entirely around human presence) are directly inhibiting AI-powered innovations that could improve safety and efficiency.
Recommendations
- Autonomous vehicles: Automated Driving Systems (ADS) are constrained by Federal Motor Vehicle Safety Standards (FMVSS) that assume human controls, prohibiting fully driverless designs. Relief is limited to 49 U.S.C. §30113 and 49 CFR Part 555 exemptions with tight volumes and durations. NHTSA should raise unit caps and set a service-level agreement for issuing ADS exemptions with operational monitoring within a reasonable timeframe.
- Aviation BVLOS: Small UAS remain tied to visual line of sight. Operations beyond visual line of sight (BVLOS) require slow, bespoke waivers under 14 CFR §107.31 and §107.200, blocking scalable applications for delivery, logistics, and inspection. FAA should instead publish standardized BVLOS safety-case templates and categorical waivers in low-risk corridors.
- Rail automation: FRA’s 49 CFR Part 218, Subpart G generally requires two-person rail crews, preventing automation where equivalent or superior risk performance can be shown. Explicit automation pathways should be added.
- Maritime autonomy: 33 CFR §83.05 (Rule 5) and 46 CFR §§15.705/15.850 embed a human lookout presumption. Allow sensor-fusion “equivalent lookout” performance with remote watchkeeping.
Manufacturing, construction and heavy industry
AI-driven automation in manufacturing, construction, and heavy industry is often constrained by rules that assume human operators. For instance, in high-regulation industries (pharmaceuticals, medical devices), Good Manufacturing Practice (GMP) rules assume human oversight in quality control. FDA’s drug GMPs mandate a human Quality Control Unit to “approve or reject” each batch (21 CFR 211.22), making it unclear whether an AI system could ever serve that role. Cutting-edge vision AI can detect product defects more reliably than people, but fully automating QC violates the letter of GMP. As a result, AI is relegated to an advisory role, and potential improvements in consistency and speed are lost.
Autonomous construction machinery (robotic excavators, bulldozers, or cranes) is also inhibited by legacy rules. Fully autonomous machines could reduce injuries by removing operators from hazardous environments and work continuously to speed up projects. However, OSHA’s construction safety rules assume a human operator for such equipment. Specifically, OSHA requires that each crane operator be “trained, certified/licensed, and evaluated” before operating a crane (29 CFR §1926.1427(a)). This framework makes sense for human operators, but offers no mechanism to certify an AI-driven crane or earthmover. An autonomous crane that can lift materials on site cannot legally operate because there is no human to hold the required operator license or certification. The only way to comply would be to have a person constantly “pretend” to be the operator, defeating the purpose of autonomy. Thus, the deployment of autonomous lifting equipment is effectively prohibited on U.S. construction sites today.
Another barrier is the requirement for human supervision and inspection in construction safety. Many OSHA construction standards mandate a “competent person” on site to identify hazards and make safety decisions (e.g. inspecting excavations, scaffolds, etc.). AI-based vision systems can now monitor sites 24/7, detecting risks like unstable trenches or missing fall protection in real time. Yet if the regulation (e.g. 29 CFR §1926.651 for excavations) explicitly requires a competent human person to perform an inspection, it’s unclear whether an AI detection system could satisfy the rule. Project managers therefore underutilize such AI safety tech, since it lacks official recognition as an equivalent to human oversight.
Additionally, regulatory processes in construction (permitting, code compliance) are often inflexible, slowing AI use. For example, building-design AI tools can generate novel structural designs or optimize construction plans. But federal and state building codes (which, while not in the CFR, are enforced via federal projects and OSHA’s general duty clause) rely on professional engineer sign-offs and traditional calculations. AI-generated designs, even if superior, face approval bottlenecks because codes and contracting rules did not anticipate AI design. In federal construction projects, any deviation from established standards may require lengthy variances or additional proof, delaying project schedules. This lack of adaptive regulatory process means AI augmentation in engineering – like generative design or automated code compliance checking – is inhibited. Overall, many promising construction AI applications (from autonomous equipment to intelligent project management) are stuck in pilot mode due to regulations predicated on manual, human-centric practices.
AI-enabled autonomy and decision-making are also advancing in heavy industries like mining and energy. In large open-pit mines, autonomous haul trucks and drills can remove operators from hazardous areas and improve efficiency. U.S. mine safety rules (30 CFR) were not written for driverless fleets, and MSHA has no autonomy-specific operating standard today. Instead, mines must meet a new performance-based rule for surface mobile equipment, requiring a written safety program that identifies hazards, sets maintenance/repair procedures, trains personnel, and evaluates technologies (the current compliance hook for autonomous systems). The result is not a ban but a gray zone governed by performance duties rather than autonomy certification. Practical ambiguities remain because many provisions presume a human operator or attendant. For example, “unattended equipment” parking rules and pre-operation/shift examinations by a competent or certified person raise questions about who performs inspections and how “unattended” applies to autonomous trucks.
Recommendations
- Autonomous construction machinery: OSHA’s crane rules (29 CFR §1926.1427) assume a licensed human operator and provide no pathway for AI-operated cranes or earthmovers. OSHA should create an “autonomous equipment authorization” pathway—via near-term interpretation/enforcement discretion with site-specific safety cases and a designated responsible supervisor, and long-term rulemaking in Subpart CC—to certify system-level performance rather than a human license holder.
- Site monitoring: Many construction standards (e.g., 29 CFR §1926.651 for excavations) require inspections by a “competent person,” which leaves validated AI vision systems in limbo. OSHA should publish acceptance criteria for AI-based detection/monitoring and develop standards for when AI oversight is an equivalent means of compliance.
- AI design & permitting for federal projects: Current code compliance and procurement practices require traditional calculations and PE sign-offs, slowing adoption of generative design and automated plan checking. Federal owners (GSA, DoW, etc.) should accept AI-generated designs, adopt automated code-check pipelines, and create categorical “alternate means” approvals for AI-aided designs.
- Mining autonomy: MSHA relies on a performance-based surface mobile equipment safety program without autonomy-specific operating standards, producing ambiguities around “unattended equipment” rules and pre-shift exams. MSHA should issue Program Policy Letters and a compliance guide clarifying inspection responsibilities for autonomous fleets, how parking/attendance rules apply, and a pilot “autonomy authorization” process with defined performance metrics and data logging.
Healthcare
There is perhaps no sector where AI has greater upside potential than in healthcare and biotechnology. Nevertheless, AI innovation in diagnostics and labs is bottlenecked by rules written for physician-ordered, human-performed testing. Medicare’s physician-order gate (42 CFR 410.32 with 411.15(k)) makes patient-initiated AI screening (think pharmacy kiosks or home-based dermatology checks) generally non-covered. Inside laboratories, proficiency-testing anti-referral provisions (42 CFR 493.801) chill cross-lab benchmarking and federated evaluations for AI analyzers. Meanwhile, regulatory whiplash around FDA’s oversight of lab-developed tests has frozen investment in digital genomics AI by muddying the route to market. Loosening these constraints would unlock low-friction screening at scale, safer cross-site model comparison, and clearer commercialization paths for AI-enabled laboratory-developed tests (LDTs).
Health data access and training remain structurally misaligned for modern AI. HIPAA’s de-identification safe harbor doesn’t recognize formal privacy approaches, constraining the creation and sharing of robust de-identified training corpora. Substance-use data (42 CFR Part 2) and patient-safety work product keep rich behavioral-health and incident datasets siloed unless navigated through narrow exceptions, hampering unified risk models and safety learning. ONC’s HTI-1 disclosures for predictive Decision Support Interventions (DSIs) can also slow rollouts when an EHR vendor supplies the tool. Updating de-ID to be modality-aware while right-sizing DSI transparency would expand lawful data for training while preserving trust.
Operational and payment rules within the U.S. medical system likewise assume a human at every step. Virtual direct supervision flexibilities for “incident to” and outpatient services remain partial and time-limited. Hospital Conditions of Participation require human authentication and order origination in ways that don’t cleanly accommodate AI-generated drafts or automated provenance, even with human attestation and immutable audit trails, thereby slowing adoption of safe scribing and AI-assisted ordering. Making virtual supervision permanent and tech-neutral, and explicitly recognizing AI-originated documentation, would let providers redeploy staff time while maintaining accountability.
Finally, statutory data infrastructure is lagging. The long-standing appropriations rider blocking a national patient identifier prevents deterministic matching, degrading both training data quality and real-time coordination that AI could power.
Recommendations
- Autonomous screening: Autonomous screening and patient-initiated AI diagnostics are throttled because Medicare requires most diagnostic tests to be ordered by the treating physician, while patient-initiated AI screening or pharmacy-kiosk AI is generally not considered “reasonable and necessary.” Reforming 42 CFR 410.32(a) and 42 CFR 411.15(k)(1) would enable retail/remote AI screening (retinopathy, dermatology, spirometry, audiology) and continuous AI triage at scale.
- Multi-site learning: Algorithmically assisted, multi-site learning inside labs is chilled by CLIA proficiency testing (PT) anti-referral rules that forbid inter-lab communication on PT samples, while labs fear federated evaluation or model comparison could be misconstrued as PT “referral.” 42 CFR 493.801(b)(3)–(5) and CMS’s interpretive guidance should thus be updated to enable safer cross-lab benchmarking of AI analyzers.
- HIPAA de-identification: Existing de-identification rules are not tuned to a world with high-dimensional, multimodal forms of AI (text, images, waveforms). HIPAA Safe Harbor lists 18 identifiers but doesn’t address re-ID risk in rich modalities or synthetic data. 45 CFR 164.514(a) should be reformed to enable scalable use of de-identified imaging/waveform corpora and other privacy-preserving synthetic datasets.
- Algorithm transparency burdens in EHRs: Rules governing certified EHRs (ONC HTI-1) can slow deployment of embedded predictive tools due to extensive Predictive DSI source-attribute disclosures and risk-management documentation when the EHR supplies the model. Amending 45 CFR Part 170 (the DSI criterion) would enable faster integration of vetted third-party or self-developed models.
- Medical record authentication: Medical record authentication and order rules assume a human originator for each entry/order, complicating AI-generated drafts/orders without explicit allowance for automated provenance with human attestation. 42 CFR 482.24(c) should be amended to enable safe AI scribing and ordering with clear attest-audit patterns.
- National data infrastructure: A longstanding Congressional rider is blocking the creation of a unique patient identifier, preventing HHS from promulgating a national standard and thus hampering patient matching for AI across settings. OSTP and the White House should work with Congress to remove the Section 510 rider in the next annual Labor-HHS appropriations bill; unique patient identifiers are otherwise authorized under HIPAA (42 U.S.C. 1320d-2(b)). This would enable higher-fidelity training datasets and safer AI-enabled coordination across health systems.
Responses to questions:
(v) Where barriers arise from a lack of clarity or interpretive guidance on how existing rules cover AI activities, what forms of clarification (e.g., standards, guidance documents, interpretive rules) would be most effective?
(vi) Are there barriers that arise from organizational factors that impact how Federal statutes, regulations, or policies are used or not used? How might Federal action appropriately address them?
Beyond sector-specific rules or regulations, the administrative procedures governing Federal agencies are themselves enormous inhibitors to AI adoption – both directly within government, and indirectly through the burdens slow bureaucracies impose on the private sector. Yet most friction in public-sector AI adoption isn’t from a single bad rule; it’s from process defaults that assume slow, one-off systems and human-centric paperwork. Program teams under-use fast tracks in the Paperwork Reduction Act (PRA), struggle to reuse authorizations to operate (ATOs), and reinvent compliance checklists agency by agency. Meanwhile, FedRAMP queues, fragmented standards, and siloed testbeds make every pilot program feel bespoke. The fix is to change the “firmware” of how the federal government evaluates, procures, and monitors AI so that capable teams can move quickly without compromising on safety, privacy, or security.
Two of the biggest time sinks are the PRA and ATO/FedRAMP. For the PRA, it may be worth establishing a government-wide A-11 §280 umbrella/generic clearance for AI telemetry, A/B tests, and human-in-the-loop labels with built-in privacy guardrails. Program offices can then draw down from it instead of filing from scratch. For ATO/FedRAMP, an AI-SaaS Provisional ATO lane could be created, with authorization reuse required by default and 90-day reviews when only the model or underlying hardware is swapped.
Finally, wire in decision-time SLAs and shared testbeds. Agencies should publish clocks for waivers and pilots (e.g., NHTSA Part 555, FMCSA Part 381, FAA 107 waivers) so innovators know when to expect an answer. Similarly, instead of one-off demos, fund cross-agency sandboxes using OTAs and prize authority with shared data enclaves, so evidence from one pilot can inform the others.