Comments on Revision to the Annual Integrated Economic Survey

October 21, 2025

Today, I submitted a comment in response to the U.S. Census Bureau’s proposed revision to the Annual Integrated Economic Survey (AIES).

This comment supports the Bureau’s modernization goals while recommending that the AIES include basic coverage of artificial intelligence (AI) investments. These modest additions would help ensure that the AIES captures the variables most important to understanding the state of the U.S. economy.

Measuring AI Adoption

Artificial intelligence (AI), as a general-purpose technology, has profound potential to rewire the American economy. What that transformation looks like—how quickly it might come about, which sectors it may affect most, and whether it augments or replaces other capital and labor investments—is hotly debated. But these are precisely the questions policymakers must confront in the coming years. Today, they lack the information needed to make informed judgments.

The nation is entering an uncertain era of technological progress. Early analyses indicate that the rate of AI adoption may already be outpacing historical trends for technology diffusion. Revenue reports from major technology producers suggest as much.

For example, Anthropic, an enterprise-focused developer, reported a fivefold revenue increase between January and August 2025. Surveys demonstrate that businesses are buying this technology en masse. McKinsey found that 78% of workers surveyed said that their employers use AI in at least one business function. It is important to note, however, that despite this enthusiastic adoption, evidence of productivity enhancement remains inconclusive. Reliable sources of data are sorely needed to make sense of current trends and forecast future trajectories.

The good news is progress has already been made in assessing AI adoption through other Bureau surveys. In July 2025, the Business Trends and Outlook Survey (BTOS) included an AI supplement asking a series of important questions about AI adoption. Such updates are commendable and offer insight into how many businesses are using AI products and toward what ends.

Despite such progress, it should be reiterated that reliable quantitative data remain limited, particularly regarding business spending on AI products. While the BTOS can illuminate the breadth of AI proliferation across the economy, the AIES is well-positioned to measure its depth: the magnitude of AI integration and use. By tracking not only how many companies use AI, but also how much they invest in it, policymakers and the public can better gauge its growing significance to business operations.

Without consistent and systematic data collection, policymakers, researchers, and firms cannot reliably assess the extent to which AI is reshaping productivity, employment, or capital formation. Amid great uncertainty, the Bureau’s AIES stands out as a unique source of data to guide consequential decision-making in the coming years.

For the Bureau to accurately survey businesses on this consequential input to the American economy, it is important that the AIES continue to ask respondents about software expenses. The current Federal Register notice indicates that, beginning with survey year 2025, the Census Bureau intends to “remove select capital expenditure details (finance leases and software).” While the goal of reducing respondent burden is commendable, removing crucial software reporting risks excluding one of the fastest-growing and most consequential components of business investment: software-based artificial intelligence systems.

I therefore recommend:

1. Retaining a “Software Expenditures” item

2. Adding an “Artificial Intelligence Expenditures” detailed item

3. Piloting additional questions related to AI as detailed items

4. Ensuring transparency and stakeholder engagement

Recommendations

1. Retain a “Software Expenditures” Item

Even if most detailed software line items are removed, the Bureau should retain at least one aggregated entry under capital expenditures for “Software Systems (purchased or internally developed).”

To reduce respondent burden, the Bureau could combine capitalized and expensed software purchases into one reporting item. This approach simplifies response requirements while still capturing a critical measure of digital investment across U.S. businesses:

What were the total expenditures for computer software developed or obtained for internal use in 2025?

$ __________

2. Add an “Artificial Intelligence Expenditures” Detailed Item

To complement the BTOS, which already tracks AI use and workforce effects, the AIES should focus on capturing the dollar value of AI investment itself. The Bureau can achieve this by adding a detailed item about outlays specifically attributable to artificial intelligence systems:

What were the total expenditures for AI-specific software products, systems, or services in 2025?

$ __________

The detailed item should include one-time purchases, subscription services (including cloud computing and API services), and the cost of internally developed AI products.
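To make the scope of this item concrete, the following is a minimal, purely illustrative sketch (in Python) of how the three cost categories named above would roll up into the single dollar figure reported on the detailed item. The function and field names are hypothetical and are not actual AIES variable names.

```python
# Illustrative sketch only: the scope of the proposed "Artificial Intelligence
# Expenditures" detailed item, expressed as the sum of the three cost
# categories named above. All names are hypothetical, not AIES variables.

def ai_expenditure_total(one_time_purchases: float,
                         subscriptions: float,          # incl. cloud and API services
                         internal_development: float) -> float:
    """Return the single dollar figure a respondent would report for the item."""
    return one_time_purchases + subscriptions + internal_development


# Example: $50k in licenses, $120k in subscriptions and API fees, and $200k of
# internal development costs -> $370,000 reported on the detailed item.
print(ai_expenditure_total(50_000, 120_000, 200_000))
```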

It should be noted that it may become increasingly difficult to differentiate between software and AI as providers integrate machine learning functions into existing products. Consequently, the Bureau should consider piloting several detailed items for a more granular view of AI usage and workshop the language with relevant stakeholders as described below.

3. Pilot Additional Questions Related to AI as Detailed Items

To improve the measurement of AI investment, the Bureau should pilot a limited set of follow-up questions to understand the types of AI products and services businesses are including in Item (2). These pilot items would help distinguish the major categories of AI adoption and expenditure, recalling that different classes of AI products (e.g., generative chatbots, API-based model access, or AI-enabled cloud services) may have distinct implications for economic dimensions like labor demand. Similarly, the Bureau should seek to differentiate between internal and external use of AI.

The objective of these pilots would not be to expand respondent burden, but to test whether businesses can reliably differentiate among these categories and to establish which distinctions are economically meaningful for ongoing data collection. Because AI applications and deployment models are likely to evolve rapidly over the coming years, it is particularly important to understand the different ways businesses incorporate AI into their workflows and operations.

To this end, the Bureau should explore questions that separately identify expenditures in the domains below; a brief illustrative sketch of how these items could fit together follows the question list. Both capitalized and expensed costs (including software licenses, subscriptions, usage-based fees, and internal development costs) should be captured:

What were the total expenditures for conversational or generative-language AI systems in 2025?

$ __________

Of the total expenditures reported above, what amount supported:

Internal business operations (e.g., employee-facing AI tools)

$ __________

Customer-facing products or services (e.g., chatbots or client AI tools)

$ __________

Of the total expenditures reported above, what amount supported the use of AI agents?

$ __________

What were the total expenditures for AI tools used for software development, testing, or debugging in 2025?

$ __________

What were the total expenditures for developing or maintaining internal AI models (including staff time, compute, and data costs that were capitalized as part of model development) in 2025?

$ __________

What were the total expenditures for usage-based access to external AI models or services (such as API credits or inference fees) in 2025?

$ __________

Briefly list the types of AI software purchases made in 2025.
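As a purely illustrative sketch, the pilot items above could be organized as a modular, skip-based block so that respondents with no AI spending answer only a single question. The structure and item codes below are hypothetical placeholders, not actual AIES content; they are meant only to show how follow-up items would be suppressed when a parent amount is zero, keeping added respondent burden low.

```python
# Illustrative sketch only: a minimal representation of the pilot AI
# expenditure items as a modular, skip-based block. All item codes and the
# two-level structure are hypothetical, not actual AIES content.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ExpenditureItem:
    """One detailed expenditure question, with optional follow-up items."""
    code: str                                   # hypothetical item code
    prompt: str                                 # question text shown to the respondent
    followups: List["ExpenditureItem"] = field(default_factory=list)


# The block mirrors the questions listed above: generative-language systems
# (with internal/external and agent follow-ups), coding tools, internal model
# development, and usage-based access to external models.
PILOT_AI_ITEMS = ExpenditureItem(
    code="AI_TOTAL",
    prompt="Total expenditures for AI-specific software products, systems, or services",
    followups=[
        ExpenditureItem(
            code="AI_GEN",
            prompt="Conversational or generative-language AI systems",
            followups=[
                ExpenditureItem("AI_GEN_INT", "Internal business operations"),
                ExpenditureItem("AI_GEN_EXT", "Customer-facing products or services"),
                ExpenditureItem("AI_GEN_AGENT", "Use of AI agents"),
            ],
        ),
        ExpenditureItem("AI_DEV", "AI tools for software development, testing, or debugging"),
        ExpenditureItem("AI_INTERNAL", "Developing or maintaining internal AI models"),
        ExpenditureItem("AI_USAGE", "Usage-based access to external AI models or services"),
    ],
)


def items_to_ask(item: ExpenditureItem, reported_amount: float) -> List[ExpenditureItem]:
    """Apply the skip logic: follow-ups appear only when the parent amount is positive,
    so respondents with no AI spending answer a single question."""
    return item.followups if reported_amount > 0 else []


if __name__ == "__main__":
    # A respondent reporting zero AI spending sees no follow-up items.
    print([i.code for i in items_to_ask(PILOT_AI_ITEMS, 0.0)])        # []
    # A respondent reporting positive spending sees the four pilot categories.
    print([i.code for i in items_to_ask(PILOT_AI_ITEMS, 250_000.0)])
```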

4. Ensure Transparency and Stakeholder Engagement

Before full implementation, the Bureau should:

  • Publish pilot findings on response rates, data quality, and item nonresponse. This follows Census’s standard pretesting practice and allows for empirical validation of question performance.
  • Convene a stakeholder workshop with data users, researchers, and industry representatives to refine definitions and identify practical reporting challenges. Such workshops have successfully guided prior content changes in surveys like the ABS and BTOS.

Recognizing the Bureau’s 2025 production schedule, these steps could occur during 2025–2026, enabling validated AI questions to be piloted in the 2026 cycle and fully integrated by AIES 2027.

Expected Impact and Feasibility

Note that the proposed structure adds minimal respondent time. The analytical payoff, however, is substantial. Reliable measures of software and AI expenditures would sharpen how researchers, policymakers, and businesses understand the technological transformation underway in the U.S. economy. Specifically, these data would:

  • Distinguish substitution and augmentation effects from AI: Tracking AI investment separately from broader software expenditures would illuminate whether AI is displacing other forms of capital and labor spending or complementing them, a key question for productivity and capital-deepening analysis (a simple illustrative sketch of such a comparison follows this list).
  • Improve financial analysis and firm valuation: Consistent reporting on AI capital outlays would enable analysts to assess how AI adoption is reshaping firm balance sheets and intangible assets, information that is increasingly central to investment decisions and market valuation.
  • Provide a baseline for future economic research: A clear benchmark for AI spending will allow future comparisons across time, industry, and economic cycles. It will also support research into whether current levels of investment reflect sustainable technological adoption or cyclical overinvestment.
  • Enhance business decision-making: Firms themselves benefit from knowing how their AI expenditures compare to peers and industry averages. Benchmarking can reveal where AI has proven most useful, where diffusion lags, and where productivity gains are concentrated.
  • Enable policy evaluation: Detailed expenditure data will help evaluate how software and AI investment respond to changes in the tax code or other policy instruments, such as the treatment of intangible assets under the 2017 Tax Cuts and Jobs Act (TCJA).
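To illustrate the kind of analysis these items would support, below is a minimal, hypothetical sketch (in Python, using pandas) of how tabulations with separate software and AI expenditure items could be used to benchmark AI’s share of software investment and to compare growth in AI and non-AI software spending. The data values and column names are invented for illustration only and do not correspond to published Census statistics.

```python
# Illustrative sketch only: hypothetical tabulated data with separate software
# and AI expenditure items. Column names and values are placeholders.

import pandas as pd

df = pd.DataFrame({
    "naics": ["51", "51", "52", "52"],
    "year": [2025, 2026, 2025, 2026],
    "software_expenditure": [100.0, 110.0, 80.0, 85.0],   # $ millions
    "ai_expenditure": [10.0, 25.0, 4.0, 9.0],             # $ millions
})

# AI share of total software investment by industry and year: a basic
# benchmark that firms and analysts could compare against their own spending.
df["ai_share"] = df["ai_expenditure"] / df["software_expenditure"]

# Year-over-year growth in AI vs. non-AI software spending within an industry:
# a first-pass look at whether AI outlays are adding to, or substituting for,
# other software investment.
df["non_ai_software"] = df["software_expenditure"] - df["ai_expenditure"]
growth = (
    df.sort_values(["naics", "year"])
      .groupby("naics")[["ai_expenditure", "non_ai_software"]]
      .pct_change()
)

print(df[["naics", "year", "ai_share"]])
print(growth)
```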

Response to PRA Comment Criteria

  • Necessity and practical utility: AI and software expenditures are now central drivers of economic growth. Their omission would materially reduce the survey’s analytical and policy value.
  • Accuracy of burden estimate: The proposed module adds only modest burden through skip-based questions and limited industry subsampling.
  • Enhancement of quality, utility, and clarity: Clear definitions, simple accounting categories, and alignment with existing AIES items maximize comparability and interpretability.
  • Minimization of burden: Aggregation, skip logic, and modular design ensure efficiency and limit respondent fatigue.

Concluding Remarks

The Annual Integrated Economic Survey is the foundation of the nation’s business statistics. Ensuring that it captures the role of software and AI investment will allow the U.S. statistical system to keep pace with the technologies reshaping the real economy.

A concise AI module would establish the Census Bureau as the global standard-setter for measuring digital transformation, all while imposing negligible additional burden on respondents.

Thank you for considering these comments and for the Bureau’s continued leadership in improving the quality, relevance, and timeliness of U.S. economic statistics.

