
AI and data centers absolutely dominate the energy policy discourse these days. Many readers are already familiar with the basics: data centers are poised to account for nearly half of all U.S. load growth between now and 2028, and in the pockets of the country where they’re most likely to be deployed (Texas, Virginia, Pennsylvania, etc.), they will be not just one of the biggest energy issues for policymakers, but one of the biggest issues, period.
We’re already seeing the challenges of new data center deployments. AEP Ohio imposed a moratorium on new data center hookups that lasted more than two years while regulators struggled to address grid costs. Just last week, the city council of College Station, TX, unanimously shot down a land sale for a 600-MW data center campus, with residents citing strain on the grid as a key concern. And in a move that sparked a week’s worth of drama on Energy Twitter, PJM has proposed a restrictive new service category for large incoming loads: Non-Capacity-Backed Load (NCBL).
Today is Part 1 of a two-part series. In this installment, we’ll break down what NCBL actually does, which parts of the stakeholder backlash are on the mark, and why “time to power” dominates data center decision-making. In Part 2, we’ll lay out the fixes (what “flexible load done right” might look like) and how to get there.