Nearly 15 years after journalist David Owen and I tangled — and then united — over Jevons Paradox, the New York Times today published a guest essay on that subject by a Murdoch-employed London journalist. David and I went deeper and did better, as you’ll see in a moment.

Jevons Paradox denotes the tendency of economies to increase, not decrease, their use of something as they learn how to use that thing more efficiently. Its 19th-century archetype, observed by Britisher William Stanley Jevons, was that “as steam engines became ever more efficient, Britain’s appetite for coal [to power them] increased rather than decreased,” as Sky News editor Ed Conway put it today in “The Paradox Holding Back the Clean Energy Revolution.” Why? Because the “rebound” in use of steam as its manufacture grew cheaper more than offset the direct contraction in use from the increased efficiency.

Illustration by Joost Swarte for “The Efficiency Dilemma,” in the New Yorker magazine’s Dec. 20, 2010 print edition (Dec. 12 online).

Where does David Owen come in? In October 2009 he published an op-ed in the Wall Street Journal claiming that congestion pricing could never cure traffic congestion, on account of the bounceback in car traffic due to lesser congestion. (Funnily enough, the Journal never runs opinion pieces maintaining that induced demand prevents highway expansions from “solving” road congestion.) My subsequent rebuttal in Streetsblog, “Paradox, Schmaradox, Congestion Pricing Works,” changed David’s mind. The disincentive of the congestion toll, he told me, could probably stave off enough of the rebound in driving to allow congestion pricing to fulfill its promise of curbing gridlock.

A year later, when David revisited Jevons Paradox in a full-blown New Yorker magazine narrative, “The Efficiency Dilemma,” he made sure to point to “capping emissions or putting a price on carbon or increasing energy taxes” as potential ways out. I was thrilled, and I published a post in Grist riffing on “The Efficiency Dilemma.” I’ve pasted it below. I hope to comment on Conway’s NY Times essay in a future post.

If efficiency hasn’t cut energy use, then what?

By Charles Komanoff, reprinted from Grist, Dec. 16, 2010.

One of the most penetrating critiques of energy-efficiency dogma you’ll ever read is in this week’s New Yorker (yes, the New Yorker). “The efficiency dilemma,” by David Owen, has this provocative subtitle: “If our machines use less energy, will we just use them more?” Owen’s answer is a resounding, iconoclastic, and probably correct Yes.

Owen’s thesis is that as a society becomes more energy efficient, it becomes downright inefficient not to use more. The pursuit of efficiency is smart for individuals and businesses but a dead end for energy and climate policy.

This idea isn’t wholly original. It’s known as the Jevons paradox, and it has a 150-year history of provoking bursts of discussion before being repressed from social consciousness. What Owen adds to the thread is considerable, however: a fine narrative arc; the conceptual feat of elevating the paradox from the micro level, where it is rebuttable, to the macro, where it is more robust; a compelling case study; and the courage to take on energy-efficiency guru Amory Lovins. Best of all, Owen offers a way out: raising fuel prices via energy taxes.

Thirty-five years ago, when the energy industry first ridiculed efficiency as a return ticket to the Dark Ages, it was met with a torrent of smart ripostes like the Ford Foundation’s landmark “A Time to Choose” report — a well-thumbed copy of which adorns my bookshelf. Since then, the cause of energy efficiency has rung up one triumph after another: refrigerators have tripled in thermodynamic efficiency, energy-guzzling incandescent bulbs have been booted out of commercial buildings, and developers of trophy properties compete to rack up LEED points denoting low-energy design and operation.

Yet it’s difficult to see that these achievements have had any effect on slowing the growth in energy use. U.S. electricity consumption in 2008 was double that of 1975, and overall energy consumption was up by 38 percent. True, during this time U.S. population grew by 40 percent, but we also outsourced much of our manufacturing to Asia. In any case, efficiency, the assertedly immense resource that lay untapped in U.S. basements, garages, and offices, was supposed to slash per capita energy use, not just keep it from rising. Why hasn’t it? And what does that say for energy and climate policy?

A short form of the Jevons paradox, and a good entry point for discussing it, is the “rebound effect” — the tendency to employ more of something when efficiency has effectively cut its cost. The rebound effect is a staple of transportation analysis, in two separate forms. One is the rebound in gallons of gas consumed when fuel-efficiency standards have reduced the fuel cost to drive a mile. The other is the rebound from the reduction in car trips after imposition of a road toll, now that the drop in traffic has made it possible to cover the same ground in less time.

Rebound effect one turns out to be small. As UC-Irvine economics professor Ken Small has shown, no more than 20 percent of the gasoline savings from improved engine efficiency have been lost to the tendency to drive more miles — and much less in the short term. Rebound effect two is more significant and becoming more so, as time increasingly trumps money in the decision-making of drivers, at least better-off ones.
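The arithmetic behind rebound effect one is simple enough to sketch. In this illustration the fleet size and efficiency gain are hypothetical round numbers; only the 20 percent rebound share comes from Small's estimate:

```python
# Back-of-envelope rebound arithmetic. Inputs are illustrative except
# the rebound share, which follows Small's <= 20% estimate.

def net_fuel_savings(base_gallons, efficiency_gain, rebound_share):
    """Gallons actually saved after some savings 'rebound' into extra miles."""
    engineering_savings = base_gallons * efficiency_gain
    return engineering_savings * (1 - rebound_share)

# A driver burning 1,000 gallons/yr gets a 25% efficiency gain,
# but 20% of the savings is driven back as additional miles.
print(net_fuel_savings(1000, 0.25, 0.20))  # 200.0 gallons saved, not 250
```

Even at the top of Small's range, in other words, four-fifths of the engineering savings survive the rebound.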

Rebound effects, then, vary in magnitude from one sector to another. They can be tricky to analyze, as Owen unwittingly demonstrated in an ill-considered 2009 Wall Street Journal op-ed criticizing congestion pricing, “How traffic jams help the environment.” He wrote:

If reducing [congestion via a toll] merely makes life easier for those who drive, then the improved traffic flow can actually increase the environmental damage done by cars, by raising overall traffic volume, encouraging sprawl and long car commutes.

Not so, as I wrote in “Paradox, schmaradox. Congestion pricing works”:

When the reduction in traffic is caused by a congestion charge, life is not just easier for those who continue driving but more costly as well. Yes, there’s a seesaw between price effects and time effects, but setting the congestion price at the right point will rebalance the system toward less driving, without harming the city’s economy.

Rebound effects from more fuel-efficient vehicles, as depicted in “Energy sufficiency and rebound effects,” a 2018 concept paper by Steve Sorrell, Univ. of Sussex, and Birgitta Gatersleben & Angela Druckman, Univ. of Surrey, UK.

More importantly, as Owen points out in his New Yorker piece, a narrow “bottom up” view — one that considers people’s decision-making in isolated realms of activity one-by-one — tends to miss broader rebound effects. On the face of it, doubling the efficiency of clothes washers and dryers shouldn’t cause the amount of laundering to rise more than slightly. But consider: 30 years ago, an urban family of four would have used the washer-dryer in the basement or at the laundromat, forcing it to “conserve” drying to save not just quarters but time traipsing back and forth. Since then, however, efficiency gains have enabled manufacturers to make washer-dryers in apartment sizes. We own one, and find ourselves using it for “spot” situations — emergencies that aren’t really emergencies, small loads for the item we “need” for tomorrow — that add more than a little to our total usage. And who’s to say that the advent of cheap and rapid laundering hasn’t contributed to the long-term rise in fashion-consumption, with all it implies for increased energy use through more manufacturing, freight hauling, retailing, and advertising?

Owen offers his own big example. Interestingly, it’s not computers or other electronic devices. It’s cooling. In an entertaining and all-too-brief romp through a half-century of changing mores, he traces the evolution of refrigeration and its “fraternal twin,” air conditioning, from rare, seldom-used luxuries then, to ubiquitous, always-on devices today:

My parents’ [first fridge] had a tiny, uninsulated freezer compartment, which seldom contained much more than a few aluminum ice trays and a burrow-like mantle of frost … The recently remodeled kitchen of a friend of mine contains an enormous side-by-side refrigerator, an enormous side-by-side freezer, and a drawer-like under-counter mini-fridge for beverages. And the trend has not been confined to households. As the ability to efficiently and inexpensively chill things has grown, so have opportunities to buy chilled things — a potent positive-feedback loop. Gas stations now often have almost as much refrigerated shelf space as the grocery stores of my early childhood; even mediocre hotel rooms usually come with their own small fridge (which, typically, either is empty or — if it’s a minibar — contains mainly things that don’t need to be kept cold), in addition to an icemaker and a refrigerated vending machine down the hall.

Air conditioning has a similar arc, ending with Owen’s observation that “access to cooled air is self-reinforcing: to someone who works in an air-conditioned office, an un-air-conditioned house quickly becomes intolerable, and vice versa.”

If Owen has a summation, it’s this:

All such increases in energy-consuming activity [driven by increased efficiency] can be considered manifestations of the Jevons paradox. Teasing out the precise contribution of a particular efficiency improvement isn’t just difficult, however; it may be impossible, because the endlessly ramifying network of interconnections is too complex to yield readily to empirical, mathematics-based analysis. [Emphasis mine.]

Defenders of efficiency will call “endlessly ramifying network” a cop-out. I’d say the burden is on them to prove otherwise. Based on the aggregate energy data mentioned earlier, efficiency advocates have been winning the micro battles but losing the macro war. Through engineering brilliance and concerted political and regulatory advocacy, we have increased energy-efficiency in the small while the society around us has grown monstrously energy-inefficient and cancelled out those gains. Two steps forward, two steps back.

I wrote something roughly similar five years ago in a broadside against my old colleague, Amory Lovins:

[T]hough Amory has been evangelizing “the soft path” for thirty years, his handful of glittering successes have only evoked limited emulation. Why? Because after the price shocks of the 1970s, energy became, and is still, too darn cheap. It’s a law of nature, I’d say, or at least of Economics 101: inexpensive anything will never be conserved. So long as energy is cheap, Amory’s magnificent exceptions will remain just that. Thousands of highly-focused advocacy groups will break their hearts trying to fix the thousands of ingrained practices that add up to energy over-consumption, from tax-deductible mortgages and always-on electronics to anti-solar zoning codes and un-bikeable streets. And all the while, new ways to use energy will arise, overwhelming whatever hard-won reductions these Sisyphean efforts achieve.

I wrote that a day or two after inviting Lovins to endorse putting carbon or other fuel taxes front-and-center in energy advocacy. He declined, insisting that “technical efficiency” could be increased many-fold without taxing energy to raise its price. Of course it has, can, and will. But is technical efficiency enough? Owen asks us to consider whether a strategy centered on technical and regulatory measures to boost energy efficiency may be inherently unsuited for the herculean task of keeping coal and other fossil fuels safely locked in the ground.

I said earlier that Owen offers an escape from the Jevons paradox, and he does: “capping emissions or putting a price on carbon or increasing energy taxes.” It’s hardly a clarion call, and it’s not the straight carbon taxers’ line. But it’s a lifeline.

The veteran English economist Len Brookes told Owen:

When we talk about increasing energy efficiency, what we’re really talking about is increasing the productivity of energy. And, if you increase the productivity of anything, you have the effect of reducing its implicit price, because you get more return for the same money — which means the demand goes up.
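Brookes's mechanism can be put in toy-model form. A sketch assuming constant-elasticity demand for energy services (the elasticity values are hypothetical): when elasticity exceeds 1, a doubling of efficiency increases total energy use, the full Jevons "backfire."

```python
# Toy model of Brookes's point: efficiency cuts the implicit price of
# energy services, and demand for services responds with elasticity e.
# Elasticity values here are hypothetical.

def energy_use(efficiency, elasticity, base_use=100.0):
    # Implicit price of a unit of energy service falls as 1/efficiency,
    # so services demanded scale as efficiency**elasticity.
    services = base_use * efficiency ** elasticity
    return services / efficiency  # energy consumed = services / efficiency

for e in (0.5, 1.0, 1.5):
    print(e, round(energy_use(2.0, e), 1))  # 70.7, 100.0, 141.4
```

With e = 0.5 the efficiency gain still saves energy (partial rebound); at e = 1 the savings vanish entirely; above 1, energy use rises outright.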

The antidote to the Jevons paradox, then, is energy taxes. We can thank Owen not only for raising a critical, central question about energy efficiency, with potential ramifications for energy and climate policy, but for giving us a brief — an eloquent and powerful one — for a carbon tax.

Author’s present-day (Feb. 22, 2024) note: I overdid it somewhat in belittling energy efficiency’s impacts on U.S. energy use in that 2010 Grist post. Indeed, in posts here in 2016 and again in 2020 I quantified and enthused over improved EE’s role in stabilizing electricity demand and slashing that sector’s carbon emissions.


Why a forest with more species stores more carbon

A forest is not just trees. The number of species it holds, from canopy giants to understorey shrubs to soil fungi, directly determines how much carbon it can absorb, and, more importantly, how much it can keep over time. Buyers of carbon credits increasingly ask a reasonable question: Is the carbon in this project long-lasting? The science of biodiversity has a clear answer.


OpenAI Hits Pause on $40B UK AI Project: Energy Costs Shake Data Center Economics


ChatGPT developer OpenAI has paused its flagship UK data center project, known as “Stargate UK,” citing high energy costs and regulatory uncertainty. The project was part of a broader £31 billion ($40+ billion) investment plan aimed at expanding artificial intelligence (AI) infrastructure in the country.

The initiative was designed to deploy up to 8,000 GPUs initially, with plans to scale to 31,000 GPUs over time. It aimed to boost the UK’s “sovereign compute” capacity, meaning local infrastructure that supports AI development and reduces reliance on foreign systems.

However, the company has now paused development. An OpenAI spokesperson stated:

“…support the government’s ambition to be an AI leader. AI compute is foundational to that goal – we continue to explore Stargate UK and will move forward when the right conditions such as regulation and the cost of energy enable long-term infrastructure investment.”

Energy Costs Are Now a Core Constraint

The main issue is energy. AI data centers require large amounts of electricity to run GPUs and cooling systems.

In the UK, industrial electricity prices are among the highest in developed markets. Recent estimates show costs at around £168 per megawatt-hour, compared to £69 in France and £38 in Texas. This gap creates a major disadvantage for large-scale data center investments.
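The gap compounds at data-center scale. A rough annual cost comparison for a hypothetical 100 MW facility running at full load, using the per-MWh figures above (the capacity and the 100% load factor are illustrative assumptions):

```python
# Annual electricity cost for a hypothetical 100 MW data center at
# full load, using the per-MWh prices quoted above (all in GBP).
HOURS_PER_YEAR = 8760
prices_per_mwh = {"UK": 168, "France": 69, "Texas": 38}
capacity_mw = 100

for region, price in prices_per_mwh.items():
    annual_cost_gbp = capacity_mw * HOURS_PER_YEAR * price
    print(f"{region}: £{annual_cost_gbp / 1e6:.0f}M/year")
# UK: £147M/year, France: £60M/year, Texas: £33M/year
```

At those prices, the same facility costs more than four times as much to power in the UK as in Texas.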

AI workloads are especially power-intensive. A single large data center can consume as much electricity as tens of thousands of homes. As AI adoption grows, this demand is rising quickly.

Globally, the International Energy Agency estimates that data centers could consume over 1,000 terawatt-hours (TWh) of electricity by 2030, up sharply from about 415 TWh in 2024. This growth is largely driven by AI. 
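Those IEA figures imply a steep compound growth rate:

```python
# Implied compound annual growth rate, from ~415 TWh (2024)
# to ~1,000 TWh (2030), per the IEA estimates above.
start_twh, end_twh, years = 415, 1000, 2030 - 2024
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # 15.8% per year
```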

[Figure: projected data center electricity use through 2035. Source: IEA]

The result is clear. Energy is no longer just a cost. It is a key factor in where AI infrastructure gets built.

Regulation Adds Another Layer of Risk

Energy is only part of the challenge. Regulation is also slowing investment. In the UK, uncertainty around AI rules, especially copyright laws for training data, has created hesitation among companies.

Earlier proposals to allow AI firms to use copyrighted content were withdrawn after backlash. This left companies without clear guidance on compliance.

For large infrastructure projects, this uncertainty increases risk. Data centers require billions in upfront investment. Companies need stable rules before committing capital.

Planning delays and grid connection timelines also add friction. These factors increase both cost and project timelines.

Together, energy costs and regulatory uncertainty create a difficult environment for hyperscale AI infrastructure.

OpenAI’s Global Infrastructure Expands, But More Selectively

Despite the pause, the ChatGPT maker is still expanding globally. The company is investing heavily in AI infrastructure through partnerships with Microsoft, NVIDIA, and Oracle. It is also linked to a much larger $500 billion “Stargate” initiative in the United States, focused on building next-generation AI data centers.

At the same time, the company faces rising costs. Reports suggest OpenAI could lose billions of dollars annually as it scales infrastructure to meet demand.

This reflects a broader industry shift. AI is becoming more like energy or telecom infrastructure. It requires large capital investment, long timelines, and stable operating conditions.

The pause also highlights a deeper issue. AI growth is increasing pressure on energy systems and the environment.

The Hidden Carbon Cost Behind Every AI Query

ChatGPT and similar tools rely on large data centers. These facilities already account for about 1% to 1.5% of global electricity use. Projections for their future energy use vary widely.

Each individual query may seem small. A typical ChatGPT request can use about 0.3 watt-hours of electricity, which is relatively low. However, usage at scale changes the picture.

ChatGPT now serves hundreds of millions of users. Even small energy use per query adds up quickly. Training models is even more energy-intensive. For example, training GPT-3 required about 1,287 megawatt-hours of electricity and produced roughly 550 metric tons of CO₂.
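Two quick calculations make the scale concrete. The per-query figure and the GPT-3 training numbers come from the estimates above; the one-billion-queries-per-day volume is a hypothetical round number for illustration:

```python
# Aggregate query energy at a hypothetical volume, plus the grid carbon
# intensity implied by the GPT-3 training figures quoted above.
WH_PER_QUERY = 0.3
queries_per_day = 1_000_000_000          # illustrative round number
daily_mwh = WH_PER_QUERY * queries_per_day / 1e6
print(f"{daily_mwh:.0f} MWh/day")        # 300 MWh/day at that volume

# 1,287 MWh of training energy -> ~550 t CO2 implies this grid mix:
g_per_kwh = (550 * 1e6) / (1287 * 1e3)
print(f"~{g_per_kwh:.0f} g CO2/kWh")     # ~427 g CO2/kWh
```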

[Figure: ChatGPT environmental footprint]

Newer models are even larger. Some estimates suggest training advanced models like GPT-4 could emit up to 15,000 metric tons of CO₂, depending on the energy source.

At the system level, the impact is growing fast. AI systems could generate between 32.6 and 79.7 million tons of CO₂ emissions in 2025 alone. By 2030, AI-driven data centers could add 24 to 44 million tons of CO₂ annually.

[Figure: annual carbon emissions (g) of AI servers, 2024–2030, under different scenarios; red dashed lines denote the forecast footprint of US data centres, based on previous literature. Source: https://doi.org/10.1038/s41893-025-01681-y]

Looking further ahead, global generative AI emissions could reach up to 245 million tons per year by 2035 if growth continues. These numbers show a clear pattern. Efficiency is improving, but total demand is rising faster.

Big Tech Scrambles to Balance AI Growth and Emissions

OpenAI has not published a detailed standalone net-zero target. However, its operations rely heavily on partners such as Microsoft, which has committed to becoming carbon negative by 2030.

The company has acknowledged that energy use is a real concern. Leadership has pointed to the need for more clean power, including nuclear and renewable energy, to support AI growth.

Across the industry, companies are responding in several ways:

  • Improving model efficiency to reduce energy per query
  • Investing in renewable energy and long-term power contracts
  • Exploring new cooling systems to reduce water and energy use

Efficiency gains are already visible. Some AI systems have cut energy per query more than 30-fold within a year, showing how quickly the technology can improve. Still, total emissions continue to rise because demand is scaling faster than efficiency gains.
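The race between efficiency and demand can be sketched with a toy projection (both growth rates below are hypothetical):

```python
# Toy projection: energy per query falls 50%/yr while query volume
# grows 2.5x/yr. Total energy still rises. Rates are hypothetical.
energy_per_query, queries = 1.0, 1.0     # normalized to year 0
for year in (1, 2, 3):
    energy_per_query *= 0.5
    queries *= 2.5
    print(year, round(energy_per_query * queries, 2))
# year-3 total is ~1.95x the year-0 level despite 8x better efficiency
```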

The Global AI Infrastructure Race

The pause in the UK highlights a larger trend. AI infrastructure is becoming a global competition shaped by energy, policy, and cost.

Regions with lower energy prices and faster permitting processes have an advantage. The United States and parts of the Middle East are attracting large-scale AI investments due to cheaper power and supportive policies.

At the same time, governments are trying to attract these projects. The UK has pledged billions to support AI growth and improve compute capacity. But this case shows that policy ambition alone is not enough. Companies need reliable energy, clear rules, and predictable costs.

AI’s Next Phase Will Be Decided by Energy, Not Code

The decision by OpenAI does not signal a retreat from AI investment. Instead, it reflects a shift in priorities.

Companies are becoming more selective about where they build infrastructure. They are focusing on locations that offer the right mix of energy access, cost stability, and regulatory clarity.

The UK project may still move forward, but only if conditions improve. For now, the message is clear. The future of AI will not be shaped by technology alone. It will also depend on energy systems, policy frameworks, and long-term investment conditions.

The post OpenAI Hits Pause on $40B UK AI Project: Energy Costs Shake Data Center Economics appeared first on Carbon Credits.


U.S. Uranium Mining Returns: UEC Launches First New Mine in a Decade


Uranium Energy Corporation (NYSE: UEC) has started production at its Burke Hollow project in South Texas. This is the first new uranium mine to open in the U.S. in over ten years.

The project started production in April 2026 after getting final regulatory approval. This marks a big step for domestic uranium supply. It’s also the world’s newest in-situ recovery (ISR) uranium mine, which shows a move toward less harmful extraction methods.

Burke Hollow was originally discovered in 2012 and spans roughly 20,000 acres, with only about half of the site explored so far. This suggests significant long-term expansion potential as additional wellfields are developed.

The mine’s output will go to UEC’s Hobson Central Processing Plant in Texas. This plant can produce up to 4 million pounds of uranium each year.

A Scalable ISR Platform Expands U.S. Uranium Capacity

The Burke Hollow launch transforms UEC into a multi-site uranium producer in the United States. The company runs two active ISR production platforms. The second one is at its Christensen Ranch facility in Wyoming; both are shown in the tables from UEC.

[Table: UEC Burke Hollow resources]

[Table: UEC Christensen Ranch resources]

This “hub-and-spoke” model allows uranium from multiple wellfields to be processed through centralized facilities, improving efficiency and scalability. UEC’s operations in Texas and Wyoming are now active, giving the company a licensed production capacity of about 12 million pounds per year across the U.S.

ISR mining plays a key role in this strategy. Unlike conventional mining, ISR involves circulating solutions underground to dissolve uranium and pump it to the surface. This reduces surface disturbance and can lower environmental impact compared to open-pit or underground mining.

Burke Hollow is the largest ISR uranium discovery in the U.S. in the last ten years. This boosts its long-term value as a domestic resource.

Unhedged Strategy Pays Off as Uranium Prices Rise

UEC’s production launch comes at a time of strong uranium market conditions. The company uses a fully unhedged strategy. This means it sells uranium at current market prices instead of securing long-term contracts.

This approach has recently delivered strong financial results. In early 2026, UEC sold 200,000 pounds of uranium at $101 per pound, about 25% above average market rates. The sale brought in over $20 million in revenue and around $10 million in gross profit.
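The quoted figures are easy to verify (a quick check, taking the roughly 25% premium at face value):

```python
# Sanity-checking the sale: 200,000 lb at $101/lb, quoted as ~25%
# above the prevailing market price.
pounds, price_per_lb = 200_000, 101
revenue = pounds * price_per_lb
implied_market_price = price_per_lb / 1.25
print(f"Revenue: ${revenue / 1e6:.1f}M")                         # $20.2M
print(f"Implied market price: ~${implied_market_price:.0f}/lb")  # ~$81/lb
```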

The strategy allows the company to benefit directly from rising uranium prices, which have been supported by:

  • Growing global nuclear energy demand
  • Supply constraints in key producing regions
  • Increased long-term contracting by utilities

Unhedged exposure raises risk in downturns, but offers more upside in strong markets. UEC is currently taking advantage of this.

Nuclear Energy Growth Is Driving Demand for Uranium

The timing of Burke Hollow’s launch aligns with a broader global shift back toward nuclear energy. Governments are increasingly turning to nuclear power as a reliable, low-carbon energy source.

[Figure: IAEA projection of nuclear power capacity additions, 2024–2050. Source: IAEA]

The International Atomic Energy Agency projects that global nuclear capacity could double by 2050, depending on policy and investment trends. This would require a significant increase in uranium supply.

In the United States, nuclear energy accounts for around 20% of electricity generation. It also produces zero carbon emissions during operations. This makes it a key component of many net-zero strategies.

There are several factors supporting renewed nuclear demand, including:

  • Development of small modular reactors (SMRs)
  • Extension of existing nuclear plant lifetimes
  • Government funding to maintain nuclear capacity
  • Rising electricity demand from data centers and electrification

As demand grows, securing a reliable uranium supply becomes increasingly important.

[Figure: uranium demand and supply outlook. Source: UEC]

Reducing Import Risk: A Strategic Domestic Supply Push

The Burke Hollow project also addresses a major vulnerability in U.S. energy policy. The country currently imports about 95% of its uranium needs, leaving it exposed to global supply risks.

A large share of uranium production and enrichment capacity is concentrated in a few countries, including Russia and Kazakhstan. This concentration has raised concerns about supply disruptions and geopolitical risk.

[Figure: U.S. uranium production, 2025. Source: EIA]

By expanding domestic production, UEC is helping to reduce reliance on imports and strengthen the U.S. nuclear fuel supply chain.

The company’s broader strategy includes building a vertically integrated platform covering mining, processing, and, eventually, uranium conversion. This approach aligns with U.S. government efforts to rebuild domestic nuclear fuel capabilities.

Federal programs have allocated billions to boost uranium production and enrichment. This shows how important the sector is.

Two Hubs, One Strategy: Wyoming Supports the Texas Breakthrough

While Burke Hollow is the main focus, UEC’s Christensen Ranch operation in Wyoming remains an important part of its production base.

The Wyoming site has recently received approvals for expanded wellfield development, allowing it to increase output alongside the Texas operation.

Together, the two sites form the foundation of UEC’s dual-hub production model. However, it is the Texas project that marks the first new U.S. uranium mine in over a decade, making it the central milestone in the company’s growth strategy.

Investor Momentum Builds Around Uranium Revival

The restart of U.S. uranium production is drawing strong attention from investors and industry players. Uranium markets have tightened in recent years, driven by rising demand and limited new supply.

UEC’s production launch has already had a positive market impact. The company’s share price rose following the announcement, reflecting investor confidence in its growth strategy.

[Figure: UEC stock price]

At the same time, utilities are increasing long-term contracting activity to secure fuel supply. This trend is expected to continue as new nuclear capacity comes online and existing plants extend operations.

Industry forecasts suggest that uranium demand will remain strong through the 2030s, supporting higher prices and increased investment in new production.

Lower Impact Mining, Higher ESG Expectations

The use of ISR mining at Burke Hollow reflects a broader shift toward more sustainable extraction methods. ISR typically reduces land disturbance and avoids large-scale excavation.

However, environmental management remains critical. Key issues include groundwater protection, chemical use, and long-term site restoration.

UEC has emphasized environmental controls and regulatory compliance in its operations. These efforts are important for maintaining social license and meeting ESG expectations.

From a climate perspective, uranium production plays an indirect but important role. By supplying fuel for nuclear energy, it helps enable low-carbon electricity generation and reduces reliance on fossil fuels.

The Bottom Line: A Defining Moment for U.S. Uranium Production

The launch of the Burke Hollow mine marks a major milestone for the U.S. uranium sector. It ends a decade-long gap in new mine development and signals renewed momentum in domestic production.

In the short term, it strengthens supply and supports rising uranium markets. In the long term, it highlights the growing role of nuclear energy in global decarbonization strategies.

UEC’s Burke Hollow shows that new uranium projects can advance in today’s market. There are still challenges, like scaling production and handling environmental risks, but progress is possible.

As demand for nuclear energy continues to grow, domestic projects like Burke Hollow will play a key role in shaping the future of energy security and low-carbon power.

The post U.S. Uranium Mining Returns: UEC Launches First New Mine in a Decade appeared first on Carbon Credits.


Copyright © 2022 BreakingClimateChange.com