As global temperatures rise, extreme weather events are becoming more intense and more frequent all around the world.

Over the past two decades, the cutting-edge field of extreme weather attribution has sought to establish the role that human-caused warming has played in these events.

There are now hundreds of attribution studies, assessing extremes ranging from heatwaves in China and droughts in Madagascar through to wildfires in Brazil and extreme rainfall in South Africa.

Carbon Brief has mapped every attribution study published to date, revealing that three-quarters of the extremes analysed were made more intense or likely due to climate change.

Along with this explosion of new studies, the different types of attribution studies have evolved and expanded over the past two decades.

For example, the World Weather Attribution service was established in 2015 to provide rapid-response studies, streamlining the process of estimating the human contribution to extreme events in a matter of days.

Meanwhile, a growing community of researchers is developing the “storyline approach” to attribution, which focuses more on the dynamics of the specific events being studied.

Other researchers are using weather forecasts to attribute events that have not even happened yet. And many studies are now combining these methods to get the best of all worlds in their findings.

In this detailed Q&A, Carbon Brief explores how the field of attribution science has evolved over time and explains the key methods used today.

What are the origins of ‘extreme weather attribution’?

The Intergovernmental Panel on Climate Change (IPCC) made its first mention of attribution in its first assessment report (pdf), published in 1990. In a section called “Attribution and the fingerprint method”, the report refers to attribution as “linking cause and effect”.

IPCC’s first assessment report, section 8.1.4.

In these early days of attribution science, experts used statistical methods to search for the “fingerprint” of human-caused climate change in global temperature records.

However, the 1990 report says that “it is not possible at this time to attribute all or even a large part of the observed global mean warming to the enhanced greenhouse effect on the basis of the observational data currently available”.

As the observational record lengthened and scientists refined their methods, experts became more confident about attributing global temperature rise to human-caused climate change. By the time its third assessment report was published in 2001, the IPCC could state that “detection and attribution studies consistently find evidence for an anthropogenic signal in the climate record of the last 35 to 50 years”.

Just two years later, Prof Myles Allen – professor of geosystem science at the University of Oxford – wrote a Nature commentary from his home in Oxford that would open the door for attributing extreme weather events to climate change. The article begins:

“As I write this article in January 2003, the floodwaters of the River Thames are about 30 centimetres from my kitchen door and slowly rising. On the radio, a representative of the UK Met Office has just explained that although this is the kind of phenomenon that global warming might make more frequent, it is impossible to attribute this particular event (floods in southern England) to past emissions of greenhouse gases. What is less clear is whether the attribution of specific weather events to external drivers of climate change will always be impossible in principle, or whether it is simply impossible at present, given our current state of understanding of the climate system.”

Just months after Oxford’s floodwaters began to recede, a now-infamous heatwave swept across Europe. The summer of 2003 was the hottest ever recorded for central and western Europe, with average temperatures in many countries reaching 5C higher than usual.

The unexpected heat resulted in an estimated 20,000 “excess” deaths, making the heatwave one of Europe’s deadliest on record.

In 2004, Allen and two other UK-based climate scientists produced the first formal attribution study, published in Nature, which estimated the impact of human-caused climate change on the heatwave.

To conduct the study, the authors first chose the temperature “threshold” to define their heatwave. They decided on 1.6C above the 1961-90 average, because the summer of 2003 was the first on record in which European average temperatures exceeded this threshold.

They then used a global climate model to simulate two worlds – one mirroring the world as it was in 2003 and the other a fictional world in which the industrial revolution never happened. In the second case, the climate is influenced solely by natural changes, such as solar energy and volcanic activity, and there is no human-caused warming.

The authors ran their models thousands of times in each scenario from 1989 to 2003. As the climate is inherently chaotic, each model “run” – individual simulations of how the climate progresses over many years – produces a slightly different progression of temperatures. This means that some runs simulated a heatwave in the summer of 2003, while others did not.

The authors counted how many times the 1.6C threshold temperature was crossed in the summer of 2003 in each model run. They then compared the likelihood of crossing the threshold temperature in a world with – and a world without – climate change.

They concluded that “it is very likely that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude”.
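The counting logic behind this comparison can be sketched in a few lines of Python. The numbers below (ensemble means, spread and the ~0.9C shift between worlds) are invented for illustration and are not taken from the 2004 study:

```python
import random

random.seed(42)  # reproducible synthetic "ensembles"

def exceedance_fraction(runs, threshold):
    """Fraction of ensemble members whose summer anomaly crosses the threshold."""
    return sum(1 for r in runs if r > threshold) / len(runs)

# Synthetic stand-ins for the two model ensembles: summer temperature
# anomalies (C, relative to 1961-90). The spread and the ~0.9C shift
# between the two worlds are assumptions for illustration.
natural_world = [random.gauss(0.0, 0.7) for _ in range(10_000)]
actual_world = [random.gauss(0.9, 0.7) for _ in range(10_000)]

threshold = 1.6  # the heatwave threshold used in the 2004 study

p_natural = exceedance_fraction(natural_world, threshold)
p_actual = exceedance_fraction(actual_world, threshold)

# The risk ratio: how much more likely the event is with human influence.
risk_ratio = p_actual / p_natural
print(f"P(no human influence) = {p_natural:.4f}")
print(f"P(with human influence) = {p_actual:.4f}")
print(f"Risk ratio ~ {risk_ratio:.1f}")
```

With these made-up parameters, the threshold is crossed far more often in the “actual” ensemble, which is the essence of the “at least doubled the risk” statement.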

A Nature commentary linked to the study called the paper a “breakthrough”, stating that it was the “first successful attempt to detect man-made influence on a specific extreme climatic event”.

In the decade following the heatwave study, more teams from around the world began to use the same methods – known as “probabilistic”, “risk-based” or “unconditional” attribution.

Prof Peter Stott is a science fellow in climate attribution at the UK Met Office and an author on the study. Stott tells Carbon Brief that the basic methods used in this first attribution study are “still used to this day”, but that scientists now use more “up-to-date” climate models than the one used in his seminal study.

Back to top

What is ‘probabilistic’ attribution?

As the 2004 Nature study demonstrated, probabilistic attribution involves scientists running climate models thousands of times in scenarios with and without human-caused climate change, then comparing the two.

This allows them to say how much more likely, intense or long-lasting an event was due to climate change.

Many studies since have added a third scenario, in which the planet is warmer than present-day temperatures, to assess how climate change may impact extreme weather events in the future.

The figure below shows three distributions of multiple different simulated extreme events. The x-axis (horizontal) represents the intensity of the climate variable – in this instance temperature – with lower temperatures on the left and higher temperatures on the right. The y-axis (vertical) shows the likelihood of this variable hitting certain values.

Each curve shows how the climate variable behaves in a different scenario, or “world”. The red-shaded curve shows a pre-industrial world that was not warmed by human influence, the yellow-shaded curve indicates today’s climate, while the dashed line shows a future, warmer world. The curves shift from left to right as the climate warms.

The peak of each curve shows the most likely temperatures, while likelihood is lowest at the far left and far right of each curve, where temperatures are most extreme. The hatched areas show the temperatures that cross a predefined “threshold” temperature. (In the attribution study on the 2003 European heatwave, this threshold was defined as 1.6C above the 1961-90 average.)

The three curves show how the threshold is more likely to be crossed as the world warms.
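For normally distributed temperatures, the hatched exceedance areas in the figure can be computed directly from the curve parameters. The means and spread below are assumed values, chosen only to show how the threshold-crossing probability grows as the distribution shifts right:

```python
import math

def exceed_prob(mean, sd, threshold):
    """P(X > threshold) for a normal distribution, via the complementary
    error function: 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

threshold = 1.6  # C above the 1961-90 baseline, as in the 2003 study
sd = 0.7         # assumed year-to-year spread, identical in each world

# Assumed mean summer anomalies for the three "worlds" in the figure.
worlds = {"pre-industrial": 0.0, "present": 0.9, "future": 1.5}

probs = {name: exceed_prob(mu, sd, threshold) for name, mu in worlds.items()}
for name, p in probs.items():
    print(f"{name:>15}: P(exceed {threshold}C) = {p:.3f}")
```

The three probabilities rise monotonically, mirroring how the hatched area under each curve grows as the climate warms.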

Illustration of the changing probability of crossing a threshold in the past, present and future climates. Source: Carbon Brief

Back to top

Which weather extremes can scientists link to climate change?

In 2011, the American Meteorological Society decided to include a “special supplement” about attribution research in its annual report.

The supplement presented six different attribution studies. It generated significant media interest and the “Explaining Extreme Events” report has been published by Bulletin of the American Meteorological Society almost every year since.

As the research field has grown, so too has the range of different extremes that have been studied.

Heatwaves are generally considered the simplest extreme events to attribute, because they are mainly driven by thermodynamic influences. In contrast, storms and droughts are more strongly affected by complex atmospheric dynamics, so can be trickier to simulate in a model.

The graphic below shows the relative confidence of attributing different types of extreme events.

Relative confidence of attributing different types of extreme events. Adapted from a graphic by National Academy of Sciences

Attribution studies on extreme heat often assess how much hotter, long-lasting or likely an event was due to climate change. For example, one study finds that the summer heatwave that hit France in 2019 was made 1.5-3C hotter due to climate change and about 100 times more likely.

Heatwaves are the most-studied extreme event in attribution literature, but are becoming “less and less interesting for researchers”, according to a Bloomberg article from 2020.

Assessing extreme rainfall is more complicated – in part because the Earth’s chaotic weather system means that the size and path of a storm or heavy rainfall event has a large element of chance, which can make it challenging to identify where climate change fits in.

Nevertheless, many teams have published studies attributing extreme rainfall events and storms. For example, one study (pdf) found that climate change doubled the likelihood of the intense rainfall that fell in northern China in September 2021.

Scientists also study more complex events, such as drought, wildfires and floods, which are impacted by factors including land use and disaster preparedness.

For example, there are many different ways to define a drought. Some are linked just to rainfall, while others consider factors including soil moisture, groundwater and river flow. Some attribution studies investigating the impact of climate change on drought focus only on rainfall deficit, while others (pdf) study temperature or vapour pressure deficit – the difference between the amount of moisture in the air and how much moisture the air can hold when it is saturated.

A scientist’s decision about which type of drought to study sometimes depends on the available data and the type of impacts caused by the drought. In other cases, the choice may come down to what caused the biggest impact on people.

For example, in late 2022, South America was plagued by a severe drought that caused widespread crop failure. An attribution study on the event, therefore, focused on “agricultural” drought, which captures the effect of rainfall deficits on soil moisture and is the most relevant measure for crop health.

Dry cracked bed of the Alalay lagoon in Cochabamba, Bolivia. Credit: Associated Press / Alamy Stock Photo

Meanwhile, a study on drought in Madagascar over 2019-21 chose to focus on rainfall deficit. The study says “this was because recent research found rainfall deficits were the primary driver of drought in regions of East Africa with very similar climatic properties to south-west Madagascar”.

Wildfires are affected by conditions including temperature, rainfall, wind speed and land use. While some wildfire attribution studies focus on vapour pressure deficit, others quantify the fire weather index, which captures the effects of fuel moisture and wind on fire behaviour and spread.

Tropical cyclones are also complex. There is evidence that climate change can increase the peak “rain rates” and wind speeds of tropical cyclones, and that storm tracks are shifting poleward. There are many aspects of a cyclone that can be analysed, such as rainfall intensity, storm surge height and storm size.

Back to top

Why do scientists perform ‘rapid’ attribution studies?

As extreme weather attribution became more mainstream, researchers began to produce studies more quickly. However, challenges in communicating the findings of attribution studies in a timely way soon became evident.

After conducting a study, writing it up and submitting it to a journal, it can still take months or years for research to be published. This means that, by the time an attribution study is published, the extreme event has likely long passed.

The World Weather Attribution (WWA) initiative was founded in 2015 to tackle this issue. The team uses a standard, peer-reviewed methodology for their studies, but does not publish the results in formal journals – instead publishing them directly on their website.

(After publishing these “rapid attribution” studies on their website, the team often write full papers for publication in formal journals, which are then peer reviewed.)

This means that rather than taking months or years to publish their research, the team can make their findings public just days or weeks after an extreme weather event occurs.

In 2021, the founders of the initiative – including Carbon Brief contributing editor Dr Friederike Otto, who is a senior lecturer in climate science at Imperial College London’s Grantham Institute – wrote a Carbon Brief guest post explaining why they founded WWA:

“By reacting in a matter of days or weeks, we have been able to inform key audiences with a solid scientific result swiftly after an extreme event has occurred – when the interest is highest and results most relevant.”

The guest post explains that, to conduct an attribution study, the WWA team first uses observed data to assess how rare the event is in the current climate – and how much this has changed over the observed record. This is communicated using a “return period” – the average frequency with which an event of this magnitude would be expected to occur in a given climate.

For example, the WWA analysed the UK’s record-shattering heatwave of 2022, when the country recorded temperatures above 40C for the first time. They found that the maximum temperature seen in the UK on 19 July 2022 has a 1,000-year return period in today’s climate – meaning that even in today’s climate, 40C heat would only be expected, on average, once in a millennium.
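The relationship between a return period and a probability is simple arithmetic, sketched below. The likelihood ratio used here is a made-up value for illustration, not WWA's result for this event:

```python
# A return period is the reciprocal of the annual exceedance probability:
# a 1-in-1,000-year event has a 0.1% chance of occurring in any given year.

def return_period(annual_prob):
    return 1.0 / annual_prob

def annual_exceedance_prob(period_years):
    return 1.0 / period_years

p_today = annual_exceedance_prob(1000)  # the UK 40C day in today's climate

# Chance of seeing at least one such event over a 30-year window.
p_30yr = 1 - (1 - p_today) ** 30
print(f"Chance in any 30-year window: {p_30yr:.1%}")

# If climate change made the event 10 times more likely (a hypothetical
# ratio, for illustration), the counterfactual probability is 10 times
# smaller - i.e. the return period is 10 times longer.
likelihood_ratio = 10
p_counterfactual = p_today / likelihood_ratio
print(f"Counterfactual return period: {return_period(p_counterfactual):,.0f} years")
```
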

The authors then use climate models to carry out the “probabilistic” attribution study, to determine how much more intense, likely or long-lasting the event was as a result of climate change. They conclude by conducting “vulnerability and exposure” analysis, which often highlights other socioeconomic problems.

Sometimes, the authors conclude that climate change did not influence the event. For example, a 2021 rapid attribution study by WWA found that poverty, poor infrastructure and dependence on rain-fed agriculture were the main drivers of the ongoing food crisis in Madagascar, while climate change played “no more than a small part”.

Other groups are also conducting rapid attribution studies. For example, a group of scientists – including some WWA collaborators – recently launched a “rapid experimental framework” research project called ClimaMeter. The tool provides initial attribution results just hours after an extreme weather event takes place.

ClimaMeter focuses on the atmospheric circulation patterns that cause an extreme event – for example, a low-pressure system in a particular region. Once an event is defined, the scientists search the historical record to find events with similar circulation patterns to calculate how the intensity of the events has changed over time.

Back to top

Can the impacts of extreme weather be linked to climate change?

A branch of attribution science called “impact attribution” – which aims to quantify the social, economic and/or ecological impacts of climate change on extreme weather events – is also gaining popularity. There are four main types of impact attribution, as shown in the graphic below.

Different types of impact attribution study. Adapted from graphic in Carlson et al.

1) Trend-to-trend impact attribution

The first method, called “trend-to-trend” impact attribution, assesses long-term trends in both the climate system and in “health outcomes”. This approach was used in a 2021 study on heat-related mortality around the world, which received extensive media attention.

The authors used data from 732 locations in 43 countries to identify relationships between temperature and mortality in different locations, known as “exposure-response functions”. This allowed them to estimate how many people would die in a given location if temperatures reached a certain level.

The authors used these relationships to calculate heat-related mortality over 1991-2018 for each location under two scenarios – one with and one without human-caused climate change. The study concluded that 37% of “warm-season heat-related deaths” can be attributed to human-caused climate change.
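A minimal sketch of how an exposure-response function can translate factual and counterfactual temperature series into an attributable share of heat deaths. Every number here (baseline deaths, minimum-mortality temperature, risk per degree, the assumed 1.2C of warming) is invented for illustration:

```python
# Hypothetical exposure-response function for a single city: above a
# "minimum mortality temperature" (mmt), each extra degree adds a fixed
# percentage of baseline deaths as heat-related deaths. All values invented.
def heat_deaths(daily_temps, baseline_deaths=100, mmt=22.0, risk_per_degree=0.03):
    total = 0.0
    for t in daily_temps:
        excess = max(0.0, t - mmt)  # degrees above the threshold
        total += baseline_deaths * risk_per_degree * excess
    return total

factual = [24.0, 27.5, 30.0, 26.0, 23.0]     # observed daily means (C)
counterfactual = [t - 1.2 for t in factual]  # assumed warming removed

d_factual = heat_deaths(factual)
d_counterfactual = heat_deaths(counterfactual)
attributable_share = 1 - d_counterfactual / d_factual
print(f"Heat deaths: {d_factual:.0f} vs {d_counterfactual:.0f} without warming")
print(f"Attributable fraction: {attributable_share:.0%}")
```

Real studies fit city-specific, non-linear exposure-response curves from decades of data; this linear-above-threshold version only shows the shape of the calculation.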

2) Event-to-event attribution

The second type of study is known as “event-to-event” attribution. In one study using this method, the authors used data on observed mortality rates to determine how many people died in Switzerland during the unusually warm summer of 2022.

They calculated how much climate change contributed to warming during that summer. They then ran a model to calculate the “hypothetical heat-related burden” that would have been seen during the summer without the warming influence of climate change.

Using this method, they estimate that 60% of the 623 heat-related deaths “could have been avoided in absence of human-induced climate change”.

3) Risk-based event attribution

“Risk-based” event impact attribution – which is demonstrated in a more recent study on the 2003 European heatwave – is the third type of impact attribution. This method combines probabilistic event attribution with resulting health outcomes.

When the paper was published, its lead author, Prof Dann Mitchell – a professor of climate science at the University of Bristol – explained the method to Carbon Brief:

“We have a statistical relationship between the number of additional deaths per degree of warming. This is specific to a certain city and changes a lot between cities. We use climate simulations to calculate the heat in 2003, and in 2003 without human influences. Then we compare the simulations, along with the observations.”

They find, for example, that in the summer of 2003, anthropogenic climate change increased the risk of heat-related mortality in London by around 20%. This means that out of the estimated 315 deaths in London during the heatwave, 64 were due to climate change.

4) Fractional attribution

In the final method, known as “fractional” attribution, the authors combine two independent numbers – an estimate of the total damages caused by an extreme weather event, and a calculation of the proportion of the risk from an extreme weather event for which anthropogenic climate change is responsible, known as the “fraction of attributable risk” (FAR).

The authors of one study used this method to estimate the economic damages linked to Hurricane Harvey.

Buildings destroyed by Hurricane Harvey, August 2017. Credit: inga spence / Alamy Stock Photo

The authors calculate that the “fraction of attributable risk” for the rainfall from Harvey was around three-quarters – meaning that climate change was responsible for three-quarters of the intense rainfall.

Separately, the authors find that, according to best estimates, the hurricane caused damages of around US$90bn. From this, the authors conclude that US$67bn of the damages caused by the hurricane’s intense rainfall can be attributed to climate change.

A study on the 2010 Russian heatwave also used this method. The authors found that the heatwave was responsible for more than 55,000 deaths (pdf), and found an 80% chance that the extreme heat would not have occurred without climate warming. The study concludes that almost 45,000 of the deaths were attributable to human-caused climate change.
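The arithmetic of fractional attribution is straightforward to express in code. This sketch uses the approximate figures quoted above (a FAR of about three-quarters and roughly US$90bn in damages for Harvey; 0.8 and 55,000 deaths for the Russian heatwave); the probabilities passed to the FAR function are invented to produce that ratio:

```python
def fraction_attributable_risk(p_counterfactual, p_factual):
    """FAR = 1 - P(event without climate change) / P(event with climate change)."""
    return 1.0 - p_counterfactual / p_factual

# Example: an event made four times more likely has a FAR of 0.75.
far_harvey = fraction_attributable_risk(0.05, 0.20)

# Applying the FAR to total damages, as in the Hurricane Harvey study.
total_damages_bn = 90  # ~US$90bn best-estimate damages
attributable_bn = far_harvey * total_damages_bn
print(f"Attributable damages: ~US${attributable_bn:.1f}bn")

# Russian heatwave: FAR of 0.8 applied to 55,000 heat-related deaths.
attributable_deaths = 0.8 * 55_000
print(f"Attributable deaths: ~{attributable_deaths:,.0f}")
```
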

However, the fractional attribution method has received criticism. One paper argues that the method “inflates the impacts associated with anthropogenic climate change”, because it “incorrectly assumes” that the event has no impact unless it exceeds the threshold defined by the researchers.

Some of the authors of the Hurricane Harvey paper later wrote a paper advising caution in interpreting the results of FAR studies. They say:

“The fraction of attributable risk (FAR) method, useful in extreme weather attribution research, has a very specific interpretation concerning a class of events, and there is potential to misinterpret results from weather event analyses as being applicable to specific events and their impact outcomes…FAR is not generally appropriate when estimating the magnitude of the anthropogenic signal behind a specific impact.”

Expanding scope

Impact attribution is continuing to expand in scope. For example, studies are now being conducted to assess the impact of climate change on disease transmission.

In 2020, scientists quantified the influence of climate change on specific episodes of extreme ice loss from glaciers for the first time. They found that human-caused climate change made the extreme “mass loss” seen in glaciers in the Southern Alps, New Zealand, in 2018 at least 10 times more likely.

Scientists have also linked climate change to ecosystem shifts. One study focusing on temperature finds that the “extremely early cherry tree flowering” seen in Kyoto in 2021 was made 15 times more likely due to climate change.

Cherry blossom. Credit: Koshiro K / Alamy Stock Photo

Others go even further, linking weather extremes to societal impacts. For example, a 2021 study published in Scientific Reports says:

“By combining an extreme event attribution analysis with a probabilistic model of food production and prices, we find that climate change increased the likelihood of the 2007 co-occurring drought in South Africa and Lesotho, aggravating the food crisis in Lesotho.”

Meanwhile, Imperial College London’s Grantham Institute is working on an initiative to publish rapid impact attribution studies about extreme weather events around the world. Similar to WWA studies, these rapid studies will not be peer reviewed individually, but will be based on a peer-reviewed methodology.

Dr Emily Theokritoff – a research associate at Grantham, who is working on the initiative – tells Carbon Brief that it will be launched “in the near future”. She adds:

“The aim is to recharge the field, start a conversation about climate losses and damages, and help people understand how climate change is making life more dangerous and more expensive.”

Back to top

How do scientists attribute ‘unprecedented’ events?

An attribution method known as the “storyline approach” or “conditional attribution” has become increasingly popular over the past decade – despite initially causing controversy in the attribution community.

In this approach, researchers first select an extreme weather event, such as a specific heatwave, storm or drought. They then identify the physical components, such as sea surface temperature, soil moisture and atmospheric dynamics, that led to the event unfolding in the way it did. This series of events is called a “storyline”.

The authors then use models to simulate this “storyline” in two different worlds – one in the world as we know it and one in a counterfactual world, for example with a different sea surface temperature or CO2 level. By comparing the model runs, the researchers can draw conclusions about how much climate change influenced that event.

The storyline approach is useful for explaining the influence of climate change on the physical processes that contributed to the event. It can also be used to explore in detail how this event would have played out in a warmer (future) or cooler (pre-industrial) climate.

One study describes the storyline approach as an “autopsy”, explaining that it “gives an account of the causes of the extreme event”.

Prof Ted Shepherd, a researcher at the University of Reading, was one of the earliest advocates of the storyline attribution approach. At the EGU general assembly in Vienna in April 2024, Shepherd gave the opening talk in a session on storyline attribution.

He told the packed conference room that the storyline approach was born out of the need for a “forensic” approach to attribution, rather than a “yes/no” approach. He emphasised that extreme weather events have “multiple causes” and that the storyline approach allows researchers to dissect each of these components.

Dr Linda van Garderen is a postdoctoral researcher at Utrecht University who has carried out multiple studies using the storyline method. She tells Carbon Brief that, while traditional attribution typically investigates probability, the storyline approach analyses intensity.

For example, she led an attribution study using the storyline method which concluded that the 2003 European and 2010 Russian heatwaves would have been 2.5-4C cooler in a world without climate change.

She adds that the approach can make communication easier, telling Carbon Brief that “probabilities can be challenging to interpret in practical daily life, whereas the intensity framing of storyline studies is more intuitive and can make attribution studies easier to understand”.

Dr Nicholas Leach is a researcher at the University of Oxford who has conducted multiple studies using the storyline approach. He tells Carbon Brief that probabilistic attribution often produces “false negatives” – wrongly concluding that climate change did not influence an event.

This is because climate models have “biases and uncertainties” that can lead to “noise” – particularly when it comes to dynamical features such as atmospheric circulation patterns. Probabilistic attribution methods often end up losing the signal of climate change in this noise, he explains.

The storyline approach can avoid these issues more easily, he says. By focusing on the dynamics of one specific event, rather than a “broad class of events”, storyline studies can eliminate some of this noise, making it more straightforward to identify a signal.

Conversely, others have critiqued the storyline method for producing false positives – wrongly claiming that climate change influenced an extreme weather event.

The storyline approach has also been praised for its ability to attribute “unprecedented” events. In the EGU session on the storyline method, many presentations explored how it could be used to attribute “statistically impossible” extremes.

Leach explains that when a completely unprecedented extreme event occurs, statistical models often indicate that the event “shouldn’t have happened”. When running a probabilistic analysis using these models, he says: “You end up with the present probability being zero and past probability being zero, so you can’t say a lot.”

He points to the Pacific north-west heatwave of 2021 as an example. This event was one of the most extreme regional heat events ever recorded globally, breaking some local high temperature records by more than 6C.

‘Extreme heat, cooling centre sign’, Vancouver, Canada, 2021. Credit: Margarita Young / Alamy Stock Photo

WWA conducted a rapid attribution study on the heatwave, using its probabilistic attribution method. The heatwave was “so extreme” that the observed temperatures “lie far outside the range” of historical observations, the researchers said.

Their assessment suggests that the heatwave was around a one-in-1,000-year event in today’s climate and was made at least 150 times more likely because of climate change.

Leach and his colleagues used the storyline method to attribute the same heatwave. The methods of this study will be discussed further in the following section.

Leach explains that, using the storyline approach, he was able to consider the physics of the event, including an atmospheric river that coincided with the “heat dome” that was a key feature of the event. This helped him to represent the event well in his models. The study concluded that the heatwave was 1.3C hotter and eight times more likely as a result of climate change.

Many experts tell Carbon Brief that there was initial tension in the attribution community between probabilistic and storyline advocates when the latter approach was introduced. However, as the storyline method has become more mainstream, criticism has abated and many scientists now publish research using both techniques.

Van Garderen tells Carbon Brief that storyline attribution is “adding to the attribution toolbox”, rather than attempting to replace existing methods. She emphasises that probability-based and storyline attribution answer different questions, and that both are important.

Back to top

          How can weather forecasts be used in attribution studies?

          Forecast attribution is the most recent major addition to the attribution toolbox. This method uses weather forecasts instead of climate models to carry out attribution studies. Many experts describe this method as sitting part-way between probabilistic and storyline attribution.

          One benefit of using forecasts, rather than climate models, is that their higher resolution allows them to simulate extreme weather events in more detail. By using forecasts, scientists can also attribute events that have not yet happened.

          The first use of “advance forecasted” attribution analysis (pdf) quantified the impact of climate change on the size, rainfall and intensity of Hurricane Florence before it made landfall in North Carolina in September 2018.

          The authors, in essence, carried out the probabilistic attribution method, using two sets of short-term forecasts for the hurricane rather than large-scale climate models. The analysis received a mixed reaction. Stott told Carbon Brief at the time that it was “quite a cool idea”, but was highly dependent on being able to forecast such events reliably.

          Dr Kevin Trenberth, distinguished senior scientist at the National Center for Atmospheric Research, told Carbon Brief in 2019 that the study was “a bit of a disaster”, explaining that the quality of the forecast was questionable for the assessment.

          The authors subsequently published a paper in Science Advances reviewing their study “with the benefit of hindsight”. The authors acknowledged that the results are quite a way off what they forecasted. However, they also claimed to have identified what went wrong with their forecasted analysis.

          Problems with the “without climate change” model runs created a larger contrast against their real-world simulations, meaning the analysis overestimated the impact of climate change on the event, they said.

          Nonetheless, the study did identify a quantifiable impact of climate change on Hurricane Florence, adding to the evidence from studies by other author groups.

This research team has since published more forecast-based attribution studies on hurricanes. One study used hindcasts – forecasts that start from the past and then run forward into the present – to analyse the 2020 hurricane season. The team then ran a series of “counterfactual” hindcasts over the same period, removing the influence of human-caused warming from sea surface temperatures.

          They found that warmer waters increased three-hour rainfall rates and three-day accumulated rainfall for tropical storms by 10% and 5%, respectively, over the 2020 season.

          View of hurricane Laura in the Gulf of Mexico from space, August, 2020.
          View of hurricane Laura in the Gulf of Mexico from space, August, 2020. Credit: AC NewsPhoto / Alamy Stock Photo

          Meanwhile, a 2021 study by a different team showed how it was possible to use traditional weather forecasts for attribution. The researchers, who penned a Carbon Brief guest post about their work, found that the European heatwave of February 2019 was 42% more likely for the British Isles and at least 100% more likely for France.

To conduct their study, the authors used a weather forecast model – also known as a “numerical weather prediction” (NWP) model.

They explain that an NWP model typically runs at a higher resolution than a climate model, meaning that it has more, smaller grid cells. This allows it to simulate processes that a climate model cannot, making such models “more suitable for studying the most extreme events than conventional climate models”, the authors argue.

          More recently, Leach and his team carried out a forecast attribution study on the record-breaking Pacific north-west heatwave of 2021, years after the event took place.

          The authors defined 29 June 2021 as the start of the event, as this is when the maximum temperature of the heatwave was recorded. They then ran their forecasts using a range of “lead times” – the number of days before the event starts that the model simulation is initialised.

The shortest lead time in this study was three days, meaning the scientists began running the model from the weather conditions recorded on 26 June 2021. The short lead time meant that they could tailor the model very closely to the observed conditions and simulate the event itself very accurately.

          By comparison, the longest lead times used in this study were 2-4 months. This means that the models were initialised in spring and, by the time they simulated the June heatwave, their simulation did not closely resemble the events that actually unfolded.

          Leach tells Carbon Brief that by lengthening the lead time of the weather forecast, they can effectively “shift the dial” from storyline to probabilistic attribution. He explains:

“If you’re using a forecast that’s initialised really near to your event, then you’re kind of going down that storyline approach, by saying, ‘I want what my model is simulating to look really similar to the event I’m interested in’…

          “The further back [in time] you go, the closer you get to the more probabilistic style of statements that are more unconditioned.”

This combination of storyline and probabilistic attribution allows the authors to draw conclusions about both the intensity and the likelihood of the heatwave under climate change. They estimate that the heatwave was 1.3C more intense and eight times more likely as a result of climate change.
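In outline, both numbers in a statement like this can be derived from a pair of model ensembles: a “factual” set of simulations that includes climate change and a “counterfactual” set without it. The Python sketch below shows the basic calculation with made-up toy values – the ensembles, threshold and resulting figures are illustrative, not the study’s data:

```python
def attribution_metrics(factual, counterfactual, threshold):
    """Probability ratio and intensity change from two model ensembles.

    factual: simulated peak temperatures (C) in a world with climate change.
    counterfactual: the same, from a world without human-caused warming.
    threshold: the observed event magnitude (C).
    """
    # Fraction of ensemble members at least as extreme as the event
    p1 = sum(1 for t in factual if t >= threshold) / len(factual)
    p0 = sum(1 for t in counterfactual if t >= threshold) / len(counterfactual)
    prob_ratio = p1 / p0 if p0 > 0 else float("inf")
    # Intensity change: shift between the two ensemble means
    intensity = sum(factual) / len(factual) - sum(counterfactual) / len(counterfactual)
    return prob_ratio, intensity

# Toy ensembles in which warming shifts the distribution up by ~1.3C
factual = [38.0, 39.5, 41.2, 42.8, 40.1, 39.0, 41.9, 43.4]
counterfactual = [36.7, 38.2, 39.9, 41.5, 38.8, 37.7, 40.6, 42.1]
pr, di = attribution_metrics(factual, counterfactual, threshold=41.0)
# pr -> 2.0 (event twice as likely), di -> 1.3 (event 1.3C more intense)
```

Real studies use hundreds or thousands of ensemble members and typically fit extreme-value distributions rather than counting exceedances directly, but the two underlying quantities are the same.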

More recently, Climate Central has produced a tool that uses temperature forecasts for the US over the coming days to calculate a “climate shift index”. This index gives the ratio of how common the forecasted temperature is in today’s climate, compared to how likely it would be in a world without climate change.

The index runs from minus five to plus five. A result of zero indicates that climate change has no detectable influence, an index of five means that climate change made the temperature at least five times more likely and an index of minus five means it was made at least five times less likely.
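Taking that description at face value, the index can be thought of as a capped, signed probability ratio. A minimal Python sketch of this interpretation follows – the function name, the cap handling and the zero case are our assumptions for illustration, not Climate Central’s published methodology:

```python
def climate_shift_index(p_factual: float, p_counterfactual: float) -> float:
    """Illustrative capped probability-ratio index in [-5, 5].

    p_factual: probability of reaching the forecast temperature
        in today's climate.
    p_counterfactual: the same probability in a modelled world
        without human-caused climate change.
    """
    if p_factual == p_counterfactual:
        return 0.0  # no detectable influence of climate change
    if p_factual > p_counterfactual:
        # Warming made this temperature more likely (capped at 5)
        return min(p_factual / p_counterfactual, 5.0)
    # Warming made this temperature less likely (capped at -5)
    return max(-p_counterfactual / p_factual, -5.0)

climate_shift_index(0.6, 0.1)  # -> 5.0: at least five times more likely
climate_shift_index(0.1, 0.1)  # -> 0.0: no detectable influence
```

The capping reflects the “at least five times” wording in the tool’s description: beyond that point, the ratio is no longer distinguished.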

          The tool can be used for attribution. For example, recent analysis by the group used the index to quantify how climate change has influenced the number of uncomfortably hot nights. It concluded:

          “Due to human-caused climate change, 2.4 billion people experienced an average of at least two additional weeks per year where nighttime temperatures exceeded 25C. Over one billion people experienced an average of at least two additional weeks per year of nights above 20C and 18C.”

          Back to top

          What are the applications of attribution science?

One often-touted application of attribution studies is to raise awareness of the role of climate change in extreme weather events. However, there is only limited research on how effective this is.

          One study presents the results of focus group interviews with UK scientists, who were not working on climate change, in which participants were given attribution statements. The study concludes:

          “Extreme event attribution shows significant promise for climate change communication because of its ability to connect novel, attention-grabbing and event-specific scientific information to personal experiences and observations of extreme events.”

          However, the study identified a range of challenges, including “adequately capturing nuances”, “expressing scientific uncertainty without undermining accessibility of key findings” and difficulties interpreting mathematical aspects of the results.

          In another experiment, researchers informed nearly 4,000 adults in the US that climate change had made the July 2023 heatwave in the US at least five times more likely. The team also shared information from Climate Central’s climate shift index. According to the study, both approaches “increased the belief that climate change made the July 2023 heatwave more likely and is making heatwaves in general more likely as well”.

          Meanwhile, as the science of extreme weather attribution becomes more established, lawyers, governments and civil society are finding more uses for this evolving field.

          For example, attribution is starting to play an important role in courts. In 2017, two lawyers wrote a Carbon Brief guest post stating “we expect that attribution science will provide crucial evidence that will help courts determine liability for climate change related harm”.

          Four years later, the authors of a study on “climate litigation” wrote a Carbon Brief guest post explaining how attribution science can be “translated into legal causality”. They wrote:

          “Attribution can bridge the gap identified by judges between a general understanding that human-induced climate change has many negative impacts and providing concrete evidence of the role of climate change at a specific location for a specific extreme event that already has led or will lead to damages.”

          In 2024, around 2,000 Swiss women used an attribution study, alongside other evidence, to win a landmark case in the European Court of Human Rights. The women, mostly in their 70s, said that their age and gender made them particularly vulnerable to heatwaves linked to climate change. The court ruled that Switzerland’s efforts to meet its emissions targets had been “woefully inadequate”.

          A group of Swiss retirees took their government to a top European court over what they claim is its failure to take stronger action on climate change.
          A group of Swiss retirees took their government to a top European court over what they claim is its failure to take stronger action on climate change. Credit: Associated Press / Alamy Stock Photo

          The 2024 European Geosciences Union conference in Vienna dedicated an entire session to climate change and litigation. Prof Wim Thiery – a scientist who was involved in many conference sessions on climate change and litigation – tells Carbon Brief that attribution science is particularly important for supporting “reparation cases”, in which vulnerable countries or communities seek compensation for the damages caused by climate change.

He tells Carbon Brief that seeing the “direct and tangible impact” of an attribution study in a court case “motivates climate scientists in engaging in this community”.

(Other types of science are also important in court cases related to climate change, he adds. For example, “source attribution” identifies the relative contribution of different sectors and entities – such as companies or governments – to climate change.)

          Dr Rupert Stuart-Smith, a research associate in climate science and the law at the University of Oxford’s Sustainable Law Programme, adds:

          “We’re seeing a new evolution whereby communities are increasingly looking at impact-relevant variables. Think about inundated areas, lake levels, heatwave mortalities. These are the new target variables of attribution science. This is a new frontier and we are seeing that those studies are directly usable in court cases.”

          He tells Carbon Brief that some cases “have sought to hold high-emitting corporations – such as fossil fuel or agricultural companies – liable for the costs of climate change impacts”. He continues:

          “In cases like these, claimants typically need to show that climate change is causing specific harms affecting them and courts may leverage attribution or climate projections to adjudicate these claims. Impact attribution is particularly relevant in this context.”

Dr Delta Merner is a lead scientist at the Science Hub for Climate Litigation. She tells Carbon Brief that “enhanced source attribution for companies and countries” will be “critical” for holding major emitters accountable. She adds:

          “This is an urgent time for the field of attribution science, which is uniquely capable of providing robust, actionable evidence to inform decision-making and drive accountability.”

Meanwhile, many countries’ national weather services are working on “operational attribution” – the regular production of rapid attribution assessments.

          Stott tells Carbon Brief that the UK Met Office is operationalising attribution studies. For example, on 2 January 2024, it announced that 2023 was the second-warmest year on record for the UK, with an average temperature of 9.97C.

          New methods are also being developed. For example, groups, such as the “eXtreme events: Artificial Intelligence for Detection and Attribution” (XAIDA) team, are researching the use of machine learning and artificial intelligence for attribution studies.

          One recent attribution study uses a machine-learning approach to create “dynamically consistent counterfactual versions of historical extreme events under different levels of global mean temperature”. The authors estimate that the south-central North American heatwave of 2023 was 1.18-1.42C warmer because of global warming.

          The authors conclude:

          “Our results broadly agree with other attribution techniques, suggesting that machine learning can be used to perform rapid, low-cost attribution of extreme events.”

Other scientists are using a method called UNSEEN, which involves running models thousands of times to increase the size of the datasets used, making it easier to derive accurate probabilities for highly variable extremes.
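As a rough illustration of the UNSEEN idea, pooling many hindcast ensemble members into one large sample makes the empirical probability of rare thresholds estimable at all. The Python toy below uses synthetic random numbers as a stand-in for model output – every number in it is arbitrary, chosen only to show the mechanics:

```python
import random

random.seed(42)

def return_period(pooled, threshold):
    """Empirical return period: inverse of the exceedance probability."""
    exceedances = sum(1 for x in pooled if x >= threshold)
    if exceedances == 0:
        raise ValueError("threshold never exceeded; pool still too small")
    return len(pooled) / exceedances

# Stand-in for pooled hindcasts: 40 start years x 25 members x 4 lead times,
# i.e. 4,000 pseudo-independent simulated summer temperatures
pool = [random.gauss(20.0, 2.0) for _ in range(40 * 25 * 4)]

# A single 40-value observational record would rarely contain a 2.5-sigma
# extreme; the pooled ensemble samples dozens of such cases
rp = return_period(pool, threshold=25.0)
```

In practice, UNSEEN studies first check that the pooled ensemble members are statistically indistinguishable from observations (“fidelity testing”) before trusting the tail probabilities they imply.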

          Back to top

          What are the next steps for attribution research?

          The experts that Carbon Brief spoke to for this article have high hopes for the future of attribution science. For example, Stott says:

          “Attribution science has great potential to improve the resilience of societies to future climate change, can help monitor progress towards the Paris goals of keeping global warming to well below 2C and can motivate progress in driving down emissions towards net-zero by the middle of this century.”

          However, despite the progress made over the past two decades, there are still challenges to overcome. One of the key barriers in attribution science is a lack of high-quality observational data in low-income countries.

To carry out an attribution study, researchers need a long, high-quality dataset of observations from the area being studied. However, inadequate funding or political instability means that many developing countries do not have sufficient weather station data.

          Dr. Robert Rohde on X/Twitter (@RARohde): Fun little map of the weather stations (both active and historical) that are used as input to Berkeley Earth's land surface temperature analysis.

          In a 2016 interview with Carbon Brief, Allen said that “right now there is obviously a bias towards our own backyards – north-west Europe, Australia and New Zealand.”

          Many WWA studies in global-south countries mention the challenge of finding adequate data and sometimes this affects the results. A WWA study of the 2022 drought in west Africa’s Sahel region was unable to find the signal of climate change in the region’s rainfall pattern – in part, due to widespread uncertainties in the observational data.

          Otto, who was an author on the study, explained at the time:

          “It could either be because the data is quite poor or because we have found the wrong indices. Or it could be because there really is no climate change signal…We have no way of identifying which of these three options it is.”

          Developing better observational datasets is an ongoing challenge. It is highlighted in much of the literature on attribution as an important next step for attribution science – and for climate science more widely. Merner tells Carbon Brief that scientists also need to work on developing “novel approaches for regions without baseline data”.

          Weather station, Belgium.
          Weather station, Belgium. Credit: Arterra Picture Library / Alamy Stock Photo

          Meanwhile, many scientists expect the methods used in attribution science to continue evolving. The Detection and Attribution Model Intercomparison Project is currently collecting simulations, which will support improved attribution of climate change in the next set of assessment reports from the Intergovernmental Panel on Climate Change.

          Mitchell says that, over the next decade, he thinks that “we will move away from the more generic attribution methods that have served us well to this point, and start developing and applying more targeted – and even more defensible – methods”.

          In particular, he highlights the need for more specific methods for impact attribution – for example, studying the impacts of weather events on health outcomes, biodiversity changes or financial losses.

          He continues:

          “The interplay of different socioeconomic states and interventions with that of climate change can make these particularly difficult to study – but we are getting there with our more advanced, albeit computationally expensive methods, such as using weather forecast models as the foundation of our attribution statements.”

          Stott tells Carbon Brief that incorporating impacts into attribution assessments is a “crucial area for development” in attribution science. He explains that impact attribution is “very relevant to the loss-and-damage agenda and further developments in attribution science are likely to include the ability to attribute the financial costs of storms”.

          Stuart-Smith tells Carbon Brief that, “in the coming years, growing numbers of studies will quantify the economic burden of climate change and its effects on a broader range of health impacts, including from vector and water-borne diseases”.

          Leach also tells Carbon Brief that it is “important for attribution to move their focus beyond physical studies and into quantitative impact studies to increase their relevance and utility in policy and the media”.

          He adds:

          “Utilising weather forecasts for attribution would fit neatly with this aim as those same models are already widely used by emergency managers and built into impact modelling frameworks.”

Similarly, Stott tells Carbon Brief that “forecast attribution shows great potential”. He explains that “progressing that science” will allow this method to be used to attribute more types of extreme weather with greater confidence.

          Leach advocates for greater use of weather forecast models for all types of attribution. He says:

          “Weather forecast models have demonstrated repeatedly over the past few years that they are capable of accurately representing even unprecedented weather extremes. Using these validated state-of-the-art models for attribution could bring an increase in confidence in the results.”

          Many scientists also tell Carbon Brief about the importance of operationalising attribution. The weather services in many countries already have this in place. Stott tells Carbon Brief that groups in Japan, South Korea, Australia and the US are also “at various stages of developing operational attribution services”.

          Meanwhile, Otto tells Carbon Brief that “the most important next step for attribution in my view is to really integrate the assessment of vulnerability and exposure into the attribution studies”. She adds:

          “In order for attribution to truly inform adaptation it is essential though to go from attributing hazards, as we do now mainly, to disentangling drivers of disasters.”

          Mitchell adds that he thinks attribution statements “are absolutely essential for [countries to make] national adaptation plans”.

          Meanwhile, another study suggests that extreme event attribution studies could be used by engineers, along with climate projections, to assist climate adaptation for civil infrastructure.

          Leach tells Carbon Brief that attribution could be useful in the insurance sector for similar reasons. He adds that many insurance sectors use the same forecasts in their catastrophe models that climate scientists use for forecast attribution, meaning that it should be straightforward to add attribution studies into their pipelines.

          Back to top

          The post Q&A: The evolving science of ‘extreme weather attribution’ appeared first on Carbon Brief.


          The 2026 budget test: Will Australia break free from fossil fuels?


          In 2026, the dangers of fossil fuel dependence have been laid bare like never before. The illegal invasion of Iran has brought pain and destruction to millions across the Middle East and triggered a global energy crisis impacting us all. Communities in the Pacific have been hit especially hard by rising fuel prices, and Australians have seen their cost-of-living woes deepen.

          Such moments of crisis and upheaval can lead to positive transformation. But only when leaders act with courage and foresight.

          There is no clearer statement of a government’s plans and priorities for the nation than its budget — how it plans to raise money, and what services, communities, and industries it will invest in.

As we count down the days to the 2026-27 Federal Budget, will the Albanese Government deliver a budget for our times? One that starts breaking the shackles of fossil fuels, accelerates the shift to clean energy, protects nature, and sees us work together with other countries towards a safer future for all? Or one that doubles down on coal and gas, locks in more climate chaos, and keeps us beholden to the whims of tyrants and billionaires?

          Here’s what we think the moment demands, and what we’ll be looking out for when Treasurer Jim Chalmers steps up to the dispatch box on 12 May.

          1. Stop fuelling the fire
          2. Make big polluters pay
          3. Support everyone to be part of the solution
          4. Build the industries of the future
          5. Build community resilience
          6. Be a better neighbour
          7. Protect nature

          1. Stop fuelling the fire

          Action Calls for a Transition Away From Fossil Fuels in Vanuatu. © Greenpeace
          The community in Mele, Vanuatu sent a positive message ahead of the First Conference on Transitioning Away from Fossil Fuels. © Greenpeace

          In mid-April, Pacific governments and civil society met to redouble their efforts towards a Fossil Fuel Free Pacific. Moving beyond coal, oil and gas is fundamental to limiting warming to 1.5°C — a survival line for vulnerable communities and ecosystems. And as our Head of Pacific, Shiva Gounden, explained, it is “also a path of liberation that frees us from expensive, extractive and polluting fossil fuel imports and uplifts our communities”.

          Pacific countries are at the forefront of growing global momentum towards a just transition away from fossil fuels, and it is way past time for Australia to get with the program. It is no longer a question of whether fossil fuel extraction will end, but whether that end will be appropriately managed and see communities supported through the transition, or whether it will be chaotic and disruptive.

          So will this budget support the transition away from fossil fuels, or will it continue to prop up coal and gas?

When it comes to sensible moves the government can make right now, one stands out as genuine low-hanging fruit. Mining companies get a full rebate of the excise (or tax) that the rest of us pay on diesel fuel. This lowers their operating costs and acts as a large, ongoing subsidy on fossil fuel production — to the tune of $11 billion a year!

          Greenpeace has long called for coal and gas companies to be removed from this outdated scheme, and for the billions in savings to be used to support the clean energy transition and to assist communities with adapting to the impacts of climate change. Will we see the government finally make this long overdue change, or will it once again cave to the fossil fuel lobby?

          2. Make big polluters pay

          Activists Disrupt Major Gas Conference in Sydney. © Greenpeace
          Greenpeace Australia Pacific activists disrupted the Australian Domestic Gas Outlook conference in Sydney with the message ‘Gas execs profit, we pay the price’. © Greenpeace

          While our communities continue to suffer the escalating costs of climate-fuelled disasters, our Government continues to support a massive expansion of Australia’s export gas industry. Gas is a dangerous fossil fuel, with every tonne of Australian gas adding to the global heating that endangers us all.

          Moreover, companies like Santos and Woodside pay very little tax for the privilege of digging up and selling Australians’ natural endowment of fossil gas. Remarkably, the Government currently raises more tax from beer than from the Petroleum Resource Rent Tax (PRRT) — the main tax on gas profits.

          Momentum has been building to replace or supplement the PRRT with a 25% tax on gas exports. This could raise up to $17 billion a year — funds that, like savings from removing the diesel tax rebate for coal and gas companies, could be spent on supporting the clean energy transition and assisting communities with adapting to worsening fires, floods, heatwaves and other impacts of climate change.

          As politicians arrive in Canberra for budget week, they will be confronted by billboards calling for a fair tax on gas exports. The push now has the support of dozens of organisations and a growing number of politicians. Let’s hope the Treasurer seizes this rare window for reform.

          3. Support everyone to be part of the solution

          As the price of petrol and diesel rises, electric vehicles (EVs) are helping people cut fuel use and save money. However, while EV sales have jumped since the invasion of Iran sent fuel prices rising, they still only make up a fraction of total new car sales. This budget should help more Australians switch to electric vehicles and, even more importantly, enable more Australians to get around by bike, on foot, and on public transport. This means maintaining the EV discount, investing in public and active transport, and removing tax breaks for fuel-hungry utes and vans.

          Millions of Australians already enjoy the cost-saving benefits of rooftop solar, batteries, and getting off gas. This budget should enable more households, and in particular those on lower incomes, to access these benefits. This means maintaining the Cheaper Home Batteries Program, and building on the Household Energy Upgrades Fund.

          4. Build the industries of the future

          Protest of Woodside and Drill Rig Valaris at Scarborough Gas Field in Western Australia. © Greenpeace / Jimmy Emms
          Crew aboard Greenpeace Australia Pacific’s campaigning vessel the Oceania conducted a peaceful banner protest at the site of the Valaris DPS-1, the drill rig commissioned to build Woodside’s destructive Burrup Hub. © Greenpeace / Jimmy Emms

          If we’re to transition away from fossil fuels, we need to be building the clean industries of the future.

No state is more pivotal to Australia’s energy and industrial transformation than Western Australia. The state has unrivalled potential for renewable energy development and for replacing fossil fuel exports with clean exports like green iron. Such industries offer Western Australia the promise of a vibrant economic future, and would allow Australia to play an outsized positive role in the world’s efforts to reduce emissions.

          However, realising this potential will require focussed support from the Federal Government. Among other measures, Greenpeace has recommended establishing the Australasian Green Iron Corporation as a joint venture between the Australian and Western Australian governments, a key trading partner, a major iron ore miner and steel makers. This would unite these central players around the complex task of building a large-scale green iron industry, and unleash Western Australia’s potential as a green industrial powerhouse.

          5. Build community resilience

          Believe it or not, our Government continues to spend far more on subsidising fossil fuel production — and on clearing up after climate-fuelled disasters — than it does on helping communities and industries reduce disaster costs through practical, proven methods for building their resilience.

Last year, the Government estimated that the cost of recovery from disasters like the devastating 2022 east coast floods and 2019-20 fires will rise to $13.5 billion. By contrast, the Government’s Disaster Ready Fund – the main national source of funding for disaster resilience – invests just $200 million a year in grants to support disaster preparedness and resilience building. This is despite the Government’s own National Emergency Management Agency (NEMA) estimating that for every dollar spent on disaster risk reduction, there is a $9.60 return on investment.

          By redirecting funds currently spent on subsidising fossil fuel production, the Government can both stop incentivising climate destruction in the first place, and ensure that Australian communities and industries are better protected from worsening climate extremes.

          No communities have more to lose from climate damage, or carry more knowledge of practical solutions, than Aboriginal and Torres Strait Islander peoples. The budget should include a dedicated First Nations climate adaptation fund, ensuring First Nations communities can develop solutions on their own terms, and access the support they need with adapting to extreme heat, coastal erosion and other escalating challenges.

          6. Be a better neighbour

The global response to climate change depends on an adequate flow of support from developed economies like Australia to help lower-income nations shift to clean energy, adapt to the impacts of climate change, and address loss and damage.

          Such support is vital to building trust and cooperation, reducing global emissions, and supporting regional and global security by enabling countries to transition away from fossil fuels and build greater resilience.

Despite its central leadership role in this year’s global climate negotiations, our Government is yet to announce its contribution to international climate finance for 2025-2030. Greenpeace recommends a commitment of $11 billion for this five year period, which is aligned with the global goal under the Paris Agreement to triple international climate finance from current levels.

This new commitment should include additional funding to address loss and damage from climate change and a substantial contribution to the Pacific Resilience Facility, ensuring support is accessible to countries and communities that need it most. It should also see Australia get firmly behind the vision of a Fossil Fuel Free Pacific.

          7. Protect nature

          Rainforest in Tasmania. © Markus Mauthe / Greenpeace
          Rainforest of north west Tasmania in the Takayna (Tarkine) region. © Markus Mauthe / Greenpeace

          There is no safe planet without protection of the ecosystems and biodiversity that sustain us and regulate our climate.

          Last year the Parliament passed important and long overdue reforms to our national environment laws to ensure better protection for our forests and other critical ecosystems. However, the Government will need to provide sufficient funding to ensure the effective implementation of these reforms.

          Greenpeace has recommended $500 million over four years to establish the National Environment Agency — the body responsible for enforcing and monitoring the new laws — and a further $50 million to Environment Information Australia for providing critical information and tools.

          Further resourcing will also be required to fulfil the crucial goal of fully protecting 30% of Australian land and seas by 2030. This should include $1 billion towards ending deforestation by enabling farmers and loggers to retool away from destructive practices, $2 billion a year for restoring degraded lands, $5 billion for purchasing and creating new protected areas, and $200 million for expanding domestic and international marine protected areas.

          Conclusion

          This is not the first time that conflict overseas has triggered an energy crisis, or that a budget has been preceded by a summer of extreme weather disasters, highlighting the urgent need to phase out fossil fuels. What’s different in 2026 is the availability of solutions. Renewable energy is now cheaper and more accessible than ever before. Global momentum is firmly behind the transition away from fossil fuels. The Albanese Government, with its overwhelming majority, has the chance to set our nation up for the future, or keep us stranded in the past. Let’s hope it makes some smart choices.

          The 2026 budget test: Will Australia break free from fossil fuels?


          What fossil fuels really cost us in a world at war


          Anne Jellema is Executive Director of 350.org.

The war on Iran and Lebanon is a deeply unjust and devastating conflict, killing civilians at home, destroying lives and sending shockwaves through the global economy. At 350.org, we have calculated just how much that volatility is costing us, drawing on price forecasts from the International Monetary Fund (IMF) and Goldman Sachs.

          Even under the IMF’s baseline scenario – a de facto “best case” scenario with a near-term end to the war and related supply chain disruptions – oil and gas price spikes are projected to cost households and businesses globally more than $600 billion by the end of the year. Under the IMF’s “adverse scenario”, with prolonged conflict and sustained price pressures, we estimate those additional costs could exceed $1 trillion, even after accounting for reduced demand.

          Which is why we urgently need a power shift. Governments are under growing pressure to respond to rising fuel and food costs and deepening energy poverty. And it’s becoming clearer to both voters and elected officials that fossil dependence is not only expensive and risky, but unnecessary. 

          People who can are voting with their wallets: sales of solar panels and electric vehicles are increasing sharply in many countries. But the working people who have nothing to spare, ironically, are the ones stuck with using oil and gas that is either exorbitantly expensive or simply impossible to get.

          Drain on households and economies

          In India, street food vendors can’t get cooking gas and in the Philippines, fishermen can’t afford to take their boats to sea. A quarter of British people say that rising energy tariffs will leave them completely unable to pay their bills. This is the moment for a global push to bring abundant and affordable clean energy to all.

          In April, we released Out of Pocket, our new research report on how fossil fuels are draining households and economies. We were surprised by the scale of what we found. For decades, governments have reassured people that energy price spikes are unfortunate but unavoidable – the result of distant conflicts, market forces or geopolitical shocks beyond anyone’s control. But the numbers tell a different story. 

            What we are living through today is not an energy crisis. It is a fossil fuel crisis. In just the first 50 days of the Middle East conflict, soaring oil and gas prices have siphoned an estimated $158 billion–$166 billion from households and businesses worldwide. That is money extracted directly from people’s pockets and transferred, almost instantly, into fossil fuel company balance sheets. And this figure only captures the immediate impact of price spikes, not the permanent economic drain of fossil dependence. Fossil fuels don’t just cost us once, they cost us over and over again.

            First, through our bills. Every time there is a war, an embargo or a supply disruption, fossil fuel prices surge. For ordinary people, this means higher costs for energy, transport and food. Many Global South countries have little or no fiscal space to buffer the shock; instead, workers and families pay the price.

            Second, through our taxes. Governments around the world continue to pour vast sums of public money into fossil fuel subsidies. These are often justified as a way to protect the most vulnerable at the petrol pump or in their homes. But in reality, the benefits are overwhelmingly captured by wealthier households and corporations. The poorest 20% receive just a fraction of this support, while public finances are drained.

            Third, through climate impacts. New research across more than 24,000 global locations gives a granular account of the true costs of extreme heat, sea level rise and falling agricultural yields. Using this data to update IMF modelling of the social cost of carbon, we found that fossil fuel impacts on health and livelihoods amount to over $9 trillion a year. This is the biggest subsidy of all, because these massive and mounting costs are not charged to Big Oil – they are paid for by governments and households, with the poorest shouldering the lion’s share. 

            Massive transfer of wealth to fossil fuel industry

            Adding up direct subsidies, tax breaks and the unpaid bill for climate damages, the total transfer of wealth from the public to the fossil fuel industry amounts to $12 trillion even in a “normal” year without a global oil shock. That’s more than 50% higher than the IMF has previously estimated, and equivalent to a staggering $23 million a minute.

            The fossil fuel industry has become extraordinarily adept at profiting from instability. When conflict drives up prices, companies do not lose, they gain. In the current crisis, oil producers and commodity traders are on track to secure tens of billions of dollars in additional windfall profits, even as households face rising bills and governments struggle to manage the fallout.

            Fossil fuel crisis offers chance to speed up energy transition, ministers say

            This growing disconnect is impossible to ignore. Investors are advised to buy into fossil fuel firms precisely because of their ability to generate profits in times of crisis. Meanwhile, ordinary people are told to tighten their belts.

            In 2026, unlike during the oil shocks of the 1970s, clean energy is no longer a distant alternative. Now, even more than when gas prices spiked due to Russia’s invasion of Ukraine in 2022, renewables are often the cheapest option available. Solar and wind can be deployed quickly, at scale, and without the volatility that defines fossil fuel markets.

            How to transition from dirty to clean energy

            The solutions are clear. Governments must implement permanent windfall taxes on fossil fuel companies to ensure that extraordinary profits generated during crises are redirected to support households. These revenues can be used to reduce energy bills, invest in public services, and accelerate the rollout of clean energy.

            Second, we must shift subsidies away from fossil fuels and towards renewable solutions, particularly those that can be deployed quickly and equitably, such as rooftop and community solar. This is not just about cutting emissions. It is about building a more stable, fair and resilient energy system.

            Finally, we need binding plans to phase out fossil fuels altogether, replacing them with homegrown renewable energy that can shield economies from future shocks. Because what the current crisis has made clear is this: as long as we remain dependent on fossil fuels, we remain vulnerable – to conflict, to price volatility and to the escalating impacts of climate change.

            The true price of fossil fuels is no longer hidden. It is visible in rising bills, strained public finances and communities pushed to the brink. And it is being paid, every day, by ordinary people around the world.

            It’s time for the great power shift

            Full details on the methodology used for this report are available here.

The Great Power Shift is a new global campaign by 350.org to pressure governments to bring down energy bills for good by ending fossil fuel dependence and investing in clean, affordable energy for all.

            Logo of 350.org campaign on “The Great Power Shift”


            The post What fossil fuels really cost us in a world at war appeared first on Climate Home News.


            Traditional models still ‘outperform AI’ for extreme weather forecasts


            Computer models that use artificial intelligence (AI) cannot forecast record-breaking weather as well as traditional climate models, according to a new study.

            It is well established that AI climate models have surpassed traditional, physics-based climate models for some aspects of weather forecasting.

            However, new research published in Science Advances finds that AI models still “underperform” in forecasting record-breaking extreme weather events.

            The authors tested how well both AI and traditional weather models could simulate thousands of record-breaking hot, cold and windy events that were recorded in 2018 and 2020.

            They find that AI models underestimate both the frequency and intensity of record-breaking events.

            A study author tells Carbon Brief that the analysis is a “warning shot” against replacing traditional models with AI models for weather forecasting “too quickly”.

            AI weather forecasts

            Extreme weather events, such as floods, heatwaves and storms, drive hundreds of billions of dollars in damages every year through the destruction of cropland, impacts on infrastructure and the loss of human life.

            Many governments have developed early warning systems to prepare the general public and mobilise disaster response teams for imminent extreme weather events. These systems have been shown to minimise damages and save lives.

            For decades, scientists have used numerical weather prediction models to simulate the weather days, or weeks, in advance.

            These models rely on a series of complex equations that reproduce processes in the atmosphere and ocean. The equations are rooted in fundamental laws of physics, based on decades of research by climate scientists. As a result, these models are referred to as “physics-based” models.

            However, AI-based climate models are gaining popularity as an alternative for weather forecasting.

            Instead of using physics, these models use a statistical approach. Scientists present AI models with a large batch of historical weather data, known as training data, which teaches the model to recognise patterns and make predictions.

            To produce a new forecast, the AI model draws on this bank of knowledge and follows the patterns that it knows.

            There are many advantages to AI weather forecasts. For example, they use less computing power than physics-based models, because they do not have to run thousands of mathematical equations.

            Furthermore, many AI models have been found to perform better than traditional physics-based models at weather forecasts.

            However, these models also have drawbacks.

            Study author Prof Sebastian Engelke, a professor at the research institute for statistics and information science at the University of Geneva, tells Carbon Brief that AI models “depend strongly on the training data” and are “relatively constrained to the range of this dataset”.

In other words, AI models struggle to simulate brand new weather patterns, instead tending to forecast events of a similar strength to those seen before. As a result, it is unclear whether AI models can simulate unprecedented, record-breaking extreme events that, by definition, have never been seen before.
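This training-range limitation can be illustrated with a toy pattern-matching predictor. The data and the nearest-neighbour approach below are purely illustrative assumptions, not the study's models or data: a forecaster that only recalls what followed similar past conditions can never output a value it has not already seen.

```python
# Toy illustration: a purely data-driven predictor cannot exceed
# the range of its training data (hypothetical numbers, not from the study).

def nearest_neighbour_forecast(history, current):
    """Predict tomorrow's value as the value that followed the most
    similar past day - a crude stand-in for learned pattern-matching."""
    best = min(range(len(history) - 1), key=lambda i: abs(history[i] - current))
    return history[best + 1]

# "Training" record: daily temperatures that never exceeded 38 degrees
history = [30, 32, 35, 38, 36, 33, 31, 34, 37, 35]

# An unprecedented 41-degree day: the forecast stays inside the seen range
prediction = nearest_neighbour_forecast(history, 41)
print(prediction <= max(history))  # always True for this kind of predictor
```

Real AI weather models interpolate in far richer ways, but the same constraint applies: predictions are anchored to the range of the training dataset.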

            Record-breaking extremes

            Extreme weather events are becoming more intense and frequent as the climate warms. Record-shattering extremes – those that break existing records by large margins – are also becoming more regular.

            For example, during a 2021 heatwave in north-western US and Canada, local temperature records were broken by up to 5C. According to one study, the heatwave would have been “impossible” without human-caused climate change.

            The new study explores how accurately AI and physics-based models can forecast such record-breaking extremes.

            First, the authors identified every heat, cold and wind event in 2018 and 2020 that broke a record previously set between 1979 and 2017. (They chose these years due to data availability.) The authors use ERA5 reanalysis data to identify these records.

            This produced a large sample size of record-breaking events. For the year 2020, the authors identified around 160,000 heat, 33,000 cold and 53,000 wind records, spread across different seasons and world regions.
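The record-identification step can be sketched as follows. The toy data and variable names are assumptions for illustration only, standing in for the study's actual per-grid-cell processing of ERA5 fields: a 2020 value counts as a record if it exceeds the maximum seen at that location over the 1979-2017 baseline.

```python
# Sketch of identifying record-breaking heat events: a 2020 value is a
# record if it beats the location's 1979-2017 maximum.
# Toy numbers stand in for ERA5 reanalysis data (illustrative assumption).

baseline = {                     # location -> temperatures over 1979-2017
    "cell_a": [28.1, 30.4, 29.7, 31.2],
    "cell_b": [15.0, 16.3, 14.8, 17.1],
}
year_2020 = {"cell_a": 33.0, "cell_b": 16.5}   # 2020 peak per location

def find_heat_records(baseline, observed):
    """Return locations where the observed value breaks the prior record,
    with the margin by which the record was broken."""
    records = {}
    for cell, value in observed.items():
        prior_record = max(baseline[cell])
        if value > prior_record:
            records[cell] = round(value - prior_record, 2)
    return records

print(find_heat_records(baseline, year_2020))  # {'cell_a': 1.8}
```

Here only "cell_a" breaks its record (by 1.8 degrees); "cell_b" falls short of its baseline maximum and is excluded.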

For their traditional, physics-based model, the authors selected the High RESolution forecast model from the Integrated Forecasting System of the European Centre for Medium-Range Weather Forecasts. This is “widely considered as the leading physics-based numerical weather prediction model”, according to the paper.

They also selected three “leading” AI weather models – the GraphCast model from Google Deepmind, Pangu-Weather developed by Huawei Cloud and the Fuxi model, developed by a team from Shanghai.

            The authors then assessed how accurately each model could forecast the extremes observed in the year 2020.

            Dr Zhongwei Zhang is the lead author on the study and a researcher at Karlsruhe Institute of Technology. He tells Carbon Brief that many AI weather forecast models were built for “general weather conditions”, as they use all historical weather data to train the models. Meanwhile, forecasting extremes is considered a “secondary task” by the models.

            The authors explored a range of different “lead times” – in other words, how far into the future the model is forecasting. For example, a lead time of two days could mean the model uses the weather conditions at midnight on 1 January to simulate weather conditions at midnight on 3 January.

            The plot below shows how accurately the models forecasted all extreme events (left) and heat extremes (right) under different lead times. This is measured using “root mean square error” – a metric of how accurate a model is, where a lower value indicates lower error and higher accuracy.
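The RMSE metric itself is simple to compute, as in this minimal sketch (the numbers are illustrative, not the study's data):

```python
import math

def rmse(forecast, observed):
    """Root mean square error: lower values mean a more accurate forecast."""
    assert len(forecast) == len(observed)
    squared_errors = [(f - o) ** 2 for f, o in zip(forecast, observed)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Illustrative example: a forecast that underpredicts a heat extreme
observed = [38.0, 41.0, 39.5]
forecast = [37.0, 38.0, 39.0]
print(round(rmse(forecast, observed), 3))  # 1.848
```

Because errors are squared before averaging, RMSE penalises large misses, such as badly underpredicting a record-breaking peak, more heavily than many small ones.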

            The chart on the left shows how two of the AI models (blue and green) performed better than the physics-based model (black) when forecasting all weather across the year 2020.

            However, the chart on the right illustrates how the physics-based model (black) performed better than all three AI models (blue, red and green) when it came to forecasting heat extremes.

Accuracy of the AI models (blue, red and green) and the physics-based model (black) at forecasting all weather over 2020 (left) and heat extremes (right) over a range of lead times. This is measured using “root mean square error” (RMSE) – a metric of how accurate a model is, where a lower value indicates lower error and higher accuracy. Source: Zhang et al (2026).

            The authors note that the performance gap between AI and physics-based models is widest for lower lead times, indicating that AI models have greater difficulty making predictions in the near future.

            They find similar results for cold and wind records.

            In addition, the authors find that AI models generally “underpredict” temperature during heat records and “overpredict” during cold records.

            The study finds that the larger the margin that the record is broken by, the less well the AI model predicts the intensity of the event.

            ‘Warning shot’

            Study author Prof Erich Fischer is a climate scientist at ETH Zurich and a Carbon Brief contributing editor. He tells Carbon Brief that the result is “not unexpected”.

            He adds that the analysis is a “warning shot” against replacing traditional models with AI models for weather forecasting “too quickly”.


            AI models are likely to continue to improve, but scientists should “not yet” fully replace traditional forecasting models with AI ones, according to Fischer.

            He explains that accurate forecasts are “most needed” in the runup to potential record-breaking extremes, because they are the trigger for early warning systems that help minimise damages caused by extreme weather.

            Leonardo Olivetti is a PhD student at Uppsala University, who has published work on AI weather forecasting and was not involved in the study.

He tells Carbon Brief that “many other studies” have identified issues with using AI models for extremes, but this paper is novel for its specific focus on record-breaking events.

            Olivetti notes that AI models are already used alongside physics-based models at “some of the major weather forecasting centres around the world”. However, the study results suggest “caution against relying too heavily on these [AI] models”, he says.

            Prof Martin Schultz, a professor in computational earth system science at the University of Cologne who was not involved in the study, tells Carbon Brief that the results of the analysis are “very interesting, but not too surprising”.

            He adds that the study “justifies the continued use of classical numerical weather models in operational forecasts, in spite of their tremendous computational costs”.

            Advances in forecasting

            The field of AI weather forecasting is evolving rapidly.

            Olivetti notes that the three AI models tested in the study are an “older generation” of AI models. In the last two years, newer “probabilistic” forecast models have emerged that “claim to better capture extremes”, he explains.

            The three AI models used in the analysis are “deterministic”, meaning that they only simulate one possible future outcome.

            In contrast, study author Engelke tells Carbon Brief that probabilistic models “create several possible future states of the weather” and are therefore more likely to capture record-breaking extremes.

            Engelke says it is “important” to evaluate the newer generation of models for their ability to forecast weather extremes.

            He adds that this paper has set out a “protocol” for testing the ability of AI models to predict unprecedented extreme events, which he hopes other researchers will go on to use.

            The study says that another “promising direction” for future research is to develop models that combine aspects of traditional, physics-based weather forecasts with AI models.

            Engelke says this approach would be “best of both worlds”, as it would combine the ability of physics-based models to simulate record-breaking weather with the computational efficiency of AI models.

            Dr Kyle Hilburn, a research scientist at Colorado State University, notes that the study does not address extreme rainfall, which he says “presents challenges for both modelling and observing”. This, he says, is an “important” area for future research.

            The post Traditional models still ‘outperform AI’ for extreme weather forecasts appeared first on Carbon Brief.


            Copyright © 2022 BreakingClimateChange.com