
The Impact of "Small" Volcanic Eruptions on Earth's Climate 11


Maui

These "small" volcanic eruptions are being viewed by some scientists as potentially having a greater influence on earth's climate than was previously believed:


Please do not allow the vitriolic verbal pyrotechnics of your fellow contributors to overshadow the points that you are attempting to make in your replies.

Maui
 

rb1857,

I highly recommend digging around the Berkeley Earth Surface Temperature (BEST) team website. It has answers to many of the questions you ask, as it was specifically designed by a (former) climate change skeptic to address the major concerns about the temperature data set (and it includes their raw data). The paper on the BEST method can be found here.

GTTofAk has stated that “bucket adjustment which is still used today and adds a large warming trend to the SST data”. As with most things he says, it’s half true. Bucket measurements have a cooling bias and so the “bucket adjustment” increases the temperature from the raw measurement. However, to imply that this means that “bucket adjustments” have artificially imposed a warming trend in the 20th century is wrong.

Bucket measurements were predominantly used in the early 1900s and have steadily become a smaller portion of the total sea temperature measurements as engine room intakes and, more recently, buoys took over the share. Engine room measurements have a heating bias, and so the "engine room adjustment" decreases the temperature from the raw measurement. A more up-to-date image of the fractional breakdown of the different measurement methods can be found in Kennedy et al. 2011.
[image ]

What this means is that the temperature adjustments to the raw data moved from increasing the raw values in the early 20th century to decreasing them later on. The first image below is a plot of the data from Kennedy et al. 2011 by Kevin Cowtan (York University) showing the adjustments to the raw data over the 20th century. The second image is Figure 4 from Kennedy et al. 2011, which shows the unadjusted data (red line) and adjusted data (black line). Both demonstrate that the adjustments have worked to reduce the warming trend (by imposing a cooling adjustment). Far from tinkering with the data to create a warming trend (as GTTofAk would love to believe), climate scientists have applied corrections that reduce the warming trend.
[image ]
[image ]
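If it helps to see the arithmetic, here is a minimal sketch of how a net adjustment tracks the measurement mix. The bias magnitudes and the year-by-year fractions below are made-up illustrative numbers, not the values used in HadSST3 or Kennedy et al. 2011:

# Illustrative only: how the net SST adjustment follows the measurement mix.
# Bias values and fractions are assumptions for demonstration purposes.
BUCKET_BIAS = -0.3   # degC; buckets read cool, so their correction is positive
ERI_BIAS    = +0.15  # degC; engine room intakes read warm, so their correction is negative
BUOY_BIAS   =  0.0   # degC; treat buoys as unbiased for this sketch

# (year, fraction bucket, fraction ERI, fraction buoy) -- made-up mix
measurement_mix = [
    (1900, 0.95, 0.05, 0.00),
    (1940, 0.70, 0.30, 0.00),
    (1970, 0.40, 0.60, 0.00),
    (2000, 0.10, 0.50, 0.40),
]

for year, f_bucket, f_eri, f_buoy in measurement_mix:
    # The correction removes the weighted-average bias of that year's mix.
    net_adjustment = -(f_bucket * BUCKET_BIAS + f_eri * ERI_BIAS + f_buoy * BUOY_BIAS)
    print(f"{year}: net adjustment to raw SST = {net_adjustment:+.2f} degC")

Run it and the adjustment comes out positive (warming the raw values) early in the century and slightly negative later, which is exactly why the corrected record shows less warming than the raw record, not more.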
 
GTTofAk said:
So did climate scientists go back and correct their work? Did they admit that Folland was wrong? Hell no!
Oh, and I forgot to mention, all of GTTofAk's hand-waving about the climate scientists not correcting their assumption is also wrong. See the difference between HadSST2 adjustments, which go to zero after 1940, and HadSST3 adjustments, which account for the error in the previous iteration. They did correct it. Either GTTofAk is not up-to-date on the science (and chooses to cling to a 2007 blog post by Steven McIntyre) or he is purposefully trying to mislead.

Beyond that, this improvement occurred in 2011 and has been incorporated into the scientific understanding ever since. It has very little impact on the temperatures and certainly did not change anything major regarding anthropogenic climate change.
 
yes, i know about the berkeley data, that's why i asked my question. a thermometer moves, should you ...
1) append the new data onto the end of the old data by shifting the new results by the difference of the average of new readings compared to the old ? or
2) consider them as separate data streams ?

am i right in thinking that the original temperature record has been overwritten with corrected temperatures ?

another day in paradise, or is paradise one day closer ?
 
Nice attempt at obfuscation, rconnor. However, rconnor tries to play a game of Loki's wager here. He knows well enough that the term "bucket adjustment", especially in the context used here, refers to the adjustment of post-WWII temperatures under the assumption made by Folland in Folland 1984. The adjustment for this assumed changeover from buckets to intake is a negative adjustment that pulls post-WWII temperatures down. This adds a warming bias of ~0.3-0.5C.

The source of rconnor's last graph says this explicitly

"when they come from warm-biased engine room intakes, a negative adjustment is required. There is a big shift from buckets to engine room intakes in 1941-1942; this is the 'bucket correction' implemented in existing datasets such as HadSST2. "

So we can safely conclude one of two things, either rconnor read it and didn't understand it or he read it and instead chose to lie about it.
 
What are you talking about? What 0.3-0.5 deg C warming bias? You mean in the pre-1940 data? Yes, and then it went to near zero around 1970...which is when global temperatures started to rise. So what's your point? They shouldn't have added in the warming correction in the pre-1940 data? If that were true then that would make the recent warming trend even greater.

The point of the matter is do the corrections to the raw data add an artificial warming trend? The answer is no. Not only that but the corrections actually reduce the warming trend by warming the early data and (very slightly) cooling the more current data. Both images demonstrate this clearly.
 
rb1857,

rb1857 said:
a thermometer moves, should you ...
1) append the new data onto the end of the old data by shifting the new results by the difference of the average of new readings compared to the old ? or
2) consider them as separate data streams ?

BEST do something closer to your #2. From the BEST Methods paper:
BEST Methods Paper said:
we incorporate a procedure that detects large discontinuities in time in a single station record. These could be caused by undocumented station moves, changes in instrumentation, or just the construction of a building nearby. These discontinuities are identified prior to the determination of the temperature parameters by an automated procedure. Once located, they are treated by separating the data from that record into two sections at the discontinuity time, creating effectively two stations out of one

However, the paper mentions that other temperature data sets do something closer to your #1.
BEST Methods Paper said:
Other groups typically adjust the two segments to remove the discontinuity; they call this process homogenization. We apply no explicit homogenization; other than splitting, the data are left untouched. Any adjustment needed between the stations will be done automatically as part of the computation of the optimum temperature baseline parameters.

What this means is that the reading from the station is untouched. Corrections may be applied to the baseline temperature for that station or through the reliability/outlier weighting. But these are not done directly to the raw data.

The section called “Homogenization and the scalpel” has much more detail on the process used at BEST.
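For anyone curious what "splitting instead of adjusting" looks like in practice, here is a rough sketch. The detection rule (a simple jump between adjacent values) and the numbers are my own illustrative assumptions; BEST's actual detection and baseline fitting are far more sophisticated (see the Methods paper):

# Rough sketch of the "scalpel" idea: cut a station record at a suspected
# discontinuity and treat the pieces as separate stations, each with its own
# baseline, instead of adjusting the readings themselves.
def split_at_discontinuities(years, temps, threshold=1.0):
    segments = []
    start = 0
    for i in range(1, len(temps)):
        if abs(temps[i] - temps[i - 1]) > threshold:   # suspected station move, new instrument, etc.
            segments.append((years[start:i], temps[start:i]))
            start = i
    segments.append((years[start:], temps[start:]))
    return segments

def anomalies(segment_temps):
    baseline = sum(segment_temps) / len(segment_temps)  # each segment gets its own baseline
    return [round(t - baseline, 2) for t in segment_temps]

years = list(range(1950, 1960))
temps = [10.1, 10.0, 10.2, 10.1, 11.6, 11.5, 11.7, 11.6, 11.8, 11.7]  # ~1.5 degC step at 1954

for seg_years, seg_temps in split_at_discontinuities(years, temps):
    print(seg_years[0], "-", seg_years[-1], anomalies(seg_temps))

The raw readings are never altered; the step simply ends up absorbed into the two baselines rather than appearing in the anomaly series as spurious warming.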

For a look at how other temperature data sets do this, see [url=ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-williams2009.pdf]Menne & Williams 2009[/url] (NOAA) or Jones & Moberg 2003 (HadCRUT) or Hansen et al 1999 (NASA GISS).

rb1957 said:
am i right in thinking that the original temperature record has been overwritten with corrected temperatures ?
BEST retains the raw data and does not overwrite with the corrected temperatures. You can find it all on their website.

But beyond all this, I feel your concern over the adjustments is that they’ve significantly influenced the temperature trend. However, that is not the case:
[image ]
Note that the image is for land only. However, as discussed above, the adjustments to sea surface temperatures actually reduce the warming trend.

I hope that this addresses your questions. If not, please let me know what I can clarify.
 
You seem to be relying heavily on Kennedy, which is speculative crap. How did Kennedy undo the bucket adjustment without undoing the bucket adjustment? Well, remember how Folland made up some crap in his paper. Kennedy also made up some crap.

"It is likely that many ships that are listed as using buckets actually used the ERI method (see end Section 3.2). To correct the uncertainty arising from this, 30+-10% of bucket observations were reassigned as ERI observations. For example a grid box with 100% bucket observations was reassigned to have, say, 70% bucket and 30% ERI."

So Kennedy claims that he can override 30% of observed data based on what?

"It is probable that some observations recorded as being from buckets were made by the ERI method. The Norwegian contribution to WMO Tech note 2 (Amot [1954]) states that the ERI method was preferred owing to the dangers involved in deploying a bucket"

Oh, a Norwegian anecdote. Then he goes on to find data where none exists.

"Some observations could not be associated with a measurement method. These were randomly assigned to be either bucket or ERI measurements. The relative fractions were derived from a randomly-generated AR(1) time series as above but with range 0 to 1 and applied globally."

Remember the graph I posted from Kent:

[image sstbucket.gif]


There is no justification for this method. It will result in the ERI method being chosen at a rate not supported by the data. In the end, after getting rid of 30% of the observed buckets and adding ERI from "unknown" data, Kennedy was able to keep the bucket adjustment mainly intact.

It doesn't surprise me that you can't see when the author of a paper is putting forth supposition as fact.

 
So if we have to periodically move the temperature measuring station because a heat island moves closer, would you also need to bias the temperature down to account for this? Are we seeing that happen?
 
Yes. If a station was moved from a (cooler) rural area to a (warmer) urban area, BEST would raise the baseline temperature for that station, thus reducing the anomaly. However, despite much humming and hawing from a certain weatherman and his blog, the impact of UHI isn't that large.
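As a toy illustration of that baseline arithmetic (all numbers made up):

# A station moves from a rural site to a warmer urban site. Giving the later
# segment its own, higher baseline keeps the ~1 degC site change out of the
# anomaly series, so it is not counted as warming.
rural_readings = [10.0, 10.1, 10.0, 10.2]   # before the move
urban_readings = [11.1, 11.2, 11.1, 11.3]   # after the move

rural_baseline = sum(rural_readings) / len(rural_readings)
urban_baseline = sum(urban_readings) / len(urban_readings)   # raised baseline

anomalies = ([round(t - rural_baseline, 2) for t in rural_readings] +
             [round(t - urban_baseline, 2) for t in urban_readings])
print(anomalies)   # small anomalies throughout; no 1 degC jump at the move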

BEST examined the temperature trend for all stations in their data set and then for only the stations far away from urban areas. They found almost no discernible difference between the two; in fact, the rural-only sites showed a slightly greater warming trend than all sites combined. From the BEST paper on UHI, Wickham et al (2013):
Wickham et al 2013 said:
We observe the opposite of an urban heating effect over the period 1950 to 2010, with a slope of -0.10 ± 0.24°C/100yr (2σ error) in the Berkeley Earth global land temperature average. The confidence interval is consistent with a zero urban heating effect, and at most a small urban heating effect (less than 0.14°C/100yr, with 95% confidence) on the scale of the observed warming (1.9 ± 0.1°C/100 yr since 1950 in the land average from Figure 5A).

Our results are in line with previous results on global averages despite differences in methodology. Parker [2010] concluded that the effect of urban heating on the global trends is minor, HadCRU use a bias error of 0.05°C per century, and NOAA estimate residual urban heating of 0.06°C per century for the USA and GISS applies a correction to their data of 0.01°C per century. All are small on the scale of global warming.

We note that our averaging procedure uses only land temperature records. Inclusion of ocean temperatures will further decrease the influence of urban heating since it is not an ocean phenomenon.
[image ]
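To put the quoted numbers in perspective, a quick bit of arithmetic on the figures above:

# Using the upper bound (95% confidence) and the land trend quoted from Wickham et al. 2013
uhi_upper_bound = 0.14   # degC per century
land_warming    = 1.9    # degC per century since 1950
print(f"UHI accounts for at most {uhi_upper_bound / land_warming:.0%} of the observed land trend")

So even the upper bound of the urban heating effect is a small fraction of the observed warming, and the best estimate is indistinguishable from zero.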

You’ll note that papers by McKitrick are discussed in the BEST paper. So before someone says, “But McKitrick says otherwise!”, actually read the paper.
 
(rb1957, sorry I've noticed that I've included typos in your handle a couple of times now. My apologies. Please let me know if I addressed your question or if my reply spurred more questions/concerns. I also wanted to say that I greatly appreciate (and find extremely refreshing) your sincere and honest questions. Your willingness to digest (but still be appropriately skeptical of) the information presented to you is commendable. It greatly improves the quality of the discussion. Your posts give me a glimmer of hope that a worthwhile discussion can be had.)
 
This thread is SUPPOSED to be discussing Volcano impacts, isn't it?

The last time the word 'volcano' appeared, and then only in a peripheral fashion, was Feb 12.

Time to 'eject' from this thread/pissing contest.
 
It's better to have a discussion like this, than a fight on the playground. I still get little things out of it, so let it go on.
 
Tinfoil,

I’m in agreement that the quality of the discussion is poor and I share a large part of that guilt. However, there’s a difference between being off-topic and the topic evolving. An example of the former is “Has anyone else read this (completely unrelated) newspaper article/blog post/paper?”. An example of the latter is “If models cannot predict volcanic events, what does this mean for long-term model projections?” and then the topic shifting to the long-term impact of volcanic events on modeling. The former degrades the quality of the discussion, the latter (to me) is fine and natural. Climate change science is a very interdependent discipline and it’s difficult to discuss one topic in isolation of everything else (and it still be meaningful). So rather than having 1000 threads on 1000 different topics, it makes more sense to let a conversation evolve to address many different issues so long as a logical narrative exists.

I have been guilty of feeding the former and causing the topic to be sidetracked. I regret spending as much time arguing about Grant Foster and Figure 1.4 as it was completely off-topic and pointless.

As for the “pissing contest”, I also apologize for getting dragged into trying to prove a poster wrong rather than addressing the issues with the idea presented. There is a subtle yet very important difference between the two. Posts should be geared towards providing accurate and thorough information (or correcting misinformation) and not towards proving someone wrong or ourselves right. My frustration gets the better of me at times and I forget this. I’ll do better to prevent that in the future.

As cranky108 pointed out, I feel there are some examples of very good discussions from time-to-time. I’ve recently acknowledged rb1957 for contributing to this. It is possible to have a meaningful, interesting discussion. And that doesn’t mean people’s minds will change, they won’t. That means that interesting, important questions will be asked and honest, thorough information will be provided.
 
Back to the paper. I think the second thing you would have to do is show that 21st century aerosols are somehow higher than late 20th century aerosols between 15 km and the tropopause. However, the authors have no way of showing that, as that data does not exist pre-AERONET. So, given two facts, that there were more aerosols than previously thought and the presence of the pause, they assume that there has been an increase in the early 21st century. The third option, 'there were always more aerosols than we thought', is ignored in favor of the preferred conclusion.

All this paper managed to show is that there are more aerosols between 15 km and the tropopause than previously thought. The attribution of these aerosols to increased volcanic activity is pure speculation on the part of the authors. The fact that they don't try to support this claim with hard data suggests to me that they did attempt to do so but found the evidence lacking. In my experience with "climate science", when you ask yourself 'why didn't they do X, since it's so obvious?', the answer usually is that they did and didn't get the answer they wanted.
 
GTTofAk said:
I think the second thing you would have to do is show that 21st century aerosols are somehow higher than late 20th century aerosols between 15 km and the tropopause.
I fail to see how this is relevant. Pre-2000 SAOD data was incorporated into models and not assumed to be zero, owing to Mount Pinatubo. Models do, however, assume that post-2000 SAOD is negligible (see below) because no major volcanic events occurred after that point. I therefore cannot see any relevance between this comment and the paper, because pre-2000 SAOD data is already incorporated, but perhaps someone could explain the relevance to me.

The relevant issue is whether the assumption that post-2000 SAOD is negligible is valid or not. Vernier et al 2011, Ridley et al 2014 and Santer et al 2014 demonstrate that this assumption is incorrect and so models miss the cooling impact introduced by smaller volcanic events that occurred after 2000 (most of which occurred after 2005).

Perhaps a time line would be helpful:
1992 to 2000 – Large aerosol increase caused by the 1991 eruption of Mount Pinatubo, fading to near zero around 2000 (Source – Columbia University). This is factored into models.
1993 – First year of AERONET data (source).
1995 – The first year of AERONET data used by Ridley et al 2014.
2000 – The year in which models include no stratospheric aerosol impacts. From Ridley et al 2014, “The climate model simulations evaluated in the IPCC fifth assessment report [Stocker et al., 2013] generally assumed zero stratospheric aerosol after about 2000, and hence neglect any cooling effect of recent volcanoes (see Figure 3 of Solomon et al., 2011).” The Stocker et al 2013 (IPCC AR5 Technical Summary) reference points to Box TS.3
2004/2005 – Year when the "average" of model runs started to deviate from observations (see image from Schmidt et al 2014). This is subjective, of course, maybe one would suggest 2002/3. It matters little.
2005 – Year when Ridley et al 2014 find a notable increase in SAOD in the data.

To find the relevant impact on models, you need to study the change in SAOD post-2000. Again, this is done in the paper:
Ridley et al Figure 3
[image ]
Fig. 3 - (a) Estimated global mean radiative forcing is shown for datasets from Sato et al. (orange), Vernier et al. (blue) and AERONET mean (black) with 25th to 75th percentile range (grey). The dotted line indicates the baseline model used in many climate model studies to date, which includes no stratospheric aerosol changes after 2000.
(b) The temperature anomaly, relative to the baseline model, including the AERONET mean (black), median (white), and 25th to 75th percentile range (grey), Vernier et al. (blue), and Sato et al. (orange) forcing computed for each dataset
(c) the total global temperature change predicted by the Bern 2.5cc EMIC in response to combined anthropogenic and natural forcing, including the reduced warming when considering the stratospheric aerosol forcing from the three datasets.

The paper demonstrates that the incorrect assumption that SAOD past 2000 was zero misses the cooling impact of increased SAOD, most notably after 2005. Therefore, part (but certainly not all) of the recent discrepancy between models and observations can be explained by the fact that models do not include up-to-date volcanic aerosol information. Incorporating this correction would bring the "average" model outputs closer to observations. This is an example of observation improving model projections, so I'm unsure why those skeptical of models would try to (erroneously) discredit this research.
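To put rough numbers on why a "small" non-zero SAOD matters, here is a back-of-envelope sketch. It uses two assumed scalings that are not taken from Ridley et al: a rule-of-thumb forcing of roughly -25 W/m2 per unit of stratospheric aerosol optical depth and a transient response of roughly 0.5 degC per W/m2 of sustained forcing. The SAOD value is likewise only an order-of-magnitude illustration:

# Back-of-envelope only; both scalings and the SAOD value are assumptions.
FORCING_PER_SAOD = -25.0   # W/m^2 per unit optical depth (rule of thumb)
RESPONSE = 0.5             # degC per W/m^2 of sustained forcing (transient, assumed)

saod_in_models = 0.000     # many model runs assumed zero stratospheric aerosol after ~2000
saod_observed  = 0.007     # illustrative post-2005 value, order of magnitude only

missing_forcing = FORCING_PER_SAOD * (saod_observed - saod_in_models)
missing_cooling = RESPONSE * missing_forcing
print(f"missing forcing ~ {missing_forcing:.2f} W/m^2, missing cooling ~ {missing_cooling:.2f} degC")

A cooling on the order of a tenth of a degree is small next to the century-scale trend, but it is not small next to the post-2000 gap between the "average" model run and observations, which is the point.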

GTTofAk said:
However, the authors have no way of showing that, as that data does not exist pre-AERONET.
Again, I fail to see how this is relevant. Pre-2000, models incorporated SAOD data following the Mount Pinatubo eruption (Stocker et al, 2013). It was only after 2000 that models incorrectly assumed SAOD was zero. Therefore, this is the period of focus for the research. This entire period occurs when AERONET data exists. I should note that prior to AERONET data, there were the SAGE II, CALIPSO, GOMOS/ENVISAT and OSIRIS/Odin satellites (as used in Vernier et al, 2011). So even if it were relevant, I don't believe the statement is true.

GTTofAk said:
So, given two facts, that there were more aerosols than previously thought and the presence of the pause, they assume that there has been an increase in the early 21st century. The third option, 'there were always more aerosols than we thought', is ignored in favor of the preferred conclusion.
1. The post-2000 SAOD was greater than 0, yes.
2. The post-2000 SAOD increases, most notably from 2005-2011, which adds a cooling trend that is currently unaccounted for in models due to the incorrect assumption.
3. This appears to be incorrect. The pre-2000 SAOD data is incorporated into models.

GTTofAk said:
The attribution of these aerosols to increased volcanic activity is pure speculation on the part of the authors.
I don’t believe this is true. What information do you have to support this claim? I also question whether it is relevant. The issue is that aerosols have affected SAOD post-2000, contrary to the assumption carried in models, and that would have imposed a cooling trend on global temperatures. I fail to see how or why it matters whether they come from volcanoes, other natural sources or anthropogenic sources.

The paper itself seems to suggest otherwise. Ridley et al 2014 describes numerous volcanic events that correspond to increases in SAOD. See Figure 1 from Ridley et al 2014:
[image ]
Fig. 1 (a) The SAOD time series for the period 1995 – 2013 for satellite data from Vernier et al. (blue), Sato et al. (orange), AERONET mean, averaged from 30-45°N, (white) with 25th to 75th percentile uncertainty (grey shading), Tsukuba lidar retrievals (36.1°N, 140.1°E) above the tropopause (thick black line) and 15 km (thin black line), and aerosol sonde measurements at Laramie (41°N) above the tropopause (red dots) and 15 km (red crosses). Potentially important equatorial (solid lines) and mid-to-high latitude (dashed lines) volcanic eruptions are shown for Ulawun (Ul), Shiveluch (Sh), Ruang (Ru), Reventador (Re), Manam (Ma), Soufrière Hills (So), Tavurvur (Ta), Kasatochi (Ka), Sarychev (Sa), Eyjafjallajökull (Ey), and Nabro (Na). (b) Ratio of integrated optical depth above the tropopause to that above 15 km from three different lidars and from the in situ observations. The inset contains the same data on a log scale to indicate the ratios greater than 5 that are cropped for clarity on Fig. 1 (a).

GTTofAk said:
The fact that they don't try to support this claim with hard data suggests to me that they did attempt to do so but found the evidence lacking. In my experience with "climate science", when you ask yourself 'why didn't they do X, since it's so obvious?', the answer usually is that they did and didn't get the answer they wanted.
This seems completely false. It is certainly not supported by anything else said in the post, because everything else said in the post is unsupported by the paper or any evidence. I don’t believe such unsupported statements belong in this conversation, especially ones that attempt, with zero supporting evidence, to degrade all published climate science. In keeping with the comments regarding the quality of discussion on climate change, I don’t believe a quality conversation can happen as long as comments like this persist.
 
It's late. I was sure you would take your time to put together the usual fallacies.

Some points.

First, you need to learn the difference between the stratosphere and the troposphere/tropopause boundary. That you think they are interchangeable doesn't speak well of you.

Second, contrary to your assertion, just because the system started to be installed in the '90s does not mean we have good data before the early 2000s. Alarmists seem to like to live in these install periods. You've done the same thing with the ARGO data, seeing castles in the clouds with admittedly incomplete data.

Third, you still fail to see the point. You are confusing better observation with actual trends. All your paper shows is that we are better able to see these small effects that have always been there.
 
Since the effect has always been there, it is already part of the models, as it was incorporated. These are tuned models; unknown constants are already incorporated. What Ridley is doing is double counting.

P.S. I don't really see a very strong correlation either, unless of course the 2002-03 eruptions traveled back in time. While that might look like a good correlation to the naked eye, the thinking man is left to wonder how these eruptions are time traveling.
 
At least these charts show some level of uncertainty. And some of this has been used to bring about leaking gas issues.

Never liked how sloppy some industries are. It shows a don't-care attitude.

Sort of a side note: exactly why are landfills bad, if they capture carbon? One of my biggest dislikes about the green movement is its constant complaints about so many things.
So now there are fights over transmission lines going to a wind farm, and because it is a business they want the transmission line built at least cost.
 
To the Figure 1 SAOD, it looks more like it's making the case that aerosols are causing volcanic eruptions, not the other way around. Really just a childish trek through not only cherry picking but bad cherry picking.

[image Ridley14.png]


Now to the supposed trend. rconnor tries to argue that AERONET goes all the way back to 1993. This is a lie of omission. AERONET first started to be installed in 1993. However, the network was not complete and providing a full, accurate data set until the mid-2000s.

You can see this rather well in rconnor's link.


As with most alarmist arguments, it is a superficial one, dependent on you taking the argument at face value and not looking any further. If you simply go one step further and start clicking on the individual years, you will see that rconnor's argument falls apart.

[AERONET site maps for 1993, 1998, 2000, 2005, 2010, and the present, showing the network's coverage at each date]

What we have here isn't a trend in anything but better instrumentation and coverage. It's no different from previous false claims like an increase in hurricanes or tornadoes. We are simply able to see what we didn't see before. As AERONET finished its final stages, with better instrumentation at the poles, we found more aerosols than we theoretically expected to be there, but they were always there.
 
Perhaps I need to repeat it again:
Pre-2000, models incorporated SAOD data following the Mount Pinatubo eruption (Stocker et al, 2013). It was only after 2000 that models incorrectly assumed SAOD was zero.

So the only impact this study has is on post-2000 model projections.

The relevant issue is whether the assumption that post-2000 SAOD is negligible is valid or not.

I’m unsure what anything said in the previous few posts has to do with that. However, there’s a focus on AERONET data, so we can discuss that.

I completely agree that the early AERONET data was not good (it's why I linked the AERONET data); I apologize if you inferred otherwise. I did not explicitly mention the accuracy/uncertainty of the pre-2000 data because the pre-2000 AERONET data is irrelevant. However, the post-2000 data is better (and that is the time frame that is relevant).

Beyond that, AERONET is not the only observational aerosol data set. There are many satellites measuring aerosols as well - SAGE II, CALIPSO, GOMOS/ENVISAT and OSIRIS/Odin. Vernier et al (2011) uses satellites and comes to a similar conclusion to Ridley et al 2014 (note that the results from Vernier et al are plotted on the Ridley figure).

While, yes, pre-2000 AERONET data can be spotty (especially in the first few years), (1) pre-2000 data is irrelevant to the subject at hand and (2) it's not the only data set we have to go on. So I fail to see what your point is.

GTTofAk said:
You still fail to see the point. You are confusing better observation with actual trends. All your paper shows is that we are better able to see these small effects that have always been there.
OK, so you postulate that the increase in SAOD observed over the 21st century is an artifact caused by improvements in AERONET data quality and has nothing to do with actual trends.

Firstly, observational data sets are improved either by increasing the number of measurements or by correcting biases in the measurements. The idea that improvements in AERONET data create an artificial trend in the data set needs to explain how increasing the number of stations or correcting biases would create such an artificial trend. I'm unaware of any such explanation, but if you bring one forward, maybe we could discuss it; a simple check is sketched below. Spotting hurricanes is not the same as measuring aerosols.
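Here is the kind of check I have in mind, sketched with synthetic data. It is a generic sanity check, not a procedure taken from Ridley et al or Vernier et al: compute the trend using only the stations that reported over the whole period and compare it with the trend from the full, growing network. If the two agree, the network build-out is not what is creating the trend:

# Generic coverage-artifact check with synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2014)
true_signal = 0.0005 * (years - 2000)          # a small real upward trend (made up)

# Synthetic network: 20 original stations plus 30 added between 2004 and 2010.
stations = []
for k in range(50):
    start = 2000 if k < 20 else int(rng.integers(2004, 2011))
    obs_years = years[years >= start]
    obs = true_signal[years >= start] + rng.normal(0, 0.001, obs_years.size)
    stations.append((start, dict(zip(obs_years, obs))))

def network_mean(subset):
    # Mean over whichever stations report in each year.
    return np.array([np.mean([d[y] for _, d in subset if y in d]) for y in years])

def slope(series):
    return np.polyfit(years, series, 1)[0]   # least-squares trend, units per year

all_trend   = slope(network_mean(stations))
fixed_trend = slope(network_mean([s for s in stations if s[0] == 2000]))
print(f"all stations: {all_trend:.2e}/yr, long-running stations only: {fixed_trend:.2e}/yr")
# If the two slopes agree, adding stations is not what creates the trend.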

Secondly, and much more importantly, if the trend in AERONET were artificially caused by improvements in the quality of the data set, then the trend would not exist in satellite data. However, satellites show a similar trend. From Vernier et al, 2011:
[image ]

So there appears to be an actual trend in SAOD from 2000 to 2011, whether you look at satellite or ground-based measurements. Furthermore, and getting back to the matter-at-hand, it is very clear that SAOD is non-zero post-2000. Therefore, the assumption that there is no impact caused by SAOD post-2000 is wrong. This means that models that carried this assumption will calculate temperature slightly hotter than they should. This explains part of, but certainly not all (see my previous posts for other factors), the discrepancy between the “average” model runs and observations. The extent of this impact (and that of other factors) is explored in Schmidt et al 2014 and Huber and Knutti 2014.
 