Eng-Tips: Intelligent Work Forums for Engineering Professionals

How to tell if a return period value is a good estimate?

Status
Not open for further replies.

buoy (Marine/Ocean), Feb 18, 2011
I am using the helpful package WAFO to estimate a survival load for a structure, but I am having trouble interpreting the results.
Background: I have a measured time series of the load, and I identify peak values using the peaks-over-threshold method. The identified peaks are then fitted to an expected distribution for extreme events (the Generalized Pareto Distribution), and the fitted distribution is used to generate the expected value at the chosen return period of 3 hours.
Problem: WAFO returns confidence intervals on its return period estimate, and it also plots comparisons between the fitted and measured distributions. Sometimes the fit is poor (the highest peaks do not fall on the fitted distribution), yet the confidence bounds are narrow (indicating that the estimate of the return period is precise). Sometimes the fit is good, but the confidence bounds are wide. There seems to be no relation between the quality information returned by the two sources (confidence bounds vs. fit evaluation).
Question: Why would I get narrow confidence bounds when the fit is poor? Which is a better indicator of the quality of the return period estimate: the confidence bounds, or the closeness of the fit to the measured distribution?
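For readers unfamiliar with the workflow in the Background above, here is a minimal sketch of it in Python using scipy's genpareto in place of WAFO (the data, threshold choice, and 1 Hz sampling rate are all illustrative assumptions, not the original poster's setup):

```python
# Sketch of peaks-over-threshold + GPD fit + return level (scipy, not WAFO).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
load = rng.gumbel(loc=10.0, scale=2.0, size=20_000)  # synthetic load series

threshold = np.quantile(load, 0.98)           # choose a high threshold
exceedances = load[load > threshold] - threshold

# Fit the Generalized Pareto Distribution to the exceedances (location fixed at 0)
shape, _, scale = genpareto.fit(exceedances, floc=0)

# Return level for a 3-hour return period: the load exceeded on average once
# per 3 hours. Assumes a 1 Hz sampling rate for the synthetic series.
fs = 1.0                                      # samples per second (assumed)
zeta_u = len(exceedances) / len(load)         # empirical P(X > threshold)
m = 3 * 3600 * fs                             # observations per return period
return_level = threshold + genpareto.ppf(1 - 1 / (m * zeta_u), shape, 0, scale)
print(return_level)
```

The key step is the last line: conditional on exceeding the threshold, the level exceeded once per m observations is the GPD quantile at 1 - 1/(m * zeta_u), added back onto the threshold.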
 

Perhaps the program is confident that the fit is poor.

"People will work for you with blood and sweat and tears if they work for what they believe in......" - Simon Sinek
 
What I want is some metric indicating that the fit is poor, and I suspect that metric is *not* the confidence bounds. Instead, the confidence bound seems to be a measure of how noisy the fitted data are. So perhaps a better measure of goodness of fit is the RMS value of the difference between the fitted and measured distributions.
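The RMS idea above could be sketched like this: compare empirical quantiles of the exceedances against the fitted GPD quantiles (essentially a quantile-quantile comparison) and take the RMS of the differences. This uses scipy rather than WAFO, and the data here are synthetic, so the numbers are purely illustrative:

```python
# RMS misfit between empirical and fitted GPD quantiles (a Q-Q style metric).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
exceedances = genpareto.rvs(0.1, loc=0, scale=1.5, size=500, random_state=rng)

shape, _, scale = genpareto.fit(exceedances, floc=0)

empirical = np.sort(exceedances)
n = len(empirical)
probs = (np.arange(1, n + 1) - 0.5) / n       # plotting positions
fitted = genpareto.ppf(probs, shape, 0, scale)

# Root-mean-square distance between the two quantile sets; large values
# flag a poor fit even when the parameter confidence bounds are narrow.
rms_misfit = np.sqrt(np.mean((empirical - fitted) ** 2))
print(rms_misfit)
```

Note that a metric like this is dominated by the largest quantiles, which is arguably what you want when the concern is the extreme tail.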
 
Probably correct.
I read it as a certain confidence that the returned value was good even though the fit might have been bad.

 
You are exactly right. I have been working through 'An Introduction to Statistical Modeling of Extreme Values' by Stuart Coles - highly recommended. On p. 57: 'estimates and their measures of precision' (i.e. the confidence interval I was looking at) 'are based on an assumption that the model is correct.' So you look at the quantile-quantile plot to see whether the fit is good. You also look at the return period plot with its confidence intervals, but you know that they are 'lower bounds that could be much greater if uncertainty due to model correctness were taken into account.' [As a side note, the book showed that I needed to estimate the CIs by the profile likelihood method, not the delta method I had been using previously - that made a big difference to the results.]
 
Not many other fields would be at all interested in how confident they were about a bad answer, but statistics ... somehow yes.

 