
150kV cable failure: Rampion Wind Farm - UK

Status
Not open for further replies.

RRaghunath

Electrical
Aug 19, 2002
1,716
I read that the subject wind farm, commissioned in 2018, has been down since 26th Oct 2019 due to the failure of an under-sea HV cable.
Any details on how it failed? Was it a straight-through joint or something else?
 

A future fault? (It's only the 13th Nov now).

Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
 
Thanks LittleInch for pointing out my mistake. Corrected - 26th Oct '19.
 
Sorry, being a bit smart.

There doesn't seem to be much out there in public, but they did replace a section back in 2017 before start-up, so maybe something failed in that replacement.

 
And something in the original installation? This is starting to add up to something bigger now.
 
ax1e, you mean the plant is still not back online!
Cable terminations and joints are always the weak spots in a cable system, and if they are not done properly the first time, we are in for a tough time.
 
At present, the damaged section of the cable has been removed. Before reinstallation, the project team will carry out rubble removal work in the relevant waters.
However, this is not the first such failure for this wind farm. As early as last December, the eastern of the two sea cables was damaged for unknown reasons, and part of it was removed and reinstalled in March and April this year. This time the problem is the western cable. Is it a coincidence that two sea cables were damaged within such a short time? Were they not buried deep enough?
 
No, I no longer believe in the "one-off" theory. Low-frequency events are a bit like quantum mechanics: simply observing one changes (increases) the probability of observing another similar event in the future. Again, this time, a supposedly 1-in-300,000,000-hour event produced another within far too short a time period. It has to mean that either the initial probability was totally wrong, or the probability actually changed in the meantime. Which is it? The FAA needs to find out soon.


 
Probability doesn't require anything of a single event. It is based on the distribution of events as you approach infinity. Even though a coin is fair, it can turn up heads 5 or 10 times in a row without violating any principles.
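The coin-streak point is easy to check numerically. A minimal simulation sketch (the function name and trial count are my own, for illustration):

```python
import random

def run_of_heads(n_flips: int, run_len: int, trials: int = 100_000) -> float:
    """Estimate the probability that a fair coin shows at least
    `run_len` heads in a row somewhere in `n_flips` flips."""
    hits = 0
    for _ in range(trials):
        streak = best = 0
        for _ in range(n_flips):
            # Count consecutive heads; a tail resets the streak.
            streak = streak + 1 if random.random() < 0.5 else 0
            best = max(best, streak)
        if best >= run_len:
            hits += 1
    return hits / trials

# A run of 5 heads somewhere in 100 flips of a fair coin is
# actually quite likely - typically around 0.81.
print(run_of_heads(100, 5))
```

So a streak that looks "broken" is, over enough flips, close to a sure thing.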

Even so, when I see close occurrences of supposedly rare events, I can't help suspecting that there is some commonality, such as a design/manufacturing/installation/maintenance defect, sabotage, etc.
 
Probability estimates often assume that events are independent. But a failure in one part of a system can increase stress in other parts of the system, or otherwise change the rest of the system. So you can end up with cascading failures, sometimes further apart than you'd suspect. In these cases the original MTBF estimates were wrong, since they were based on incorrect assumptions.
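A toy illustration of that independence trap, with assumed numbers: for a redundant pair, even a small common-cause failure probability (shared power surge, same bad solder batch) swamps the "both fail independently" term.

```python
# Assumed numbers, for illustration only.
p = 0.01   # per-unit annual failure probability
c = 0.001  # annual common-cause probability affecting both units

# Independent model: the redundant pair fails only if both units
# fail on their own.
independent_model = p ** 2                 # 1e-4

# With a common cause: either the shared event hits, or (if it
# doesn't) both units still fail independently.
with_common_cause = c + (1 - c) * p ** 2   # ~1.1e-3, roughly 11x worse

print(independent_model, with_common_cause)
```

The redundancy still helps, but the system failure probability is now dominated by the common-cause term, not the comforting p-squared one.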
 
Exactly. Probabilities are calculated by ignoring potential commonalities, yet when accidents happen, they are rarely due to the occurrence of a single event. A chain of extremely low-probability events that almost certainly share some kind of commonality can integrate into a catastrophe of far higher probability than you would calculate if every event in the chain were treated as independent. Chaining "independent" events leads us to believe the outcome is even less likely, if not practically impossible. To continue with the customary examples: drawing a royal flush, a chain of independent events, is practically impossible within the finite time limit of my poker night, unless I bring some extra cards to the table, which you all might suspect if I turned up with one in your game parlor.

That is the same logic that brought on the 737 MAX investigation. Two crashes occurring within a short timespan looked suspicious. Why? Because that result was (nearly) impossible under independent-event calculations. Yet, even though it was theoretically almost impossible, it actually occurred, and it triggered the "class action" investigation. My question now is: why did the occurrence of just one event not initiate such an investigation? Should not one very unlikely event automatically suggest that the calculations were amiss and that further action might immediately be needed? In fact, the more unlikely the event, the more immediate the need may be. That leads me to believe that even calculating an extremely low probability could itself be cause for concern, and might warrant a deeper study into how valid such a calculation really is, especially if it involved chaining a number of supposedly "independent" events.

The underlying problem is that probabilities are calculated using actual (or simulated) events, and extreme events, by definition, don't happen often enough to build good models of them. So is there some kind of factor we miss when chaining independent events? Should probabilities of chained independent events be modified by 1/ψ * number_of_events, or n^2, or some other factor to account for dependent complexities in models that are not fully understood?
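For reference, the royal flush number the poker example rests on can be computed directly (standard five-card draw, no wild cards assumed):

```python
from math import comb

# Four royal flushes (one per suit) out of all C(52, 5) five-card hands.
p_royal = 4 / comb(52, 5)

print(p_royal)            # ~1.54e-6
print(round(1 / p_royal))  # about 1 in 649,740 hands
```

At a few dozen hands per game night, that is indeed "practically impossible" - which is exactly why seeing one makes you check the deck rather than congratulate the winner.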
 
Do we have a statistic on how many of these MTBF calculations are wrong?

"OK, we have 100 parts with an MTBF of about ten years, but we also expect 3 of these MTBF calculations (we don't know which!) to significantly overestimate MTBF."
 
Single-part MTBFs are probably fine. It's trying to estimate MTBF for more complex systems that gets hairy. You can, of course, use dependent probabilities, but it's more effort: you have to know about the whole system. It's not hard mathematically (it should be taught in every introductory engineering probability and statistics course), but it's conceptually harder, so where it's not required, people skip it.
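For the simple case in the 100-part example above, here is a sketch under the usual constant-failure-rate (exponential) assumption, where a series system's part failure rates simply add. Note this only holds if the parts really are independent, which is the whole point of the discussion:

```python
def series_mtbf(part_mtbfs):
    """System MTBF for a series system of independent parts with
    constant (exponential) failure rates: lambda_sys = sum(lambda_i),
    so MTBF_sys = 1 / sum(1 / MTBF_i)."""
    return 1.0 / sum(1.0 / m for m in part_mtbfs)

# 100 identical parts, each with a 10-year MTBF:
print(series_mtbf([10.0] * 100))   # 0.1 years, roughly 36 days
```

A system of 100 "ten-year" parts fails, on average, in about five weeks. Correlated or common-cause failures make even that number optimistic.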
 
There are some related concepts in the various (cosmological) Anthropic Principles, where some reasonable assumptions can be made even when presented with a sample size of one (us as observers). Basically, one assumes that one is relatively normal (given our existence) and that we are roughly in the middle of the possible range of parameters.

For a related failure example, if somebody claims that a given event is so unlikely that it won't happen for "a million years", and then it happens almost immediately, it is reasonable to conclude that their claim is almost certainly incorrect.

Another example is, if you walk past a haystack and happen to notice a needle, then it's not reasonable to assume that the haystack contains just one needle.

Single events can be meaningful evidence, if their mere existence is claimed to be unlikely, and yet they occur surprisingly early and/or surprisingly often.
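A back-of-the-envelope version of the "million years" example above, assuming the event follows a Poisson process at the claimed rate:

```python
from math import exp

# Claimed mean recurrence interval (assumed figure from the example).
mean_years = 1_000_000

# Probability of seeing at least one occurrence within the first year,
# for an exponential waiting time with that mean: 1 - exp(-t / mean).
p_within_1yr = 1 - exp(-1 / mean_years)

print(p_within_1yr)   # ~1e-6
```

So observing the event "almost immediately" is roughly million-to-one evidence against the claimed rate, before any other information is considered. The sensible conclusion is not "we got unlucky" but "the claim was wrong."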

MTBF story: many years ago, we procured a batch of a custom-designed product. Inevitably, we requested that the vendor provide an estimated MTBF for the product. The answer came back as something like "Calculated MTBF = 156,746.9286544293 hours", the output of an automatic analysis tool (based on the BoM input). We suggested that quoting a span of several decades to the nearest microsecond was a bit overly ambitious (i.e. significant figures). And when all the products had to be returned for minor design corrections, it reinforced our suspicions about the usefulness of such claims.

 
Another classic example is the 100-year storm or cyclone. The conditions that produce the weather event don't cease to exist just because the weather event has finished.
 