Eng-Tips is the largest engineering community on the Internet

Probability of an undetected error in CRC code word 2

Status
Not open for further replies.

gerczek

Mechanical
Jan 22, 2010
4
DE
I am trying to calculate the probability of an undetected error in a data frame that is
sent through an embedded wireless network (IEEE 802.15.4).
The length of the data frame is 1000 bits plus a 16-bit FCS (Frame Check Sequence), generated by the CRC-CCITT
polynomial x^16 + x^12 + x^5 + 1.
I know from a research paper that CRC-CCITT can detect all 1-, 2- and 3-bit errors and all
odd numbers of bit errors. So only 4-, 6-, 8-, ... bit errors might lead to an undetected error.
To calculate the probability of an undetected error for a data frame, I would have to
calculate the probability that a 4-, 6-, 8-, ... bit error occurs at all (given the
bit error rate), multiply it by the probability that these specific bit errors actually
lead to an undetected error, and sum it all up. For example, for 4-bit errors there exist
C(1016, 4) (roughly 1016^4, simplified) different possible combinations, but only a few of these lead to an
undetected error. (Am I right so far?)

The problem now is that I don't know how many 4-, 6-, 8-, ... bit errors lead to undetected
errors, and thus I can't calculate the probability of an undetected error exactly.
So I made an assumption. For the frame there are 2^1000 correct codewords, which share 2^16
FCS values. That means on average 2^1000/2^16 codewords share one FCS, but for a specific frame only one is
correct and the others are incorrect, yet not detectable as such. The total number
of possible incorrect frames is of course 2^1016 - 1. The average probability of an
undetected error is the number of undetectable errors divided by the total number of
errors: (2^984 - 1)/(2^1016 - 1). Finally, I multiplied that by the probability that
error corruption occurs at all, as mentioned above, and had my result.
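The estimate described above can be sketched in a few lines of Python. Note the sketch uses the common simplifying assumption that a fraction 2^-16 of all error patterns passes the check; the bit error rate and that factor are illustrative assumptions, not values from the post:

```python
from math import comb

def p_undetected(n=1016, ber=1e-5, r=16, kmax=16):
    """Estimate P(undetected error) for an n-bit frame:
    probability of an even-weight error pattern (weight >= 4, since
    CRC-CCITT catches all 1-3 bit and all odd-weight errors),
    times an assumed fraction 2**-r of such patterns that slip past
    the CRC. Terms above kmax are negligible at small BER."""
    p_even = sum(comb(n, k) * ber**k * (1 - ber)**(n - k)
                 for k in range(4, kmax + 1, 2))
    return p_even * 2**-r
```

A larger bit error rate makes every term bigger, so the estimate grows monotonically with `ber`, which is a quick sanity check on the implementation.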

I hope I expressed myself coherently. If you have any questions, feel free to ask. What I
would like to know is: Does my assumption make sense? And is the calculated probability higher
than the actual probability, so that I am still on the safe side?
Do you know where I can get the actual number of undetectable errors for each bit-error weight
for my specific data length and CRC polynomial?
Thanks a lot for your time.
 

gerczek said:
And is the calculated probability higher
than the actual probability
Wouldn't that mean you calculated incorrectly? [ponder]

Dan - Owner
 
Yes, my calculation is not 100% correct. The assumption is actually a simplification, because I don't know the exact properties of the codewords that are generated by the CRC.
 
I am sending information (data frames) from one point to another over a wireless channel. In that channel the frames might get corrupted. Normally the receiver can detect these errors with the help of the Frame Check Sequence, but there is a slight chance that the errors cannot be detected. I want to know how big that chance is, so I can say how reliable the data is.
The probability I calculated for the worst case was around 10^-7. If I am off by 50%, that would mean 5×10^-7 or 5×10^-8. I could still live with that, as long as I am not near the threshold probability of 10^-6.
 
There are problems in trying to compute an error rate at which the CRC will fail to detect a bad communication. First, whether or not the error gets through is highly dependent on the message. Second, CRC was designed to catch short bursts of contiguous bit errors. If the number of error bits is greater than this, there is a chance that the CRC will fail.

Here is an example: Consider these two hex strings:
1) 4B04B6F9BF002F002C002E002CD12E
2) 00260010BF002F002C002E002CD12E

This is from an actual application that uses a 16-bit CRC algorithm to calculate a check value. In both cases the check value is 0x2ED1 (byte-swapped). The CRC is the standard CRC used in Modbus; CCITT, I believe.

Mathematically speaking, an X-bit CRC will detect all burst errors of X bits or fewer. When the error burst is longer than X bits, an X-bit CRC will detect it at a rate of 1 - 2^-X. For example, a 16-bit CRC has a 99.99847412109375% chance of catching the error burst.

As you can see from my example above with the two strings, in reality, error bursts longer than X bits DO HAPPEN AND DO GET THROUGH!
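For anyone who wants to reproduce this kind of collision check, a minimal CRC-16/Modbus implementation looks like the sketch below (reflected polynomial 0xA001, initial value 0xFFFF, per the standard catalogue — this is a generic sketch, not the application's actual code). The two hex strings above could be decoded to bytes and fed to it to compare check values:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: polynomial 0x8005 processed bit-reflected
    (0xA001), initial register 0xFFFF, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc
```

The catalogue check value for this algorithm is `crc16_modbus(b"123456789") == 0x4B37`, which is a handy self-test before trusting it on real frames.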
 
A Star for Noway2.

1) One must be careful in calculating probabilities that all events have an equal probability of occuring [unless you account for that].

2) There's probably [certainly] published data on the BER for the CRC you are using.
 
I know that errors can get through. In the literature they are called undetectable errors, which is the whole point of my opening post. I want to calculate the probability of an undetectable error, depending on the attributes of the CRC code and on the bit error rate of the channel.

There are quite a lot of CRC codes out there with different attributes. For example, CRC code A can detect more 4-bit errors than CRC code B, but B can detect all odd numbers of bit errors. Thus I am pretty sure it's not possible to say there is a 1 - 2^-X chance to detect every error burst bigger than X (the size of the FCS), because the CRC codes are not all equal. It's just a simplification, and my original question was how good that simplification is.

I found this paper on the net. It was very helpful in understanding CRC codes. I hope this post makes my first post more understandable.
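The "actual number of undetectable errors for each bit-error weight" is exactly the weight distribution of the CRC code. For a 1016-bit frame it cannot be brute-forced, but for a toy code it can; the sketch below enumerates a hypothetical 8-bit-data / 4-bit-CRC code (generator x^4 + x + 1, chosen only for illustration) and counts undetectable error patterns by weight:

```python
from collections import Counter

def crc_remainder(value: int, nbits: int, poly: int, r: int) -> int:
    """Remainder of value * x^r divided by the generator polynomial
    (poly given with its leading x^r term, e.g. 0b10011 = x^4+x+1)."""
    reg = value << r
    for i in range(nbits + r - 1, r - 1, -1):
        if reg & (1 << i):
            reg ^= poly << (i - r)
    return reg

def weight_distribution(data_bits: int, poly: int, r: int) -> Counter:
    """Tally the nonzero codewords of the CRC code by Hamming weight.
    An error pattern is undetectable exactly when it equals a nonzero
    codeword, so this IS the table of undetectable errors per weight."""
    dist = Counter()
    for msg in range(1, 1 << data_bits):
        cw = (msg << r) | crc_remainder(msg, data_bits, poly, r)
        dist[bin(cw).count("1")] += 1
    return dist
```

For this toy code the distribution has no weight-1 or weight-2 entries (minimum distance 3), which matches the kind of guarantee quoted from the research paper; the real question is how the nonzero counts at weights 4, 6, 8, ... compare to the 2^-X average.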
 
I don't think that there is a "practical" answer to your question. By "practical" I mean one that makes sense and is usable by a real-world engineer, and this thread is heading dangerously towards the academic.

While I think I understand what you are asking, I don't see how to develop a reasonable answer that doesn't depend on seriously advanced concepts of probability theory regarding stochastic processes. There are simply too many variables, not the least of which is the science behind the different CRC algorithms. It is well known that the polynomial choice has a dramatic impact, but I don't have a clue how to pick a particular one and quantitatively predict the results given a theoretical communication-channel SNR or error rate.

Perhaps an investigation into how to analyze random events based upon probability would lead you to a better solution. This type of analysis is commonly used in DSP algorithms where noise is a concern and the nature of the noise is non-deterministic.
 
Assemble a Beowulf cluster from all those old computers you have, and let them beat on a simulation of the problem for a few weeks.
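A simulation along those lines could look like the sketch below. The CRC-CCITT generator 0x11021 is assumed, and the bit error rate is deliberately inflated far above a realistic channel, because at a BER of 10^-5 undetected events are far too rare to observe in any feasible number of trials:

```python
import random

def crc_divides(pattern: int, n: int, poly: int = 0x11021, r: int = 16) -> bool:
    """True if the n-bit error pattern is divisible by the generator
    polynomial, i.e. the corrupted frame would pass the CRC check."""
    for i in range(n - 1, r - 1, -1):
        if pattern & (1 << i):
            pattern ^= poly << (i - r)
    return pattern == 0

def simulate(trials: int, n: int = 1016, ber: float = 0.02, seed: int = 1):
    """Monte Carlo: flip each frame bit independently with probability
    ber, and count (errored, undetected) frames."""
    rng = random.Random(seed)
    errored = undetected = 0
    for _ in range(trials):
        pattern = 0
        for i in range(n):
            if rng.random() < ber:
                pattern |= 1 << i
        if pattern:
            errored += 1
            if crc_divides(pattern, n):
                undetected += 1
    return errored, undetected
```

Even at this exaggerated BER the undetected fraction is only on the order of 2^-16 per errored frame, so seeing a handful of events takes hundreds of thousands of trials, which is where the cluster comes in.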



Mike Halloran
Pembroke Pines, FL, USA
 