WhiskeyHammer
Mechanical
- Sep 26, 2013
- 4
I'm preparing to do some statistical analysis that relies on instrumentation with a known ±% accuracy rating, specifically optical chronographs and pressure gauges. My question is about how the measurement values are distributed across that error range: are they spread randomly (uniformly) across the band, or normally distributed around the true value?
The shape of the distribution seems important to the confidence of the final result and to the testing methodology. A uniform (random) distribution suggests that my confidence will be limited no matter the sample size, so I could use the bare minimum, whereas a normal distribution suggests that my confidence would increase with sample size, so I would benefit from a much larger group.
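For concreteness, here is a minimal sketch of the two error models I'm asking about. The true velocity and the ±0.5% band are made-up numbers, and the normal model treats the rated band as roughly three standard deviations, which is just an assumption:

```python
import random
import statistics

TRUE_VALUE = 1000.0   # hypothetical true muzzle velocity (fps)
ERROR_PCT = 0.005     # hypothetical instrument rating of +/-0.5%

def read_uniform():
    # Error equally likely anywhere inside the +/- band.
    return TRUE_VALUE * (1 + random.uniform(-ERROR_PCT, ERROR_PCT))

def read_normal():
    # Error clustered near zero; band treated as ~3 sigma (assumption).
    return TRUE_VALUE * (1 + random.gauss(0, ERROR_PCT / 3))

if __name__ == "__main__":
    random.seed(1)
    for n in (5, 50, 500):
        u = [read_uniform() for _ in range(n)]
        g = [read_normal() for _ in range(n)]
        print(n, round(statistics.mean(u), 2), round(statistics.mean(g), 2))
```

Running this, the sample means under both models tighten toward the true value as n grows, which is part of what I'm trying to understand: does that averaging benefit actually apply when the error is uniform rather than normal?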