
Randomness, How?


BuzzViper (Electrical)
Oct 17, 2003
Hi to all,

Most of the electronics work I've done so far concentrates on making a gate or such as reliable as possible - the least probability of error. That is, if you are meant to get a '1' as your output, then that's what you get, not oscillations.

But then, thinking of the random functions rand() and srand() in C/C++, a question crossed my mind: how can this be implemented using digital circuitry? Is there a circuit configuration that can generate randomness and can be altered to change the degree of randomness? How is it done in a prebuilt machine such as a PC?

Any websites or other pointers would be helpful.
Thanks in Adv.
 

The random number generator in a PC isn't really random; it's pseudo-random. People have made careers out of random number generators.

The British Premium Bond machine allegedly uses the noise generated by a Zener diode as the source for its random number generator. This is checked for randomness before and after a draw is made.

You can also get metastable states in logic, where a gate doesn't know whether its output is a 1 or a 0, which is a bit awkward.

rgds
Zeit.
 
If you are not too picky, you can make an oscillator from three inverters and sample it with a D-FF clocked at a much lower frequency.



<nbucska@pcperipherals.com>
 
My reading suggests that there are problems even trying to define 'randomness', let alone generate a random number. If there are still people arguing about what it means to be random, then actually generating random numbers is a bit of a lost cause!

I believe older PCs generated a random number by just sampling the least significant byte or word of the real-time clock, relying on the time between samples being random. Not too good if the sampling is inside a regular, repeating program loop.
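
(A minimal C sketch of that clock-sampling scheme, with the standard clock() counter standing in for a PC real-time clock; run from a tight loop, it demonstrates exactly the pitfall described above.)

    /* Take the low byte of a free-running timer as a "random" value.
     * Sampled from a regular, repeating loop, consecutive values are
     * nearly identical (or step by a constant), i.e. not random. */
    #include <stdio.h>
    #include <time.h>

    static unsigned clock_sample(void)
    {
        return (unsigned)clock() & 0xFFu;   /* least significant byte */
    }

    int main(void)
    {
        for (int i = 0; i < 8; ++i)         /* regular program loop */
            printf("%u ", clock_sample());
        putchar('\n');
        return 0;
    }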


Bung
Life is non-linear...
 
There are interesting issues with randomness.

The simplest is having a sequence of numbers whose values are unknowable enough that they serve a useful function. Pseudo-random generators produce such numbers with known, predictable repeat intervals. But if you were a gambler and knew the algorithm, every number would be predictable. On the other hand, radioactive decay is truly unpredictable.

A related issue is uniqueness. Things like Microsoft GUIDs are 128-bit values that are not terribly random, but are guaranteed unique: no matter how many you generate, no two generated anywhere in the universe, by any number of people, will ever collide. (In theory.)

Writing a program or creating an algorithm requires defining the specification for the 'random' output, then creating something that exceeds your requirements.

rand() is pretty useless by itself except in the simplest of applications, and in some cases has a repeat interval of 16 bits or less. If the seed is not changed, then you get exactly the same sequence every time it is used.
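
(A minimal sketch of that seeding behaviour, using only standard C rand() and srand(): the same seed reproduces the same sequence every time.)

    /* Demonstrates that rand() is a deterministic function of its seed. */
    #include <stdio.h>
    #include <stdlib.h>

    static void print_five(unsigned seed)
    {
        srand(seed);
        for (int i = 0; i < 5; ++i)
            printf("%d ", rand());
        putchar('\n');
    }

    int main(void)
    {
        print_five(42);   /* these two lines print ...          */
        print_five(42);   /* ... exactly the same sequence      */
        print_five(43);   /* different seed, different sequence */
        return 0;
    }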

These are simply practical problems. As Bung mentioned above, defining the problem usually allows solving it, because part of any real problem is a definition of how good the solution has to be.
 
The RAND Corporation ran a study a long time ago to create a table of random numbers. They ran it through multiple levels of testing, not just looking at each successive digit, but also looking at two-digit, three-digit, and skipped-digit randomness. There are something like 1 million digits that are supposed to be as close to truly random as possible.

As for most random number generators used in computers or programs, they're usually implemented with a cyclic polynomial generator. As mentioned above, the polynomial sequence will essentially repeat after a while.
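
(A minimal sketch of one such cyclic-polynomial scheme: a 16-bit Galois LFSR with the well-known maximal-length taps x^16 + x^14 + x^13 + x^11 + 1. It illustrates the repeat behaviour; it is not necessarily what any particular rand() uses.)

    /* 16-bit Galois LFSR; feedback mask 0xB400 encodes the taps above.
     * Being maximal-length, the state visits all 65535 nonzero values
     * before repeating -- a concrete example of a finite period. */
    #include <stdio.h>
    #include <stdint.h>

    static uint16_t lfsr_next(uint16_t state)
    {
        unsigned lsb = state & 1u;   /* bit shifted out this step */
        state >>= 1;
        if (lsb)
            state ^= 0xB400u;        /* apply the feedback taps */
        return state;
    }

    int main(void)
    {
        uint16_t start = 0xACE1u;    /* any nonzero seed works */
        uint16_t s = start;
        unsigned long period = 0;

        do {                         /* count steps until the state recurs */
            s = lfsr_next(s);
            ++period;
        } while (s != start);

        printf("period = %lu\n", period);   /* prints 65535 */
        return 0;
    }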

TTFN
 
I believe that some intelligence agencies use the electromagnetic noise from outer space as a random number generator for encryption purposes.

On the other hand, the guy who designed my CD player apparently knew nothing about randomness, and designed a player that will play the same tracks over and over when you hit "random play".
Most annoying.

 
Philosophical question: if you know that your table of 1 million digits is that "random", how predictable has it become? After all, if you have specifically excluded any non-randomness, any form of pattern at all, then you must have reduced the set of possible sequences. A sequence with a pattern to it may be truly non-random in one application, and completely random in another, could it not?

Is the chaotic behaviour of non-linear processes under some conditions random?

I agree with DspDad - the best way to choose a "random" number is to do it based on knowing why you want it, and so what constitutes randomness for your application.


Bung
Life is non-linear...
 
BuzzViper, if you really want randomness, then stop doing all the things you do to avoid randomness and start doing the exact opposite (Ha-Ha).

I once read an article in some trade magazine about a random number generating algorithm written in C (sounds like an oxymoron). It was basically some crazy mathematical algorithm calculating otherwise useless floating-point values and doing something else with them (I can't quite remember). I believe at the end of the algorithm the numbers are scaled to lie between 0.0 and 1.0.
 
What about using a high-resolution A/D converter, 12 bits or more, to measure some external environmental variable like temperature, taking the least significant decimal digit of each conversion with adequate time between samples, and combining as many of these as you need to form a random value?
Comments?
rodar
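
(A minimal sketch of rodar's LSB-combining idea. read_adc() is a hypothetical hook for a real 12-bit conversion routine; it is stubbed with rand() here purely so the sketch runs.)

    /* Combine the least significant decimal digit of several
     * well-separated A/D conversions into one value. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned read_adc(void)
    {
        /* Stand-in for a 12-bit conversion of a noisy environmental
           signal; replace with a real driver call. */
        return (unsigned)(rand() & 0x0FFF);
    }

    int main(void)
    {
        unsigned value = 0;

        for (int i = 0; i < 8; ++i) {
            value = value * 10u + read_adc() % 10u;  /* append one digit */
            /* on real hardware: delay here so samples decorrelate */
        }
        printf("combined value: %u\n", value);   /* up to 8 decimal digits */
        return 0;
    }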
 
You're assuming that the noise floor of the A/D is "random." True randomness is difficult to achieve. The A/D noise may be dominated by something like AC line noise, which may result in a highly non-random acquisition sequence.

TTFN
 
One of the truly random things in nature is radioactive decay... If you set up a slowly decaying isotope and run, say, a 16-bit counter that you read whenever a decay is detected, you'll have a random number....
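
(A minimal sketch of that latch-a-counter scheme; with no isotope attached, a keypress stands in for a detector pulse, and the nanosecond field of the wall clock stands in for the fast free-running counter.)

    /* Latch the low 16 bits of a free-running counter whenever an
     * unpredictable external event arrives. With a real detector the
     * event timing, not the key, supplies the randomness. */
    #include <stdio.h>
    #include <time.h>

    static unsigned read_counter(void)
    {
        struct timespec ts;
        timespec_get(&ts, TIME_UTC);             /* C11 wall clock */
        return (unsigned)ts.tv_nsec & 0xFFFFu;   /* low 16 bits */
    }

    int main(void)
    {
        puts("Press Enter to 'detect a decay' (Ctrl-D to quit):");
        while (getchar() != EOF)
            printf("latched counter: %u\n", read_counter());
        return 0;
    }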

Fred Saberhagen's entire Berserker series is even based on this. Great sci-fi.
 
Good overview article:

As stated before, computers and logic circuits cannot generate infinite random number sequences, due to their inherent nature: a 16-bit ALU (arithmetic logic unit) has a finite set of possible integers (2^16 combinations) it can generate. The patterns of 1s and 0s generated by a logic circuit will always be finite and will eventually repeat. If, however, the set of possible combinations is large enough, one can safely assume pseudo-randomness. This is why, as DspDad pointed out, it is important to define the nature (no pun intended) of the randomness needed.

Electronic circuits dedicated to random number generation are better (but not perfect) than ALUs, in my opinion. Off-the-shelf boards can be purchased. A circuit idea using digital techniques can be found at:


Modern CPUs, e.g., the Intel Celeron, have a built-in random number generator based on thermal noise and quantum mechanics (intended for security purposes)!

There are numerous computer algorithms to generate pseudorandom numbers. The algorithm implemented by QBasic's RND command can be used to generate complex fractal (chaotic) patterns. An interesting irony!

The following book is an excellent source of info:
Knuth, D.E.: The Art of Computer Programming, volume 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, 3rd edition, 1997.

The following site generates random numbers from a radioactive decay process for all to download and use. However, once again, "random" must be defined to be useful.


Finally, randomness in nature is an illusion. Current research in chaos theory is set to dispel the myth of randomness in quantum mechanical events such as radioactive decay. One day soon we will accept Einstein's view that "God does not play dice with the Universe".
 
I would like to clarify my idea somewhat.

To me the A/D noise floor does not matter, because I would like to measure a natural phenomenon like wind speed or temperature variation. Suppose the naturally occurring variable typically creates +/- 100 counts of variation between samples of this natural process. What factors of the measuring circuit could be discernible when the external process is producing a sample-to-sample variation of at least 100 counts or more?

Also, what would happen if, once a suitable number of random values had been accumulated, they were subjected to a swapping algorithm driven by the same random number source, so that these fixed values are shuffled repeatedly by the random process? The original values are all still there; only their order of placement in the array has been shuffled, excessively. (A sketch of such a shuffle follows below.)

Finally, isn't the simple test of autocorrelation applied to a sequence enough to determine randomness? With a flat autocorrelation there is no information with which to make a guess of the next value that is any better than chance.

Comments??
 
amptramp: Great link there to Fourmilab! It sucked me in totally. Very well done. Thanks for the link.
 
60hzhmmm:
No problem! I have enjoyed this thread very much. I hope we can keep the thread sewing!

I am interested in reading responses to a comment in rodar's last post about autocorrelation. Everything I have read indicates that although a "flat" autocorrelation does imply independence, it is also important that the random number sequence be uniformly distributed within its range (typically normalized from 0 to 1). In other words, statistically speaking, more than one test is required to determine some level (once again, definition is required) of pseudorandomness.

For example, would it be possible, if the sequence is "crowded" (not uniformly distributed), for a crypto hack to use this "failure" to crack the sequence, even though the sequence is independent?

To digress somewhat, one of the most common pseudorandom number generator algorithms is the following recursive formula:

x_{n+1} = (a*x_n + c) mod m

where:
a, c = "optimization" constants
m = modulus (affects the period of the sequence)
x_n = current value (x_0 is the "seed")

It is referred to as a linear congruential generator. For true RNG geeks, there are many hours of "enjoyment" to be spent "optimizing" the constants so the sequence is "random" (passes the tests).
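
(A minimal C implementation of the linear congruential generator above, using the well-known Numerical Recipes constants a = 1664525, c = 1013904223, m = 2^32 as one workable choice; the final division normalizes the output to [0, 1) as mentioned earlier in the thread.)

    /* x_{n+1} = (a*x_n + c) mod m, with m = 2^32 obtained for free
     * from the natural wraparound of 32-bit unsigned arithmetic. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t lcg_state = 1u;   /* the seed, x_0 */

    static uint32_t lcg_next(void)
    {
        lcg_state = 1664525u * lcg_state + 1013904223u;
        return lcg_state;
    }

    static double lcg_uniform(void)
    {
        return lcg_next() / 4294967296.0;   /* scale by 2^32 -> [0, 1) */
    }

    int main(void)
    {
        for (int i = 0; i < 5; ++i)
            printf("%.6f\n", lcg_uniform());
        return 0;
    }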

 
Amptramp:
The CPU word length has no connection to the length of the integer it can generate: an 8-bit CPU can easily generate a 256-bit integer -- 8 bits at a time. Working in the decimal number system, we can still use 2-digit, 3-digit, or even larger numbers!


<nbucska@pcperipherals.com>
 
Thanks everyone for the links and stuff. This has turned out more interesting than I thought.
 
nbucska:

You took my statement out of context. I was discussing generating random numbers - not integers.

Also, you implied it takes "eight bits at a time" to generate the 256-bit integer. Is this not a connection? And is this not a connection based on the word length of the CPU? I am afraid your position was not well defended.

But just for fun...

If I understand you correctly, you are implying either the use of concatenation to generate "large" binary integers (numbers) or the use of codes, i.e., B.C.D., to generate representations of "large" integers (numbers) in other base systems, specifically decimal.

To preface:

It is understood we are discussing the counting numbers (integers) only - not floating-point numbers, for example.

It is understood all "numbers" used by CPUs are representations of numbers - even binary numbers.

It is also understood the inherent number system of a CPU is binary (radix 2). All numbers represented by a CPU are either "true" binary numbers (1011), converted binary numbers (to octal, decimal, or hex, etc.), or coded numbers (B.C.D., etc.). In our discussion, only true binary numbers or coded numbers are relevant. Converted binary numbers are just that - binary numbers.

To begin:

Although it is true that concatenation of bits allows large binary integers to be enumerated, and coding can represent large decimal (if you choose) integers as well, stating "CPU word length has no connection to the length of the integer it can generate" is like saying a "2 by 4" has nothing to do with building a house!

(Would you make the same statement if our CPU had a word length of one bit?)

Again, the maximum number of combinations a sequence of n binary bits can represent is 2^n.
The largest positive signed integer that can be represented in those n bits is 2^(n-1) - 1.

The implication of the previous statement for concatenation:

Even if you concatenate, there will still be an "n"-bit sequence based upon (as you implied) a multiple of the CPU word length.

Further, even assuming the use of a recursive algorithm and concatenation, there is still a finite number (integer) that can be generated (represented), because the algorithm (its recursive steps) must halt to be useful.

Next, coding requires an algorithm to generate the number. The algorithm must be executed within the constraints of the CPU in use - and one of the fundamental constraints of most CPUs is the word length. Ever heard of the von Neumann bottleneck?

For coding, every "symbol", i.e., digit, is represented by a "group" of bits (where the minimal form of grouping is one bit, or in other words, the simplest code is plain old binary).

In B.C.D., 4 bits are used for each digit (symbol); therefore, two decimal digits can be represented by an eight-bit CPU, for a maximum (without using concatenation!) of decimal 99. However, using binary, the same eight bits can represent a maximum of decimal 255 (obviously a larger number).

The implication is that coding is less efficient than concatenation for generating large integers; therefore, we dismiss coding as a viable method.

We are left with concatenation as our method of choice. And it should be obvious that concatenating the maximum grouping of bits available is most efficient - and the "maximum grouping available" is the CPU word length!

Now, all this being said, we return to the original subject - random number sequences. Looking again at the linear congruential generator algorithm:

x_{n+1} = (a*x_n + c) mod m

And remembering that "m" affects the period of the "random" sequence, it is apparent that any limitation on integer size limits the length of the random number sequence before it repeats. This fact is why I made my statement:

computers and logic circuits cannot generate infinite random number sequences due to their inherent nature, i.e., a 16-bit ALU (arithmetic logic unit) has a finite set of possible integers (2^16 combinations) it can generate.

There are numerous algorithms to generate random number sequences. Obviously, I am not familiar with all of them. All the ones I am familiar with are constrained by the maximum integer size that can be plugged into the algorithm.
 
I think you are confusing word length with the algorithm.

So long as the algorithm is properly executed, an integer of any length can be accommodated, so there is absolutely no problem with even an 8-bit processor running 64-bit arithmetic; it just takes a LOOONG time.

The same principle is how the 80x86 family was able to do 32-bit floating point without 80x87 co-processors.

Carry-add arithmetic is how a human being can multiply 20-digit numbers with only 10 fingers.
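
(A minimal sketch of that carry-add principle: 64-bit addition performed 8 bits at a time, the way an 8-bit processor chains its add-with-carry instruction across memory bytes.)

    /* Multi-precision addition with 8-bit limbs, little-endian:
     * a[0] is the least significant byte. */
    #include <stdio.h>
    #include <stdint.h>

    #define LIMBS 8   /* 8 limbs x 8 bits = 64-bit numbers */

    static void add64(const uint8_t a[LIMBS], const uint8_t b[LIMBS],
                      uint8_t sum[LIMBS])
    {
        unsigned carry = 0;
        for (int i = 0; i < LIMBS; ++i) {
            unsigned s = (unsigned)a[i] + b[i] + carry;  /* at most 0x1FF */
            sum[i] = (uint8_t)s;    /* keep the low 8 bits */
            carry  = s >> 8;        /* propagate the ninth bit */
        }
    }

    int main(void)
    {
        /* 0x00000000FFFFFFFF + 1 = 0x0000000100000000 */
        uint8_t a[LIMBS] = {0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0};
        uint8_t b[LIMBS] = {1, 0, 0, 0, 0, 0, 0, 0};
        uint8_t s[LIMBS];

        add64(a, b, s);
        for (int i = LIMBS - 1; i >= 0; --i)   /* most significant first */
            printf("%02X", s[i]);
        putchar('\n');                         /* prints 0000000100000000 */
        return 0;
    }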

TTFN
 