When a computer implements pseudo-random numbers, it usually uses the internal clock (which has millisecond resolution) as a seed, but a lot of people don't know exactly what it's seeding. In hardware, when I want to implement a random-number generator, I use a maximal-length linear feedback shift register (LFSR), which manages to emulate the behavior of a random number sequence while in truth being a perfectly predictable one. Imagine that I counted from 1 to 4 like this: 4, 1, 3, 2. Seems random enough. 1 to 5 like this: 5, 1, 4, 2, 3. Also seems random enough. It won't seem random at all if I take it to 100: 100, 1, 99, 2, 98, 3, ... but it goes to show that, given an appropriate set, known fixed behavior can still appear to produce random results. The idea behind an LFSR is that since all numbers can be represented in binary, there merely has to be a random-looking pattern by which the 1's and 0's are distributed.
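To make the "known fixed behavior that still looks random" idea concrete, here is a minimal C sketch of that counting pattern (the function name interleaved_count is mine, not anything standard); it prints 1..n in the order MAX, MIN, MAX-1, MIN+1, ...:

#include <stdio.h>

/* Print 1..n in the order MAX, MIN, MAX-1, MIN+1, ...
 * The output is completely deterministic, but for small n
 * it can pass a casual glance as "random". */
static void interleaved_count(int n)
{
    int lo = 1, hi = n;
    while (lo <= hi) {
        printf("%d ", hi--);
        if (lo <= hi)
            printf("%d ", lo++);
    }
    printf("\n");
}

int main(void)
{
    interleaved_count(4);    /* 4 1 3 2 */
    interleaved_count(5);    /* 5 1 4 2 3 */
    interleaved_count(100);  /* 100 1 99 2 98 3 ... the pattern is obvious */
    return 0;
}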
Instead of my previous sequence of MAX, MIN, MAX-1, MIN+1, MAX-2, MIN+2, ..., imagine that it's applied to the location of the single 1 in a binary number otherwise loaded with 0's:
10000
00001
01000
00010
00100
Okay, translated to decimal, that sequence becomes 16, 1, 8, 2, 4. You can still see the same pattern; really it just devolved into 2^MAX, 2^MIN, 2^(MAX-1), ... (powers of two, with the exponent following the earlier high-low pattern). But now imagine what would happen if you had a varied combination of 1's and 0's, and the current set of 1's and 0's depended on the previous set of 1's and 0's (hence the notion of linear feedback). In other words, there is no obvious way to say what the 50th number in the sequence is unless you know what the 49th number is. That, in essence, is what an LFSR is all about. In a normal sequence you can easily determine what the Nth element would be, but by constructing a sequence where the Nth element is only practically known through the (N-1)th element, you have a sequence that appears to be random.
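As a software illustration of the same idea (a sketch only; in hardware this would be a chain of flip-flops with XOR feedback taps), here is a 16-bit Galois-style maximal-length LFSR in C. The tap mask 0xB400 corresponds to a primitive polynomial, so the register walks through all 65535 non-zero states before repeating, and every state is computed purely from the one before it:

#include <stdio.h>
#include <stdint.h>

/* One step of a 16-bit maximal-length LFSR (Galois form).
 * The next state depends only on the current state: shift right,
 * and if the bit that fell off was a 1, XOR in the feedback taps. */
static uint16_t lfsr_step(uint16_t state)
{
    uint16_t lsb = state & 1u;   /* bit about to be shifted out */
    state >>= 1;
    if (lsb)
        state ^= 0xB400u;        /* taps of a primitive polynomial */
    return state;
}

int main(void)
{
    uint16_t state = 0xACE1u;    /* any non-zero seed works */
    uint32_t period = 0;

    /* A few outputs: each one looks arbitrary on its own, but the
     * whole sequence is fixed the moment the seed is chosen. */
    for (int i = 0; i < 8; i++) {
        state = lfsr_step(state);
        printf("%5u\n", state);
    }

    /* Count the cycle length: a maximal-length 16-bit LFSR visits
     * every non-zero state exactly once, so this prints 65535. */
    state = 0xACE1u;
    do {
        state = lfsr_step(state);
        period++;
    } while (state != 0xACE1u);
    printf("period = %u\n", period);

    return 0;
}

The seed plays the same role as the clock value mentioned above: it picks where in the fixed cycle you start, not what the cycle is.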