Date:      Tue, 27 Feb 2001 06:39:02 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        kris@FreeBSD.ORG (Kris Kennaway)
Cc:        tlambert@primenet.com (Terry Lambert), n@nectar.com (Jacques A. Vidrine), kris@FreeBSD.ORG (Kris Kennaway), arch@FreeBSD.ORG
Subject:   Re: cvs commit: ports/astro/xglobe/files patch-random
Message-ID:  <200102270639.XAA14165@usr05.primenet.com>
In-Reply-To: <20010226204224.A91585@citusc17.usc.edu> from "Kris Kennaway" at Feb 26, 2001 08:42:24 PM

> As as scientist, you naturally care about your PRNG giving good
> statistical randomness, so you don't get skewed results from your
> simulation.
> 
> rand() does not appear to give statistically random output - in fact,
> visual inspection shows it to be patterned.  As a good scientist, you
> did TEST the properties of your PRNG before using it as the foundation
> for your simulations, didn't you?
> 
> By fixing the algorithm, we are preventing future generations of
> scientists from making the same mistake, and thereby ensuring that
> FreeBSD used as a research platform gives good science, not bad
> science.  Our children and children's children will thank us!  Onward,
> mighty FreeBSD, platform for the future!!

Please donate the code to Sun, SCO, Linux, and so on, then,
and let us know when we should turn it on by default.

When one uses a pseudo-random number generator to crank out
a set of test data points, what matters is less the randomness
of the data points than that the data be replicable and
"sufficiently random" for the number of samples involved.
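
To make "replicable" concrete, here is a minimal sketch in C
(the seed value is arbitrary, chosen only for illustration):
with a fixed seed, every run of the program replays exactly
the same event stream.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	int i;

	srand(12345);			/* fixed, arbitrary seed */
	for (i = 0; i < 5; i++)		/* every run prints the same sequence */
		printf("%d\n", rand());
	return (0);
}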

This is because the test data is not an end in itself, but
is instead a data set against which algorithms may be applied
in order to test how well theory predicts reality.

The use I'm most familiar with for this is in generating the
output of relativistically invariant P-N and N-N collisions
which result in pair production (this is also Berkeley code,
BTW, it just happens to be FORTRAN).

The value in doing this is not in the pairs produced, but is
instead in the pairs discarded by constraints on which pair
productions are "possible" or "impossible" according to the
theory being tested.

This is generally applied via matrix mechanics, involving the
solution of multiple Feynman-Dyson diagrams.

Thus it is more important to test one theory versus another on
the same set of pair productions than it is for the data to
be "perfectly random".

Without using the same event sets each time, one cannot do
this reasonably.

This is the same argument that was given for pseudo-randomly
generated network topology and traffic, where the important
issue was to test routing algorithms against each other on the
same network, using the same set of generated traffic.
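
As a sketch of that, assuming two hypothetical stand-in cost
functions for the routing algorithms (the real ones would be
far more involved): because the seed is fixed, both algorithms
are exercised against the identical pseudo-random traffic, so
any difference in the results is attributable to the
algorithms rather than to the data.

#include <stdio.h>
#include <stdlib.h>

#define	NEVENTS	100000
#define	SEED	42		/* arbitrary, but the same for every run */

/* Hypothetical stand-ins for the routing algorithms under test. */
static long algorithm_a(int ev) { return (ev % 7); }
static long algorithm_b(int ev) { return (ev % 11); }

int
main(void)
{
	long cost_a = 0, cost_b = 0;
	int i, ev;

	srand(SEED);
	for (i = 0; i < NEVENTS; i++) {
		ev = rand();		/* identical event stream for both */
		cost_a += algorithm_a(ev);
		cost_b += algorithm_b(ev);
	}
	printf("A: %ld  B: %ld\n", cost_a, cost_b);
	return (0);
}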

The rand() generator is perfectly reasonable for this use; the
randomness is irrelevant when a single simulation run consumes
a number of events that exceeds the resolution of the
generator (the number of bits it can return) by four orders
of magnitude.

You might as well complain about the randomness being damaged
by the limit on the number of bits the generator is capable of
returning.  In which case, your replacement is just as damaged
as the current algorithm.
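
To put rough numbers on the "bits" point: both generators hand
back at most 31 bits per call, so neither one escapes that
limit.  (RAND_MAX is implementation-defined, so the first line
prints whatever the local libc provides; the second is the
documented range of random().)

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	printf("rand()   range: 0 .. %d (RAND_MAX)\n", RAND_MAX);
	printf("random() range: 0 .. %ld (2^31 - 1)\n", 2147483647L);
	return (0);
}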

The "pseudo" is more important than the "random", for all the
uses which I (and others) have pointed to as examples.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
