If you ever, ever, state a statistic, make sure you understand it first. Now, I’ve just stated the precepts after the conclusion, so clearly I don’t know how to do it right.

Why do I even bring this up?

I’m peer-reviewing publications for a primary radiation conference. This is a Very Big Deal for several hundred people. Sure, you’ve never heard of it… except you, Miss Scott… but in the world at large, a publication is a big deal. The publication itself is the end point of a lot of work, sometimes years of work. So, you’d think somebody would take some time to get it right.

Signal to Noise Ratio – www.radio-electronics.com

One paper said that the signal to noise ratio was 40 to 1, or 16 decibels. This, they claimed, gave them a probability of greater than 99%. (This is a big selling point of their technology.)

40 to 1 is identical to 40/41, which is pretty easy to calculate: about 97.6%, not 99%. I mean, most people know that 99 to 1 is 99%. Sure, we shorthand that to 100 to 1, 20 decibels, or 0.9901. Easy peasy lemon squeezy. So how is someone with an advanced degree missing an obvious point like that?
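The arithmetic being checked here fits in a few lines. A minimal sketch (the function name is mine, not from the paper under review):

```python
import math

def odds_to_stats(signal, noise=1.0):
    """Convert a signal-to-noise odds ratio (e.g. 40:1) to decibels
    and to the probability signal/(signal + noise)."""
    ratio = signal / noise
    db = 10 * math.log10(ratio)       # power ratio in decibels
    prob = signal / (signal + noise)  # 40:1 odds -> 40/41
    return db, prob

db, p = odds_to_stats(40)
print(f"{db:.1f} dB, probability {p:.4f}")  # 16.0 dB, probability 0.9756
```

Run it once and the 40:1 claim collapses on its own: 16 dB is right, but the probability is 97.6%, short of the advertised 99%.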

They don’t do their own math. They have some kid running around telling them he understands math. Don’t let it happen to you. Read a book on Probability theory.

E. T. Jaynes “Probability Theory: The Logic of Science.”

Hugh Young “Statistical Treatment of Experimental Data.”

or something similar.

Statistics is actually easy, in practice. Theory is a bit of a bear, as the equations have a lot of big scary summations and a lot of little letters. Chi-squared tests and all that look rough the first couple of times, but again, if you hold your own data in your hands and do your own math, you get over the “big scary math” pretty quick. It really comes down to a methodology. You work through it once or twice, then program it into Excel or MATLAB. Then give it to your students to screw around with.
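The “program it once” step really is that small. A bare-bones chi-squared goodness-of-fit statistic, with made-up illustrative data, looks like this (a sketch, not anyone’s production code):

```python
def chi_squared(observed, expected):
    """Sum of (O - E)^2 / E over all bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Example: 60 die rolls, expecting 10 of each face.
observed = [8, 12, 9, 11, 10, 10]
expected = [10] * 6
stat = chi_squared(observed, expected)
print(f"chi-squared = {stat:.2f} with {len(observed) - 1} degrees of freedom")
# chi-squared = 1.00 with 5 degrees of freedom
```

Once you have worked that sum by hand a couple of times, the “big scary summations” in the textbooks stop being scary.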

People have been lying with math so long that they start to believe their own B.S. Don’t do it to yourself. Check your starting precepts. Where did this data come from? How accurate is it? Do I make any assumptions that fail the laugh test? (Sit a knowledgeable person in front of you and tell them what you assumed. When they chuckle… find out why. “Really? You are only counting flying turkeys? Your sample is going to be somewhat at variance from the standard farmyard.”)

Then check your math. Then get a similar set of data and do the same math. Compare chi-squared numbers. Take pieces of the data and compare them to other pieces. Did you have a run of “good luck”? Don’t believe in luck: if the same answer comes up several times in a row, there was an experimental error. (Or some kid realized that he was writing down the wrong number and changed it.)
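One simple version of that split-and-compare check (illustrative numbers of my own, using means rather than a full chi-squared comparison for brevity):

```python
import random

# Fake a data set; in practice this is your own experimental data.
random.seed(1)
data = [random.gauss(5.0, 1.0) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

# Split the data in half and run the same calculation on each piece.
first, second = data[:100], data[100:]
print(f"first half mean = {mean(first):.2f}, second half mean = {mean(second):.2f}")
# If the halves disagree wildly, suspect drift, a bad run, or that kid
# who was "correcting" the numbers as he wrote them down.
```

A real comparison would use a proper two-sample test, but even this crude check catches a surprising amount of “good luck.”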

Finally, make a pretty picture of your data. Something with a sensible beginning and end. Look at the data and make sure it makes sense. If it doesn’t, go back to the beginning. Data does, sometimes, show us something unusual. Usually, data shows us what we already know, better. If your data is inconsistent with what you expected, find out why. Also, if your data is exactly what you expected, check your experimental apparatus again. Garbage in, garbage out.

Seriously, the real takeaway here is: don’t take math for granted. People who play with math can make it do anything, adjusting the results to fit the model until they get a good match. Look at the games they have played with the Higgs boson. Maybe they have found a mass in the right energy range, but any comparison to the Higgs is based on the most tweaked equation ever. I don’t expect that there is any real correspondence between those numbers and anything like reality.

Traces of two high-energy photons measured in the Compact Muon Solenoid (CMS) experiment (AFP/CERN/File)

Joe, hi,

What you are describing are different consequences of the uncertainty relationships, where the uncertainty relationships can be seen as different aspects of a very general principle in quantum physics. One of the (many) interesting things about this very general principle is that it pops up in quantum mechanics in many ways and in many forms. These many ways and forms (equations) offer different insights into what the principle means and implies. One such form is the noncommuting-operators form, while another is the wave-packet derivation of the uncertainty relationships. Let’s stick with these last forms for a while, as they are both very simple and full of insight at the same time:

Δx·Δp ≥ h/2π

ΔE·Δt ≥ h/2π

Both expressions are scalar expressions, in the sense that the result is a scalar value. A scalar is a physical magnitude that requires only a single value to completely express its measure. In the first expression, we have an operation that involves two vectors (position and linear momentum), while the result is a scalar. A vector is a physical magnitude that requires three values to completely express its measure: modulus (size), sense, and direction. In the second expression, we have an operation that involves two scalars (energy and time).

Regarding the first expression, it tells us a lot of things. We know that we can determine the position of a particle with as much precision as we want. It also tells us that we can determine the linear momentum of a particle with as much precision as we want.
But it also tells us that we cannot determine, at the same time, both the position and the linear momentum of a particle with as much precision as we want. The key phrases in this concept are “at the same time” and “as much precision as we want.” The more precision we use to determine one of the magnitudes, the less precision we can get while determining the other magnitude at the same time.

But this is not the only consequence we may expect from this relationship. To illustrate other interesting consequences, I have an anecdote of my own from my college days. A remarkable experiment that does not seem to be related to this principle is the one that explains the rationale behind the Kelvin temperature scale, or absolute temperature scale. One rather intriguing aspect of these experiments is that, no matter how hard they try, experimental scientists can’t get to absolute zero, even though they have been getting closer and closer over time. The reason for this rather strange behaviour is the uncertainty principle.
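The trade-off is easy to put numbers on. A small numeric illustration (my numbers, not the comment’s), using the relation Δx·Δp ≥ h/2π as written above: confine an electron to an atom-sized region and see the minimum momentum spread that forces:

```python
import math

h = 6.62607015e-34      # Planck constant, J·s
m_e = 9.1093837015e-31  # electron mass, kg

delta_x = 1e-10                        # position uncertainty, ~1 ångström
delta_p = h / (2 * math.pi * delta_x)  # minimum momentum uncertainty
v = delta_p / m_e                      # corresponding velocity spread
print(f"Δp ≥ {delta_p:.2e} kg·m/s, velocity spread ≈ {v:.2e} m/s")
```

Pin an electron down to the size of an atom and its velocity is uncertain by about a million metres per second. That is why the position/momentum trade-off is invisible for baseballs but unavoidable for electrons.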
As we freeze matter (atoms), the atoms and their particles lose energy up to a point where they get really close to stopping any kind of movement (or so we hope), but that cannot happen if the uncertainty relationship has to hold true: if atoms and their particles were to stop moving completely, both their position and their linear momentum could be determined at the same time with as much precision as we want. (If atoms were to stop all motion completely, we would have Δx = 0 and Δp = 0 at the same time, so Δx·Δp = 0, which is not possible if Δx·Δp has to be larger than or equal to h/2π.)

So, now we know that matter can’t stop moving, and in fact another consequence of this is that no single point in space-time can have zero energy (by the same line of thinking: any particle that happened to be in that particular speck of space-time would also have no energy, that is, no motion, and we are back to the same inconsistency!). Which means that there cannot be such a thing as empty space-time, or any given speck of space-time that has zero energy.

So, if any given speck of space-time is not allowed to have zero energy, what does the universe do at any given speck of space-time that gets too close to zero energy? It borrows some energy from the surroundings of that speck, uses the borrowed energy to create a pair of particles (a particle and its antiparticle) for just a very small fraction of time, plays with the particles a bit, and then returns the energy to the surroundings very quickly. That funny game is called quantum field fluctuations. But how is this game played?
On a first approximation to this game, the rules are very simple. Let’s use the other uncertainty relationship:

ΔE·Δt ≥ h/2π

This equation sets the stage for the rules of engagement of this game. The laws of physics (the universe) can create any pair of particles (particle and antiparticle) of any given mass (energy, following Einstein’s equation ΔE = Δmc²), as long as the interval of time it takes to play with them and return the energy to the surroundings is consistent with the expression ΔE·Δt ≥ h/2π.

So, now we know how quantum field fluctuations work, what the zero-point-energy constraint is, what mandate forces pairs of unused particles to turn back into energy, and what the rationale behind it is.

Kind regards, Gastón
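To put a rough number on the game (my own sketch, not part of the comment above): how long can the universe “borrow” enough energy for an electron-positron pair, taking Δt ≈ h/(2π·ΔE) from the relation above?

```python
import math

h = 6.62607015e-34      # Planck constant, J·s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg

delta_E = 2 * m_e * c**2               # energy of the pair, from E = Δm·c²
delta_t = h / (2 * math.pi * delta_E)  # allowed lifetime from ΔE·Δt ≈ h/2π
print(f"borrowed energy = {delta_E:.3e} J, lifetime ≈ {delta_t:.1e} s")
```

The pair exists for less than 10⁻²¹ seconds before the loan comes due, which is why these fluctuations never show up as real particles in a detector.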