Richard Feynman was a Nobel Prize-winning physicist. I have enjoyed his thoughts and thinking methods for decades. It is interesting how his ideas about science apply in every field of study, from psychology to botany to international trade.
One such idea concerns the futility of holding beliefs too long.
“It is impossible to find an answer which someday will not be found to be wrong.”
Many of us have been challenged by how much information there is to know. We have advanced greatly in our ability to access information. I vaguely recall Einstein offering the advice that you should not remember anything you can look up. That may be a little over the line, but given the internet, not so far over as you might first think.
The advantage of not remembering things is that you won’t become confused when they change from right answer to incomplete or wrong answer.
Unlearning is as important as learning, and strangely, far more difficult.
When you realize that knowing and remembering a lot of things is not so useful today as it might have been in 1900, you will be a little more humble. Using the information is more important than what the information is. Once you lose the idea of knowing a lot of things, you will be free of ego, and possibly useful.
Thinking is far harder than people believe. In its place, some people have adopted the “received wisdom” model where they ingest the outcome of a thinking process. If they don’t know they are using that process, or they don’t challenge the received wisdom by comparison to other things that could be chosen instead, they will be worse off than before.
Thinking is not optional.
Received wisdom seldom comes with evidence you can challenge. You may have noticed evidence is never presented in court without the other side challenging how it was collected, its chain of custody, its implication, and its application to the case at bar. Evidence comes in types: is it eyewitness, physical, or circumstantial?
Drawing meaning from a mass of evidence is difficult and people seldom understand the results presented. Life is complicated and there are few, if any, large tests that will give unambiguous results. Smaller studies may show interesting correlations. People find correlation to be compelling. Scientists find correlations suggestive of a place to study more deeply to discover causation.
Statistical analysis is difficult. It is quite likely there can be more than one interpretation of the meaning found in a large data set. When I was at university, a person in my residence was a civil engineer doing a master's degree specializing in traffic management. He had data that fitted a lovely bell curve, but for a few outliers. Upon further examination, the outliers were the real data and the others were a collection of errors generated by the faulty equipment used in the study. Oops.
If people have preconceived ideas about what will be found, interpretation errors occur.
Most of us cannot reliably analyze data even when it is available. We can, however, learn enough to know how to assess the conclusion presented.
This is not so hard.
Look for the p-value. That number tells you whether the conclusion might be reliable. The p-value is the probability that results at least this striking would appear by chance alone if there were really no effect. A p-value of .01 means that if chance alone were at work, data like this would turn up only about once in a hundred tries. You'll see reliability referred to in many studies. Political polls, for example. A poll with 95% confidence says Candidate A is favoured by 45% and Candidate B by 47%, and it will specify a margin of error, often plus or minus 3%. The meaning is that Candidate A is favoured by 42 to 48 percent of those surveyed and B by 44 to 50 percent. The 95% confidence means there is one chance in twenty that the true number falls outside that range. 95% confidence (roughly, a p-value of .05) is a high bar in social research, but it still means there is one chance in twenty the finding is wrong.
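To see where a poll's plus-or-minus 3% comes from, here is a quick sketch. The sample size of 1,000 and the 1.96 multiplier for 95% confidence are my assumptions for illustration, not figures from any particular poll:

```python
import math

# Margin of error for a polled proportion at 95% confidence.
# Assumed inputs (not from any specific poll):
p = 0.45   # observed share supporting Candidate A
n = 1000   # number of people surveyed
z = 1.96   # standard multiplier for a 95% confidence level

# The standard formula: z times the standard error of the proportion.
margin = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {margin:.1%}")  # about +/- 3%
```

Notice that shrinking the margin of error is expensive: because of the square root, cutting it in half requires surveying four times as many people.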
It is not difficult to design a questionnaire to bias results.
Understand the idea of random sampling. Statistical analysis works when the points of data are independent of each other. Without care, a person's answer may be influenced by others. Surveying both spouses in one household may get a different answer than surveying two people in different states.
Sometimes the method biases the result. If I only survey people with a landline I am unlikely to have a random sample of all people, no matter how carefully I arrange the sample.
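A tiny back-of-the-envelope illustration of that landline bias. Every number here is invented for the sake of the example:

```python
# Hypothetical town of 10,000 voters. Suppose 40% still have landlines,
# and suppose landline owners favour Candidate A more than everyone else.
n_landline, n_mobile = 4_000, 6_000
support_landline = 0.55  # assumed support for A among landline owners
support_mobile = 0.40    # assumed support for A among everyone else

# What the whole town actually thinks (weighted average of both groups):
true_support = (support_landline * n_landline
                + support_mobile * n_mobile) / (n_landline + n_mobile)

# What a perfectly executed landline-only survey would report:
landline_only_poll = support_landline

print(f"true support: {true_support:.0%}")          # 46%
print(f"landline-only poll: {landline_only_poll:.0%}")  # 55%
```

No amount of careful randomization among landline owners fixes this; the frame itself excludes the mobile-only majority.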
In longitudinal studies, the conditions surrounding the data may not stay unchanged throughout. For example, if I take temperature data from a weather station that has been in the same place for 150 years, it will likely show gradual warming. In 1870 it was in a farmer's field a mile from town; now it is alongside a shopping center with a 20-acre asphalt parking lot.
Sometimes the data is adjusted. You would want to know how. Some adjustments make sense, like adjusting for the parking lot’s heat gain.
Is all the data included? You would not have to throw out very many observations to completely change the findings of a study. Most studies exclude some of the chosen participants. You should know why. Exclusion can be reasonable, and sound studies disclose how they selected.
If the presenters cannot or will not disclose their data and their reasoning, they don’t want to. Usually, there is more than one way to interpret it and they like just one of them.
Received wisdom is dangerous.
Be a skeptic. Use your common sense to help you validate what you are shown.
Science is never settled. “Today we say that the law of relativity is supposed to be true at all energies, but someday somebody may come along and say how stupid we were.” Feynman
There is always something else to know. It might be the important thing.
Curiosity is your friend.
I help people have more retirement income and larger, more liquid estates.
Call in Canada 705-927-4770, or email firstname.lastname@example.org