In 2012, Cornell University researchers published a study that concluded that children between 8 and 11 years old would choose an apple over a cookie if the apple had a sticker of a popular cartoon character. Childhood obesity rates had skyrocketed across the United States, and this simple solution to help children make better food choices received a lot of buzz.
It turns out the findings were too good to be true. Last October, JAMA Pediatrics, the journal that published the study, retracted it.
The problem? Faulty data and faulty conclusions.
Tiffany Whittaker, an associate professor in the College of Education’s Department of Educational Psychology, wants students to learn to interpret data and statistics, so she designed a new educational psychology course: Statistical Literacy and Reasoning.
The course is open to undergraduates across the university and is taught by educational psychology doctoral students. It’s designed to introduce students to statistical applications and their interpretations in daily life.
The course can replace a math requirement and introduces undergraduate students to coursework in educational psychology—which may have the extra benefit of enticing them to earn a minor in the subject.
Students often enter the course with a “blank slate related to statistical literacy,” says Molly Cain, a doctoral student who taught the class last fall. The students don’t have preconceived notions about statistics. But they also have no real facility with deciphering statistical data.
In a world teeming with numbers and stats used to prove the validity of ideas and opinions and to influence public policy, “statistical literacy is critical,” Cain says. “We want students to become critical consumers of data reported in media. We want them to be actively engaged with what they consume and to approach things with a healthy dose of skepticism.”
Whittaker says, “We want students to ask: What’s going on behind the numbers?” Specific questions can help students think critically: How were the data gathered? What methods were used? Who conducted the survey? Was bias introduced? What do you know about the sample—such as its size or population? Are there lurking or hidden variables that might explain an association?
“Correlation does not equal causation,” says Whittaker. “For example, the number of children in a home correlates with a toaster being in the home. But the toaster didn’t cause there to be more children in the home.”
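A quick simulation can make the toaster example concrete. The sketch below is hypothetical (all the numbers are invented for illustration): a single lurking variable, household size, drives both the number of children and the odds of owning a toaster, producing a correlation between the two even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical model: household size is the lurking variable.
# Larger households tend to have more children AND are more likely
# to own a toaster; neither causes the other.
households = []
for _ in range(10_000):
    size = random.randint(1, 6)                       # people in the home
    children = max(0, size - random.randint(1, 2))    # bigger home -> more kids
    has_toaster = random.random() < 0.3 + 0.1 * size  # bigger home -> toaster
    households.append((children, has_toaster))

def toaster_rate(rows):
    """Fraction of households in `rows` that own a toaster."""
    return sum(owns for _, owns in rows) / len(rows)

few_kids = [h for h in households if h[0] <= 1]
many_kids = [h for h in households if h[0] >= 3]
print(f"toaster rate, 0-1 children: {toaster_rate(few_kids):.2f}")
print(f"toaster rate, 3+ children:  {toaster_rate(many_kids):.2f}")
```

Homes with more children show a noticeably higher toaster-ownership rate, yet the simulation contains no causal link between children and toasters—only the shared dependence on household size. Controlling for that variable would make the association vanish.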
“Psychology studies are difficult,” Cain says. “Often, researchers will choose subjects who are convenient to study, like college freshmen, just because they are available.” But samples should reflect the actual population that researchers want to draw conclusions about, she says, and college freshmen may not be representative of the population they actually want to understand.
That was one of the problems with the apple vs. cookie study. It was conducted with 3- to 5-year-old children, but the findings were applied to 8- to 11-year-olds—a population likely to be less motivated to choose an apple with a sticker of Elmo over a cookie.
“Data can generally be trusted if you use the correct techniques and methods,” Cain says, adding that correct interpretation is also a must. “Knowing how to analyze data will help you in any discipline. Even a rudimentary understanding means you are light years ahead.”