Sooner or later, scientists with National Science Foundation support will experience some sort of evaluation, whether they like it or not. This is particularly true for larger projects with a greater emphasis on outcomes and impacts. The traditional notion in the minds of some principal investigators (PIs), that funds are awarded for researchers to “do good things” without eventual accountability, is unrealistic in today’s world. For a variety of reasons, the bar has been raised on demonstrating the success of NSF projects and, in so doing, the value of the investment made using taxpayer funds. The problem with evaluation is that most STEM professionals, unless they are already involved in educational or psychological research, do not understand what the process entails, how it is done, and how expensive it can be to do it right. There are also different kinds of evaluation procedures depending upon the project being analyzed. It also follows that, if done properly, evaluation is in itself a science with accepted protocols and best practices.
This chapter provides an overview of why science and, in a broader context, STEM are of fundamental importance to the progress of nations and their citizens in the twenty-first century. We will return to some of these topics in subsequent chapters, but here they provide the foundation and rationale for the importance of science and STEM in society. The focus of this chapter is the context of science and STEM in the United States, primarily during the second half of the twentieth century and the beginning of the twenty-first. Nevertheless, in a globally connected world, much of what is described here pertains to other STEM-enabled countries as well.
Some educators in the United States refer to formal education as K–16, which implies a seamless transition between grade 12 (high-school senior year) and grade 13 (college freshman year). For a variety of reasons, however, this transition is far less seamless than any other in this supposed K–16 continuum. In particular, this potentially rocky transition reflects the different cultures and expectations of K–12 teachers versus “grades” 13–16 professors, and the different ways in which the students they teach learn. It is for this reason that two separate chapters are presented on formal education.
Nowadays, mentoring in academia is taken seriously and is oftentimes highly structured. It can occur at many steps along the pipeline, including at-risk students transitioning from high school, undergraduates, graduate students, postdocs, and early-career faculty working toward tenure. In reality, mentoring of one form or another, whether structured or informal, occurs throughout one’s academic career. Published studies from a variety of disciplines, ranging from STEM to medicine (e.g., Detsky & Baerlocher, 2007) to the humanities (Pye et al., 2016), have highlighted the benefits of mentoring, including increased productivity, professional success, and career satisfaction. While mentoring or coaching has been practiced in academia for millennia, over the past several decades it has become more intentional.
In the early 1950s, a group called the Seekers formed in a suburb of Chicago, based on the belief that they were receiving messages from a greater intelligence through a process called “automatic writing.” Automatic writing occurred when a medium (in this case, a woman named Dorothy Martin) entered a trance-like state that allowed her to write out channeled messages from a greater being called Sananda. Martin’s hand would essentially take on a mind of its own, and messages from Sananda would come forth on paper. An entire belief construct was derived from these messages, including an understanding that they were coming from a faraway planet named Clarion and that UFOs from Clarion were frequently visiting Earth.
So far, we’ve discussed how common it is for humans to misperceive individual events or groupings of random occurrences. However, a higher level of complexity arises when one is assessing causal associations, i.e., when one thing appears to cause another.
There is a whole series of books, including New York Times bestsellers, about how the little “coincidences” we experience in life do not occur by chance – they are actually God speaking directly to us and are called “godwinks.” After all, what other likely explanation could there be? One coincidence might happen by freak chance, but so many people have so many stories that seem so unlikely that this must reflect a greater thing, a greater force – this must be the voice of God speaking to us personally. More than 1 million copies of Squire Rushnell’s “godwinks” books have been sold, so clearly this idea appeals widely to people. Of course, I cannot rule out, nor can anyone else, that God is actually speaking to us by using coincidence as his language – maybe this is just the way that God communicates with humans. Indeed, such is the basis for a vast number of belief systems, the number of adherents to which exceeds the number of professional scientists in the world by far (it’s not even close). Can it be possible that so many people are wrong?
Based on the discussions in this book, the following definition of science is suggested to my fellow scientists and nonscientists alike. First and foremost, science is an outgrowth of normal human observation, reasoning, conclusion, and prediction. Scientists and nonscientists both depend upon induction and the assumptions it entails – assumptions that are imperfect and don’t always hold. They assume that the future will resemble the past more closely than chance alone would predict, and that what one has already encountered is more representative of what one has not yet encountered than a random guess would be. Both scientists and nonscientists retroduce causes for the effects they observe, a form of reasoning that suffers from the fallacy of affirming the consequent. As a result of this fallacy, scientists and nonscientists both retroduce hypotheses of causal things that likely never existed, such as phlogiston being the cause of heat, a vital force being required for the types of chemicals that come from living things, and the great Sananda causing a prophet’s pen to write. One needs ongoing observation, and if possible experimentation, to further assess which retroduced causes one should hold onto (at least for now) and which should be rejected (at least for now). Scientists and nonscientists both use deduction (or at least a form of reasoning that resembles deduction but may not adhere to strict standards of formal logic) to make further predictions based on their retroduced hypotheses. Scientists and nonscientists both have fallacies in their hypothetico-deductive (HD) thinking, make mistaken observations, have cognitive biases, and fall in love with their hypotheses, noticing observations that confirm them and ignoring observations that refute them. Scientists and nonscientists are both susceptible to social pressures, social biases, and manipulation (intentional and unintentional) by the groups and societies in which they find themselves.
The terms “science” and “scientific” have come to have a special meaning and to carry a special weight in modern society. Professional scientists tell us that genetically modified foods are safe to eat, that industrial emissions are causing global warming, that vaccines don’t cause autism, and that some medications are safe and effective while others are not. A consumer product seems more trustworthy if it’s described as “scientifically proven” or if “clinical studies have demonstrated its effectiveness.” Politicians and lobbyists often invoke “scientific proof” in arguing for certain positions or policies. Our federal government invests taxpayer dollars in “scientific research” of different varieties. Whether something can be categorized as “science” determines whether we allow it to be taught in our public school science curricula, as in the ongoing debate over teaching evolution vs. intelligent design theory.
While hypothetico-deductive (HD) coherence is required for science to be performed, it is the observable predictions of theories that most scientists investigate; in other words, the phenomena of the natural world. Science depends upon natural phenomena as the final metric of validity. Humans are persuaded by all manner of things, many of which are emotional or authoritative in nature, and in some ways the actual practice of science is no different. However, in an ideal scientific world – the world that scientific practice strives for – the final word on “truth” is not authority, revelation, or statements of a definitive text; rather, ongoing observation of the natural world around us is the determinant of how we evaluate specific scientific facts and theories.1 Most people recognize that scientists perform studies and experiments, which are essentially a way to “check in” with the natural world – to determine whether a theory’s prediction is what actually occurs. The importance of this process of checking in – of using the natural world and natural phenomena as the ultimate arbiter of legitimate knowledge claims – cannot be overestimated. Creative thinking, to be sure, is a large part of the process that leads to scientific progress. Without great creativity, novel hypotheses cannot be retroduced, innovative auxiliary hypotheses cannot be generated, and new technologies to test predictions cannot be invented; however, creative thinking and imagination are not the “scientific” part of the process. Rather, the scientific application of innovative and creative thinking is found in the ability of new ideas or explanations to resolve current violations of HD coherence, where predictions and observations are misaligned, or to give rise to new predictions about the natural world, which can then be tested only by observation or experimentation.
In 1924, a South African named Josephine Salmons made a visit to the home of Pat Izod, a family friend. She noticed an odd, humanlike skull sitting on his mantelpiece. Curious, she asked him its origin and learned that it had been found by a miner working at the Buxton Limeworks. This miner wasn’t focusing on questions of human origin, nor was he testing any particular hypothesis; rather, he was blasting through limestone in an effort to increase the output of the mine. He was no different from someone who goes out for a walk and notices an interesting tree or is taken by the shape and glimmer of a particular puddle. He noticed the skull and gave it to his employer, E. G. Izod, who was a visiting director of the Northern Lime Company, which managed the mine. E. G. Izod gave it to his son, who put it on his mantel. Josephine Salmons happened to be a young graduate student working in the laboratory of Dr. Raymond Dart at the University of the Witwatersrand in Johannesburg. Dr. Dart was an anthropologist of Australian origin who had taken the position of professor two years earlier.1
Traditionally, scientists and philosophers of science have worked under the assumption that humans are pretty good at making observations of the natural world. Many thinkers, as far back as antiquity, recognized that experience could lead us astray and thus favored deductive systems of reasoning; however, to justify deduction, early philosophers argued for humans’ innate ability to perceive fundamental truths and correct base axioms. Empiricists clearly rejected this idea, favoring our ability to observe nature by using our senses over some perception of fundamental truths. However, both camps seemed to accept that humans could observe the natural world, or at least gather basic information about it, in a meaningful way, although there has not been uniform agreement on this.1
Young students of science may choose a career in research for a number of different reasons. Some are driven by an intrinsic curiosity about the world and a love of understanding how nature works. For others, the possibility of recognition and esteem is a driving force. Still others have a meticulous nature, and the notion of gaining and maintaining some control over experimental systems appeals to them. As is inevitably the case, some students pursue science because of the expectations of others rather than their own interests and ambitions. Finally, some go into science because they’ve been in school their whole lives, haven’t given much thought to what the next step should be (other than moving on to the next grade, as they’ve always done in school), and really can’t figure out what else to do. Given the complexities of human behavior, for many it is a combination of these and additional factors not mentioned here.