Imagine sitting in front of your television in 1969, watching the Apollo lunar landing, and noting the marvels of modern engineering. The person sitting next to you responds, “Oh yes, but this is a passing fad; soon we will return to premodern engineering. Groping in the dark requires so much less intellect and education. And we will have crowdsourced groping. Who needs all of this tedious experimental design, mathematical modeling, and validation?”
Your jaw might have dropped, but your friend would have been prescient. Consider the words of entrepreneur Peter Thiel:
If you look at the last few decades, since the seventies, I think there has been enormous progress in the world of computers, internet, mobile internet, the world of information technology. But there are many others where I think things have stalled rather badly, if we were to be honest about it. Everything in the world of atoms, rather than bits, has seen much less progress.
Over the last several decades, the United States has made two huge errors in science and educational policy that have severely limited progress in the world of atoms, and which have profound implications for the country’s industrial and military competitiveness. First, it has turned its back on the conceptual framework underlying three centuries of modern science, from Isaac Newton’s Principia in 1687 through the extraordinary achievements of the first three quarters of the twentieth century. Second, it has allowed an alarming deterioration in the quality of mathematical education.
The risk is obvious: a nation’s industrial and military might rests squarely on its engineering capacity, and advances in engineering rely upon advances in scientific knowledge and the ability to mathematically leverage that knowledge to achieve operational objectives. Perhaps the United States could afford a cavalier attitude towards its declining science and engineering capacity if its advantage over competitors were insurmountable, but that is not the present reality.
Suppose that during the Cold War, the United States focused on designing video games while the Soviets were building ICBMs. Or, to reduce the hyperbole, suppose the Soviets had been applying the mathematical methods of control, signal processing, statistics, and information theory, while we were satisfied with arithmetic. The Cold War might have ended very differently.
The Scientific Malaise
In the 1950s and 1960s, the United States had great industrial research laboratories, Bell Laboratories being the gold standard. Today, Bell Laboratories no longer exists. What about the remaining corporate labs? Compare the mathematics training of their researchers with that of the researchers at Bell Laboratories in 1960. Compare Los Alamos National Laboratory in the days of Oppenheimer to today. Compare NASA during the Apollo program to today.
Following the great breakthroughs in science, mathematics, and engineering in the first half of the twentieth century, good universities moved quickly to establish strong faculties. The mathematics necessary for a myriad of advances was quickly put into the university curriculum. For instance, the theory of random processes was once required knowledge for graduate engineering students. But in recent decades, this requirement has been dropped in most of our universities, including those with high research rankings. By contrast, students at good Iranian universities are required to study random processes at the undergraduate level.
The impoverishment of American science and engineering, which continues today, began in the late 1960s. This decline has occurred even as the mathematical demands of science and engineering have grown. The great push for education during the Eisenhower and Kennedy years was quickly followed by indifference, and even overtly negative decisions, on the part of subsequent administrations.
On December 1, 1969, less than five months after Neil Armstrong stepped onto the moon, the United States government performed the first draft lottery of the Vietnam War, thereby ending the policy of student deferments. Presidents Eisenhower and Kennedy recognized the nation’s pressing need to produce scientists, mathematicians, and engineers. President Nixon, with incomprehensible blindness, decided otherwise. Not only were tens of thousands of American youths pointlessly dying in a gratuitous conflict fought to satiate the psychological quirks of a cadre of politicians, but now the nation would also be deprived of thousands of future scientists and engineers.
Another factor, contemporary with the draft decision, that contributed to the decline in homegrown scientific expertise was the job market. It was jokingly said throughout the 1970s that the main occupation of a physics PhD was driving a taxi cab. While the latter occupation is important, it does not require ten years of university education involving extremely difficult mathematics that only a tiny fraction of the population is capable of practicing. Not only were students being driven out of the university; those entering perceived that, even if they were able to navigate the system, the rewards were not worth the effort.
Thus, the vitality of the 1950s and 1960s proved to be short-lived. Science during that period had been given impetus by the European professors who migrated to the United States before and after World War II, and the American students of those professors. A student could sense the excitement in the halls of the university in those days. No doubt, the Vietnam War and the job market played a role in the dissipation of this excitement. But the problem was much deeper.
There was a generational change taking place, the seeds of which had been sown long before. As Thomas Kailath, a leading engineer of the second half of the twentieth century, commented in 1974:
It was the peculiar atmosphere of the sixties, with its catchwords of “building research competence,” “training more scientists,” etc., that supported the uncritical growth of a literature in which quantity and formal novelty were often prized over significance and attention to scholarship. There was little concern for fitting new results into the body of old ones; it was important to have “new” results!
The baby boomers did not orchestrate this change. They lacked the power. It was inaugurated by a segment of the “greatest generation” that rejected greatness, politicians like Lyndon Johnson, Richard Nixon, and Jimmy Carter, and the academics so aptly described by Allan Bloom in The Closing of the American Mind (1987).
When President Ronald Reagan reinvigorated science and engineering in the 1980s, he wisely brought in outstanding foreign talent, but his administration did not reverse America’s own educational decline, which has only grown worse. American mathematical research in communication, control, and related areas of signal processing has collapsed due to lack of funding. Programs and courses are drying up.
The Epistemology of Modern Science
To understand the present situation, one must appreciate the epistemology, the concept of knowledge, underlying modern science and engineering. Modern science, whose beginning might be marked by the publication of Francis Bacon’s New Organon in 1620, aimed to apply reason differently than it had been applied in the past. New Organon set science on a new course, in which empirical observation and reasoning were integrated within the same scientific process. Bacon writes:
Those who have handled sciences have been either men of experiment or men of dogmas. The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course: it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own.
Bacon divides the students of nature into three types: the ant observes without planning, while the spider makes up theories without tying them to observation. But the bee plans experiments so that its observations are related to the knowledge it seeks. In other words, reasoning without experience is nonscientific, but experience alone is not sufficient for science.
In another passage, Bacon more explicitly develops the thinking behind the experimental method:
There remains simple experience which, if taken as it comes, is called accident; if sought for, experiment. But this kind of experience is no better than a broom without its band, as the saying is—a mere groping, as of men in the dark, that feel all round them for the chance of finding their way, when they had much better wait for daylight, or light a candle, and then go. But the true method of experience, on the contrary, first lights the candle, and then by means of the candle shows the way; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it educing axioms, and from established axioms again new experiments.
A century and a half later, Immanuel Kant noted the importance of the experimental method: “To this single idea must the revolution be ascribed, by which, after groping in the dark for so many centuries, natural science was at length conducted into the path of certain progress.”
Galileo and Newton further refined Bacon’s method, in the first place by demarcating science from metaphysics. In particular, they ruled out causality as part of science because it cannot be empirically verified. In his Principia, Newton wrote, “I here design only to give a mathematical notion of these forces, without considering their physical causes and seats.” Newton, moreover, formulated his scientific principles mathematically, because science concerns relations among quantitative variables. Science is constituted in mathematical statements and validated by checking the concordance of predictions with observations. Scientific knowledge and its verification are necessarily constituted within mathematics. Thus, one’s ability to do science is limited by one’s mathematical toolbox.
While scientific epistemology has gone through further refinement in the centuries since Newton, especially in regard to the probabilistic nature of complex systems, the fundamentals have remained. A valid scientific theory satisfies four conditions (a toy numerical illustration follows the list):
- There is a mathematical model expressing the theory.
- Precise relationships, known as “operational definitions,” are specified between terms in the theory and measurements of corresponding physical events.
- There are validating experimental data: there is a set of future quantitative predictions derived from the theory and the corresponding measurements.
- There is a statistical analysis that supports acceptance of the theory, that is, supports the concordance of the predictions with the physical measurements—including the mathematical theory justifying application of the statistical methods.
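To make these four conditions concrete, here is a toy numerical illustration in Python, with all numbers invented for the example: the mathematical model is the free-fall law d = ½gt²; the operational definition ties the symbol d to measured drop distances; simulated measurements stand in for validating data; and a chi-square statistic supports or rejects concordance, assuming Gaussian measurement noise of known standard deviation.

```python
import numpy as np
from scipy import stats

# 1. Mathematical model: free-fall distance d = 0.5 * g * t^2.
g = 9.81  # m/s^2, the value the theory asserts

def predict(t):
    return 0.5 * g * t**2

# 2. Operational definition: 'd' corresponds to the measured drop distance
#    (meters) at time t (seconds). Simulated measurements with Gaussian
#    noise of known standard deviation stand in for the laboratory here.
rng = np.random.default_rng(0)
sigma = 0.05  # assumed measurement error, meters
t_obs = np.linspace(0.5, 2.0, 16)
d_obs = 0.5 * 9.81 * t_obs**2 + rng.normal(0.0, sigma, t_obs.size)

# 3. Validating data: compare the theory's predictions with measurements.
residuals = d_obs - predict(t_obs)

# 4. Statistical analysis: under the theory, the sum of squared
#    standardized residuals is chi-square distributed with n degrees
#    of freedom (no parameters were fitted to the data).
chi2 = np.sum((residuals / sigma) ** 2)
p_value = stats.chi2.sf(chi2, df=t_obs.size)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")
# A large p-value means the data do not contradict the theory;
# acceptance is always provisional.
```

The point is not the physics but the structure: every step, from the model to the acceptance criterion, is constituted mathematically.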
The Methods of Modern Engineering
Seventy years ago, Norbert Wiener, the father of modern engineering, stated, “The intention and the result of a scientific inquiry is to obtain an understanding and a control of some part of the universe.” The basic engineering problem is to control some aspect of nature: to design an operation to transform a given system in a desirable manner. Science provides knowledge of the physical world; engineering changes the world.
One could use trial and error to solve engineering problems—trying one operation after another and observing the result. Such groping does not preclude the use of scientific knowledge, but it does not apply that knowledge in an optimal way. The Romans, for example, were great engineers, but they did not synthesize their engineering operations optimally within a mathematical framework.
Optimal operator design, developed by Wiener in the 1940s (and independently by Andrey Kolmogorov in the Soviet Union), begins with a mathematical model constituting the relevant scientific knowledge. The model is used to mathematically derive an operator for optimally accomplishing the desired operational objective under the constraints imposed by the circumstances. Within twenty years of its development, optimal operator design permeated engineering, leading to achievements previously inconceivable.
In optimal operator design, one begins with a scientific model, for instance, a set of differential equations or a probabilistic graphical network characterizing genetic regulation, and expands the model by adjoining an operator with which to alter system behavior. In the case of genetic regulation, a mutation may have resulted in a regulatory structure leading to a cancerous phenotype. A criterion is posited to judge the degree to which the operational objective is achieved, for instance, lowering the likelihood of metastasis. The goal, in this example, is to find an optimal strategy for drug intervention.
The basic protocol involves four steps (a minimal sketch in code follows the list):
- Construct the mathematical model expressing the scientific theory.
- Define an operational objective based on the model, including a set of possible operators.
- Define a cost function to quantify the degree to which an operator meets the objective.
- Mathematically solve for an optimal operator, one minimizing the cost function.
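As a minimal sketch of the four steps, consider a classical setting: a finite-length Wiener filter for a signal observed in additive noise. The parameters below are invented for illustration, and this is the textbook linear minimum-mean-square-error solution rather than a reconstruction of Wiener’s own derivation.

```python
import numpy as np

# 1. Scientific model (illustrative parameters): an AR(1) signal
#    x[n] = a*x[n-1] + u[n], observed in additive white noise,
#    y[n] = x[n] + v[n].
a, var_u, var_v, L = 0.9, 1.0, 0.5, 8
var_x = var_u / (1.0 - a**2)              # signal power
r_x = var_x * a ** np.arange(L)           # autocorrelation r_x[k] = var_x * a^k

# 2. Operational objective: estimate x[n] from the last L noisy samples
#    using a linear filter w -- the set of possible operators.
# 3. Cost function: mean-square error E[(x[n] - w^T y)^2].
# 4. Optimal operator: solve the normal equations (R_x + var_v*I) w = r_x.
R_x = var_x * a ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
w = np.linalg.solve(R_x + var_v * np.eye(L), r_x)

# Sanity check on simulated data: the filter should beat the raw observation.
rng = np.random.default_rng(1)
n = 100_000
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(var_u))
y = x + rng.normal(0.0, np.sqrt(var_v), n)

x_hat = np.convolve(y, w)[:n]             # causal FIR filtering
print(f"raw MSE = {np.mean((x - y) ** 2):.3f}, "
      f"filtered MSE = {np.mean((x[L:] - x_hat[L:]) ** 2):.3f}")
```

Here the scientific model supplies the autocorrelations, the operator class is the set of length-L linear filters, the cost is mean-square error, and the normal equations deliver the optimal operator in closed form.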
Just as in constructing a scientific theory, finding an optimal operator involves mathematics at every step. Wiener noted the “essential unity of the set of problems centering about communication, control, and statistical mechanics, whether in the machine or in living tissue.” The same kinds of mathematics provide the knowledge to build and control physical systems, whether electrical, mechanical, or biological systems. The more complex the system, say a human cell, the more extensive the mathematics.
In practice, there is an additional step in the engineering process: physically construct the operator derived from the theory, or one that is very close to optimal. Construction is limited by technological and computational feasibility. For complex systems, computation is almost always a limiting factor, and the need for more powerful computers is ever present.
A caveat: a preference for optimal design does not imply that trial-and-error engineering should always be abandoned. There are many instances where intuition can lead to a workable solution when no valid scientific theory is available. Primitive man did not need to await a theory of combustion before developing useful ways of using fire, but the efficiency and scope of this invention were severely limited in the absence of such a theory. Furthermore, the inadequacy of intuition grows more severe with complex nonlinear systems, because small perturbations to the system can result in large events unanticipated by intuition. But this is precisely the setting in which a valid scientific theory is least likely to exist. That is our current conundrum, and a topic for another article.
The Abandonment of Modern Engineering
In 1930, the Spanish philosopher José Ortega y Gasset warned that science was in peril. “Has any thought been given to the number of things that must remain active in men’s souls in order that there may still continue to be ‘men of science’ in real truth?” he asked. “Is it seriously thought that as long as there are dollars there will be science?” Amidst one of the greatest periods of scientific progress in history, Ortega y Gasset saw trouble ahead.
As if to confirm Ortega y Gasset’s warning, the successful engineering methods of the 1950s and ’60s were discarded with relatively little thought, and with no evidence that a new approach could achieve better results than the extraordinary accomplishments of 1940 to 1969. Owing to the generational changes and other reasons discussed above, the “things active in men’s souls” necessary for science seemed to dissipate. The complex planning and mathematical exertions of optimal operator design were gradually abandoned, and the easier process of trial-and-error groping in the dark returned. More outputs could be produced much faster without experimental design, mathematical modeling, and validation. It seemed far easier to train new scientists and engineers if the requirements were lowered: one would no longer have to struggle through books on random processes and mathematical statistics. It came to be believed that engineering should be no more difficult, and no less attractive, than the activity of a child playing with toys!
The ultimate toy, the computer with a graphical user interface, was just over the horizon. Massive amounts of data could be processed, and giant simulations could provide impressive outputs, which could be conveyed by stunning visualizations requiring no scientific understanding—but which make one feel knowledgeable and provide entertaining slide shows. Surely all of this data, computation, and charming imagery must mean something. Or does it? We can compute much faster today, and this is important to science and engineering, but if the experimental process itself is allowed to atrophy as a consequence, then what is the payoff?
The biggest hype of recent years has been around “big data.” The idea is to gather huge amounts of data that may have something to do with whatever problem is being studied, and then process (“mine”) the data via computer algorithms to find relations among them. The experimental method involves postulating a theory to guide experiments, incorporating the data into the theory, and, if needed, repeating the cycle. With data mining, however, there are no experiments. This high-performance groping in the dark is supposed to be more efficient than the traditional experimental method. But this remains to be proven.1
Norbert Wiener characterized ad hoc data collection as follows: “An experiment is a question. A precise answer is seldom obtained if the question is not precise; indeed, foolish answers—i.e., inconsistent, discrepant or irrelevant experimental results—are usually indicative of a foolish question.” With data mining, there is not even a question. Bacon’s ants are assumed to be correct, as long as they are provided with sufficiently fast computation.
An experimental scientist plans experiments to address specific questions, and makes conjectures about possible relations between phenomena that might answer these questions. He organizes these relations in a mathematical model, and tests it by making predictions. If the predictions hold, then the model is provisionally accepted until new observations violate it. A new model must then be constructed. On the other hand, in the big-data world, the “scientist” postulates some complicated relational structure with thousands or millions of unknown parameters, and then applies some algorithm to the data to estimate the parameters and thereby “learn” the model. The idea is that if one postulates a sufficiently complex computational form, then with appropriate choices of the parameters, it can closely match a valid model.
The problems with this approach are manifold. What does it mean to closely match? How does one construct the learning algorithm? How much data are required not to “overfit” the data (meaning that the resulting structure matches the training data but does not generalize to new data, which is the whole point of the scientific enterprise)? What are the statistical foundations of the method? What understanding is provided by a computational structure typically involving no more than high school–level mathematics? Does anyone really believe that data mining could produce the general theory of relativity?2
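The overfitting problem, at least, is easy to demonstrate with a sketch (invented data, not a claim about any particular data-mining system): fit polynomials of increasing degree to a handful of noisy samples and compare the error on the training points with the error on fresh samples from the same source.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n):
    """Noisy observations of an underlying law, here y = sin(x)."""
    x = rng.uniform(0.0, 3.0, n)
    return x, np.sin(x) + rng.normal(0.0, 0.1, n)

x_train, y_train = sample(10)   # small training set
x_test, y_test = sample(1000)   # fresh data: the real test of a model

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_err:.4f}, "
          f"test MSE = {test_err:.4f}")
# A degree-9 polynomial through 10 points drives the training error
# toward zero while the error on new data grows: the structure has
# matched the training data without generalizing.
```

Training error can be made arbitrarily small while error on new data explodes, which is precisely the failure to generalize described above.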
Since there is no mathematical model consisting of equations whose structures reflect physical processes, such as derivatives and second derivatives corresponding to velocity and acceleration, visualizations take the place of mathematical knowledge. Various colorful schemes are designed to visualize relations in the phenomena. But what do these mean? Since the meaning must lie within the operational definitions connecting the model to the observations, the delightful pictures mean nothing. Nevertheless, they allow the ant to morph into a spider, who can spin yarns about the pictures.
Even when mathematical theory is used, today the person using the model often does not understand it, so a computer is used to compress the information into some form that the user can comprehend. The programmer who implements the visualization may not know what knowledge represented in the equations is being lost or distorted by the compression. Recall James Jeans’s warning in The Mysterious Universe (1930): “We go beyond the mathematical formula at our own risk.”
Do we really expect people who do not understand the relevant mathematics to accomplish the scientific task of building a theory? Wiener provides the answer: “If the difficulty of a physiological problem is mathematical in essence, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics.”
The State of Engineering in U.S. Universities
When evaluating PhD applicants, admissions committees look first at the undergraduate transcript: what is the level of the applicant’s mathematical education? American students rarely survive that first comparison with Chinese and Iranian students.
One measure of the present situation is the percentage of international students in graduate programs. According to the National Science Foundation, in 2015, the percentages of international full-time graduate students in electrical engineering, petroleum engineering, computer science, industrial engineering, and statistics were 81, 81, 79, 75, and 69 percent, respectively.
The upside to such a high number of foreign students is that they provide desperately needed talent. The downside is that it hides the fact that fewer and fewer Americans are up to the task. In 1995, there were 9,399 U.S. full-time graduate students in electrical engineering and 8,855 international students. Jump ahead to 2015, and the numbers were 7,783 U.S. students and 32,736 international students. Our population increased by 23 percent over that period, but the number of U.S. graduate students in electrical engineering fell by 17 percent. Also consider the paucity of American-born electrical engineering faculty under sixty years old.
This trend is present at all levels. In recent years, universities have been accepting many self-paying international students. For state institutions, they pay out-of-state tuition, which is considerably more than that paid by in-state students. According to Inside Higher Ed, the University of Illinois at Urbana-Champaign, the state’s flagship university, enrolled 37 undergraduate Chinese students in 2000 and 2,898 in 2015. For graduate students, it was 649 in 2000 and 1,973 in 2015. Total enrollments in 2015 for South Korea and India were 1,268 and 1,167, respectively. Many of these will stay in the United States, and we want them. Not only are our universities failing, but the Asian population is many times greater than ours. Add to that the fact that 30 percent of Chinese university students study engineering, as opposed to only 7 percent of U.S. students.
Attracting and keeping top talent will not be easy. China and India are recruiting some of their best citizens and former citizens to come home, both at the senior and junior levels. For senior people, the inducement goes beyond salary. Research requires resources, which can be expensive, both for research staff and laboratory facilities. An offer is especially attractive when it provides secure funding for fundamental work, a rarity in the United States. When a leading scientist returns to his home country, there is a shift of research capital from the United States to a competitor. Given the fact that, in numerous areas, major contributions come from a small group of research teams, even a single shift can have a significant impact.
For recent PhDs, the disparity in incentives between remaining in the United States and returning home can be even greater, and top graduates are being targeted. One alternative for them is to remain in a U.S. university, go through two to five years of postdoctoral work earning $50,000 per year, and then apply for one of the few good open faculty positions. If fortunate, these promising scientists will then suffer through a six-year pre-tenure period writing grant proposals of meager scientific value in an environment where the main interest of the department head and dean is the number of dollars they bring in. The other alternative is to return to one’s homeland and receive an immediate faculty position at a top university, with funding to build a laboratory and support students.
The Way Forward
Today, we must live in the new environment. We must recognize the validity of Ortega y Gasset’s warning, yet create conditions in which the best scientific minds can succeed, applying what lessons we can from the successful eras of the past.
As a rule of thumb, fundamental scientific and engineering research is generally limited to those in the top 0.1 percent of mathematical ability. There are exceptions, and there are vast differences within the top 0.1 percent. A good education, effort, and creativity are also required, though there is no way to judge the latter abilities until the student encounters deep problems. Given the relatively small population capable of functioning at this level, we need to ensure that the one-in-a-thousand student gets an excellent mathematical education, from childhood through graduate school. Especially in poor areas, this means finding children at a young age and getting them into high-performing school systems. Throughout the education process, evaluation must focus solely on academic ability, paying no attention to “holistic” balderdash. Our competitors employ unforgiving meritocratic systems that increasingly filter students. At the completion of high school, the best students are entered into the best universities—free of charge.
A rigorous mathematical education is needed for all scientists and engineers. The engineering problems we face today often involve complex, nonlinear, stochastic systems, in which ordinary human intuition is weak and reliance on difficult mathematical reasoning is necessary. High-quality engineers are needed for product development, system maintenance, and upgrades in all industries, from aerospace to medical. To fulfill this demand, we need a national mathematics exam prepared by real mathematicians, physicists, and engineers (not people with degrees from graduate schools of education). Students scoring in the ninety-eighth percentile should be guaranteed a college scholarship, including full tuition and living expenses, provided that they major in mathematics, engineering, or science.
Regarding international students, we should do whatever is necessary to attract the best, and to keep them, even if our education system is improved. The Trump administration is tightening restrictions on Chinese students. It should be doing the opposite, easing immigration for top students and providing more funding to attract them. We already suffer from limitations on Iranian students that hurt basic research. Admittedly, attention must be paid to protecting industrial and military secrets, but this should have been thought about decades ago, before deciding to deprive our own children of the quality of education they would get in China or Iran.
In fairness, the Trump administration has been dealt a bad hand. The previous three administrations have been disastrous. Consider the numbers of students cited earlier for the years 1995 to 2015. But the problem goes beyond those numbers. In 1995, one could hardly take Chinese research seriously. They had fine undergraduate schools, but had yet to develop quality research except in very specialized areas. That has all changed. Now their top research universities challenge ours. Their high-tech exports exceed ours. But we are where we are, and this means that we need international talent more than ever before, and we need to keep that talent here, whatever the cost, in addition to developing more homegrown scientists.
When it comes to funding, university research should address fundamental problems, and funding needs to be redirected towards them, because these open up entire areas for progress. Currently, to obtain funding, good professors work on mundane problems. A move to fundamental research will require bringing in competent people to run the funding agencies—a critical requirement being a solid appreciation of scientific epistemology, so that they can separate the wheat from the chaff. In addition, the government should reduce the percentage of grant money that university administrations skim off the top as “indirect costs,” so that research money goes to research. This will have the side benefit of cutting administrative interference.
The inertia in our research laboratories can be broken if the current management is replaced by scientists of the highest caliber. Given the decline in our engineering capacity over the last fifty years, it may be hard to find such leadership, but it exists and can be found if the necessary effort is put forth. Perhaps Bell Laboratories cannot be recreated, but with sufficient investment and a vision to do great things, government labs can recruit the needed talent, both American and foreign, by offering a research environment emulating Bell Laboratories. Such an environment would also allow the many good people currently in the labs to pursue serious work.
The computer game playing—data mining, vacuous simulations, and superficially dazzling visualizations—needs to be dropped in order for there to be a return to real science. The gains from 1919 through 1969 dwarf those of the next half century (and came at a fraction of the cost). Only sheer ignorance, aided by the faddish (and equally ignorant) notion that all knowledge is socially constructed, could lead to the rejection of the scientific techniques that achieved extraordinary successes only a few decades ago.
Yet it is precisely this ignorance that makes corrective action difficult to imagine. It is not mainly that our elected leaders are ignorant of science, though it would certainly help if they had some rudimentary knowledge. The salient problem is the agency staffs and the people administering government research. How many have studied random processes, or even read David Hume’s Enquiry concerning Human Understanding (1748), which used to be a basic requirement for any person to be considered educated, whether scientist, politician, historian, or academic? Absent such a basic education, how can one begin to appreciate the policy implications of “a physical reality” whose understanding, in the words of Hannah Arendt, “seems to demand . . . a radical elimination of all anthropomorphic elements and principles, as they arise either from the world given to the five senses or from the categories inherent in the human mind?” Restorative actions will only be recognized by those with good educations—and here we come full circle to the woeful condition of our universities.
This article originally appeared in American Affairs Volume III, Number 1 (Spring 2019): 113–26.
Notes
2. E. R. Dougherty and M. L. Bittner, Epistemology of the Cell: A Systems Perspective on Biological Knowledge (New York: Wiley-IEEE, 2011).