The Dayside is the blog of Charles Day, Physics Today’s online editor. His short essays range all over the physics landscape and beyond.
This essay by Charles Day first appeared on page 88 of the January/February 2012 issue of Computing in Science & Engineering, a bimonthly magazine published jointly by the American Institute of Physics and IEEE Computer Society:
My title comes from a comment made on Physics Today’s Facebook page by Fernanda Foertter, a physicist who programs high-performance computers for a biotechnology company.
Although Foertter’s computational science background lies mostly in molecular dynamics simulations of polymers, her comment was about this post I wrote on colliding galaxies:
Here’s a great example of using computer simulation to help interpret observations. Jennifer Lotz of Space Telescope Science Institute and her colleagues modeled pairs of galaxies merging into each other. Stills from her movies were then compared with Hubble images of galaxies that looked as though they had just merged or were about to merge. The comparison yielded a new, more accurate estimate of the galaxy merger rate.
Until I encountered Foertter’s enthusiastic outburst, I hadn’t thought of supercomputers as being inspirational. As a science writer, I’ve seen plenty of stunning simulations of exploding supernovae, wiggling proteins, and other phenomena. I’ve written about climate calculations that gobbled up weeks of supercomputer time. Several Nobel Prizes, I know, have been awarded for work that required the services of high-performance computers.
But now I’ve come to realize that supercomputers are not just useful, they’re glamorous, too. What’s more, their awesome power could be used to encourage schoolchildren to think about careers in computational science.
To see what I mean, consider what is perhaps the most ambitious, most glamorous field of physics: particle physics. When I was in high school, I read Nigel Calder’s The Key to the Universe: A Report on the New Physics (Viking Press, 1977), which I found in my local library. There within its pages, in accessible prose accompanied by photos and diagrams, was the quest to discover the ultimate constituents of matter and the laws that govern their behavior.
Back in 1977, the world’s most powerful particle accelerator was Fermilab’s Main Ring, whose circumference and maximum beam energy were 6.4 km and 400 gigaelectronvolts. The current record holder, CERN’s Large Hadron Collider, is 27 km in circumference and is designed to reach 7 teraelectronvolts per beam. When the LHC ended its latest science run in October, it had smashed together 7 × 10¹⁴ protons.
To me, supercomputing—or high-performance computing, if you prefer—is the particle physics of computational science. The world’s fastest computer, K, consumes 10 megawatts of electricity to carry out 8 × 10¹⁵ floating-point operations per second. The problems that K and other supercomputers are programmed to tackle are among the toughest and most important in all of science, such as understanding Earth’s changing climate and figuring out how 10¹¹ interconnected neurons form a thinking human brain.
As I write this column, Supercomputing 2011 is being held at the Washington State Convention Center in Seattle. I was glad to see that the meeting’s education track has 19 talks altogether, including one entitled “Parallel: HPC Overview” by Charlie Peck and his colleagues.
Attending a lecture or class is still work to a student, no matter how interesting the topic. But reading a captivating book is play, and therefore more likely to fire a student’s imagination. I’ve just looked on Amazon for an inspiring book on supercomputing. I couldn’t find one.
Two recent newspaper articles reminded me of the importance of clarity when writing about complex topics. In “Our feel-good war on breast cancer,” which was the cover article of last week’s New York Times Magazine, Peggy Orenstein tackled the question of whether campaigns to raise awareness of breast cancer and urge women to have mammograms do more harm than good.
Orenstein’s reporting of the question’s medical, social, and economic aspects is impressive, as are her fluid narrative and engaging style. She also succeeds in clearly conveying the tricky topic of how risk is assessed and described. Five-year survival rate, I learned, is a potentially misleading statistic.
But to me, what makes her article admirably distinctive is her account of her own experiences with breast cancer. Even though she benefited from the early detection of a tumor, she does not advocate universal early screening. Quite the opposite. Her final paragraph reads:
It has been four decades since the former first lady Betty Ford went public with her breast-cancer diagnosis, shattering the stigma of the disease. It has been three decades since the founding of Komen. Two decades since the introduction of the pink ribbon. Yet all that well-meaning awareness has ultimately made women less conscious of the facts: obscuring the limits of screening, conflating risk with disease, compromising our decisions about health care, celebrating “cancer survivors” who may have never required treating. And ultimately, it has come at the expense of those whose lives are most at risk.
The other reminder of clarity’s importance came in the form of an editorial in Tuesday’s Washington Post. Under the title, “EPA speaks on how much radiation is too much,” the newspaper’s editorial board opined on a proposal, released on 15 April by the US Environmental Protection Agency, to update the agency’s guide to emergency services in the event of a nuclear accident or attack.
The Post’s editorial board duly weighed activists’ objections to the proposal, yet found in favor of the EPA—but with this sting in the tail:
The activists are right, though, about one thing: The document is a confusing bore. If the EPA wants city, county and state officials to pay attention—if it wants to make the case for practicality over the activists’ hyperbole—the agency ought to rewrite the guidelines in plain English.
My first encounter with the controversy surrounding radiation protection guidelines arose when I was assigned to edit Zbigniew Jaworowski’s article “Radiation risk and ethics,” which appeared in Physics Today’s September 1999 issue. The article amounted to a long, multifaceted argument against the assumption that any radiation dose, no matter how small, could cause cancer.
The article was easy to edit. Jaworowski had organized it deftly and made his points directly, supporting them with well-chosen evidence. I was gratified to see that it spawned 12 letters to the editor, which were split between the April and May 2000 issues. Whether they agreed with Jaworowski or not, the letter writers had evidently understood his arguments.
Of course, scientists should strive to be clear even when they’re not engaged in controversy. And they should be especially clear when they propose a revolutionary new theory or experimental result.
One of my favorite examples of a clear, bold proposal is the paper that launched the field of chaos theory: Edward Lorenz’s “Deterministic nonperiodic flow,” which appeared in the March 1963 issue of the Journal of Atmospheric Sciences. Here’s a sample of Lorenz’s style from the paper’s introduction:
Lack of periodicity is very common in natural systems, and is one of the distinguishing features of turbulent flow. Because instantaneous turbulent flow patterns are so irregular, attention is often confined to the statistics of turbulence, which, in contrast to the details of turbulence, often behave in a regular well-organized manner. The short-range weather forecaster, however, is forced willy-nilly to predict the details of the large-scale turbulent eddies—the cyclones and anticyclones—which continually arrange themselves into new patterns. Thus there are occasions when more than the statistics of irregular flow are of very real concern.
Although you might get bogged down in the main, technical section of the paper, the entire introduction is accessible. And if that extract has whetted your appetite for more clarity about chaos, I recommend Adilson Motter and David Campbell’s May 2013 Physics Today article, “Chaos at fifty,” which celebrates the half century of research that Lorenz’s paper begat.
My first professional encounter with the Monte Carlo method came not during my long-abandoned career as an astronomer, when I might have used the computational technique, but years later when I ran Physics Today’s Search and Discovery department.
In 2004, I faced the task of describing a new Monte Carlo algorithm. Devised by Erik Luijten (while taking a shower, he told me), the new algorithm could do what the standard one, the Metropolis algorithm, couldn’t: efficiently simulate a colloid whose suspended particles had widely different sizes.
Suspecting that some of my readers might be unfamiliar with Metropolis, I included a short tutorial. I pointed out that using an alternative, more direct simulation method—molecular dynamics (MD)—was impractical: It’s possible to calculate the forces acting on all the colloid’s particles, but only for a modest number of consecutive time steps. The movie-like simulation that MD produces would be too brief to provide physical insight.
But the Metropolis algorithm, I told my readers, doesn’t follow every particle all the time. Rather, it calculates snapshots of the system and uses statistical mechanics to combine them. Comparing the two methods, I wrote:
So, if MD is like a movie, the Metropolis algorithm is like a sparse set of shuffled snapshots. If you simulated a cocktail party with the Metropolis algorithm, you wouldn’t see dynamical events, such as guests arriving and departing, or rare events, such as a waiter refilling a punchbowl. But, taken together, the Metropolis snapshots would fairly represent the party in full swing. From them, you could deduce whether, on average, people had enjoyed themselves.
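To make the contrast concrete, here’s a minimal sketch of Metropolis sampling for a toy one-dimensional system. The potential and parameters are invented for illustration; this is the textbook algorithm, not Luijten’s cluster variant.

```python
import math
import random

def metropolis_snapshots(energy, x0=0.0, beta=1.0, step=0.5,
                         n_steps=100_000, sample_every=100):
    """Sample 'snapshots' of a 1D system with the Metropolis algorithm.

    energy: potential energy function E(x); beta: inverse temperature.
    Returns positions distributed according to exp(-beta * E(x))."""
    x, e = x0, energy(x0)
    snapshots = []
    for i in range(n_steps):
        trial = x + random.uniform(-step, step)   # propose a random move
        e_trial = energy(trial)
        # Accept with probability min(1, exp(-beta * (E_trial - E)))
        if e_trial <= e or random.random() < math.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial
        if i % sample_every == 0:
            snapshots.append(x)                   # a sparse, shuffled 'snapshot'
    return snapshots

# Toy double-well potential: averaging over snapshots gives equilibrium
# properties without ever simulating the true particle dynamics.
random.seed(0)
samples = metropolis_snapshots(lambda x: (x**2 - 1.0)**2, beta=3.0)
print(sum(samples) / len(samples))   # near 0, by the symmetry of the well
```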
My latest brush with Monte Carlo happened last week. Looking for research to write about, I came across a paper by Luis Zamora and his colleagues entitled “A Monte Carlo tool to study the mortality reduction due to breast screening programs.”
Screening for breast cancer is difficult and controversial. It’s difficult because the principal method, x-ray mammography, cannot by itself determine whether a lesion is malignant. Because of that limitation, follow-up biopsies are essential, but most lesions—roughly 4 in 5—turn out to be benign.
Controversy surrounds the question of when to start screening. Not only is the disease harder to detect in young women, it’s also less prevalent. Definitive evidence in favor of screening women aged between 40 and 49 years is lacking. Yet doctors—who treat individuals, not populations—are reluctant to tell patients under 50 that they don’t need a mammogram yet. Why take even a small risk?
The tool that Zamora and his colleagues have built simulates the fate of a population of women who enter a screening program. You can adjust the program’s age range and participation rate. Clinically derived parameters, such as the probability of detecting a tumor, are incorporated into the tool.
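Their tool is built on clinical detail, but a stripped-down Monte Carlo in the same spirit shows how such a simulation hangs together. Every parameter below is invented for illustration, not taken from the paper.

```python
import random

def simulate(n_women=100_000, start=50, stop=70, interval=2,
             participation=1.0, p_incidence=0.002,
             sensitivity=0.85, mort_early=0.10, mort_late=0.35):
    """Toy Monte Carlo of a breast-screening program. Returns the
    number of breast-cancer deaths in the simulated cohort.
    All rates here are made up; a real tool uses clinically
    derived values."""
    deaths = 0
    for _ in range(n_women):
        attends = random.random() < participation
        for age in range(start, stop + 1):
            if random.random() < p_incidence:     # a tumor appears this year
                screened = attends and (age - start) % interval == 0
                caught_early = screened and random.random() < sensitivity
                if random.random() < (mort_early if caught_early else mort_late):
                    deaths += 1
                break                             # at most one cancer per woman
    return deaths

random.seed(1)
no_program = simulate(participation=0.0)
full_uptake = simulate(participation=1.0)
print(f"toy mortality reduction: "
      f"{100 * (no_program - full_uptake) / no_program:.0f}%")
```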
Zamora and his colleagues present their results in graphs and tables, which are hard to summarize in a short column. They predict, for example, that breast cancer mortality can be reduced by 29% if 100% of women aged 50–70 are screened every two years.
But they did discover what appears to be a critical parameter. For a screening program to be effective, its participation rate must be at least 50%. In the US, where 16.3% of the population lacks health insurance, that target is unfortunately ambitious.
This essay by Charles Day first appeared on page 88 of the March/April 2013 issue of Computing in Science & Engineering, a bimonthly magazine published jointly by the American Institute of Physics and IEEE Computer Society.
The names of the five stars closest to the Sun exemplify how confusing (or historically rich) astronomical nomenclature can be.
Proxima Centauri is the closest star. The second and third closest form a binary and are known collectively as α Centauri (or Rigil Kentaurus) and individually as α Centauri A and α Centauri B. The fourth closest is Barnard’s Star. The fifth is WISE 1049−5319, which is also known as Luhman 16.
Just how the five stars got their names depends, in part, on when they were first observed. Alpha Centauri is the third brightest star in the night sky. As such, it has been named by several cultures. Its Chinese name, 南門 (Nán Mén), means “Southern Gate.” Arab astronomers called it رجل القنطورس (Rijl Qanṭūris), “Centaur’s Foot.”
The name α Centauri originates in the first systematic stellar naming convention, which was devised in 1603 by the Bavarian astronomer Johann Bayer. “Centauri” (“of Centaurus”) indicates the constellation that the star belongs to; the Greek letter indicates the star’s brightness rank in the constellation.
The other stars in the top five are not visible to the naked eye and weren’t, therefore, cataloged by Bayer, whose naming convention predated the invention of the telescope by five years. Proxima Centauri and Barnard’s Star are both red dwarfs. Proxima was named by the astronomer who discovered it in 1915, Robert Innes. Barnard’s Star was named after Edward Barnard, who was the first to measure the star’s velocity across the sky in 1916.
Remarkably, the discovery of WISE 1049−5319 was published on arXiv just last month. Kevin Luhman of Penn State University and his collaborators identified the star—which is, in fact, a brown dwarf binary—in observations made by NASA’s Wide-field Infrared Survey Explorer spacecraft. The numbers after the spacecraft’s abbreviated name are the binary’s celestial coordinates: a right ascension of 10 h 49 m and a declination of −53° 19′.
Names like WISE 1049−5319 are now the norm in astronomy. Space-based and ground-based observatories whose sensitivity exceeds that of their predecessors tend to find many new objects. Although you can’t tell from its name that WISE 1049−5319 is a brown dwarf binary, you can presume from the WISE part of the name that it’s an IR source and that it’s faint (because if it were bright, it would have been discovered and named earlier). Using the coordinates to identify sources might seem long-winded compared to a serial number, but the coordinates are astronomically meaningful, whereas a serial number wouldn’t be.
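The designation is mechanical enough to decode with a few lines of code. Here’s an illustrative snippet that unpacks a truncated name like WISE 1049−5319 into its approximate right ascension and declination:

```python
import re

def decode_designation(name):
    """Unpack a truncated survey designation such as 'WISE 1049-5319'
    into the survey name, an approximate right ascension (hours,
    minutes), and an approximate declination (degrees, arcminutes)."""
    survey, coords = name.split()
    hh, mm, sign, dd, am = re.fullmatch(
        r"(\d{2})(\d{2})([+-])(\d{2})(\d{2})", coords).groups()
    return survey, f"{hh}h {mm}m", f"{sign}{dd}\N{DEGREE SIGN} {am}\N{PRIME}"

print(decode_designation("WISE 1049-5319"))
# ('WISE', '10h 49m', '-53° 19′')
```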
Rakhat, α Centauri Bb, or both?
I was reminded of the quirkiness of astronomical names earlier this week, when I read a news story in New Scientist entitled “Closest exoplanet sparks international naming fight.”
The dispute pits Uwingu, a startup company whose mission is to fund projects that inform the public about space science, against the International Astronomical Union, the world’s official arbiter of astronomical nomenclature.
By IAU-sanctioned convention, exoplanets are named after the stars they orbit, with the addition of a lower-case letter: “a” designates the star; “b,” the first planet discovered; “c,” the second; and so on. Officially, the closest exoplanet to the Sun is called α Centauri Bb, but if you paid Uwingu $4.99, you could suggest a name. And if you paid $0.99, you could vote on the suggestions. Currently, the leading name for α Centauri Bb is Rakhat, which is what Mary Doria Russell called a planet that orbits the star in her 1996 science fiction novel, The Sparrow.
I think it’s great that a real planet is named after a fictional one. Granted, Rakhat conveys less astronomical information than α Centauri Bb does, but I don’t see why the two names can’t coexist. I doubt anyone would be confused.
What’s more, the IAU’s exoplanet naming convention, though simple and straightforward, does not necessarily yield neat, rational names. That’s because the stars that harbor exoplanets, like the five stars closest to the Sun, follow a mix of naming conventions. Exoplanet examples include 51 Pegasi b (the first one discovered), KIC 12557548b (the evaporating exoplanet), and HD 85512 b (a super-Earth).
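To see how simple the IAU rule is, and how the underlying star names keep its output from being uniform, here’s a toy illustration:

```python
def planet_designation(star, discovery_order):
    """IAU-style designation: 'b' for the first planet discovered
    around a star, 'c' for the second, and so on."""
    return f"{star} {chr(ord('b') + discovery_order - 1)}"

print(planet_designation("51 Pegasi", 1))    # 51 Pegasi b
print(planet_designation("HD 85512", 1))     # HD 85512 b
# Even the spacing isn't uniform in practice: published names run the
# letter into the star name for KIC 12557548b and α Centauri Bb.
```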
Despite astronomy’s modest technological payoffs, the general public continues to fund astronomical research—thanks, in part, to the time and energy astronomers devote to engaging the public. By giving people the opportunity to name exoplanets, Uwingu is making them partners in a scientific enterprise. The IAU should support, not fight, such deep public engagement.
“If your experiment needs statistics, you ought to have done a better experiment,” Ernest Rutherford once declared. But when you work at the frontier of detection, as astronomers and particle physicists often do, you rely on statistical analysis to extract results. Indeed, if your experiment doesn’t need statistics, then you might be too far from the frontier to make an important discovery.
Despite such statistical triumphs as last year’s discovery of the Higgs boson, Rutherford’s disdain for—or at least suspicion of—statistics remains widespread. A recent statistical analysis demonstrated that visiting your doctor every year for a checkup doesn’t significantly prolong life. Of course, the practice doesn’t harm any individual patient, but its prevalence in the US raises the total cost of medical care, which harms society. Will the study make a difference? I doubt it.
I’m not sure what evidence would convince physicians to refrain from insisting on annual checkups, but they and anyone else who is skeptical of statistical analysis might be persuaded by a simmering scandal that boiled over recently in Atlanta, Georgia.
On 29 March the superintendent of the Atlanta school district, Beverly Hall, and 34 other educators were indicted in what a New York Times news story characterized as “the most widespread public school cheating scandal in memory.”
According to the indictment, the 35 educators conspired to raise students’ test scores by altering the tests after the students had taken them. Meeting in secret and wearing gloves to avoid leaving incriminating fingerprints, groups of teachers at various schools rubbed out wrong answers and replaced them with the correct ones.
Besides acclaim for appearing to fix badly performing schools, the conspirators also received cash bonuses. Hall’s totaled $500 000, according to the Times. One school, Parks Middle School, “improved” so much that it forfeited $750 000 in state and federal aid.
To gather evidence of a conspiracy that might convince a jury, the Georgia state investigator Richard Hyde persuaded one of the teachers who was allegedly part of the scheme to wear a secret recording device. But evidence of a different kind had come to light five years earlier. In December 2008, the Atlanta Journal-Constitution drew attention to what seemed like suspiciously large and abrupt jumps in test scores. That initial investigation expanded into a five-year project in which three reporters and two database specialists gathered and analyzed test scores from 69 000 schools in 14 743 districts in 49 states.
The scores from Atlanta and a few other districts stuck out as anomalous. As reported last June, some of those school districts are taking advantage of the Atlanta Journal-Constitution study to identify cheating educators.
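The statistical core of that kind of investigation is simple: flag year-over-year changes in a school’s average score that lie far outside the spread of changes seen everywhere else. Here’s a toy version with invented scores; the newspaper’s actual analysis was much more careful.

```python
from statistics import mean, stdev

def flag_suspicious_jumps(scores_by_school, threshold=2.0):
    """Flag year-over-year jumps in school-average scores that are
    outliers relative to the spread of jumps across all schools.
    (A toy version of the idea only.)"""
    jumps = {school: [b - a for a, b in zip(scores, scores[1:])]
             for school, scores in scores_by_school.items()}
    pooled = [j for js in jumps.values() for j in js]
    mu, sigma = mean(pooled), stdev(pooled)
    return [(school, year + 1, jump)
            for school, js in jumps.items()
            for year, jump in enumerate(js)
            if abs(jump - mu) > threshold * sigma]

data = {  # invented yearly mean scores
    "School A": [61, 62, 60, 63],
    "School B": [58, 59, 83, 60],   # abrupt rise, then fall
    "School C": [70, 71, 69, 72],
    "School D": [65, 64, 66, 65],
}
print(flag_suspicious_jumps(data))  # flags School B's two big swings
```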
Organized crime and electoral fraud
Similar statistical investigations can be found on the arXiv e-print server. Last month two physicists, Salvatore Catanese and Giacomo Fiumara, and the mathematician Emilio Ferrara, all from the University of Messina in Sicily, demonstrated that they could pick out organized criminal activity from cell phone records by looking for statistically anomalous behavior.
My favorite example—because it’s so similar to the Atlanta cheating scandal—was the study posted last year by Dmitry Kobak of the electrical and electronic engineering department of Imperial College London and two unaffiliated coauthors, Sergey Shpilkin and Maxim Pshenichnikov. Here’s the abstract:
Here we perform a statistical analysis of the official data from recent Russian parliamentary and presidential elections (held on December 4th, 2011 and March 4th, 2012, respectively). A number of anomalies are identified that persistently skew the results in favour of the pro-government party, United Russia (UR), and its leader Vladimir Putin. The main irregularities are: (i) remarkably high correlation between turnout and voting results; (ii) a large number of polling stations where the UR/Putin results are given by a round number of percent; (iii) constituencies showing improbably low or (iv) anomalously high dispersion of results across polling stations; (v) substantial difference between results at paper-based and electronic polling stations. These anomalies, albeit less prominent in the presidential elections, hardly conform to the assumptions of fair and free voting. The approaches proposed here can be readily extended to quantify fingerprints of electoral fraud in any other problematic elections.
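The first anomaly on that list is easy to reproduce in miniature: at fair polling stations, turnout and a candidate’s vote share should be essentially uncorrelated, while ballot stuffing raises both together. A sketch with synthetic stations (all numbers invented):

```python
import random
from statistics import correlation   # requires Python 3.10+

def station(stuffed=False):
    """One synthetic polling station: (turnout, winner's share).
    Fair stations draw the two independently; stuffing a station
    adds extra ballots that all go to one side, raising both."""
    turnout = random.uniform(0.4, 0.7)
    share = random.gauss(0.45, 0.05)
    if stuffed:
        extra = random.uniform(0.1, 0.3)
        share = (share * turnout + extra) / (turnout + extra)
        turnout += extra
    return turnout, share

random.seed(0)
fair = [station() for _ in range(5000)]
mixed = [station(stuffed=random.random() < 0.2) for _ in range(5000)]

for label, stations in (("all fair", fair), ("20% stuffed", mixed)):
    turnouts, shares = zip(*stations)
    print(label, round(correlation(turnouts, shares), 3))
# Fair: correlation near 0. Stuffed: clearly positive, as in anomaly (i).
```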
As for Rutherford, I remain puzzled by his attitude toward statistics. The famous experiment that Hans Geiger and Ernest Marsden performed in 1909 at the University of Manchester under his direction revealed the existence of the atomic nucleus—after Geiger and Marsden had laboriously tallied the rare backward reflections of alpha particles from gold foil.
ScienceDaily is aptly named. The popular website has been posting copious news about science since its foundation 18 years ago. And I do mean “copious.” On 2 April, for instance, I counted 95 news items!
Given that ScienceDaily’s staff page lists just two people, founder Dan Hogan and his wife Michele Hogan, the productivity seems remarkable—until you realize that all those stories, at least the ones I checked, are repackaged press releases from elsewhere.
As far as I can tell, the repackaging is minimal. Earlier this week, I posted a link on Physics Today’s Facebook page to a Fraunhofer press release about a truck-mounted laser that can scan roads while the truck drives at highway speeds. The ScienceDaily version lacks the original’s figure, but the text is identical.
Further evidence of ScienceDaily’s light editorial touch comes from a search for the British spellings “metre” and “litre.” As an American news outlet, ScienceDaily can be expected to swap the spellings for the American variants—if it did more than simply cut and paste the original British English press releases, that is.
ScienceDaily does not hide what it does. At the end of each story you’ll find a short description of the source, a note about editing, advice on citing the story, and a disclaimer. Here’s what’s appended to the piece about the truck-mounted laser scanner:
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Need to cite this story in your essay, paper, or report? Use one of the following formats:
- MLA Fraunhofer-Gesellschaft (2013, April 2). Surveying roads at 100 km/h. ScienceDaily. Retrieved April 3, 2013, from http://www.sciencedaily.com/releases/2013/04/130402091250.htm
Note: If no author is given, the source is cited instead.
Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.
Whether ScienceDaily’s behavior is unethical is not clear-cut. On the one hand, the website links to the original press release and to the institution that issued it. On the other hand, disclaiming the views in the article while recommending that ScienceDaily’s version of the story be cited rather than the original comes across as a bid for the benefits of publication without the concomitant editorial responsibility.
But does it matter that ScienceDaily reproduces press releases? Could the practice even be good for the promotion of science?
Most, if not all, of the science press releases I encounter are well written and accurate. And although some of them sound overly enthusiastic, they tend not to exaggerate or misrepresent the implications of the research. Some press releases are better than the stories they prompt, perhaps because the people who write them spend more time talking to researchers to get the science right than some reporters might.
There’s another reason to tolerate, if not welcome, what ScienceDaily and similar websites do. To quote the website’s advertising page,
ScienceDaily’s Web site traffic averages about 45,000 daily visits, generating in excess of 150,000 page views a day, or a total of roughly 1.3 million visits / 4.5 million page views a month.
That’s a lot of people reading informative, professionally produced content about science.
Twelve years ago I edited a feature article for Physics Today entitled “So you want to be a professor!” Having recently landed a tenure-track job at San Diego State University, the article’s author, Matt Anderson, wanted to share his job-hunting experiences.
Among the advice that Anderson offered was this paragraph on what to wear to a campus interview:
When trying to decide what to wear for the interview, it is probably better to err on the side of being a little overdressed. For men, I recommend a comfortable suit and tie. The female candidates I conferred with generally wore suits—either a skirt-suit or pantsuit—stockings, and low heels. Although physicists generally dress casually, I urge you to look sharp. It is better to stand out a little because, after all, you are the candidate and people should know it! If you’re still uncertain, a good idea is to observe what the well-respected scientists wear to conferences. They generally dress in a style known as “business professional.” For my interviews I brought two outfits: a suit for the day of the colloquium, and a shirt and tie combo for the other day. Also, wear comfortable shoes! You will be on your feet for two days straight.
Fashion and levels of sartorial formality haven’t changed significantly since Anderson wrote his article. Indeed, modish men’s suits continue to follow the slim silhouette that Hedi Slimane introduced in 2001, soon after he joined Christian Dior to become the fashion house’s creative director for menswear. Women’s clothes also fit more closely than they did in the 1990s, when Giorgio Armani’s soft, loose style predominated.
Whether we like it or not, the fashions of London, Milan, Paris, and New York do influence our expectations of what it means to be well dressed. In a recent post to the blog Marketing for Scientists, Marc Kuchner asked image consultant Kasey Smith the question, How is a scientist supposed to dress? Her principal advice: Your clothes should not be baggy.
You could take your clothes to a tailor shop, or when you buy new clothes have them tailored to fit you. Men know this already. Men’s clothes come with the hems not even in there. They know that they have to mark the hems. Women just think that clothes should fit them off the rack, but that’s not true either. Just like men have to do these alterations, so do women.
Of course, interviewing for a job and giving a talk—two occasions when one might dress up—are not what most scientists do most of the time. Nevertheless, says Smith, even casual clothes should look neat and presentable.
I rather like the typical indifference of physicists to their clothing. We wear what we like when we like. What matters is our work, not our appearance. On the other hand, given that physics is one of the highest expressions of human civilization, and given that our collective image helps to attract (or repel) young people, we should perhaps pay attention to Smith’s advice.
In January 1800 Thomas Young wrote to the secretary of the Royal Society, Edward Whitaker Gray, to outline his recent “experiments and inquiries respecting sound and light,” as he titled the letter.
Among the findings that Young reported was the generic deflection of fluids near a boundary. Just as a stream of air blowing over water will raise a dimple, he wrote, a candle’s flame will be drawn toward a stream of air. In both cases, and in others, proximity to a surface reduces the local pressure, leading to a net force on the stream.
The effect is not named after Young, however. That honor was bestowed by the aeronautical engineer Theodore von Kármán on his fellow engineer and near contemporary, Henri Coandă. Born in 1886 in Bucharest, Romania, Coandă started off as an artillery officer in the Romanian army. But after six years of military service, during which he was able to pursue his passion for aeronautics, he left Romania for Paris in 1909 to enroll at the École Nationale Supérieure d’Ingénieurs de Construction Aéronautique.
Coandă’s first job after graduation was with Gianni Caproni’s aircraft factory in Milan, Italy. There, he designed and built an aircraft, the Coandă-1910, that made use of his namesake effect. Depicted on the two stamps above, the Coandă-1910 resembled other aircraft of the time, but with a crucial difference: Instead of spinning a propeller, its four-cylinder piston engine powered a fan that expelled air back along the fuselage. Whether the Coandă-1910 ever flew or was even capable of flight has not been established.
Coandă’s effect has had a more successful life. Its natural manifestations include the diversion of air streams over mountainous terrain and the squirting of blood from the heart’s left ventricle into the left atrium during a disorder known as mitral regurgitation. Among the effect’s practical applications are boundary-layer control systems, which blow engine exhaust over wings to boost lift at low air speeds, and HVAC diffusers, which extend the reach of cooled or heated air.
My favorite application of the Coandă effect is the Avro Canada VZ-9 Avrocar, a flying car developed in the 1950s for the US Air Force and US Army. The VZ-9 looked more like a flying saucer than a car with wings. A central, upward-facing intake drew air into a centrifugal compressor, which directed the flow into three jet engines that were arranged symmetrically around the outside of the compressor like the fireworks in a Catherine wheel. Ducts directed the exhaust down (for lift) and out (for control).
As you can tell from the video, the VZ-9 could actually fly, albeit in a somewhat wobbly way. It flew at a maximum speed of 119 mph, but never lifted more than a few feet off the ground. The pitching evident in the video was never eliminated. The project was canceled in December 1961.
By contrast, the Coandă effect remains in good health. A search on Google Scholar turns up several recent papers that invoke it.
I also found a paper posted last year to the arXiv e-print server by Teresa López-Arias of the University of Trento in Italy. Having discovered Young’s letter to Gray, she posed the question of whether the Coandă effect should really be called the Young effect.
“Let us always keep before our mind’s eye an overheated and glowing stove and inside a naked man, supine, who will never be released from such pain. Does not his pain appear unbearable to us for even a single moment?”
Thus wrote the 15th-century theologian and mystic Denis the Carthusian in his tract about the Last Judgment, De quatuor hominis novissimis. When I encountered the passage in Johan Huizinga’s The Waning of the Middle Ages (1919), another, more recent book came to mind: Iain M. Banks’s science fiction novel, Surface Detail (2010).
The novel’s action takes place in our galaxy in AD 2970. By then, technology has reached the point that a person’s consciousness can be recorded and inserted into virtual, simulated worlds—including hells of such fiendishly imaginative gruesomeness that I’ll refrain from quoting a description. Some of the galaxy’s species support the hells as an effective means to discourage bad behavior; others decry them as a moral outrage. To settle the hells’ disputed existence, the various interested species have agreed to abide by the outcome of a vast simulated war game.
Virtual, simulated worlds have been featured in science fiction for some time. My first encounter with them—and perhaps yours, too—was in William Gibson’s Neuromancer (1984). The novel’s complex, thrilling plot involves two powerful and resourceful artificial intelligences and a cast of drug-addicted hackers, former special operations soldiers, plutocratic industrialists, and cyberpunk ninjas.
Gibson favored a mostly metaphorical description of computed reality. In Permutation City (1994), Greg Egan delves into the philosophical questions of simulated afterlives in more technical detail. Presciently, in Egan’s near-future world, computing power is available in abundance via the cloud. With such resources, Paul Durham, a computer scientist and entrepreneur, proposes to create a self-sustaining virtual world where scanned consciousnesses can live for eternity.
In reality, though, how likely is the prospect of scanning a consciousness and uploading it into a virtual world? Human brains contain 10¹¹ neurons that form 10¹⁵ interconnections. Storing a static map of something that big isn’t beyond current technology. CERN has already amassed 2 × 10¹⁷ bytes of data from the Large Hadron Collider.
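A back-of-envelope calculation, under the loud assumption that a synapse can be recorded as two neuron labels plus a weight, makes the comparison explicit:

```python
import math

neurons = 1e11
synapses = 1e15
id_bits = math.ceil(math.log2(neurons))     # ~37 bits to label one neuron
bytes_per_synapse = 2 * id_bits / 8 + 2     # two endpoints + 2-byte weight (assumed)

map_size = synapses * bytes_per_synapse     # ~1.1e16 bytes
lhc_data = 2e17                             # bytes CERN has amassed from the LHC

print(f"{bytes_per_synapse:.1f} bytes per synapse (assumed)")
print(f"static connectome map: ~{map_size / 1e15:.0f} petabytes")
print(f"the LHC dataset is ~{lhc_data / map_size:.0f} times larger")
```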
The bigger technological challenge, I think, lies in generating the map in the first place. Conceivably, neuroscientists could discover a modest set of principles that embody how our brains are networked, obviating the task of mapping individual neurons. But if they can’t, every neuron and synapse would have to be located. Super-resolution techniques such as stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) can already map fluorescently tagged molecules with a spatial resolution of a few tens of nanometers, but only—so far—in samples just a few micrometers thick.
Although it’s not physically impossible, like faster-than-light travel, or physically impractical, like Star Trek–style teleportation, brain mapping remains scientifically out of reach, but comfortably within the realm of science fiction. As for Denis the Carthusian, he reported making mental excursions into purgatory, during which he received revelations and conversed with souls. That experience is not unlike a Neuromancer hacker “jacking into” cyberspace and meeting avatars.
This essay by Charles Day first appeared on page 104 of the January/February 2013 issue of Computing in Science & Engineering, a bimonthly magazine published jointly by the American Institute of Physics and IEEE Computer Society.
When I ran Physics Today’s Search and Discovery department, I’d interview about 10 physicists a month. Most interviews took place on the phone, but occasionally I met physicists in their offices.
I prefer in-person interviews. Besides the face-to-face interaction, there’s the opportunity for interviewees to move over to their blackboards and write down an equation or draw a diagram. I remember with gratitude Victor Yakovenko explaining Majorana quasiparticles to me at the University of Maryland and Michael Hesse explaining magnetohydrodynamic reconnection at NASA’s Goddard Space Flight Center.
Recently, I’ve come to realize that those in-person interviews resembled the tutorials I received as an undergraduate at Imperial College London. Once a week, students in groups of three would meet their assigned tutor—invariably a member of the faculty—in his or her office. There, they’d discuss the answers to a problem sheet that one of the lecturers had set. And, if time allowed, they’d talk about other physics topics.
The problems were meant to test students’ understanding of the courses. Students had a few days to work on the answers. If they couldn’t solve the problems, the tutor would explain what they hadn’t grasped—by moving to the blackboard and writing down equations or drawing diagrams.
Learning physics entails understanding concepts far more than it entails memorizing facts. Because one student’s path to understanding may be different from another’s, the personal guidance that the traditional tutorial provides is ideal for teaching physics. I was glad to hear from Steve Blau, one of my Physics Today colleagues, that the tutorial method—or something close to it—was practiced at the small liberal arts college that he attended, Haverford, and the one where he taught physics, Ripon.
But it’s not the norm at big US universities. There, introductory courses are so large that mobilizing a department’s entire faculty would not yield tutorial sessions of one professor and three students. Instead, those big universities rely on graduate students as teaching assistants.
Some online innovations, such as massive open online courses, widen access to lectures. Others, such as learning management systems, improve administrative efficiency. Both outcomes are worthy. But as far as I can tell, no one has devised an online tool that matches small-group instruction in its effectiveness and, indeed, in the pleasure it adds to teaching and learning.
Make them smaller
If large classes impede learning, then one obvious solution is to make them smaller. Hiring more professors would achieve that goal, but at significant expense. A better way, I contend, is to reform the system that creates large classes in the first place: American universities should scale back the number of general education courses they require students to take.
That proposal might seem heretical. After all, employers say they want well-rounded graduates who can not only frame and solve problems, but also express themselves clearly and persuasively—just the sort of employee that a traditional US university education is meant to produce. Exposing nonscience majors to science is also a good thing. Jon Miller, who directs the University of Michigan’s International Center for the Advancement of Scientific Literacy, has found that the principal source of Americans’ scientific literacy is not the popular press, TV, or other media, but the science courses they take in college.
Nevertheless, I think it’s time that US universities reconsider the undergraduate degree. Hong Kong’s government did so. Last year, the territory’s public universities switched from the specialist, British-style three-year degree to the US-style four-year degree—but with a crucial difference. Whereas US undergraduates typically spend half their time taking subjects outside their majors, Hong Kong undergraduates spend one quarter.
My proposal amounts to redistributing students among subjects. Do the numbers work out? Yes, I think so. The University of Delaware, my wife’s alma mater, has a total of 15 747 undergraduates and 1128 full-time faculty. Its 65 academic departments offer 147 majors. In the spirit of the physicist’s spherical cow, consider the case of evenly distributing the students and faculty members among the departments. Each department’s 17 professors would then be responsible for teaching 242 students per calendar year. Each professor could tutor students in groups of four.
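Here is that spherical-cow arithmetic spelled out, using the Delaware numbers quoted above:

```python
undergrads = 15_747
faculty = 1_128
departments = 65

profs_per_dept = faculty // departments                 # 17
students_per_dept = round(undergrads / departments)     # 242
tutees_per_prof = students_per_dept / profs_per_dept    # ~14 students each

print(f"{profs_per_dept} professors per department")
print(f"{students_per_dept} students per department per calendar year")
print(f"{tutees_per_prof:.0f} tutees per professor, "
      f"or about {tutees_per_prof / 4:.0f} weekly tutorial groups of four")
```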
There’s another advantage to reducing general education requirements. Students choose their majors out of interest; presumably they’d prefer to take more in-major courses than the current system permits. My wife did. Professors would benefit, too. They’d no longer face huge classes of students, a large fraction of whom are taking the course because they have to, rather than because they want to.
If my simplistic analysis has missed something or if you disagree with its premises, please leave a comment. My goal in writing this column is to start a debate—not to win one.