Chancellor Cynthia Barnhart announced today that alumnus Mike Massimino SM ’88, PhD ’92 will be the first-ever guest speaker at MIT’s Investiture of Doctoral Hoods, a ceremony for PhD candidates held the day before Commencement.

“We are thrilled that Dr. Massimino has accepted the invitation to speak on June 4,” says Barnhart, who hosts the hooding ceremony. “His words will motivate and inspire our doctoral candidates as they make this significant transition from student life.”

The 2015 Investiture of Doctoral Hoods takes place Thursday, June 4, at 11:30 a.m. in the Johnson Athletics Center Ice Rink. The ceremony is open to family and friends of doctoral candidates; no tickets are required.

Last fall, on the recommendation of Eric Grimson, chancellor for academic advancement and chair of the Commencement Committee, a proposal to invite a guest speaker was brought to that committee for consideration. Consultation with doctoral students and alumni indicated keen interest in a guest speaker, with preference for an MIT PhD who could speak empathetically to candidates as they complete their doctoral studies and begin their professional careers.

Grimson extended his appreciation to all who participated in the pilot speaker selection process: “I’d like to thank the Commencement Committee, the department heads who submitted nominations, and the doctoral candidate working group for their collaboration to enhance the ceremony. We couldn’t be more delighted with the outcome.”

Massimino earned his undergraduate degree from Columbia University, where he is now professor of professional practice at the Fu Foundation School of Engineering and Applied Science; he worked as a systems engineer at IBM before coming to MIT to begin graduate work. His research as an MIT graduate student in mechanical engineering — on human operator control of space robotics systems — ultimately led to two patents.

After receiving his PhD from MIT in 1992, Massimino worked at McDonnell Douglas Aerospace in Houston as a research engineer, during which time he was also an adjunct assistant professor of mechanical engineering and materials science at Rice University. In 1995, he became an assistant professor at Georgia Tech’s School of Industrial and Systems Engineering, where he taught human-machine systems engineering and researched human-machine interfaces for space and aircraft systems.

After joining NASA in 1996, Massimino logged more than 570 hours in space, 30 hours of which were on spacewalks; the focus of his two missions was the servicing of the Hubble Space Telescope. He spoke at MIT in 2011 as part of the Institute’s sesquicentennial celebration, and last fall as part of events marking the centennial of the Department of Aeronautics and Astronautics.

Today, Massimino is known for his efforts to make aerospace engineering accessible to the public; at present, he has 1.31 million Twitter followers and a recurring role (as himself) on the CBS sitcom “The Big Bang Theory.”

“Mike’s service to the country during his nearly 20 years at NASA is a credit to MIT and to mechanical engineers everywhere,” says Gang Chen, the Carl Richard Soderberg Professor in Power Engineering and head of the Department of Mechanical Engineering. “His commitment to public education, combined with a distinguished teaching and technical career, is the very best of ‘mens et manus.’ … It will be a great privilege to welcome Mike back to MIT.”

By News Office

Apollo 11 astronaut Michael Collins was part of the three-person crew that flew on mankind’s first mission to land on the moon, but he was the one who remained in orbit and never got to the lunar surface. In a talk at MIT yesterday as part of a class in the Department of Aeronautics and Astronautics (AeroAstro), Collins said that he probably could have had a chance to walk on the moon after all, had he chosen to remain at NASA after that epochal mission.

“What I gave up probably was the opportunity to be the last person to walk on the moon,” Collins said in response to a question from the audience. Although there was no guarantee, he said, under the rotation system for crew selection at that time, he would likely have been named as commander of the Apollo 17 mission, which turned out to be the last to visit the moon.

Instead, Collins retired from NASA and went on to other things: writing a bestselling book about his experiences, called “Carrying the Fire,” and becoming director of the Smithsonian’s National Air and Space Museum in Washington. After the success of Apollo 11 in fulfilling President John F. Kennedy’s call to send a man to the moon and back before the end of the 1960s, Collins said, “My mindset was, ‘It’s over, we did it.’”

Collins, who also appeared at MIT last year as part of AeroAstro’s 100th anniversary celebration, was back on campus this week to speak to a class on the history of the Apollo program. The course, “Engineering Apollo,” is taught by David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing and a professor in the Department of Aeronautics and Astronautics.

A hero to many

“He’s a hero to many of us who have followed the world of Apollo,” Mindell said in introducing Collins. Among the numerous honors bestowed on Collins, Mindell said, perhaps the ones that would resonate most strongly at MIT were the naming of both an asteroid and a lunar crater after him. Collins’ book, Mindell added, is widely considered the best astronaut book from that era — or maybe ever.

Before flying the Apollo 11 mission, Collins graduated from the United States Military Academy at West Point, joined the Air Force, and became a test pilot, flying a variety of fighter jets. Though he had never anticipated flying into space, when President Dwight D. Eisenhower announced that U.S. astronauts would be selected from among qualified test pilots, Collins realized he was part of a select pool of perhaps 200 people, and decided to apply.

Prior to Eisenhower’s decision, Collins said, there were “a lot of crazy ideas” about how to choose astronauts, including suggestions for selecting people “accustomed to danger, including bullfighters,” or those used to having to breathe through special equipment, such as scuba divers.

Collins’ first mission was on Gemini 10, flying a two-person capsule that he recalled fondly as a “nice little flying machine.” On that mission, he became the fourth human ever to perform an “extra-vehicular activity,” or spacewalk.

The most challenging part of that mission, Collins said, was a first attempt at a rendezvous and docking between two spacecraft in orbit — an essential part of the preparations for the eventual lunar missions. “It was the rendezvous that probably bothered us more than anything,” he said, because of the technical complexity and risks of bringing together two vehicles flying in different orbits at high speed.

To train, the astronauts spent a lot of time in simulators, which faithfully reproduced the spacecraft’s interior and controls. That was a big improvement over preparation as a test pilot, he said, where there often were no simulators for the testing of experimental craft.

“I spent probably 600 hours in one simulator” while preparing for the Gemini mission, Collins said, in addition to many hours in other simulators. Overall, he said, “We were pretty well prepared” for both the Apollo and Gemini flights.

Basketballs and dancing

Asked about MIT’s role in producing the guidance system for the Apollo program, Collins recalled meeting Charles “Doc” Draper, then head of the MIT Instrumentation Lab, who was responsible for that system. “When I think of the instrumentation lab, I think of a basketball and ballroom dancing,” he said.

The “basketball,” he explained, was the heart of the inertial guidance system — a sphere that contained a timer, three gyroscopes, and three accelerometers. “You told it what time it was, and where it was,” he said, and “it knew where you were, and where you were going. That was the heart of Apollo.”

As for the ballroom dancing, he said, that referred to Draper himself — who, despite the hectic preparations for the lunar mission, “would disappear for weekends, and come back with these trophies for ballroom dancing. I thought that was so cool!”
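Collins’ “basketball” was, in essence, an inertial navigation unit: told the time and its starting position, it tracked where the spacecraft was and where it was headed by integrating what its accelerometers sensed, with the gyroscopes keeping those measurements in a fixed reference frame. The toy one-dimensional sketch below, with made-up acceleration samples and nothing like the actual Apollo guidance software, shows the basic dead-reckoning idea:

```python
# Toy one-dimensional dead reckoning: integrate accelerometer samples twice to
# track velocity and position from a known starting state. Illustrative only;
# the real system also used its gyroscopes to resolve accelerations into a
# fixed inertial frame, plus optical star sightings to correct drift.

dt = 1.0                                    # sample interval, seconds (assumed)
accel = [0.0, 2.0, 2.0, 0.0, -1.0, -1.0]    # made-up accelerometer readings, m/s^2

position = 0.0    # "you told it where it was"
velocity = 0.0    # ...and how fast it was going

for a in accel:
    velocity += a * dt          # first integration: acceleration to velocity
    position += velocity * dt   # second integration: velocity to position

print(f"velocity = {velocity:.1f} m/s, position = {position:.1f} m")
```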

Collins’ talk was open to the MIT community. He was asked about contingency planning in case the other two astronauts, Neil Armstrong and Buzz Aldrin, had been unable to return from the moon’s surface — which could have happened due to any number of malfunctions.

In that event, Collins said, “I’d go home,” leaving the others behind. “They knew that, and I knew that, but it’s not something we ever talked about. What’s the point?”

As for the future of the space program, Collins was emphatic: “I think NASA should be renamed NAMA,” he said. “They ought to make [Mars] their one overriding goal and destination.”

By David L. Chandler | MIT News Office

A handful of new stars are born each year in the Milky Way, while many more blink on across the universe. But astronomers have observed that galaxies should be churning out millions more stars, based on the amount of interstellar gas available.

Now researchers from MIT, Columbia University, and Michigan State University have pieced together a theory describing how clusters of galaxies may regulate star formation. They describe their framework this week in the journal Nature.

When intracluster gas cools rapidly, it condenses, then collapses to form new stars. Scientists have long thought that something must be keeping the gas from cooling enough to generate more stars — but exactly what has remained a mystery.

For some galaxy clusters, the researchers say, the intracluster gas may simply be too hot — on the order of hundreds of millions of degrees Celsius. Even if one region starts to cool, heat flowing in from the hotter surrounding gas would keep that region from cooling further — an effect known as conduction.

“It would be like putting an ice cube in a boiling pot of water — the average temperature is pretty much still boiling,” says Michael McDonald, a Hubble Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “At super-high temperatures, conduction smooths out the temperature distribution so you don’t get any of these cold clouds that should form stars.”

For so-called “cool core” galaxy clusters, the gas near the center may be cool enough to form some stars. However, a portion of this cooled gas may rain down into a central black hole, which then spews out hot material that serves to reheat the surroundings, preventing many stars from forming — an effect the team terms “precipitation-driven feedback.”

“Some stars will form, but before it gets too out of hand, the black hole will heat everything back up — it’s like a thermostat for the cluster,” McDonald says. “The combination of conduction and precipitation-driven feedback provides a simple, clear picture of how star formation is governed in galaxy clusters.”

Crossing a galactic threshold

Throughout the universe, there exist two main classes of galaxy clusters: cool core clusters — those that are rapidly cooling and forming stars — and non-cool core clusters — those that have not had sufficient time to cool.

The Coma cluster, a non-cool core cluster, is filled with gas at a scorching 100 million degrees Celsius. To form any stars, this gas would have to cool for several billion years. In contrast, the nearby Perseus cluster is a cool core cluster whose intracluster gas is a relatively mild several million degrees Celsius. New stars occasionally emerge from the cooling of this gas in the Perseus cluster, though not as many as scientists would predict.

“The amount of fuel for star formation outpaces the amount of stars 10 times, so these clusters should be really star-rich,” McDonald says. “You really need some mechanism to prevent gas from cooling, otherwise the universe would have 10 times as many stars.”

McDonald and his colleagues worked out a theoretical framework that relies on two anti-cooling mechanisms.

The group calculated the behavior of intracluster gas based on a galaxy cluster’s radius, mass, density, and temperature. The researchers found that there is a critical temperature threshold below which the cooling of gas accelerates significantly, causing gas to cool rapidly enough to form stars.
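The steepness of that transition can be seen in a rough bremsstrahlung cooling-time estimate. The sketch below is not the group’s calculation; it uses a standard free-free emissivity approximation and assumed, illustrative values of temperature and electron density to show how the cooling time stretches from under a billion years to far longer than the age of the universe:

```python
import math

def cooling_time_gyr(T_kelvin, n_e):
    """Rough bremsstrahlung cooling time for hot, fully ionized cluster gas.

    Uses the standard free-free emissivity approximation
    eps ~ 1.4e-27 * sqrt(T) * n_e * n_i * g  [erg cm^-3 s^-1]
    and ignores line cooling, which only matters below ~10^7 K.
    """
    g = 1.2                        # approximate Gaunt factor
    k_B = 1.38e-16                 # Boltzmann constant, erg/K
    n_i = n_e                      # roughly, for a hydrogen plasma
    emissivity = 1.4e-27 * math.sqrt(T_kelvin) * n_e * n_i * g
    thermal_energy = 1.5 * (n_e + n_i) * k_B * T_kelvin   # erg per cm^3
    return thermal_energy / emissivity / 3.15e16          # seconds -> Gyr

# Assumed illustrative values, not measurements of any particular cluster:
print(f"{cooling_time_gyr(1e8, 1e-3):.0f} Gyr")  # hot, diffuse gas: ~80 Gyr, effectively never cools
print(f"{cooling_time_gyr(1e7, 3e-2):.1f} Gyr")  # cooler, denser core gas: under a billion years
```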

According to the group’s theory, two different mechanisms regulate star formation, depending on whether a galaxy cluster is above or below the temperature threshold. For clusters that are significantly above the threshold, conduction puts a damper on star formation: The surrounding hot gas overwhelms any pockets of cold gas that may form, keeping everything in the cluster at high temperatures.

“For these hotter clusters, they’re stuck in this hot state, and will never cool and form stars,” McDonald says. “Once you get into this very high-temperature regime, cooling is really inefficient, and they’re stuck there forever.”

For gas at temperatures closer to the lower threshold, it’s much easier to cool to form stars. However, in these clusters, precipitation-driven feedback starts to kick in to regulate star formation: While cooling gas can quickly condense into clouds of droplets that can form stars, these droplets can also rain down into a central black hole — in which case the black hole may emit hot jets of material back into the cluster, heating the surrounding gas back up to prevent further stars from forming.

“In the Perseus cluster, we see these jets acting on hot gas, with all these bubbles and ripples and shockwaves,” McDonald says. “Now we have a good sense of what triggered those jets, which was precipitating gas falling onto the black hole.”

On track

McDonald and his colleagues compared their theoretical framework to observations of distant galaxy clusters, and found that their theory matched the observed differences between clusters. The team collected data from the Chandra X-ray Observatory and the South Pole Telescope — an observatory in Antarctica that searches for far-off massive galaxy clusters.

The researchers compared their theoretical framework with the gas cooling times of every known galaxy cluster, and found that clusters fell into two populations: those that cool very slowly, and those that cool rapidly, at close to the critical rate the group predicted.

By using the theoretical framework, McDonald says researchers may be able to predict the evolution of galaxy clusters, and the stars they produce.

“We’ve built a track that clusters follow,” McDonald says. “The nice, simple thing about this framework is that you’re stuck in one of two modes, for a very long time, until something very catastrophic bumps you out, like a head-on collision with another cluster.”

The researchers hope to look deeper into the theory to see whether the mechanisms regulating star formation in clusters also apply to individual galaxies. Preliminary evidence, McDonald says, suggests that is the case.

“If we can use all this information to understand why or why not stars form around us, then we’ve made a big step forward,” McDonald says.

“[These results] look very promising,” says Paul Nulsen, an astronomer at the Harvard-Smithsonian Center for Astrophysics who was not involved in this research. “More work will be needed to show conclusively that precipitation is the main source of the gas that powers feedback. Other processes in the feedback cycle also need to be understood. For example, there is still no consensus on how the gas falling into a massive black hole produces energetic jets, or how they inhibit cooling in the remaining gas. This is not the end of the story, but it is an important insight into a problem that has proved a lot more difficult than anyone ever anticipated.”

This research was funded in part by the National Science Foundation and NASA.

By Jennifer Chu | MIT News Office

Meteorologists sometimes struggle to accurately predict the weather here on Earth, but now we can find out how cloudy it is on planets outside our solar system, thanks to researchers at MIT.

In a paper to be published in the Astrophysical Journal, researchers in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS) at MIT describe a technique that analyzes data from NASA’s Kepler space observatory to determine the types of clouds on planets that orbit other stars, known as exoplanets.

The team, led by Kerri Cahoy, an assistant professor of aeronautics and astronautics at MIT, has already used the method to determine the properties of clouds on the exoplanet Kepler-7b. The planet is known as a “hot Jupiter,” as temperatures in its atmosphere hover at around 1,700 kelvins.           

NASA’s Kepler spacecraft was designed to search for Earth-like planets orbiting other stars. It was pointed at a fixed patch of space, constantly monitoring the brightness of 145,000 stars. An orbiting exoplanet crossing in front of one of these stars causes a temporary dimming of this brightness, allowing researchers to detect its presence.
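To first order, the size of that dip is set by simple geometry: the fraction of the star’s disk the planet covers. A quick sketch, using standard transit arithmetic rather than anything from the Kepler pipeline, shows why a giant planet is far easier to spot than an Earth-sized one:

```python
def transit_depth(planet_radius_km, star_radius_km):
    """Fractional dip in starlight when the planet crosses the star's disk."""
    return (planet_radius_km / star_radius_km) ** 2

R_SUN = 695_700      # km
R_JUPITER = 69_911   # km
R_EARTH = 6_371      # km

# A Jupiter-sized planet blocks about 1 percent of a Sun-like star's light...
print(f"{transit_depth(R_JUPITER, R_SUN):.4%}")   # ~1.01%
# ...while an Earth-sized planet blocks less than a hundredth of a percent.
print(f"{transit_depth(R_EARTH, R_SUN):.4%}")     # ~0.0084%
```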

Researchers have previously shown that by studying the variations in the amount of light coming from these star systems as a planet transits, or crosses in front of or behind them, they can detect the presence of clouds in that planet’s atmosphere. That is because particles within the clouds will scatter different wavelengths of light.

Modeling cloud formation

To find out if this data could be used to determine the composition of these clouds, the MIT researchers studied the light signal from Kepler-7b. They used models of the temperature and pressure of the planet’s atmosphere to determine how different types of clouds would form within it, says lead author Matthew Webber, a graduate student in Cahoy’s group at MIT.

“We then used those cloud models to determine how light would reflect off the atmosphere of the planet [for each type of cloud], and tried to match these possibilities to the actual observations from the Kepler mission itself,” Webber says. “So we ran a large set of models, to see which models fit best statistically to the observations.”
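The fitting step Webber describes amounts to forward-modeling many candidate atmospheres and scoring each against the observed light curve. The sketch below illustrates that general recipe with a toy grid search and a chi-square statistic; the model, parameters, and “data” here are invented for illustration and are not the team’s actual models or Kepler measurements:

```python
import numpy as np

# Toy "observed" reflected-light phase curve (invented values and error bars).
rng = np.random.default_rng(0)
phase = np.linspace(0.0, 1.0, 20)      # orbital phase
observed = 0.6 * np.cos(2 * np.pi * (phase - 0.1)) + rng.normal(0.0, 0.05, phase.size)
sigma = 0.05                           # assumed per-point uncertainty

def model_curve(phase, albedo, cloud_offset):
    """Hypothetical reflected-light model: brightness set by cloud reflectivity
    (albedo) and shifted by where the clouds sit in longitude (cloud_offset)."""
    return albedo * np.cos(2 * np.pi * (phase - cloud_offset))

# Grid search: score every candidate model against the data with chi-square.
best = None
for albedo in np.linspace(0.1, 1.0, 19):
    for offset in np.linspace(-0.25, 0.25, 21):
        resid = (observed - model_curve(phase, albedo, offset)) / sigma
        chi2 = float(np.sum(resid**2))
        if best is None or chi2 < best[0]:
            best = (chi2, albedo, offset)

print(f"best fit: chi2 = {best[0]:.1f}, albedo = {best[1]:.2f}, offset = {best[2]:.3f}")
```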

By working backward in this way, they were able to match the Kepler spacecraft data to a type of cloud made out of vaporized silicates and magnesium. The extremely high temperatures in the Kepler-7b atmosphere mean that some minerals that commonly exist as rocks on Earth’s surface instead exist as vapors high up in the planet’s atmosphere. These mineral vapors form small cloud particles as they cool and condense.

Kepler-7b is a tidally locked planet, meaning it always shows the same face to its star — just as the moon does to Earth. As a result, around half of the planet’s day side — the side that constantly faces the star — is covered by these magnesium silicate clouds, the team found.

“We are really doing nothing more complicated than putting a telescope into space and staring at a star with a camera,” Cahoy says. “Then we can use what we know about the universe, in terms of temperatures and pressures, how things mix, how they stratify in an atmosphere, to try to figure out what mix of things would be causing the observations that we’re seeing from these very basic instruments,” she says.

A clue on exoplanet atmospheres

Understanding the properties of the clouds on Kepler-7b, such as their mineral composition and average particle size, tells us a lot about the underlying physical nature of the planet’s atmosphere, says team member Nikole Lewis, a postdoc in EAPS. What’s more, the method could be used to study the properties of clouds on different types of planet, Lewis says: “It’s one of the few methods out there that can help you determine if a planet even has an atmosphere, for example.”

A planet’s cloud coverage and composition also has a significant impact on how much of the energy from its star it will reflect, which in turn affects its climate and ultimately its habitability, Lewis says. “So right now we are looking at these big gas-giant planets because they give us a stronger signal,” she says. “But the same methodology could be applied to smaller planets, to help us determine if a planet is habitable or not.”

The researchers hope to use the method to analyze data from NASA’s follow-up to the Kepler mission, known as K2, which began studying different patches of space last June. They also hope to use it on data from MIT’s planned Transiting Exoplanet Survey Satellite (TESS) mission, says Cahoy.

“TESS is the follow-up to Kepler, led by principal investigator George Ricker, a senior research scientist in the MIT Kavli Institute for Astrophysics and Space Research. It will essentially be taking similar measurements to Kepler, but of different types of stars,” Cahoy says. “Kepler was tasked with staring at one group of stars, but there are a lot of stars, and TESS is going to be sampling the brightest stars across the whole sky,” she says.

This paper is the first to take circulation models including clouds and compare them with the observed distribution of clouds on Kepler-7b, says Heather Knutson, an assistant professor of planetary science at Caltech who was not involved in the research.

“Their models indicate that the clouds on this planet are most likely made from liquid rock,” Knutson says. “This may sound exotic, but this planet is a roasting hot gas-giant planet orbiting very close to its host star, and we should expect that it might look quite different than our own Jupiter.”

By Helen Knight | MIT News correspondent

On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.

NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.

Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.

“These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”

The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

Catching a shockwave in the act

Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.

The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.

Dealing a “sledgehammer blow”

Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.
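A quick special-relativity check, textbook arithmetic rather than part of the study’s analysis, shows why electrons at these energies qualify as ultrarelativistic: with a rest energy of only 0.511 MeV, a 3 to 4 MeV electron is moving at roughly 99 percent of the speed of light.

```python
import math

M_E_C2 = 0.511   # electron rest energy, MeV

def speed_fraction(kinetic_mev):
    """Speed as a fraction of c for an electron with the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / M_E_C2        # total energy over rest energy
    return math.sqrt(1.0 - 1.0 / gamma**2)    # beta = v/c

for ke in (3.0, 4.0):
    print(f"{ke:.0f} MeV electron: v = {speed_fraction(ke):.4f} c")
# Roughly 0.989c and 0.994c; a tenfold jump in the population of such
# electrons is what the probes recorded in the minute after the pulse passed.
```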

Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

“This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”

Barry Mauk, a project scientist at Johns Hopkins University’s Applied Physics Laboratory, views the group’s findings as “the most comprehensive analysis of shock-induced acceleration within Earth’s space environment ever achieved.”

“Significant shock-induced acceleration of Earth’s radiation belts occur only occasionally, but these events are important because they have the potential of suddenly generating the most intense and energetic electrons, and therefore the most dangerous conditions for astronauts and satellites,” says Mauk, who did not contribute to the study. “Earth’s space environment serves as a wonderful laboratory for studying the nature of shock acceleration that is occurring elsewhere in the solar system and universe.”

By Jennifer Chu | MIT News Office

Meteorites that have crashed to Earth have long been regarded as relics of the early solar system. These craggy chunks of metal and rock are studded with chondrules — tiny, glassy, spherical grains that were once molten droplets. Scientists have thought that chondrules represent early kernels of terrestrial planets: As the solar system started to coalesce, these molten droplets collided with bits of gas and dust to form larger planetary precursors.

However, researchers at MIT and Purdue University have now found that chondrules may have played less of a fundamental role. Based on computer simulations, the group concludes that chondrules were not building blocks, but rather byproducts of a violent and messy planetary process.

The team found that bodies as large as the moon likely existed well before chondrules came on the scene. In fact, the researchers found that chondrules were most likely created by the collision of such moon-sized planetary embryos: These bodies smashed together with such violent force that they melted a fraction of their material, and shot a molten plume out into the solar nebula. Residual droplets would eventually cool to form chondrules, which in turn attached to larger bodies — some of which would eventually impact Earth, to be preserved as meteorites.

Brandon Johnson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, says the findings revise one of the earliest chapters of the solar system.

“This tells us that meteorites aren’t actually representative of the material that formed planets — they’re these smaller fractions of material that are the byproduct of planet formation,” Johnson says. “But it also tells us the early solar system was more violent than we expected: You had these massive sprays of molten material getting ejected out from these really big impacts. It’s an extreme process.”

Johnson and his colleagues, including Maria Zuber, the E.A. Griswold Professor of Geophysics and MIT’s vice president for research, have published their results this week in the journal Nature.

High-velocity molten rock

To get a better sense of the role of chondrules in a fledgling solar system, the researchers first simulated collisions between protoplanets — rocky bodies between the size of an asteroid and the moon. The team modeled all the different types of impacts that might occur in an early solar system, including their location, timing, size, and velocity. They found that bodies the size of the moon formed relatively quickly, within the first 10,000 years, before chondrules were thought to have appeared.

Johnson then used another model to determine the type of collision that could melt and eject molten material. From these simulations, he determined that a collision at a velocity of 2.5 kilometers per second would be forceful enough to produce a plume of melt that is ejected out into space — a phenomenon known as impact jetting.

“Once the two bodies collide, a very small amount of material is shocked up to high temperature, to the point where it can melt,” Johnson says. “Then this really hot material shoots out from the collision point.”

The team then estimated the number of impact-jetting collisions that likely occurred in a solar system’s first 5 million years — the period of time during which it’s believed that chondrules first appeared. From these results, Johnson and his team found that such collisions would have produced enough chondrules in the asteroid belt region to explain the number that have been detected in meteorites today.

Falling into place

To go a step further, the researchers ran a third simulation to calculate chondrules’ cooling rate. Previous experiments in the lab have shown that chondrules cool down at a rate of 10 to 1,000 kelvins per hour — a rate that would produce the texture of chondrules seen in meteorites. Johnson and his colleagues used a radiative transfer model to simulate the impact conditions required to produce such a cooling rate. They found that bodies colliding at 2.5 kilometers per second would indeed produce molten droplets that, ejected into space, would cool at 10 to 1,000 kelvins per hour.
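That 10 to 1,000 kelvins per hour is remarkably slow for millimeter-scale droplets, which is why a radiative transfer treatment of the whole ejected plume matters: a droplet radiating alone into empty space would shed heat millions of times faster. The sketch below makes that point with a simple Stefan-Boltzmann estimate using assumed, typical chondrule properties; it is an illustration of the mismatch, not the authors’ calculation:

```python
import math

# Assumed, typical values for a molten silicate droplet (illustrative only).
radius = 0.5e-3          # m, roughly a 1 mm diameter chondrule
density = 3300.0         # kg/m^3
specific_heat = 1000.0   # J/(kg K)
temperature = 1800.0     # K, roughly molten silicate
emissivity = 0.9
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

mass = (4.0 / 3.0) * math.pi * radius**3 * density
area = 4.0 * math.pi * radius**2
radiated_power = emissivity * SIGMA * area * temperature**4   # watts

cooling_rate_k_per_hr = radiated_power / (mass * specific_heat) * 3600.0
print(f"isolated droplet cools at ~{cooling_rate_k_per_hr:,.0f} K/hr")
# Several million K/hr, versus the 10-1,000 K/hr recorded in meteorite textures:
# consistent with droplets cooling inside a hot, dense ejecta plume, not in isolation.
```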

“Then I had this ‘Eureka!’ moment where I realized that jetting during these really big impacts could possibly explain the formation of chondrules,” Johnson says. “It all fell into place.”

Going forward, Johnson plans to look into the effects of other types of impacts. The group has so far modeled vertical impacts — bodies colliding straight-on. Johnson predicts that oblique impacts, or collisions occurring at an angle, may be even more efficient at producing molten plumes of chondrules. He also hopes to explore what happens to chondrules once they are launched into the solar nebula.

“Chondrules were long viewed as planetary building blocks,” Zuber notes. “It’s ironic that they now appear to be the remnants of early protoplanetary collisions.”

Fred Ciesla, an associate professor of planetary science at the University of Chicago, says the findings may reclassify chondrites, a class of meteorites that are thought to be examples of the original material from which planets formed.

“This would be a major shift in how people think about our solar system,” says Ciesla, who did not contribute to the research. “If this finding is correct, then it would suggest that chondrites are not good analogs for the building blocks of the Earth and other planets. Meteorites as a whole are still important clues about what processes occurred during the formation of the solar system, but which ones are the best analogs for what the planets were made out of would change.”

This research was funded in part by NASA.

By Jennifer Chu | MIT News Office

Life on an aquaplanet

April 27, 2015

Nearly 2,000 planets beyond our solar system have been identified to date. Whether any of these exoplanets are hospitable to life depends on a number of criteria. Among these, scientists have thought, is a planet’s obliquity — the angle of its axis relative to its orbit around a star.

Earth, for instance, has a relatively low obliquity, rotating around an axis that is nearly perpendicular to the plane of its orbit around the sun. Scientists suspect, however, that exoplanets may exhibit a host of obliquities, resembling anything from a vertical spinning top to a horizontal rotisserie. The more extreme the tilt, the less habitable a planet may be — or so the thinking has gone.

Now scientists at MIT have found that even a high-obliquity planet, with a nearly horizontal axis, could potentially support life, so long as the planet were completely covered by an ocean. In fact, even a shallow ocean, about 50 meters deep, would be enough to keep such a planet at relatively comfortable temperatures, averaging around 60 degrees Fahrenheit year-round.  

David Ferreira, a former research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), says that on the face of it, a planet with high obliquity would appear rather extreme: Tilted on its side, its north pole would experience daylight continuously for six months, and then darkness for six months, as the planet revolves around its star.

“The expectation was that such a planet would not be habitable: It would basically boil, and freeze, which would be really tough for life,” says Ferreira, who is now a lecturer at the University of Reading, in the United Kingdom. “We found that the ocean stores heat during summer and gives it back in winter, so the climate is still pretty mild, even in the heart of the cold polar night. So in the search for habitable exoplanets, we’re saying, don’t discount high-obliquity ones as unsuitable for life.”

Details of the group’s analysis are published in the journal Icarus. The paper’s co-authors are Ferreira; Sara Seager, the Class of 1941 Professor in EAPS and MIT’s Department of Physics; John Marshall, the Cecil and Ida Green Professor in Earth and Planetary Sciences; and Paul O’Gorman, an associate professor in EAPS.

Tilting toward a habitable exoplanet

Ferreira and his colleagues used a model developed at MIT to simulate a high-obliquity “aquaplanet” — an Earth-sized planet, at a similar distance from its sun, covered entirely in water. The three-dimensional model is designed to simulate circulations among the atmosphere, ocean, and sea ice, taking into account the effects of winds and heat in driving a 3,000-meter-deep ocean. For comparison, the researchers also coupled the atmospheric model with simplified, motionless “swamp” oceans of various depths: 200 meters, 50 meters, and 10 meters.

The researchers used the detailed model to simulate a planet at three obliquities: 23 degrees (representing an Earth-like tilt), 54 degrees, and 90 degrees.

For a planet with an extreme, 90-degree tilt, they found that a global ocean — even one as shallow as 50 meters — would absorb enough solar energy throughout the polar summer and release it back into the atmosphere in winter to maintain a rather mild climate. As a result, the planet as a whole would experience spring-like temperatures year round.

“We were expecting that if you put an ocean on the planet, it might be a bit more habitable, but not to this point,” Ferreira says. “It’s really surprising that the temperatures at the poles are still habitable.”
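The moderating power of even a shallow ocean comes down to heat capacity. As a back-of-the-envelope check, with a seasonal flux imbalance that is an assumed, illustrative number rather than one from the study, a 50-meter column of water warms or cools by only about ten kelvins over a six-month polar season:

```python
# Rough seasonal temperature swing of a 50-meter ocean mixed layer.
# The net seasonal flux imbalance is an assumed, illustrative figure.

rho_water = 1000.0                 # kg/m^3
c_p_water = 4186.0                 # J/(kg K)
depth = 50.0                       # m
heat_per_area_per_K = rho_water * c_p_water * depth    # ~2.1e8 J/(m^2 K)

net_flux = 150.0                   # W/m^2, assumed summer surplus (winter deficit)
half_year = 0.5 * 365.25 * 86400.0 # seconds in one polar season

delta_T = net_flux * half_year / heat_per_area_per_K
print(f"seasonal swing: ~{delta_T:.0f} K")   # about 11 K for these assumptions
```

A bare rocky surface, with far less heat storage, could swing by hundreds of kelvins under the same forcing, which is the boil-and-freeze scenario Ferreira describes.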

A runaway “snowball Earth”

In general, the team observed that life could thrive on a highly tilted aquaplanet, but only to a point. In simulations with a shallower ocean, Ferreira found that waters 10 meters deep would not be sufficient to regulate a high-obliquity planet’s climate. Instead, the planet would experience a runaway effect: As soon as a bit of ice forms, it would quickly spread across the dark side of the planet. Even when this side turns toward the sun, according to Ferreira, it would be too late: Massive ice sheets would reflect the sun’s rays, allowing the ice to spread further into the newly darkened side, and eventually encase the planet.

“Some people have thought that a planet with a very large obliquity could have ice just around the equator, and the poles would be warm,” Ferreira says. “But we find that there is no intermediate state. If there’s too little ocean, the planet may collapse into a snowball. Then it wouldn’t be habitable, obviously.”
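The all-or-nothing behavior Ferreira describes is the classic ice-albedo feedback: ice reflects sunlight, which cools the planet, which grows more ice. A zero-dimensional toy energy balance, with assumed albedo and greenhouse values and none of the ocean or circulation physics in the MIT model, is enough to show the two stable end states and the absence of a stable middle ground:

```python
# Toy zero-dimensional energy-balance model with ice-albedo feedback.
# Not the MIT 3-D aquaplanet model: just a sketch of why there is no stable
# in-between state once ice starts to spread. All parameter values are assumed.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
S_OVER_4 = 340.0       # globally averaged incoming sunlight, W/m^2
EMISSIVITY = 0.61      # crude stand-in for greenhouse trapping

def albedo(T):
    """An ice-covered planet reflects far more sunlight than open ocean."""
    return 0.6 if T < 273.0 else 0.3

def equilibrate(T, steps=100):
    for _ in range(steps):
        absorbed = S_OVER_4 * (1.0 - albedo(T))
        T = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25   # relax toward balance
    return T

print(f"start warm (300 K) -> {equilibrate(300.0):.0f} K")   # stays temperate
print(f"start icy  (250 K) -> {equilibrate(250.0):.0f} K")   # locked in a snowball
```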

Darren Williams, a professor of physics and astronomy at Pennsylvania State University, says past climate modeling has shown that a wide range of climate scenarios are possible on extremely tilted planets, depending on the sizes of their oceans and landmasses. Ferreira’s results, he says, reach similar conclusions, but with more detail.

“There are one or two terrestrial-sized exoplanets out of a thousand that appear to have densities comparable to water, so the probability of an all-water planet is at least 0.1 percent,” Williams says. “The upshot of all this is that exoplanets at high obliquity are not necessarily devoid of life, and are therefore just as interesting and important to the astrobiology community.”

By Jennifer Chu | MIT News Office

When the Apollo astronauts returned to Earth, they brought with them some souvenirs: rocks, pebbles, and dust from the moon’s surface. These lunar samples have since been analyzed for clues to the moon’s past. One outstanding question has been whether the moon was once a complex, layered, and differentiated body, like the Earth is today, or an unheated relic of the early solar system, like most asteroids.

Ben Weiss, a professor of planetary sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences, and members of his laboratory have found remnants of magnetization in some lunar rocks, suggesting that the moon once emitted a substantial magnetic field, much like the Earth does today. The discovery has opened a new set of questions: How long did this magnetic field last? How strong was its pull? And what sparked and sustained it?

Weiss and former MIT student Sonia Tikoo have written a review, published today in Science, in which they explore the possibility of a lunar dynamo — a molten, churning core at the center of the moon that may have powered an intense magnetic field for at least 1 billion years. Weiss spoke with MIT News about the moon’s hidden history.

Q. How would a lunar dynamo have worked? What might have been going on in the moon, and in the solar system, to sustain this dynamo for a billion years?

A. Planetary dynamos are generated by the process of induction, in which the energy of turbulent, conducting fluids is transformed into a magnetic field. Magnetic fields are one of the very few outward manifestations of the extremely energetic fluid motions that can occur in advecting planetary cores. 

The motion of Earth’s liquid core is powered by the cooling of the planet, which stirs up buoyant fluid from the surrounding liquid — similar to what happens in a lava lamp. We have recently argued from magnetic studies of Apollo samples that the moon also generated a dynamo in its molten metal core.

Our data suggest that, despite the moon’s tiny size — only 1 percent of the Earth’s mass — its dynamo was surprisingly intense (stronger than Earth’s field today) and long-lived, persisting from at least 4.2 billion years ago until at least 3.56 billion years ago. This period, which overlaps the early epoch of intense solar system-wide meteoroid bombardment and coincides with the oldest known records of life on Earth, comes just before our earliest evidence of the Earth’s dynamo.

Q. Why is it so surprising that a lunar dynamo may have been so intense and long-lived? 

A. Both the strong intensity and long duration of lunar fields are surprising because of the moon’s small size. Convection, which is thought to power all known dynamos in the solar system today, is predicted to produce surface magnetic fields on the moon at least 10 times weaker than what we observe recorded in ancient lunar rocks.

Nevertheless, a convective dynamo powered by crystallization of an inner core could potentially sustain a lunar magnetic field for billions of years. An exotic dynamo mechanism that could explain the moon’s strong field intensity is that the core was stirred by motion of the solid overlying mantle, analogous to a blender. The moon’s mantle was moving because its spin axis is precessing, or wobbling, and such motion was more vigorous billions of years ago, when the moon was closer to the Earth. Such mechanical dynamos are not known for any other planetary body, making the moon a fascinating natural physics laboratory. 

Q. What questions will the next phase of lunar dynamo research seek to address?

A. We know that the moon’s field declined precipitously between 3.56 billion years ago and 3.3 billion years ago, but we still do not know when the dynamo actually ceased. Establishing this will be a key goal of the next phase of lunar magnetic studies.

We also do not know the absolute direction of the lunar field, since all of our samples were unoriented rocks from the regolith — the fragmental layer produced by impacts on the lunar surface. If we could find a sample whose original orientation is known, we could determine the absolute direction of the lunar field relative to the planetary surface. This transformative measurement would then allow us to test ideas that the moon’s spin pole wandered in time across the planetary surface, possibly due to large impacts.

By Jennifer Chu | MIT News Office

Losing air

April 27, 2015

Today’s atmosphere likely bears little trace of its primordial self: Geochemical evidence suggests that Earth’s atmosphere may have been completely obliterated at least twice since its formation more than 4 billion years ago. However, it’s unclear what interplanetary forces could have driven such a dramatic loss. 

Now researchers at MIT, Hebrew University, and Caltech have landed on a likely scenario: A relentless blitz of small space rocks, or planetesimals, may have bombarded Earth around the time the moon was formed, kicking up clouds of gas with enough force to permanently eject small portions of the atmosphere into space. 

Tens of thousands of such small impacts, the researchers calculate, could efficiently jettison Earth’s entire primordial atmosphere. Such impacts may have also blasted other planets, and even peeled away the atmospheres of Venus and Mars. 

In fact, the researchers found that small planetesimals may be much more effective than giant impactors in driving atmospheric loss. Based on their calculations, it would take a single giant impact, by a body nearly as massive as Earth itself, to disperse most of the atmosphere. But taken together, many small impacts would have the same effect, at a tiny fraction of the mass.

Hilke Schlichting, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences, says understanding the drivers of Earth’s ancient atmosphere may help scientists to identify the early planetary conditions that encouraged life to form. 

“[This finding] sets a very different initial condition for what the early Earth’s atmosphere was most likely like,” Schlichting says. “It gives us a new starting point for trying to understand what was the composition of the atmosphere, and what were the conditions for developing life.”

Schlichting and her colleagues have published their results in the journal Icarus.

Efficient ejection

The group examined how much atmosphere was retained and lost following impacts with giant, Mars-sized and larger bodies and with smaller impactors measuring 25 kilometers or less — space rocks equivalent to those whizzing around the asteroid belt today. 

The team performed numerical analyses, calculating the force generated by a given impacting mass at a certain velocity, and the resulting loss of atmospheric gases. A collision with an impactor as massive as Mars, the researchers found, would generate a shockwave through the Earth’s interior, setting off significant ground motion — similar to simultaneous giant earthquakes around the planet. The force of that motion would ripple out into the atmosphere, potentially ejecting a significant fraction, if not all, of the planet’s atmosphere.

However, if such a giant collision occurred, it should also melt everything within the planet, turning its interior into a homogenous slurry. Given the diversity of noble gases like helium-3 deep inside the Earth today, the researchers concluded that it is unlikely that such a giant, core-melting impact occurred. 

Instead, the team calculated the effects of much smaller impactors on Earth’s atmosphere. Such space rocks, upon impact, would generate an explosion of sorts, releasing a plume of debris and gas. The largest of these impactors would be forceful enough to eject all gas from the atmosphere immediately above the impact’s tangent plane — the plane that grazes the planet’s surface at the impact point. Only a fraction of this atmosphere would be lost following smaller impacts.

To completely eject all of Earth’s atmosphere, the team estimated, the planet would need to have been bombarded by tens of thousands of small impactors — a scenario that likely did occur 4.5 billion years ago, during a time when the moon was formed. This period was one of galactic chaos, as hundreds of thousands of space rocks whirled around the solar system, frequently colliding to form the planets, the moon, and other bodies. 
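A simple geometric estimate shows why the count has to be so large. For an atmosphere whose scale height is tiny compared with the planet’s radius, the gas sitting above any one impact’s tangent plane is only about H/(2R) of the total, so even a perfectly efficient small impact removes a sliver. The sketch below uses Earth-like numbers as an order-of-magnitude check under those assumptions:

```python
# Fraction of an isothermal atmosphere lying above the tangent plane at one
# impact point: integrating the exponentially thinning columns over the plane
# gives roughly H / (2 R) when the scale height H is much less than the radius R.

scale_height_km = 8.0      # Earth-like atmospheric scale height (assumed)
radius_km = 6371.0         # Earth's radius

fraction_per_impact = scale_height_km / (2.0 * radius_km)
impacts_needed = 1.0 / fraction_per_impact

print(f"fraction ejected per ideal impact: {fraction_per_impact:.1e}")    # ~6e-4
print(f"minimum impacts to strip the atmosphere: ~{impacts_needed:,.0f}") # ~1,600
# Most impactors eject only part of that cap of gas, so the required count
# climbs into the tens of thousands, in line with the team's estimate.
```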

“For sure, we did have all these smaller impactors back then,” Schlichting says. “One small impact cannot get rid of most of the atmosphere, but collectively, they’re much more efficient than giant impacts, and could easily eject all the Earth’s atmosphere.” 

Runaway effect

However, Schlichting realized that the sum effect of small impacts may be too efficient at driving atmospheric loss. Other scientists have measured the atmospheric composition of Earth compared with Venus and Mars. These measurements have revealed that while each planetary atmosphere has similar patterns of noble gas abundance, the budget for Venus is similar to that of chondrites — stony meteorites that are primordial leftovers of the early solar system. Compared with Venus, Earth’s noble gas budget has been depleted 100-fold. 

Schlichting realized that if both planets were exposed to the same blitz of small impactors, Venus’ atmosphere should have been similarly depleted. She and her colleagues went back over the small-impactor scenario, examining the effects of atmospheric loss in more detail, to try and account for the difference between the two planets’ atmospheres. 

Based on further calculations, the team identified an interesting effect: Once half a planet’s atmosphere has been lost, it becomes much easier for small impactors to eject the rest of the gas. The researchers calculated that Venus’ atmosphere would only have to start out slightly more massive than Earth’s in order for small impactors to erode the first half of the Earth’s atmosphere, while keeping Venus’ intact. From that point, Schlichting describes the phenomenon as a “runaway process — once you manage to get rid of the first half, the second half is even easier.”

Time zero

During the course of the group’s research, an inevitable question arose: What eventually replaced Earth’s atmosphere? Upon further calculations, Schlichting and her team found the same impactors that ejected gas also may have introduced new gases, or volatiles. 

“When an impact happens, it melts the planetesimal, and its volatiles can go into the atmosphere,” Schlichting says. “They not only can deplete, but replenish part of the atmosphere.”

The group calculated the amount of volatiles that may be released by a rock of a given composition and mass, and found that a significant portion of the atmosphere may have been replenished by the impact of tens of thousands of space rocks. 
 
“Our numbers are realistic, given what we know about the volatile content of the different rocks we have,” Schlichting notes.

Jay Melosh, a professor of earth, atmospheric, and planetary sciences at Purdue University, says Schlichting’s conclusion is a surprising one, as most scientists have assumed the Earth’s atmosphere was obliterated by a single, giant impact. Other theories, he says, invoke a strong flux of ultraviolet radiation from the sun, as well as an “unusually active solar wind.” 

“How the Earth lost its primordial atmosphere has been a longstanding problem, and this paper goes a long way toward solving this enigma,” says Melosh, who did not contribute to the research. “Life got started on Earth about this time, and so answering the question about how the atmosphere was lost tells us about what might have kicked off the origin of life.”

Going forward, Schlichting hopes to examine more closely the conditions underlying Earth’s early formation, including the interplay between the release of volatiles from small impactors and from Earth’s ancient magma ocean. 

“We want to connect these geophysical processes to determine what was the most likely composition of the atmosphere at time zero, when the Earth just formed, and hopefully identify conditions for the evolution of life,” Schlichting says.

By Jennifer Chu | MIT News Office

Plasma shield

April 27, 2015

High above Earth’s atmosphere, electrons whiz past at close to the speed of light. Such ultrarelativistic electrons, which make up the outer band of the Van Allen radiation belt, can streak around the planet in a mere five minutes, bombarding anything in their path. Exposure to such high-energy radiation can wreak havoc on satellite electronics, and pose serious health risks to astronauts.

Now researchers at MIT, the University of Colorado, and elsewhere have found there’s a hard limit to how close ultrarelativistic electrons can get to the Earth. The team found that no matter where these electrons are circling around the planet’s equator, they can get no closer than about 11,000 kilometers to the Earth’s surface — despite their intense energy.

What’s keeping this high-energy radiation at bay seems to be neither the Earth’s magnetic field nor long-range radio waves, but rather a phenomenon termed “plasmaspheric hiss” — very low-frequency electromagnetic waves in the Earth’s upper atmosphere that, when played through a speaker, resemble static, or white noise.

Based on their data and calculations, the researchers believe that plasmaspheric hiss essentially deflects incoming electrons, causing them to collide with neutral gas atoms in the Earth’s upper atmosphere, and ultimately disappear. This natural, impenetrable barrier appears to be extremely rigid, keeping high-energy electrons from coming any closer than about 2.8 Earth radii — or 11,000 kilometers from the Earth’s surface.

“It’s a very unusual, extraordinary, and pronounced phenomenon,” says John Foster, associate director of MIT’s Haystack Observatory. “What this tells us is if you parked a satellite or an orbiting space station with humans just inside this impenetrable barrier, you would expect them to have much longer lifetimes. That’s a good thing to know.”

Foster and his colleagues, including lead author Daniel Baker of the University of Colorado, have published their results this week in the journal Nature.

Shields up

The team’s results are based on data collected by NASA’s Van Allen Probes — twin crafts that are orbiting within the harsh environments of the Van Allen radiation belts. Each probe is designed to withstand constant radiation bombardment in order to measure the behavior of high-energy electrons in space.

The researchers analyzed the first 20 months of data returned by the probes, and observed an “exceedingly sharp” barrier against ultrarelativistic electrons. This barrier held steady even against a solar wind shock, which drove electrons toward the Earth in a “step-like fashion” in October 2013. Even under such stellar pressure, the barrier kept electrons from penetrating further than 11,000 kilometers above Earth’s surface.

To determine the phenomenon behind the barrier, the researchers considered a few possibilities, including effects from the Earth’s magnetic field and transmissions from ground-based radios.

For the former, the team focused in particular on the South Atlantic Anomaly — a feature of the Earth’s magnetic field, just over South America, where the magnetic field strength is about 30 percent weaker than in any other region. If incoming electrons were affected by the Earth’s magnetic field, Foster reasoned, the South Atlantic Anomaly would act like a “hole in the path of their motion,” allowing them to fall deeper into the Earth’s atmosphere. Judging from the Van Allen probes’ data, however, the electrons kept their distance of 11,000 kilometers, even beyond the effects of the South Atlantic Anomaly — proof that the Earth’s magnetic field had little effect on the barrier.

Foster also considered the effect of long-range, very-low-frequency (VLF) radio transmissions, which others have proposed may cause significant loss of relatively high-energy electrons. Although VLF transmissions can leak into the upper atmosphere, the researchers found that such radio waves would only affect electrons with moderate energy levels, with little or no effect on ultrarelativistic electrons.

Instead, the group found that the natural barrier may be due to a balance between the electrons’ slow, earthward motion, and plasmaspheric hiss. This conclusion was based on the Van Allen probes’ measurement of electrons’ pitch angle — the degree to which an electron’s motion is parallel or perpendicular to the Earth’s magnetic field. The researchers found that plasmaspheric hiss acts slowly to rotate electrons’ paths, causing them to fall, parallel to a magnetic field line, into Earth’s upper atmosphere, where they are likely to collide with neutral atoms and disappear.
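The pitch angle is simply the angle between an electron’s velocity and the local magnetic field direction: electrons nudged toward small pitch angles travel nearly along the field line and dip low enough to strike the neutral upper atmosphere. A minimal illustration with made-up velocity components, not probe data, follows:

```python
import math

def pitch_angle_deg(v_parallel, v_perp):
    """Angle between an electron's velocity and the local magnetic field line."""
    return math.degrees(math.atan2(abs(v_perp), abs(v_parallel)))

# Illustrative velocity components in units of c; the values are made up.
print(f"{pitch_angle_deg(0.20, 0.97):.0f} degrees")  # ~78: mostly gyrating, mirrors far from Earth
print(f"{pitch_angle_deg(0.97, 0.20):.0f} degrees")  # ~12: nearly field-aligned, so its bounce
# carries it deep along the field line into the upper atmosphere, where
# collisions with neutral atoms remove it (the slow loss that hiss drives).
```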

Mary Hudson, a professor of physics at Dartmouth College, says the data from the Van Allen probes “are providing remarkably detailed measurements” of the Earth’s radiation belts and their boundaries.

“These new observations confirm, over the two years since launch of the Van Allen probes, the persistence of this inner boundary, which places additional constraints on theories of particle acceleration and loss in magnetized astrophysical systems,” says Hudson, who did not participate in the research.

Seen through “new eyes”

Foster says this is the first time researchers have been able to characterize the Earth’s radiation belt, and the forces that keep it in check, in such detail. In the past, NASA and the U.S. military have launched particle detectors on satellites to measure the effects of the radiation belt: NASA was interested in designing better protection against such damaging radiation; the military, Foster says, had other motivations.

“In the 1960s, the military created artificial radiation belts around the Earth by the detonation of nuclear warheads in space,” Foster says. “They monitored the radiation belt changes, which were enormous. And it was realized that, in any kind of nuclear war situation, this could be one thing that could be done to neutralize anyone’s spy satellites.”

The data collected from such efforts was not nearly as precise as what is measured today by the Van Allen Probes, mainly because previous satellites were not designed to fly in such harsh conditions. In contrast, the resilient Van Allen Probes have gathered the most detailed data yet on the behavior and limits of the Earth’s radiation belt.

“It’s like looking at the phenomenon with new eyes, with a new set of instrumentation, which give us the detail to say, ‘Yes, there is this hard, fast boundary,’” Foster says.

This research was funded in part by NASA.

By Jennifer Chu | MIT News Office