Scientists have crunched data to predict crime, hospital visits, and government uprisings — so why not the price of Bitcoin?

A researcher at MIT’s Computer Science and Artificial Intelligence Laboratory and the Laboratory for Information and Decision Systems recently developed a machine-learning algorithm that can predict the price of the infamously volatile cryptocurrency Bitcoin, allowing his team to nearly double its investment over a period of 50 days.

Earlier this year, principal investigator Devavrat Shah and recent graduate Kang Zhang collected price data from all major Bitcoin exchanges, every second for five months, accumulating more than 200 million data points.

Using a technique called “Bayesian regression,” they trained an algorithm to automatically identify patterns in the data, which they used to predict prices and trade accordingly.

Specifically, every two seconds they predicted the average price movement over the following 10 seconds. If the predicted movement exceeded a certain threshold, they bought a Bitcoin; if it fell below the opposite threshold, they sold one; and if it was in between, they did nothing.
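The paper itself supplies no code, but the trading rule is simple enough to sketch. Below is a minimal Python illustration; the predicted values and the threshold `t` are stand-ins for the output of the team’s Bayesian-regression predictor and its tuned cutoff, neither of which the article specifies:

```python
# Minimal sketch of the threshold rule described above (illustrative only;
# the predictor and the threshold value are assumptions, not the paper's).

def trade_step(predicted_move, holding, t=0.1):
    """Map a predicted 10-second price move to one trading action."""
    if predicted_move > t and not holding:
        return "BUY"    # predicted rise above threshold: buy one Bitcoin
    if predicted_move < -t and holding:
        return "SELL"   # predicted drop below the opposite threshold: sell
    return "HOLD"       # in between: do nothing

holding = False
for predicted in [0.25, -0.02, -0.31]:  # toy predictions, one every 2 seconds
    action = trade_step(predicted, holding)
    holding = {"BUY": True, "SELL": False}.get(action, holding)
    print(action)       # BUY, HOLD, SELL
```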

Over 50 days, the team’s 2,872 trades produced an 89 percent return on investment, with a Sharpe ratio (a measure of return relative to risk) of 4.1.

The team’s paper was presented this month at the 2014 Allerton Conference on Communication, Control, and Computing.

“We developed this method of latent-source modeling, which hinges on the notion that things only happen in a few different ways,” says Shah, who previously used the approach to predict Twitter trending topics. “Instead of making subjective assumptions about the shape of patterns, we simply take the historical data and plug it into our predictive model to see what emerges.”

Shah says he was drawn to Bitcoin because of its vast swath of free data, as well as its sizable user base of high-frequency traders.

“We needed publicly available data, in large quantities and at an extremely fine scale,” says Shah, the Jamieson Career Development Associate Professor of Electrical Engineering and Computer Science. “We were also intrigued by the challenge of predicting a currency that has seen its prices see-saw regularly in the last few years.”

In the future, Shah says he is interested in expanding the scale of the data collection to further hone the effectiveness of his algorithm.

“Can we explain the price variation in terms of factors related to the human world? We have not spent a lot of time doing that,” Shah says, before adding with a laugh, “But I can show you it works. Give me your money and I’d be happy to invest it for you.”

When Shah published his Twitter study in 2012, some academics wondered whether his approach could work for stock prices. With the Bitcoin research complete, he says he now feels confident modeling virtually any quantity that varies over time — including, he says half-jokingly, the validity of astrology predictions.

If nothing else, the findings demonstrate Shah’s belief that, more often than not, what gets in the way of our predictive powers are our preconceived notions of what patterns will pop up.

“When you get down to it,” he says, “you really should be letting the data decide.”

By Adam Conner-Simons | CSAIL

Getting metabolism right

October 24, 2014

Metabolic networks are mathematical models of every possible sequence of chemical reactions available to an organ or organism, and they’re used to design microbes for manufacturing processes or to study disease. Based on both genetic analysis and empirical study, they can take years to assemble.

Unfortunately, a new analytic tool developed at MIT suggests that many of those models may be wrong. Fortunately, the same tool may make it fairly straightforward to repair them.

“They have all these models in this database at [the University of California at] San Diego,” says Bonnie Berger, a professor of applied mathematics and computer science at MIT and one of the tool’s developers, “and it turns out that many of them were computed with floating-point arithmetic” — an approximate numerical representation that most computer systems use to increase efficiency. “We were able to prove that you need to compute them in exact arithmetic,” Berger says. “When we computed them in exact arithmetic, we found that many of the models that were believed to be realistic don’t produce any growth under any circumstances.”

Berger and colleagues describe their new tool, and the analyses they performed with it, in the latest issue of Nature Communications. First author on the paper is Leonid Chindelevitch, who was a graduate student in Berger’s group when the work was done and is now a postdoc at the Harvard School of Public Health. He and Berger are joined by Aviv Regev, an associate professor of biology at MIT, and Jason Trigg, another of Berger’s former students.

Floating-point arithmetic is kind of like scientific notation for computers. It represents numbers as a decimal multiplied by a base — like 2 or 10 — raised to a particular power. Though it sacrifices some accuracy relative to exact arithmetic, it generally makes up for it with gains in computational efficiency.
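The pitfall is easy to demonstrate. In the toy Python example below (an illustration of the general issue, not the actual network computation), a sum that is exactly zero in rational arithmetic comes out nonzero in floating point — precisely the kind of discrepancy that matters when a model’s feasibility hinges on exact balance:

```python
from fractions import Fraction

# Floating point: 0.1 has no exact binary representation, so tiny errors
# accumulate and an exact-zero test can fail.
total_float = 0.1 + 0.1 + 0.1 - 0.3
print(total_float)            # 5.551115123125783e-17, not 0.0
print(total_float == 0)       # False

# Exact (rational) arithmetic: the same computation is exactly zero.
total_exact = Fraction(1, 10) * 3 - Fraction(3, 10)
print(total_exact)            # 0
print(total_exact == 0)       # True
```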

Indeed, in order to perform an exact-arithmetic analysis of a data structure as huge and complex as a metabolic network, Berger and Chindelevitch had to find a way to simplify the problem — without sacrificing any precision.

Pruning the network

Metabolic networks, Chindelevitch says, “describe the set of all reactions that are available to a particular organism that we might be interested in. So if we’re interested in yeast or E. coli or the tuberculosis bacterium, this is a way to put together everything we know about what this organism can do to transform some substances into some other substances. Usually it will get nutrients from the environment, and then it will transform them by its own internal mechanisms to produce whatever it is that it wants to produce — ethanol, different cellular components for itself, and so on.”

The network thus represents every sequence of chemical reactions catalyzed by enzymes encoded in an organism’s DNA that could lead from particular nutrients to particular chemical products. Every node of the network represents an intermediary stage in some chain of reactions.

To simplify such networks enough to enable exact arithmetical analysis, Chindelevitch and Berger developed an algorithm that first identifies all the sequences of reactions that, for one reason or another, can’t occur within the context of the model; it then deletes these. Next, it identifies clusters of reactions that always work in concert: Whatever their intermediate products may be, they effectively perform a single reaction. The algorithm then collapses those clusters into a single reaction.

Most crucially, Chindelevitch and Berger were able to mathematically prove that these modifications wouldn’t affect the outcome of the analysis.

“What the exact-arithmetic approach allows you to do is respect the key assumption of the model, which is that at steady state, every metabolite is neither produced in excess nor depleted in excess,” Chindelevitch says. “The production balances the consumption for every substance.”
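In flux-balance terms, this condition says that the stoichiometric matrix S times the flux vector v must equal zero exactly. A minimal sketch of such a check in exact arithmetic, using Python’s `fractions` module (the matrix and fluxes here are invented for illustration):

```python
from fractions import Fraction

# Toy stoichiometric matrix S: rows are metabolites, columns are reactions.
# Entry S[i][j] is the signed amount of metabolite i produced by reaction j.
S = [
    [Fraction(1), Fraction(-1), Fraction(0)],
    [Fraction(0), Fraction(1),  Fraction(-1)],
]

# A candidate flux vector v: one rate per reaction.
v = [Fraction(2), Fraction(2), Fraction(2)]

# Steady state requires S @ v == 0 exactly: every metabolite's production
# balances its consumption.
balances = [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(S))]
print(balances)                       # [Fraction(0, 1), Fraction(0, 1)]
print(all(b == 0 for b in balances))  # True: v is a valid steady-state flux
```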

When Chindelevitch and Berger applied their analysis to 89 metabolic-network models in the San Diego database, they found that 44 of them contained errors or omissions: If the products of all the reactions in the networks were in equilibrium, the organisms modeled would be unable to grow.

Patching it up

By adapting algorithms used in the field of compressed sensing, however, Chindelevitch and Berger are also able to identify likely locations of network errors.

Compressed sensing exploits the observation that some complex signals — such as audio recordings or digital images — that are computationally intensive to acquire can, upon acquisition, be compressed. That’s because they can be converted into a different mathematical representation that makes them appear much simpler than they did originally. It might be possible, for example, to represent an audio signal sampled 44,000 times per second as the weighted sum of a much smaller number of its constituent frequencies.

Compressed sensing performs the initial sampling in a clever way that allows it to build up the simpler representation from scratch, without having to pass through the more complex representation first. In the same way that compressed sensing can decompose an audio signal into the constituent frequencies with the heaviest weights, Chindelevitch and Berger’s algorithm can isolate just those links in a metabolic network that contribute most to its chemical imbalance.
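The idea can be made concrete in a few lines of numpy. The sketch below uses orthogonal matching pursuit, a standard compressed-sensing recovery routine (not the authors’ network-repair algorithm), to rebuild a 3-sparse signal of length 100 from just 30 random measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-n signal with only k nonzero entries (standing in for "a few
# constituent frequencies"); we observe just m << n random measurements.
n, k, m = 100, 3, 30
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # the m compressed samples

# Orthogonal matching pursuit: greedily add the column most correlated with
# the residual, then re-fit the chosen columns by least squares.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print(np.allclose(x, x_hat, atol=1e-6))   # True (with high probability)
```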

“We’re hoping that this work will provide an impetus to reanalyze a lot of the existing metabolic-network model reconstructions and hopefully spur some collaborations where we actually perform this analysis and suggest corrections to the model before it is published,” Chindelevitch says.

“This is not an area where one would expect there to be a problem,” says Desmond Lun, chair of the Department of Computer Science at Rutgers University, who studies computational biology. “I think [the MIT researchers’ work] will change people’s attitudes in the sense that it raises an issue that most people would have thought was not an issue, and I think it will make us a lot more careful.”

“Computers operate with limited precision because there are only so many digits that you can store — even though, I must say, they store a lot of digits,” Lun explains. “Through software, you can be more or less careful about how much precision you lose in that way. There are very, very good packages out there that try to minimize that problem. And mostly, I would have thought, and I think most people would have thought, that that would be sufficient for these metabolic models.”

Errors in the models may have gone unnoticed because analyses performed on them often comported well with empirical evidence. But “those floating-point errors vary from package to package,” Lun says. “Certainly, it would be very concerning to find that because somebody used this software package, they got these great results, and then if I used a different software package, I would not.”

By Larry Hardesty | MIT News Office

Hacking for good

October 24, 2014

Hacking is often done with malicious intent. But the two MIT alumni who co-founded fast-growing startup Tinfoil Security have shown that hacking can be put to good use: improving security.  

Through Tinfoil, Michael Borohovski ’09 and Ainsley Braun ’10 have commercialized scanning software that uses hacking tricks to find vulnerabilities in websites and alert developers and engineers who can quickly fix problems before sites go live.

Thousands of startups and small businesses, as well as several large enterprises, are now using the software. And around 75 percent of websites scanned have some form of vulnerability, Braun says. Indeed, a ticker on Tinfoil’s website shows that the software has caught more than 450,000 vulnerabilities so far.

“Our No. 1 goal is making sure we’re securing the Internet,” says Braun, Tinfoil’s CEO and a graduate of MIT’s brain and cognitive sciences program.

While at MIT, Braun and Borohovski ran with a group of computer-savvy students who extensively researched security issues, inside and outside the classroom. For his part, Borohovski, a lifelong hacker, took many classes on security and wrote his senior thesis on the topic of Web security.

Tinfoil took shape as a business, however, after Braun and Borohovski reconnected in Washington following graduation, while working separate security gigs. As a hobby, they caught vulnerabilities in websites that required their personal information, and then notified site administrators.

“We’d get emails back saying they’d fixed the vulnerability. But we could exploit it again,” Braun says. “Eventually, we’d just walk them through how to fix it.”

When job offers started pouring in, the duo saw potential. “We said, ‘If people want to hire us to do this, then there’s a need,’” says Borohovski, Tinfoil’s chief technology officer, who helped build the firm’s software.

Returning to Boston, Braun and Borohovski founded Tinfoil, with the help of MIT’s Venture Mentoring Service, to launch the product. The startup has grown rapidly ever since: Recently, it partnered with CloudFlare, adding to a list of partnerships with Heroku, Rackspace, and others.

Finding vulnerabilities

Much like Google, the Tinfoil software works by crawling websites. “But instead of looking for text and images, we’re looking for anywhere we can inject code to exploit vulnerabilities,” Braun says.

The software uses techniques identical to those used by external hackers, says Borohovski, who studied computer science and engineering at MIT. “We don’t have access to source code or anything that an external hacker wouldn’t have access to. We just systematically go through every possible entry point and attempt to see if there’s a vulnerability,” he says.
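Tinfoil’s scanner is proprietary, but the basic shape of such a test is easy to sketch. A toy check for one class of flaw — a parameter whose value is reflected unescaped into the page, the precondition for reflected cross-site scripting — might look like the following (the URL and marker string are hypothetical; only scan sites you are authorized to test):

```python
import requests

def check_reflected_param(url, param):
    """Toy probe: does the page echo a parameter back without escaping?

    A harmless marker is injected; if it appears verbatim in the HTML,
    the parameter is a candidate for reflected XSS and needs escaping.
    """
    marker = "tinfoil-test-<u>marker</u>"   # benign, easy-to-spot payload
    resp = requests.get(url, params={param: marker}, timeout=10)
    if marker in resp.text:
        return f"POTENTIAL XSS: '{param}' reflected unescaped at {url}"
    return f"OK: '{param}' appears to be escaped or not reflected"

# Hypothetical usage, on a site you own or are authorized to test:
# print(check_reflected_param("https://example.com/search", "q"))
```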

Currently, the software has tactics to identify about 50 vulnerabilities, including the Open Web Application Security Project’s list of the top 10 Web app risks. For each vulnerability discovered, the software can conduct anywhere from 10 to hundreds of tests. The Tinfoil team — now five employees — constantly updates the software as new risks and attacks are discovered.

One of the most common risks, for instance, is insecure cookies (data containing personal information). If someone logs on to a website through, say, a public Wi-Fi spot, it’s possible for a hacker to steal an insecure cookie and pretend to be the user. Another popular vulnerability is one that allows hackers to inject arbitrary code into a website to wreak havoc.
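The standard defense against cookie theft is to flag session cookies so browsers send them only over HTTPS and hide them from scripts. A minimal sketch in Flask (a hypothetical app, not Tinfoil’s code or its recommendation):

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("logged in")
    # secure=True: only transmit over HTTPS (defeats sniffing on open Wi-Fi).
    # httponly=True: hide from JavaScript (limits damage from injected code).
    resp.set_cookie("session", "token-value",
                    secure=True, httponly=True, samesite="Lax")
    return resp
```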

On the user end, the developer sees a description of each vulnerability — including its location and its impact on the website — and step-by-step instructions for fixing it (by patches or other means), tailored to specific programming languages.

Although vulnerability-scanning software has been available since the early 2000s, Tinfoil’s software is novel in that it’s geared more toward developers, who are able to fix vulnerabilities as part of their workflow, Borohovski says.

“Any large enterprise has maybe 1,000 developers and a much smaller security team — maybe a dozen, or 100 for really large places,” he says. While these developers have tests for some functional bugs, “there isn’t anything that’s part of that process for security scanning. We fit in there.”

This is especially important in today’s world, Braun adds, as websites make constant changes. Every tweaked line of code opens the risk of new vulnerabilities — and security teams may have trouble keeping up. “We can put some of that work on the developers,” she says.

Rolling out Tinfoil

Tinfoil launched in Boston in 2011. Winning the $100,000 MassChallenge startup competition grand prize later that year helped the firm relocate to Palo Alto, Calif. But while in Boston, Borohovski and Braun received guidance from the MIT Venture Mentoring Service (VMS) that played an integral part in Tinfoil’s history; Tinfoil still reaches out to the VMS for guidance today.

Primarily, Braun says, the VMS helped them wade through the logistical intricacies of building a startup: creating a business plan, finding funding, hiring a lawyer, and more. (Braun’s mother, Lucille, a financial advisor, is a mentor for VMS, but didn’t serve on Tinfoil’s mentor team.)

“Before we launched a startup, we had a boss and structure. But then we had to do everything, like designing a website, advertising, marketing and sales, business strategy, hiring, engineering,” Braun says. “The VMS helped us prioritize. They gave us homework and milestones we had to accomplish, so we held ourselves accountable.”

For Borohovski, MIT played an earlier role in his path to security entrepreneurship. “It was where some of the Web-security seeds got planted,” he says: At the Institute, he organized student teams for computer hacking competitions and took classes on the topic, including 6.857 (Computer and Network Security).

Additionally, he found encouragement in risk-taking and innovating for real-world applications. “The energy at MIT is all about building stuff,” he says. “Everywhere I went, there were people working on things, and I couldn’t stop being curious about them. I haven’t been able to find that intensive building mindset anywhere else.”

Pride in his alma mater is one reason why two of Tinfoil’s other engineers are MIT alumni: Ben Sedat ’09 and Angel Irizarry ’09. They had co-written a paper on securing authentication cookies with Borohovski — which later became Borohovski and Sedat’s senior thesis — and are now helping build the company. “Eighty percent of our company is MIT alumni,” Borohovski says. “I guess we’re trying to recreate our own little MIT here.” 

By Rob Matheson | MIT News Office

A first-ever standard “operating system” for drones, developed by a startup with MIT roots, could soon help manufacturers easily design and customize unmanned aerial vehicles (UAVs) for multiple applications.

Today, hundreds of companies worldwide are making drones for infrastructure inspection, crop- and livestock-monitoring, and search-and-rescue missions, among other things. But these are built for a single mission, so modifying them for other uses means going back to the drawing board, which can be very expensive.

Now Airware, founded by MIT alumnus Jonathan Downey ’06, has developed a platform — hardware, software, and cloud services — that lets manufacturers pick and choose various components and application-specific software to add to commercial drones for multiple purposes.   

The key component is the startup’s Linux-based autopilot device, a small red box that is installed in all of a client’s drones. “This is responsible for flying the vehicle in a safe, reliable manner, and acts as a hub for the components, so it can collect all that data and display that info to a user,” says Downey, Airware’s CEO, who researched and built drones throughout his time at MIT.

To customize the drones, customers use software to select third-party drone vehicles and components — such as sensors, cameras, actuators, and communication devices — configure settings, and apply their configuration to a fleet. Other software helps them plan and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud, where it’s aggregated and analyzed, and available to designated users.

If a company decides to use a surveillance drone for crop management, for instance, it can easily add software that stitches together different images to determine which areas of a field are overwatered or underwatered. “They don’t have to know the flight algorithms or the underlying hardware; they just need to connect their software or piece of hardware to the platform,” Downey says. “The entire industry can leverage that.”

Clients have trialed Airware’s platform over the past year — including researchers at MIT, who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations, search-and-rescue missions, and agricultural applications. Another UAV maker, Cyber Technology in Australia, is using the platform for drones responding to car crashes and other disasters, and inspecting offshore oilrigs.

Now, with its most recent $25 million funding round, Airware plans to launch the platform for general adoption later this year, viewing companies that monitor crops and infrastructure — with drones that require specific cameras and sensors — as potential early customers. 

A company from scratch

Airware’s roots date to 2005, when Downey, who studied electrical engineering and computer science, organized an MIT student team — including Airware’s chief technology officer, Buddy Michini ’07, SM ’09, PhD ’13 — to build drones for an intercollegiate competition.

At the time, drones were primarily used for military surveillance, powered by a “black box” that could essentially fly the drones and control the camera. There were also a handful of open-source projects — made by hobbyists — that let people modify drones, but the code was unreliable when tweaked. “If you wanted to do anything novel, your hands were tied,” Downey says.

The group’s decision: build a drone from scratch. But their advisor, Jonathan How, a professor of aeronautics and astronautics who directs the Aerospace Controls Laboratory, told them that would require too much time, and would cost them the competition.

“We said, ‘You’re right, but we’re MIT students, and we’d feel better getting last place and learning a lot doing it than winning the competition by repackaging a black-box solution,’” Downey says.

Sure enough, the team earned second-to-last place. “But we learned that the black-box solution didn’t work if you’re trying to address new applications, and the open-source code wasn’t reliable even though you could change the software,” Downey says.

A five-year stretch at Boeing — as an engineer for the U.S. military’s A160 Hummingbird UAV and as a commercial pilot — put Downey in contact with drone manufacturers, who, he found, were still using black boxes or open-source designs.

“They were basically facing the same challenges we faced as undergrads at MIT,” Downey says. Thus Airware was born in 2010 — first run only by Downey, then with Michini and a team of Boeing engineers — to make a military-grade “black box” system whose capabilities could nonetheless be tweaked and extended.

Early prototypes were trialed by How’s group at MIT, before Airware entered two California incubators, Lemnos Labs and Y Combinator, in 2013. Since then, the company has raised $40 million from investors and expanded its team from five to more than 50 employees. “The last 18 months have been a rapid rise,” Downey says.

Little of the early MIT drone design made it into the final Airware platform. “But building that early drone at MIT, and having the idea to leverage an enterprise-grade platform that you can extend the capabilities of, very directly became what Airware is today,” Downey says.

“The DOS for drones”

Today, Downey says, the development of a standard operating system for drones is analogous to Intel processors and Microsoft’s DOS paving the way for personal computers in the 1980s. Before those components became available, hobbyists built computers running software that wouldn’t work on other machines. At the same time, powerful mainframes were available only to a select few — and still suffered software-incompatibility issues.

Then came Intel’s processors and DOS. Suddenly, engineers could build computers around the standard processor and create software on the operating system, without needing to know details of the underlying hardware.

“We’re doing the same thing for the drone space,” Downey says. “There are 600 companies building differing versions of drone hardware. We think they need the Intel processor of the drones, if you will, and that operating system-level software component, too — like the DOS for drones.”

The benefits are far-reaching, Downey says: “Drone companies, for instance, want to build drones and tailor them for different applications without having to build everything from scratch,” he says.

But companies developing cameras, sensors, and communication links for drones also stand to benefit, he adds, as their components will only need to be compatible with a single platform.

Additionally, it could help the Federal Aviation Administration (FAA) better assess the reliability of drones; Congress recently tasked the agency with compiling UAV rules and regulations by 2015. This could also help promote commercial drone use in the United States, which lags behind other countries, primarily in Europe, Downey says.

“Rather than see a world where there’s 500 drones flying overhead, and every drone has different software and electronics, it’s good for the FAA if all of them had reliable and common hardware and software,” he says. “We think it’s valuable for everybody.”

By Rob Matheson | MIT News Office

In Song Kim was raised with an appreciation for the importance of closely tracking international relations. His grandfather was a Korean politician who lived through the Japanese occupation and World War II. He taught his family that the fate of small countries is often determined by negotiations among the big powers. Kim set his sights on having a role in international decision-making as a bureaucrat at the UN or World Bank, and he earned a master’s degree in law and diplomacy from Tufts University’s renowned Fletcher School.

But Kim, who joins the MIT Department of Political Science as an assistant professor in the fall of 2014, stopped short of the negotiating table when he found himself more interested in the theory of international relations than its practice. He wanted to understand why countries have difficulty coming to agreement. Kim’s attention was drawn to this bit of conventional wisdom among political scientists: Trade policy reflects conflicting political interests at the level of industries. But when he looked at the data, he saw examples that ran counter to the conventional wisdom. “Government actually sets trade policy very differently across very similar products within the same industry,” he says.

Kim devoted his doctoral research at Princeton University to probing trade policy at the level of products. His thesis, “Political Cleavages within Industry: Firm-level Lobbying for Trade Liberalization,” won Best Paper Award at the 2013 International Political Economy Society conference. But when he began his research, he quickly ran into a problem: There was a lot of data. He needed to examine trade relations between pairs of countries for specific products. He was looking at monthly trade volumes for more than 200 countries and tens of thousands of products. “Big data not only means that you have a larger number of data sets, it also means that you have different computational and methodological challenges to solve,” he says.

Kim adapted the tools of the natural sciences — cluster computing and data analytics algorithms — to the task of analyzing product-level trade relations. “I’ve done a lot of programming so that I can actually deal with the data,” he says. “My research has focused more on developing proper methodology to analyze data.”

Using his big data tools, Kim found an interesting pattern. The way firms lobby government on trade policy — pushing for more protectionism or more liberalization — depends on how unique their products are. The more differentiated a company’s products, the more likely it is to push for lower tariffs or other forms of trade liberalization. And when a company’s products are not very differentiated, meaning the products can more easily be replaced by other, potentially less expensive products, the company is more likely to push for higher tariffs or other forms of protectionism.

Not only is product differentiation the key factor; it also varies within industries. For example, some automobiles are relatively replaceable, and some textiles are relatively unique. In addition, firms that produce highly differentiated products tend to be larger and wield greater influence when lobbying. “They’re not only economically powerful, but also politically powerful,” says Kim. “It is the productive, big firms that are actually important in driving trade policy, especially for the countries that are producing differentiated products.”

Looking ahead, Kim plans to extend his research to predictive analytics, which is the branch of big data that makes it possible to anticipate trends. “Once you know some recurring patterns, then you can actually predict what is going to happen,” he says.

For Kim, MIT is the ideal place to do this work. “My research really lies between economics, political science, and computer science,” he says. Computer science can advance political science, and political science and the other social sciences can help computer science students shape and apply their skills. “That’s why I’m very excited to be part of the MIT community.”

By Eric Smalley | MIT Political Science

Manual control

October 24, 2014

When you imagine the future of gesture-control interfaces, you might think of the popular science-fiction films “Minority Report” (2002) or “Iron Man” (2008). In those films, the protagonists use their hands or wireless gloves to seamlessly scroll through and manipulate visual data on a wall-sized, panoramic screen.

We’re not quite there yet. But the brain behind those Hollywood interfaces, MIT alumnus John Underkoffler ’88, SM ’91, PhD ’99 — who served as scientific advisor for both films — has been bringing a more practical version of that technology to conference rooms of Fortune 500 and other companies for the past year.  

Underkoffler’s company, Oblong Industries, has developed a platform called g-speak, based on MIT research, and a collaborative-conferencing system called Mezzanine that allows multiple users to simultaneously share and control digital content across multiple screens, from any device, using gesture control.

Overall, the major benefit of such a system lies in boosting productivity during meetings, says Underkoffler, Oblong’s CEO. This is especially true for clients who pool resources into brainstorming and whose meeting rooms may be in use all day, every day.

“If you can make those meetings synthetically productive — not just times for people to check in, produce status reports, or check email surreptitiously under the table — that can be an electrifying force for the enterprise,” he says.

Mezzanine surrounds a conference room with multiple screens, as well as the “brains” of the system (a small server) that controls and syncs everything. Several Wii-like wands, with six degrees of freedom, allow users to manipulate content — such as text, photos, videos, maps, charts, spreadsheets, and PDFs — through gestures made with the wand.

That system is built on g-speak, a type of operating system — or a so-called “spatial operating environment” — used by developers to create their own programs that run like Mezzanine.

“G-speak programs run in a distributed way across multiple machines and allow concurrent interactions for multiple people,” Underkoffler says. “This shift in thinking — as if from single sequential notes to chords and harmonies — is powerful.”

Oblong’s clients include Boeing, Saudi Aramco, SAP, General Electric, and IBM, as well as government agencies and academic institutions, such as Harvard University’s Graduate School of Design. Architects and real estate firms are also using the system for structural design.

Putting pixels in the room

G-speak has its roots in a 1999 MIT Media Lab project — co-invented by Underkoffler in Professor Hiroshi Ishii’s Tangible Media Group — called “Luminous Room,” which enabled all surfaces to hold data that could be manipulated with gestures. “It literally put pixels in the room with you,” Underkoffler says.

The group designed light bulbs, called “I/O Bulbs,” that not only projected information, but also collected information from the surfaces they projected onto. That meant the team could make any projected surface a veritable computer screen, and the data could interact with, and be controlled by, physical objects.

They also assigned pixels three-dimensional coordinates. Imagine, for example, if you sat down in a chair at a table, and tried to describe where the front, left corner of that table was located in physical space. “You’d say that corner is this far off the floor, this far to the right of my chair, and this much in front of me, among other things,” Underkoffler explains. “We started doing that with pixels.”

One application for urban planners involved placing small building models onto a table lit by an I/O Bulb, “and the pixels surrounded the model,” Underkoffler says. This provided three-dimensional spatial information, from which the program cast accurate digital shadows from the models onto the table. (Changing the time on a digital clock changed the direction of the shadows.)

In another application, the researchers used a glass vase to manipulate digital text and image boxes that were projected onto a whiteboard. The digital boxes were linked to the vase in a circle via digital “springs.” When the vase moved, all the graphics followed. When the vase rotated, the graphics bunched together and “self-stored” into the vase; when the vase rotated again, the graphics reappeared in their first form.

These initial concepts — using the whole room as a digital workplace — became the foundation for g-speak. “I really wanted to get the ideas out into the world in a form that everyone could use,” Underkoffler says. “Generally, that means commercial form, but the world of movies came calling first.”

“The world’s largest focus group”

Underkoffler was recruited as scientific advisor for Steven Spielberg’s “Minority Report” after meeting the film’s crew, who were searching for novel technology ideas at the Media Lab. Later, in 2003, Underkoffler reprised his behind-the-scenes gig for Ang Lee’s “Hulk,” and, in 2008, for Jon Favreau’s “Iron Man,” which both depicted similar technologies.

Seeing this technology on the big screen inspired Underkoffler to refine his MIT technology, launch Oblong in 2006, and build early g-speak prototypes — glove-based systems that eventually ended up with the company’s first customer, Boeing.

Having tens of millions of viewers seeing the technology on the big screen, however, offered a couple of surprising perks for Oblong, which today is headquartered in Los Angeles, with nine other offices and demo rooms in cities including Boston, New York, and London. “It might have been the world’s largest focus group,” Underkoffler says.

Those enthused by the technology, for instance, started getting in touch with Underkoffler to see if the technology was real. Additionally, being part of a big-screen production helped Underkoffler and Oblong better explain their own technology to clients, Underkoffler says. In such spectacular science-fiction films, technology competes for viewer attention and, yet, it needs to be simplified so viewers can understand it clearly.

“When you take technology from a lab like at MIT, and you need to show it in a film, the process of refining and simplifying those ideas so they’re instantly legible on screen is really close to the refinement you need to undertake if you’re turning that lab work into a product,” he says. “It was enormously valuable to us to strip away everything in the system that wasn’t necessary and leave a really compact core of user-interface ideas we have today.”

After years of writing custom projects for clients on g-speak, Oblong turned the most-requested features of these jobs — such as having cross-platform and multiple-user capabilities — into Mezzanine. “It was the first killer application we could write on top of g-speak,” he says. “Building a universal, shared-pixel workspace has enormous value no matter what your business is.”

Today, Oblong is shooting for greater ubiquity of its technology. But how far away are we from a consumer model of Mezzanine? It could take years, Underkoffler admits: “But we really hope to radically tilt the whole landscape of how we think about computers and user interface.”

By Rob Matheson | MIT News Office

An end to drug errors?

October 24, 2014

MIT alumni entrepreneurs Gauti Reynisson MBA ’10 and Ívar Helgason HS ’08 spent the early 2000s working for companies that implemented medication-safety technologies — such as electronic-prescription and pill-barcoding systems — at hospitals in their native Iceland and other European countries.

But all that time spent in hospitals soon opened their eyes to a major health care issue: Surprisingly often, patients receive the wrong medications. Indeed, a 2006 report from the Institute of Medicine found that 1.5 million hospitalized patients in the United States experience medication errors every year due, in part, to drug-administration mistakes. Some cases have adverse or fatal results.

Frustrated and seeking a solution, the Icelandic duo quit their careers and traveled to MIT for inspiration. There, they teamed up with María Rúnarsdóttir MBA ’08 and devised MedEye, a bedside medication-scanning system that uses computer vision to identify pills and check them against medication records, to ensure that a patient gets the right drug and dosage.

Commercialized through startup Mint Solutions, MedEye has now been used for a year in hospitals in the Netherlands (where the startup is based), garnering significant attention from the medical community. Through this Dutch use, the co-founders have determined that roughly 10 percent of MedEye’s scans catch medication errors.

“Medication verification is a pinnacle point of medical safety,” says Helgason, a physician and product developer. “It’s a complicated chain of events that leads up to medication mistakes. But the bedside is the last possible place to stop these mistakes.”

Mint Solutions’ aim, Reynisson says, is to aid nurses in rapidly, efficiently, and correctly administering medication. “We want the device to be the nurse’s best friend,” says Reynisson, now Mint’s CEO. The device, he adds, could yield savings by averting medication mishaps, which can cost hundreds of millions of dollars.

To date, the startup has raised $6 million in funding; it is now ramping up production and working with a Dutch health care insurance company to bring the MedEye to 15 hospitals across the Netherlands, as well as to hospitals in Belgium, the United Kingdom, and Germany.

Systematic approach

To use the MedEye — a foot-high box in a white housing — a nurse first scans a patient’s wristband, which has a barcode that accesses the patient’s electronic records. The nurse then pushes the assigned pills into the MedEye via a sliding tray. Inside the device, a small camera scans the pills, rapidly identifying them by size, shape, color, and markings. Algorithms distinguish the pills by matching them against a database of nearly all pills in circulation.

Although the hardware is impressive, much of the innovation is in MedEye’s software, which cross-references (and updates) the results in the patient’s records. Results are listed in a simple interface: Color-coded boxes show if pills have been correctly prescribed (green), or are unknown or wrong (red). If a pill isn’t in MedEye’s database — because it’s new, for instance — the system alerts the nurse, who adds the information into the software for next time.
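Mint’s matching pipeline is proprietary, but the step the article describes — reduce each pill to a handful of measured features, then find the closest entry in a database — can be sketched simply. The features, database entries, weights, and tolerance below are all invented for illustration:

```python
# Toy pill matcher: nearest neighbor over (diameter_mm, roundness, color).
# A real system would add shape descriptors, imprint (marking) recognition,
# and a database covering nearly all pills in circulation.

PILL_DB = {
    "aspirin 325mg":     (11.0, 0.98, (250, 250, 250)),   # white, round
    "ibuprofen 200mg":   (10.0, 0.95, (180, 100, 60)),    # brown, round
    "amoxicillin 500mg": (21.5, 0.40, (230, 60, 60)),     # red, capsule-shaped
}

def identify(features, max_dist=30.0):
    """Return the closest database pill, or None if nothing is close enough."""
    def dist(a, b):
        da, ra, ca = a
        db, rb, cb = b
        color = sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5
        return abs(da - db) + 100 * abs(ra - rb) + color   # ad hoc weighting

    best = min(PILL_DB, key=lambda name: dist(features, PILL_DB[name]))
    return best if dist(features, PILL_DB[best]) <= max_dist else None

print(identify((10.9, 0.97, (245, 248, 250))))   # -> aspirin 325mg
```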

“It does all the querying for the right medication, for the right patient, and takes care of the paperwork,” Helgason says. “We save a lot of time for nurses that way.”

Similar systems exist for catching medication errors: About 15 years ago, some hospitals began using barcode systems — which Reynisson and Helgason actually helped install in some Dutch and German hospitals. These systems require nurses to scan a patient’s wristband with a handheld scanner, and then the barcode printed on each pill container.

“But the hurdle has been getting these installed,” Reynisson says. “Companies sell medications with barcodes, others sell software, or barcode scanners. Hospitals have to make all these things work together, and it’s hard for small and medium hospitals to afford. No one is selling turnkey barcode systems.”

That’s where MedEye is truly unique, Helgason says: As an entire system that requires no change in a hospital’s workflow or logistics, “it’s more usable and more accessible in health care facilities.”

Feedback from nurses using MedEye to ease their workloads has been positive, Reynisson says. And errors are caught more often than expected. In fact, he recalls a memorable moment last year when a nurse at the Dutch hospital demonstrated the MedEye for department heads on a random patient. The nurse scanned four pills, which had been assigned to the patient, and added an extra, erroneous pill to show how MedEye caught errors.

“MedEye showed the extra pill was incorrect. But, to his surprise, so were two other pills that the nurse had assumed were correct, because another nurse had dispensed those,” Reynisson says. “Goes to show that even with full focus, it is common for nurses to be in a position where they are expected to catch errors made in other parts of the medication-delivery process.”

Vision for new technology

Helgason conceived of MedEye while studying in the MIT-Harvard Health Sciences and Technology program. In a computer-vision class at the Computer Science and Artificial Intelligence Laboratory, he saw that advances in 3-D object-recognition technology meant computers could learn to recognize objects based on various characteristics.

At the same time, he started taking heed of MIT’s burgeoning startup ecosystem, prompting him to contact his longtime medical-device colleague. “I remember Ívar called me one day and said, ‘Gauti, you have to come to MIT: Everyone’s starting companies,’” says Reynisson, a trained programmer who wrote early object-recognition code for the MedEye.

Seeking a change of pace from computer science, Reynisson enrolled in the MIT Sloan School of Management — where he saw that Helgason was right. “There was a spirit there, where you have to go for it, find a solution and market it, because if you don’t, no one else will,” he says. “That attitude, and seeing others do it, really inspires you to start a company and take the risk.”

Mint launched in 2009 with an initial concept design for MedEye. Entering that year’s MIT $100K Entrepreneurship Competition helped the three co-founders fine-tune their business plan and startup pitch, with help from mentors, professors, and even business-savvy students.

“That’s when we started to think of a business beyond the technology,” Reynisson says. “We left with a fairly sizeable business plan to take to investors and get funding.”

The team felt unsure of the technology at first. But a 2010 demonstration at a Dutch hospital of an early prototype — a bulkier version of the MedEye, with off-the-shelf parts, constructed at MIT — changed their perception. The prototype had to identify about 250 small, white pills of different medications that, in fact, all looked the same.

“We tried them all in our prototype at once, and it worked,” Reynisson says. “That’s when we realized what a change it would be for a hospital to collect data and important safety information, and get it fast and efficiently, without asking the nurse to pick up a pen.”

Mint Solutions now has 40 MedEye systems ready to deploy across Europe in the coming months, with hopes of gaining some client feedback. In the future, Reynisson says, the startup has its sights on developing additional medication-safety technologies.

“At the core of the startup is this belief that better information technology in hospitals can both increase efficiency and safety, and lead to better outcomes,” he says. “We’re starting with verification of medication. But who knows what’s next?”

By Rob Matheson | MIT News Office

With a method known as finite element analysis (FEA), engineers can generate 3-D digital models of large structures to simulate how they’ll fare under stress, vibrations, heat, and other real-world conditions.

Used for mapping out large-scale structures — such as mining equipment, buildings, and oil rigs — these simulations require intensive computation done by powerful computers over many hours, costing engineering firms much time and money.

Now MIT spinout Akselos has developed novel software, based on years of research at the Institute, that uses precalculated supercomputer data for structural components — like simulated “Legos” — to solve FEA models in seconds.

A simulation that could take hours with conventional FEA software, for instance, could be done in seconds with Akselos’ platform.  

Hundreds of engineers in the mining, power-generation, and oil and gas industries are now using the Akselos software. The startup is also providing software for an MITx course on structural engineering.

With its technology, Akselos aims to make 3-D simulations more accessible worldwide to promote efficient engineering design, says David Knezevic, Akselos’ chief technology officer, who co-founded the startup with former MIT postdoc Phuong Huynh and alumnus Thomas Leurent SM ’01.

“We’re trying to unlock the value of simulation software, since for many engineers current simulation software is far too slow and labor-intensive, especially for large models,” Knezevic says. “High-fidelity simulation enables more cost-effective designs, better use of energy and materials, and generally an increase in overall efficiency.”

“Simulation components”

Akselos’ software runs on a novel technique called the “reduced basis (RB) component method,” co-invented by Anthony Patera, the Ford Professor of Engineering at MIT, and Knezevic and Huynh. (The technique builds on a decade of research by Patera’s group.)

This technique merges the concept of the RB method — which reproduces expensive FEA results by solving related calculations that are much faster — with the idea of decomposing larger simulations into an assembly of components.

“We developed a component-based version of the reduced basis method, which enables users to build large and complex 3-D models out of a set of parameterized components,” Knezevic says.

In 2010, the firm’s founders were part of a team, led by Patera, that used that technique to create a mobile app that displayed supercomputer simulations, in seconds, on a smartphone.

A supercomputer first presolved problems — such as fluid flow around a spherical obstacle in a pipe — that had a known form, but for dozens of different parameters. (These parameters were automatically chosen to cover a range of possible solutions.) When app users plugged in custom parameters for problems — such as the diameter of that spherical obstacle — the app would compute a solution for the new parameters by referencing the precomputed data.
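This precompute-then-query pattern can be sketched with a toy reduced-basis computation (a generic textbook illustration, not Akselos code): solve an expensive parameterized system at a few training parameters offline, compress the solutions into a small basis, and answer new-parameter queries by solving a tiny projected system built from stored data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # "full" model size (think FEA unknowns)

# Toy parameterized problem A(mu) x = f, with A(mu) = A0 + mu*A1.
# A0, A1, f are random stand-ins for assembled finite-element data.
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)              # symmetric positive definite
A1 = np.diag(rng.random(n))
f = rng.standard_normal(n)
solve_full = lambda mu: np.linalg.solve(A0 + mu * A1, f)

# --- Offline stage (expensive, done once, e.g. on a supercomputer) ---
# Solve the full model at training parameters; compress snapshots into an
# orthonormal basis V; precompute the small projected matrices.
snapshots = np.column_stack([solve_full(mu) for mu in (0.0, 0.5, 1.0, 2.0)])
V = np.linalg.svd(snapshots, full_matrices=False)[0]      # n x 4 basis
A0_r, A1_r, f_r = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f     # the stored "data"

# --- Online stage (cheap, per user query) ---
def solve_reduced(mu):
    """Solve a 4x4 system instead of a 200x200 one, using stored data."""
    return V @ np.linalg.solve(A0_r + mu * A1_r, f_r)

mu = 1.3                                  # a parameter not in the training set
err = (np.linalg.norm(solve_reduced(mu) - solve_full(mu))
       / np.linalg.norm(solve_full(mu)))
print(f"relative error at mu={mu}: {err:.1e}")   # should be very small
```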

Today’s Akselos software runs on a similar principle, but with new software and a cloud-based service. A supercomputer precalculates individual components, such as, say, a simple tube or a complex mechanical part. “And this creates a big data footprint for each one of these components, which we push to the cloud,” Knezevic says.

These components contain adjustable parameters, which enable users to vary properties, such as geometry, density, and stiffness. Engineers can then access and customize a library of precalculated components, drag and drop them into an “assembler” platform, and connect them to build a full simulation. After that, the software will reference the precomputed data to create a highly detailed 3-D simulation in seconds. 

In one demonstration, for instance, a mining company used components available in the Akselos library to rapidly create a simulation of shiploader infrastructure — complete with high-stress “hot spots” — that needed inspection. When on-site inspectors then found cracks, they relayed that information to the engineer, who added the damage to the simulation, and created modified simulations within a few minutes.

“The software also allows people to model the machinery in its true state,” Knezevic says. “Often infrastructure has been in use for decades and is far from pristine — with damage, or holes, or corrosion — and you want to represent those defects. That’s not simple for engineers today, since with other software it’s not feasible to simulate large structures in full 3-D detail.”

Ultimately, pushing the data to the cloud has helped Akselos by leveraging the age-old tradeoff between speed and storage: By storing and reusing more data, its algorithms can do less work and hence finish more quickly.

“These days, with cloud technology, storing lots of data is no big deal. We store a lot more data than other methods, but that data, in turn, allows us to go faster, because we’re able to reuse as much precomputed data as possible,” he says.

Bringing technology to the world

Akselos was founded in 2012, after Knezevic and Huynh, along with Leurent — who actually started FEA work with Patera’s group back in 2000 — earned a Deshpande innovation grant for their “supercomputing-on-a-smartphone” innovation.

“That was a trigger,” Knezevic says. “Our passion and goal has always been to bring new technology to the world. That’s where the Deshpande Center and the MIT innovation ecosystem are great.”

From there, Akselos grew with additional help from MIT’s Venture Mentoring Service (VMS), whose mentors guided the team in fundraising, sales, opening a Web platform to users, and hiring.

“We needed a sounding board,” Knezevic says. “We’d go into meetings and bounce ideas around to help us make good decisions. I think all our decisions were influenced by that type of discussion. It’s a real luxury that you don’t have in other places.”

To expand its visibility, and to get back into the academic sphere, Akselos has teamed up with Simona Socrate, a principal research scientist in mechanical engineering at MIT, who is using the startup’s software — albeit a limited version — in her MITx class, 2.01x (Elements of Structures).

Feedback from students has been positive, Knezevic says. Primarily, he hears that the software is allowing students to “build intuition for the physics of structures beyond what they could see by simply solving math problems.”

“In 2.01x, the students learn about axial loading, bending, and torsion — we have apps for each case so they can visualize the stress, strain, and displacement in 3-D in their browser,” he says. “We think it’s a great way to show students the value of fast, 3-D simulations.”

Commercially, Akselos is expanding, hiring more employees in its three branches — in Boston, Vietnam, and Switzerland — building a community of users, and planning to continue its involvement with edX classes.

On Knezevic’s end, at the Boston office, it’s all about software development, tailoring features to customer needs — a welcome challenge for the longtime researcher.

“In academia, typically only you and a few colleagues use the software,” he says. “But in a company you have people all over the world playing with it and testing it, saying, ‘This button needs to be there’ or ‘We need this new type of analysis.’ Everything revolves around the customer. But it was good to have that solid footing in academic work that we could build on.”

By Rob Matheson | MIT News Office

Our connection to content

October 24, 2014

It’s often said that humans are wired to connect: The neural wiring that helps us read the emotions and actions of other people may be a foundation for human empathy.

But for the past eight years, MIT Media Lab spinout Innerscope Research has been using neuroscience technologies that gauge subconscious emotions by monitoring brain and body activity to show just how powerfully we also connect to media and marketing communications.

“We are wired to connect, but that connection system is not very discriminating. So while we connect with each other in powerful ways, we also connect with characters on screens and in books, and, we found, we also connect with brands, products, and services,” says Innerscope’s chief science officer, Carl Marci, a social neuroscientist and former Media Lab researcher.

With this core philosophy, Innerscope — co-founded at MIT by Marci and Brian Levine MBA ’05 — aims to offer market research that’s more advanced than traditional methods, such as surveys and focus groups, to help content-makers shape authentic relationships with their target consumers.

“There’s so much out there, it’s hard to make something people will notice or connect to,” Levine says. “In a way, we aim to be the good matchmaker between content and people.”

So far, it’s drawn some attention. The company has conducted hundreds of studies and more than 100,000 content evaluations for its host of Fortune 500 clients, including Campbell’s Soup, Yahoo, and Fox Television.

And Innerscope’s studies are beginning to provide valuable insights into the way consumers connect with media and advertising. Take, for instance, its recent project to measure audience engagement with television ads that aired during the Super Bowl.

Innerscope first used biometric sensors to capture fluctuations in heart rate, skin conductance, breathing, and motion among 80 participants who watched select ads, then sorted the ads into “winning” and “losing” commercials based on those emotional responses. Collaborators at Temple University’s Center for Neural Decision Making then used functional magnetic resonance imaging (fMRI) brain scans to further measure engagement.

Ads that performed well elicited increased neural activity in the amygdala (which drives emotions), superior temporal gyrus (sensory processing), hippocampus (memory formation), and lateral prefrontal cortex (behavioral control).

“But what was really interesting was the high levels of activity in the area known as the precuneus — involved in feelings of self-consciousness — where it is believed that we keep our identity. The really powerful ads generated a heightened sense of personal identification,” Marci says.

Using neuroscience to understand marketing communications and, ultimately, consumers’ purchasing decisions is still at a very early stage, Marci admits — but the Super Bowl study and others like it represent real progress. “We’re right at the cusp of coherent, neuroscience-informed measures of how ad engagement works,” he says.

Capturing “biometric synchrony”

Innerscope’s arsenal consists of 10 tools: Electroencephalography and fMRI technologies measure brain waves and structures. Biometric tools — such as wristbands and attachable sensors — track heart rate, skin conductance, motion, and respiration, which reflect emotional processing. And then there’s eye-tracking, voice-analysis, and facial-coding software, as well as other tests to complement these measures.

Such technologies were used for market research long before the rise of Innerscope. But, starting at MIT, Marci and Levine began developing novel algorithms, informed by neuroscience, that find trends among audiences pointing to exact moments when an audience is engaged together — in other words, in “biometric synchrony.”

Traditional algorithms for such market research would average the responses of entire audiences, Levine explains. “What you get is an overall level of arousal — basically, did they love or hate the content?” he says. “But how is that emotion going to be useful? That’s where the hole was.”

Innerscope’s algorithms tease out real-time detail from individual reactions — comprising anywhere from 500 million to 1 billion data points — to locate instances when groups’ responses (such as surprise, excitement, or disappointment) collectively match.

As an example, Levine references an early test conducted using an episode of the television show “Lost,” where a group of strangers are stranded on a tropical island.

Levine and Marci attached biometric sensors to six separate groups of five participants. At the long-anticipated moment when the show’s “monster” is finally revealed, nearly everyone held their breath for about 10 to 15 seconds.

“What our algorithms are looking for is this group response. The more similar the group response, the more likely the stimuli is creating that response,” Levine explains. “That allows us to understand if people are paying attention and if they’re going on a journey together.”
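A toy version of that idea is easy to write down: score each stretch of the recording by how strongly all viewers’ signals correlate with one another, rather than by their average level. The sketch below is invented for illustration (Innerscope’s actual algorithms are proprietary) and flags a window in which five simulated viewers share a response:

```python
import numpy as np

def synchrony(responses, window=10):
    """Toy 'biometric synchrony' score: mean pairwise correlation of all
    viewers' signals inside each non-overlapping window.

    responses: array of shape (n_viewers, n_samples), e.g. skin conductance.
    High scores mean the group is reacting together, not just reacting.
    """
    n, T = responses.shape
    scores = []
    for start in range(0, T - window + 1, window):
        seg = responses[:, start:start + window]
        corr = np.corrcoef(seg)                  # n x n correlation matrix
        upper = corr[np.triu_indices(n, k=1)]    # all distinct viewer pairs
        scores.append(upper.mean())
    return np.array(scores)

# Demo: 5 viewers of random noise, except everyone "holds their breath"
# (a shared dip) during samples 40-50.
rng = np.random.default_rng(2)
data = rng.standard_normal((5, 100))
data[:, 40:50] += np.linspace(0, -3, 10)         # the common response
print(synchrony(data).round(2))                  # peak in the 5th window
```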

Getting on the map

Before MIT, Marci was a neuroscientist studying empathy, using biometric sensors and other means to explore how empathy between patient and doctor can improve patient health.

“I was lugging around boxes of equipment, with wires coming out and videotaping patients and doctors. Then someone said, ‘Hey, why don’t you just go to the MIT Media Lab,’” Marci says. “And I realized it had the resources I needed.”

At the Media Lab, Marci met behavioral analytics expert and collaborator Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who helped him set up Bluetooth sensors around Massachusetts General Hospital to track emotions and empathy between doctors and patients with depression.   

During this time, Levine, a former Web developer, had enrolled at MIT, splitting his time between the MIT Sloan School of Management and the Media Lab. “I wanted to merge an idea to understand customers better with being able to prototype anything,” he says.

After meeting Marci through a digital anthropology class, Levine proposed that they use this emotion-tracking technology to measure the connections of audiences to media. Using prototype sensor vests equipped with heart-rate monitors, stretch receptors, accelerometers, and skin-conductivity sensors, they trialed the technology with students around the Media Lab.

All the while, Levine pieced together Innerscope’s business plan in his classes at MIT Sloan, with help from other students and professors. “The business-strategy classes were phenomenal for that,” Levine says. “Right after finishing MIT, I had a complete and detailed business plan in my hands.”

Innerscope launched in 2006. But a 2008 study really accelerated the company’s growth. “NBC Universal had a big concern at the time: DVR,” Marci says. “Were people who were watching the prerecorded program still remembering the ads, even though they were clearly skipping them?”

Innerscope compared facial cues and biometrics from people who fast-forwarded ads against those who didn’t. The results were unexpected: While fast-forwarding, people stared at the screen blankly, but their eyes still caught relevant brands, characters, and text. And because they didn’t want to miss their show, they showed a heightened sense of engagement while fast-forwarding, signaled by leaning forward and staring fixedly.

“What we concluded was that people don’t skip ads,” Marci says. “They’re processing them in a different way, but they’re still processing those ads. That was one of those insights you couldn’t get from a survey. That put us on the map.”

Today, Innerscope is looking to expand. One project is bringing kiosks to malls and movie theaters, where the company recruits passersby for fast and cost-effective results. (Wristbands monitor emotional response, while cameras capture facial cues and eye motion.) The company is also aiming to try applications in mobile devices, wearables, and at-home sensors.

“We’re rewiring a generation of Americans in novel ways and moving toward a world of ubiquitous sensing,” Marci says. “We’ll need data science and algorithms and experts that can make sense of all that data.”

By Rob Matheson | MIT News Office

Even as 3-D printing is poised to help democratize manufacturing, it’s often overlooked that many 3-D-printed items are far too complicated for users to digitally design.

Sure, people can now order 3-D-printed items online, or even make wedding-cake figurines using 3-D-printing services at certain stores. But these are simple, largely standardized products. What if you want a chair or car built to your exact specifications?

Now, a team led by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed “Fab By Example,” the first data-driven method to help people design products, with a growing database of templates that allow users to customize thousands of complex items, such as cabinets, jungle gyms, and go-carts.

“When we design things on a computer, the question arises of how to manufacture them in the real world with the necessary physical parts — wood, glass, screws, hinges, bolts and all,” said project lead Adriana Schulz, a PhD student in CSAIL. “For casual users, creating such a detailed model is not just time-consuming, but it’s actually more or less impossible unless you know something about mechanical engineering.”

Fab By Example’s intuitive drag-and-drop interface lets you mix and match materials — and position, align, and connect the different parts — without worrying if the design is actually feasible.

“The technology allows you to design and fabricate practically any off-the-wall idea that’s bouncing around your head,” Schulz said, citing a mega-shelf that takes up nearly an entire room.

The system, which is not yet available to the public, currently has dozens of distinct template models, each composed of hundreds of parts, down to the individual screws of a go-cart. The models are all “parametric,” meaning that they can be manipulated to take on a nearly infinite number of different shapes. Schulz says that the team’s database of templates is currently meant to be illustrative, and could evolve to include models of cars, houses, or practically any fabricable object.
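A “parametric” model can be pictured as a small program: the shape is fixed, the dimensions are free variables, and the buildable part list is derived from them. The toy bookshelf template below (all names and formulas are invented for illustration, not taken from Fab By Example) shows the idea:

```python
from dataclasses import dataclass

@dataclass
class ShelfTemplate:
    """Toy parametric template: one shape, infinitely many sizes."""
    width_cm: float = 80.0
    height_cm: float = 180.0
    n_shelves: int = 4

    def parts(self):
        """Derive a buildable part list from the chosen parameters."""
        return {
            "side panel":  {"count": 2, "size_cm": (self.height_cm, 30)},
            "shelf board": {"count": self.n_shelves,
                            "size_cm": (self.width_cm, 30)},
            "screws":      {"count": 8 * self.n_shelves + 16},
        }

# The same template, two very different products:
print(ShelfTemplate().parts())
print(ShelfTemplate(width_cm=300, height_cm=240, n_shelves=8).parts())
```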

For a given project, Fab By Example allows you to see what specific parts are needed and how much they cost; you could then order the materials right from the database, with the option to optimize for price or speed-of-delivery. (Currently, you’d still need to assemble the product, but Schulz envisions a future where the database could be tied to an installation service that would send someone to your home to build it.)

Where previous do-it-yourself design databases have required an advanced degree, or at least expertise in computer-aided design (CAD) software, the team says that now even someone with basic computer skills can make their own customized item.

The work was developed by Schulz; CSAIL postdocs David I.W. Levin and Pitchaya Sitthi-amorn; Wojciech Matusik, an associate professor of electrical engineering and computer science at MIT; and Ariel Shamir, a professor at the Interdisciplinary Center Herzliya in Israel. The team will be presenting its system at this month’s Siggraph graphics conference.

In the future, Schulz says that the team will be working with CSAIL colleagues to incorporate designs for robots that could be assembled, customized, and even printed from home.

By Adam Conner-Simons | CSAIL