31 Oct 2012

When the New York Times ran its investigative report in September on the massive amount of energy used by data centers, it drew widespread criticism from people within the information technology industry. While nobody involved with the operation or engineering of those data centers denied that they use a lot of resources, many experts took offense at the article's suggestion that the industry wasn't interested in finding solutions. "The assertions made in it essentially paint our engineers and operations people as a bunch of idiots who are putting together rows and rows of boxes on data centers and not caring what this costs to their businesses, nay, to the planet," wrote computer scientist Diego Doval. "And nothing could be further from the truth."

That statement was backed up by a talk given last week by Hewlett Packard Labs Fellow Partha Ranganathan, who told a room of computer science students and researchers about his company's efforts to develop "energy-aware computing." Ranganathan argued for more efficient supercomputers and data centers not just on the merits of environmental benefits, but also because efficiency is an essential hurdle that must be cleared for computing speed to continue the exponential march charted by Moore's Law. As the field pushes through the petascale toward the exascale and beyond, Ranganathan said, the "power wall" (the energy required for power and cooling) is becoming a fundamental limit to capacity. So one of the greatest challenges the IT field currently faces is how to deliver faster and faster performance at both low cost and high sustainability.
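
To put the power wall in perspective, here is a rough back-of-envelope sketch in Python of how much electricity an exascale machine would draw at different energy efficiencies. The efficiency figures are illustrative assumptions, not numbers from Ranganathan's talk.

```python
# Back-of-envelope look at the "power wall". The efficiency values are
# illustrative assumptions, not figures from Ranganathan's talk.

def required_megawatts(target_flops, gflops_per_watt):
    """Electrical power needed to sustain target_flops at a given efficiency."""
    watts = target_flops / (gflops_per_watt * 1e9)
    return watts / 1e6

EXAFLOP = 1e18  # 10^18 floating-point operations per second

# Assumed efficiencies in GFLOPS per watt, spanning early-2010s systems
# to a more aggressive future target.
for efficiency in (2, 20, 50):
    mw = required_megawatts(EXAFLOP, efficiency)
    print(f"{efficiency:>3} GFLOPS/W -> about {mw:,.0f} MW")
```

At a couple of GFLOPS per watt, an exaflop system would draw something on the order of a large power plant's output; an order-of-magnitude jump in efficiency is what brings the number back into a practical range, which is why efficiency rather than raw hardware is framed as the gating factor.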

25 Oct 2012

Malaria is often described as a tropical disease, mostly afflicting populations in sub-Saharan Africa and equatorial regions of Asia and the Americas. Due to climate change, those warm parts of the world are getting warmer, leading some experts to speculate that malaria will become an even bigger problem than the enormous current tally of approximately 200 million infections and 650,000 deaths a year. But a new model published this month in Ecology Letters, one that better reflects the complex influence of temperature on mosquito biology, produced a surprising result: malaria infection peaks at relatively mild temperatures, several degrees below previous estimates. That could be good news for those in tropical regions...but maybe not such good news for the rest of us.

Models of malaria infection are based on parameters describing the biology and behavior of humans, mosquitoes, and the malaria parasite itself. Many of the biological rates related to processes such as reproduction or maturation are temperature-dependent, especially for a cold-blooded species like the mosquito. But those rates don't increase forever as the temperature rises; eventually they peak and then decline, in what is known as a unimodal function (basically an upside-down "U"). Yet most models of malaria infection have been built upon the assumption that these pieces of mosquito biology are linearly related to temperature.
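
As a rough illustration of the difference between those two assumptions, the short Python sketch below compares a linear temperature response with a unimodal one, using the Briere form common in thermal ecology. The parameter values are placeholders chosen for illustration; they are not the values fitted in the Ecology Letters model.

```python
import numpy as np

def briere(T, c, T0, Tm):
    """Unimodal (Briere) thermal response: zero outside (T0, Tm),
    rising to a peak and falling back to zero at the upper limit."""
    T = np.asarray(T, dtype=float)
    rate = c * T * (T - T0) * np.sqrt(np.clip(Tm - T, 0, None))
    return np.where((T > T0) & (T < Tm), rate, 0.0)

def linear(T, slope, T0):
    """The older assumption: a trait that keeps rising with temperature."""
    T = np.asarray(T, dtype=float)
    return np.clip(slope * (T - T0), 0, None)

temps = np.arange(15, 41)  # degrees Celsius

# Placeholder parameters for illustration only.
unimodal_rate = briere(temps, c=2.5e-4, T0=10.0, Tm=38.0)
linear_rate = linear(temps, slope=0.005, T0=10.0)

print("Unimodal response peaks at", temps[np.argmax(unimodal_rate)], "C")
print("Linear response peaks at", temps[np.argmax(linear_rate)], "C (the top of the range)")
```

The unimodal curve peaks somewhere in the middle of the range and collapses at the extremes, while the linear version keeps climbing right up to the hottest temperature considered, which is the behavior the older models baked in.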

19 Oct 2012

Yesterday, we described the awesome power of the new petascale supercomputers, which are capable of performing more than one quadrillion calculations per second. But building these machines is just the beginning; it's how they're applied to the great scientific problems of today and the future that will define their legacy. Immense computational power is best used for immense challenges, such as complex scientific simulations or enormous datasets that would cripple an everyday laptop. Traditionally, astronomy and physics have provided the majority of this kind of work, flush as they are with data collected by telescopes and particle colliders. But as the other three speakers at our Petascale Day event described it, the disciplines of medicine, chemistry, and even business are entering a data-driven phase where they too can take advantage of petascale power.

18 Oct 2012

In the computational world, where speed is king, fifteen zeros is the current frontier. The new wave of petascale supercomputers going online around the world in the coming months is capable of performing at least one quadrillion, or 1,000,000,000,000,000, floating-point calculations per second. In exponential notation, a quadrillion is shortened to 1 x 10^15, so clever computer scientists declared October 15th (get it?) to be Petascale Day, a celebration of this new computational land speed record and its ability to transform science.
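
For a sense of scale, the snippet below compares one second of petascale work with what an everyday laptop can do; the laptop figure is an assumed round number rather than a benchmark.

```python
# How long would an ordinary laptop need to match one second of work by
# a petascale machine? The laptop figure is an assumption for scale.
PETAFLOPS = 1e15        # one quadrillion floating-point operations per second
LAPTOP_FLOPS = 50e9     # assume roughly 50 GFLOPS for an everyday laptop

seconds = PETAFLOPS / LAPTOP_FLOPS
print(f"{seconds:,.0f} seconds, or about {seconds / 3600:.1f} hours")
```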

Here at the Computation Institute, we observed the day by hosting a lunch event with the University of Chicago Research Computing Center, putting together a roster of six talks about these powerful machines and the new types of research they will enable. The speakers, who hailed from Argonne National Laboratory, the University of Chicago, and the Computation Institute, talked about the exciting potential of the petascale, as well as the technical challenges scientists face to get the most out of the latest supercomputers.

12 Oct 2012

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we’ll be there Wednesday through Friday to report on talks about the latest in computational research. We’ll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eScience.

What to Do (and Say) When the Models Aren't Good Enough (8:30 - 10:00)

The place where most people encounter computational models in their daily lives is the weather forecast. The meteorologist on the morning news and the information in the Weather app both rely on numbers generated by computer models that analyze satellite data and forecast the next 24 hours or longer with some degree of probability. As everyone knows, these forecasts aren't always right, despite centuries of science studying weather patterns and devising supposedly better ways of predicting whether it will rain tomorrow.

11 Oct 2012

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we’ll be there Wednesday through Friday to report on talks about the latest in computational research. We’ll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eScience.

How to Get to All That Data (and When Do the Robots Take Over) (1:00 - 3:00)

A lot of information was shared this afternoon at the conference about the voting habits of people living in Melbourne, Australia. Two different but related projects from Down Under — the Australian Urban Research Infrastructure Network and esocialscience.org — demonstrated their web-based portals for sharing datasets collected about the country, and both chose to map the distribution of voters for the two major parties in Australia, the Labor Party and the Liberals (who are actually conservative, we learned). The presentations, by Gerson Galang of the University of Melbourne and Nigel Ward of the University of Queensland, showed both the mountains of data available to researchers with a few clicks in their browser and the very complicated machinery "under the hood" that makes such voluminous information — along with the analysis and visualization tools often needed by those researchers — so easily accessible.

10 Oct 2012

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we'll be there Wednesday through Friday to report on talks about the latest in computational research. We'll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eSci12.

Paving Future Cities with Open Data (Panel 2:00 - 3:00)

As the Earth's population increases, the world is urbanizing at an accelerating rate. Currently, half of the people on the planet live in cities, but that number is expected to grow to 70 percent in the coming decades. Booming populations in China and India have driven rapid urban development at a rate unprecedented in human history. Simultaneously, existing cities are releasing more data about their infrastructure than ever before, on everything from crime to public transit performance to snow plow geotracking.

So now is the perfect time for computational scientists to get involved with designing and building better cities, and that was the topic of a panel moderated by Computation Institute Senior Fellow Charlie Catlett. With representatives from IBM and Chicago City Hall and a co-founder of EveryBlock, the panel brought together experts who have already started digging into city data to talk about both the potential of that work and the precautions it demands.

09 Oct 2012

A vast amount of scientific knowledge is inaccessible to the scientific community because small laboratories lack the computational resources or tools to share and analyze their experimental results. With a new grant from the National Science Foundation, the Computation Institute will collaborate with leading institutions to look for ways that software can bring this data out of hiding, revealing untapped value in the "long tail" of scientific research.

The one-year, $500,000 planning grant enables investigators at the Computation Institute, the University of California, Los Angeles, the University of Arizona, the University of Washington, and the University of Southern California to lay the groundwork for a proposed Institute for Empowering Long Tail Research as part of the NSF's Scientific Software Innovation Institutes program. Researchers will engage with scientists from fields such as biodiversity, economics, and metagenomics to determine the optimal solutions for the increasingly challenging data and computational demands placed upon smaller laboratories.

08 Oct 2012

A large chunk of a government's budget can be traced back to a small number of frequently used, expensive programs. These can include adult and juvenile incarceration, foster care for endangered children, or safety-net services such as mental health and substance abuse treatment for low-income individuals. These programs don't operate in isolation; many individuals or families in one of the above programs will also be in at least one more at some point in their lives. Finding these social service "hotspots" could allow governments to distribute resources more effectively, reducing costs without sacrificing services at a time when budgets are especially tight.

But the data from each of these programs are walled off in different departments, such as the Departments of Corrections or Children and Family Services, with limited to no sharing across bureaucratic lines. In his Sept. 27 talk at the Computation Institute, Robert Goerge, a CI senior fellow and senior research fellow at the University of Chicago's Chapin Hall, described how integrating these silos of public sector data can inform more efficient government spending, and how computation can help.
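
As a toy illustration of the overlap Goerge described, the sketch below counts hypothetical clients who appear on more than one program roster. Real integrated data systems depend on careful, often probabilistic record linkage rather than a conveniently shared ID, so this is only the simplest version of the idea.

```python
from collections import Counter

# Hypothetical client rosters from three separate program "silos".
programs = {
    "corrections": {"p001", "p002", "p003", "p004"},
    "family_services": {"p002", "p004", "p005"},
    "substance_abuse_treatment": {"p002", "p006"},
}

# Count how many programs each client appears in.
appearances = Counter(pid for roster in programs.values() for pid in roster)

# Clients served by more than one program are candidate "hotspots".
hotspots = sorted(pid for pid, n in appearances.items() if n > 1)
print("Clients in more than one program:", hotspots)
```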

01 Oct 2012

The Dark Energy Survey (DES) is one of the most ambitious astrophysics experiments ever launched. For five years, a custom-designed camera mounted on a telescope in Chile will collect images of distant galaxies in the southern sky over an area of 5,000 square degrees, corresponding to roughly one-eighth of the sky. The project will generate petabytes (thousands of terabytes) of data that must be painstakingly analyzed by a collaboration of scientists from 27 institutions to find answers about the nature of dark energy, dark matter, and the forces that shape the evolution of the universe.
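
The one-eighth figure is straightforward to check: the full celestial sphere covers about 41,253 square degrees, so a 5,000-square-degree footprint works out to a bit over 12 percent of the sky.

```python
import math

# The full sphere covers 4*pi steradians; converting steradians to
# square degrees gives roughly 41,253.
full_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2
survey_area_sq_deg = 5000

print(f"Full sky: {full_sky_sq_deg:,.0f} square degrees")
print(f"DES footprint: {survey_area_sq_deg / full_sky_sq_deg:.1%} of the sky")
```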

But the real data collected by that camera is only a fraction of the work in store for the DES team. As part of the DES Simulation Working Group, Andrey Kravtsov and Matthew Becker of the University of Chicago (in collaboration with researchers at Stanford University and the University of Michigan) are building and running complex computer simulations that model the evolution of the matter distribution in the universe. By the end of the project, these simulations may increase the data analysis demands of the survey by as much as a hundredfold. Why is such a large investment of time and effort in simulations needed? Accuracy, Kravtsov said.