21 Dec 2012

Last week, we announced the newest CI research center, the Urban Center for Computation and Data (UrbanCCD). Led by CI senior fellow Charlie Catlett, the center will bring the latest computational methods to bear on how to intelligently design and manage large, rapidly growing cities around the world. With more cities, including our home of Chicago, releasing open datasets, UrbanCCD hopes to apply advanced analytics to these new data sources and use them to construct complex models that simulate the effects of new policies and interventions on a city's residents, services and environment.

Since the announcement, news outlets including Crain's Chicago Business, RedEye and Patch have written articles about UrbanCCD and its mission. The center was also highlighted by UrbanCCD collaborators at the School of the Art Institute of Chicago and Argonne National Laboratory, and endorsed by US Rep. Daniel Lipinski.

18 Dec 2012

Most people think that scientists spend all of their time conducting experiments. But the less glamorous side of science comes after the experiments are done, as scientists laboriously comb through the data their work produced. As new technologies make laboratory procedures faster and more automated, more and more of a scientist's time is spent on the often tedious task of analyzing data. To accelerate discovery, use resources more efficiently and avoid burning out graduate students, new ways of automating data analysis need to be found.

Carolyn Phillips, a Computation Institute staff member and postdoctoral fellow at Argonne National Laboratory, presented one solution to this data analysis traffic jam in her talk at the CI on December 14th. Phillips works with scientists studying nanoscale self-assembly, the ability of small, simple molecules to form incredibly complex patterns with no external influence. Many researchers in this realm use computer simulations to understand how self-assembly works and to find new ways of harnessing it for the design of drugs, materials and cleaner energy sources. But these simulations can produce a flood of data, most of which still must be sorted and analyzed manually by slow, distractible humans before the next round of simulations can be run – a problem Phillips sought to fix.
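To give a flavor of what that kind of automation can look like, here is a minimal, hypothetical sketch in Python: score each simulation snapshot with a crude order parameter and flag only the promising ones for human inspection. The metric, cutoff and threshold below are illustrative assumptions for this post, not Phillips' actual method.

```python
import numpy as np

def mean_neighbor_count(positions, cutoff=1.5):
    """Average number of neighbors within `cutoff` of each particle,
    a crude proxy for how ordered (assembled) a snapshot is."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distances
    return (dists < cutoff).sum(axis=1).mean()

def triage(snapshots, threshold=4.0):
    """Return indices of snapshots ordered enough to merit human review,
    so a researcher inspects a handful of frames instead of thousands."""
    return [i for i, snap in enumerate(snapshots)
            if mean_neighbor_count(snap) >= threshold]

# Example: ten random 100-particle snapshots in a 5x5x5 box.
rng = np.random.default_rng(0)
snapshots = [rng.uniform(0.0, 5.0, size=(100, 3)) for _ in range(10)]
print("Flagged for review:", triage(snapshots))
```

The point of such a filter is not accuracy on any single frame but throughput: a cheap, automatic score turns a manual sorting chore into a short review queue.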

07 Dec 2012

Newspapers don't always have the most exciting afterlife. A day or two after printing, most newspapers retire to a secondary role as kindling for the fireplace, stuffing for fragile items or a disposable surface for house-training pets. But the content of newspaper articles can hold value long after publication for researchers interested in the daily, local pulse of a particular subject. Traditionally, information was extracted from old newspaper clippings by arduously crawling through endless microfiche files or (more recently) web pages. But new methods for text mining offer a fast, automated way to turn old newspaper articles into valuable information, which can then be poured into even more ambitious projects.

Those methods were the backbone of a talk at the Computation Institute by John T. Murphy of the CI and Argonne National Laboratory's Decision and Information Sciences Division. An anthropologist, Murphy is interested in the ways that towns in the American West handle water management, a utility that many of us take for granted but which can be a bitter political battlefield. To sum up these disputes, Murphy referenced a quote often attributed, albeit probably falsely, to Mark Twain: "Whiskey is for drinking, water is for fighting over."
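As a concrete, if oversimplified, illustration of the basic text-mining step, one could tally water-management language in digitized articles by town and year, producing exactly the kind of "local pulse" signal described above. The article records and keyword list here are invented for the example, not drawn from Murphy's pipeline.

```python
import re
from collections import Counter

# Hypothetical keyword list for water-management disputes.
WATER_TERMS = re.compile(
    r"\b(water rights?|irrigation|aquifer|drought|reservoir|groundwater)\b",
    re.IGNORECASE,
)

def tally_mentions(articles):
    """articles: iterable of dicts with 'town', 'year' and 'text' keys.
    Returns a Counter keyed by (town, year) with total keyword hits."""
    counts = Counter()
    for article in articles:
        hits = len(WATER_TERMS.findall(article["text"]))
        if hits:
            counts[(article["town"], article["year"])] += hits
    return counts

# Invented sample records standing in for digitized newspaper articles.
sample = [
    {"town": "Dillon", "year": 1998,
     "text": "The council debated water rights and the shrinking reservoir."},
    {"town": "Dillon", "year": 1999,
     "text": "Drought conditions renewed the fight over irrigation quotas."},
]
print(tally_mentions(sample))
```

Real pipelines replace the keyword regex with more robust natural-language processing, but even a tally this simple shows how decades of clippings become a searchable, chartable dataset.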