Blog

You’re walking down a nondescript corridor lit by a harsh overhead light.

Blog

Andrey Rzhetsky, professor of medicine and human genetics, isn’t a computer scientist by trade. But the messy complexity of biomedicine is a problem that fairly cries out for computational analysis, and it was the perfect springboard for his Visualization Speaker Series talk, “Adventures in Analysis of Large Biomedical Datasets,” in which he discussed the overarching theme of his work: getting data for complex networks, combining data sets, and drawing from them some “non-obvious conclusions.”

Blog

You may not realize it, but network analysis is a daily part of your life.

Blog

An article from Benjamin Recchie of the University of Chicago Research Computing Center looks at how CI Senior Fellow John Goldsmith and graduate student Jackson Lee use high-performance computing to better understand how computers -- and by extension, humans -- learn the rules of language.

Blog

The genetic code of living things has been likened to a blueprint for life. Unlike a real blueprint, though, the genome doesn’t explicitly lay out everything. Genomes can reveal the amino acid sequence of proteins, the molecules at the heart of biological functions like metabolizing food and fighting disease, but the truly valuable insights lie hidden in a protein's physical structure—a far more elusive piece of data.
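The “blueprint” claim above can be made concrete with a toy example. The sketch below is purely illustrative and not taken from the excerpt: it translates a short DNA sequence into an amino acid sequence using a few entries from the standard genetic code. Predicting how that chain of amino acids folds into a three-dimensional structure is the far harder problem the excerpt alludes to.

```python
# Minimal sketch: translate a short DNA coding sequence into amino acids
# using a partial, illustrative codon table. The real genetic code has
# 64 codons; only the ones used in the example sequence are listed here.
CODON_TABLE = {
    "ATG": "Met",  # start codon
    "GGC": "Gly",
    "TTT": "Phe",
    "GAA": "Glu",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three bases at a time and look up each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGGCTTTGAATAA"))  # ['Met', 'Gly', 'Phe', 'Glu']
```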

Blog

To the naked eye, the brain is pretty unimpressive: just a pink, wrinkly lump of an organ. To see the true complexity of the brain, one must find a way to observe the intricate symphony of electrical and chemical communication that underlies all of the brain’s functions. But for decades, scientists were largely limited to watching the electrical activity of one neuron at a time, akin to trying to comprehend that symphony by listening to just a single violin in the orchestra. In his talk for the University of Chicago Research Computing Center’s Visualizing the Life of the Mind series, Nicholas Hatsopoulos explained the recent technological advances that have allowed his laboratory to reveal more of the brain’s busy conversation.

CHINA'S LATEST SUPERCOMPUTER VICTORY

China's Milky Way 2 supercomputer was recently declared the fastest supercomputer in the world by industry scorekeeper Top500, the latest move in the increasingly international race for high-performance computing supremacy. Late last month, CI Senior Fellow Rick Stevens appeared on Science Friday, alongside Top500 editor Horst Simon, to talk about why that competition matters, and what the global push for faster computation will do for medicine, engineering and other sciences.

"These top supercomputers are like time machines," Stevens said. "They give us access to a capability that won't be broadly available for five to ten years. So whoever has the time machine is able to do experiments, able to see into the future deeper and more clearly than those that don't have such machines."

Blog

This week, some 25 cities around the world are hosting events online and offline as part of Big Data Week, described by its organizers as a "global community and festival of data." The Chicago portion of the event features several people from the Computation Institute, including two panels on Thursday: "Data Complexity in the Sciences: The Computation Institute," featuring Ian Foster, Charlie Catlett, Rayid Ghani and Bob George, and "Science Session with the Open Cloud Consortium," featuring Robert Grossman and his collaborators. Both events are in downtown Chicago, free, and you can register at the above links.

But the CI's participation in Big Data Week started with two webcast presentations on Tuesday and Wednesday that demonstrated the broad scope of the topic. The biggest data of all is being produced by simulations on the world's fastest supercomputers, including Argonne's Mira, the fourth-fastest machine in the world. Mira boasts the ability to perform 10 quadrillion floating-point operations per second, but how do you make sense of the terabytes of data such powerful computation produces on a daily basis?
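For a rough sense of those scales, here is a back-of-the-envelope sketch. The 10-quadrillion-operations-per-second peak comes from the paragraph above; the 100 TB daily output is a hypothetical figure, used only to illustrate the sustained data rates such output implies.

```python
# Back-of-the-envelope sketch of the scales mentioned above. The peak speed
# comes from the paragraph; the daily data volume is an assumed example.
peak_flops = 10e15                 # 10 quadrillion floating-point ops/sec = 10 petaflops
ops_per_day = peak_flops * 86_400
print(f"Operations per day at peak: {ops_per_day:.2e}")          # ~8.6e20

# Suppose a simulation writes out 100 TB of results per day (hypothetical figure):
daily_output_bytes = 100e12
sustained_rate = daily_output_bytes / 86_400                      # bytes per second
print(f"Sustained write rate: {sustained_rate / 1e9:.2f} GB/s")   # ~1.16 GB/s
```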


Blog

They say a picture is worth a thousand words. But if your camera is good enough, the photos it takes could also be worth billions of data points. As digital cameras grew increasingly popular over the last two decades, they also became exponentially more powerful in terms of image resolution. The highest-end cameras today can claim 50-gigapixel resolution, meaning they are capable of taking images made up of 50 billion pixels. Many of these incredible cameras are so advanced that they have outpaced the resolution of the displays used to view their images – and the ability of humans to find meaningful information within their borders.
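A quick calculation shows why that gap matters. The 50-gigapixel figure is from the paragraph above; the 4K display is an assumed point of comparison, not something from the talk.

```python
# Rough arithmetic behind the claim that gigapixel cameras have outpaced displays.
# The 50-gigapixel figure comes from the paragraph above; the 4K display is an
# assumed reference point for comparison.
image_pixels = 50e9                    # 50 billion pixels
display_pixels = 3840 * 2160           # a single 4K display (~8.3 million pixels)

displays_needed = image_pixels / display_pixels
print(f"4K displays needed to show every pixel at once: {displays_needed:,.0f}")
# -> roughly 6,000 displays, which is why viewers must pan, zoom, or be guided
#    toward the regions of the image that actually matter.
```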

Closing this gap was the focus of Amitabh Varshney's talk for the Research Computing Center's Show and Tell: Visualizing the Life of the Mind series in late February. Varshney, a professor of computer science at the University of Maryland, College Park, discussed the visual component of today's big data challenges and the solutions that scientists are developing to help extract maximum value from the new wave of ultra-detailed images -- a kind of next-level Where's Waldo? search. The methods he discussed combine classic psychology about how human vision and attention work with advanced computational techniques.
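The excerpt doesn't spell out Varshney's specific algorithms, but one classic attention-inspired idea from vision research is a center-surround saliency map: regions whose appearance differs sharply from their wider surroundings tend to draw the eye. The sketch below is a minimal, assumed illustration of that general idea, not a description of his method.

```python
import numpy as np

# Illustrative center-surround "saliency" heuristic: regions whose local
# brightness differs strongly from their wider surroundings are flagged as
# likely to draw human attention. Shown only as an example of combining
# vision science with computation; it is not Varshney's specific method.

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Mean filter via a padded sliding window (fine for small demo images)."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + img.shape[0],
                          radius + dx : radius + dx + img.shape[1]]
    return out / (2 * radius + 1) ** 2

rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.05, size=(64, 64))   # mostly uniform background
image[30:34, 30:34] = 1.0                      # one small bright anomaly

center = box_blur(image, radius=1)             # fine-scale local average
surround = box_blur(image, radius=8)           # coarse-scale context
saliency = np.abs(center - surround)           # big difference = attention-grabbing

y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"Most salient region is near ({y}, {x})")  # lands on the bright patch
```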

Blog

Humans retain a visual bias, hundreds of thousands of years after our pattern-recognition skills evolved for hunting and avoiding predators. In a newspaper or a scientific article, a well-designed graphic or picture can often convey information more quickly and efficiently than raw data or a lengthy chunk of text. And as the era of data science dawns, the interpretive role of visualization is more important than ever. It's hard to even imagine the size of a petabyte of data, much less the complex analysis necessary to extract knowledge from the flood of information within.

Fortunately, scientists and engineers were studying this need for visualization long before Big Data became a buzzword. The Electronic Visualization Laboratory, housed at the University of Illinois at Chicago, has been active in the field long enough to have done special effects work on the original Star Wars. EVL researchers have pioneered methods in computer animation, virtual reality and touchscreen displays, and adapted those technologies for use by scientists in academia and industry. But in EVL director Jason Leigh's talk at the University of Chicago Medical Center on January 29th, the killer app he focused on most was almost as old as those hunter-gatherer ancestors: collaboration.