Chameleon: Why Computer Scientists Need a Cloud of Their Own


It's been almost a year since Chameleon, the experimental cloud computing testbed co-run by the Computation Institute and the Texas Advanced Computing Center, went into full production for research use. Already, 600 users across 150 projects have used the system to test new uses and technologies for cloud computing, from finding unknown exoplanets to preventing cyberattacks. Last week, HPCwire spoke to CI Senior Fellow Kate Keahey and other members of the Chameleon team, surveying the project's early successes, previewing the innovations still to come, and clarifying what it means to be a "testbed" for a growing area of computer science.

“It’s not so much a test bed – that’s a slightly confusing name – it’s a scientific instrument,” said Keahey. “Having an experimental testbed for computer science means that users are able to try out and validate their research on those resources...They come up with all sorts of interesting ideas, which are solutions to the open challenges that we have in computer science right now – and they are able to validate their research on the scientific instrument – that means that if they have a hypothesis that their new algorithm is able to solve something faster or more efficiently, they will be able to get time on the resource and run experiments that prove or disprove that hypothesis.”

As HPCwire's Tiffany Trader explains, Chameleon offers researchers two key things: access and scale. The "bare metal" access Chameleon provides is unusual for an HPC resource, allowing researchers to configure and test the performance of different architectures rather than simply running code on a fixed software stack. The system's scale also provides unique functionality that will be especially important for new sources and uses of data, Keahey said.

“The price of instruments is falling and small sensors, personal devices, wearable Internet-connected devices and other Internet-of-Things elements in tandem with social media feeds are generating enormous quantities of data,” Keahey continued. “It’s not always just big data, it’s sometimes small pieces of data that get generated all the time and they accumulate, but our insight, our capability to instrument our environment now has become unprecedented and it will just continue to grow. And so the question now is what new data processing patterns it introduces, how do we interact with this highly instrumented environment? These are all interesting research questions in computer science and all of them have an element of scale.”

To learn more about the Chameleon project, visit the Chameleon website.