01 Oct 2012

The Dark Energy Survey (DES) is one of the most ambitious astrophysics experiments ever undertaken. For five years, a custom-designed camera mounted on a telescope in Chile will collect images of distant galaxies in the southern sky over an area of 5,000 square degrees, roughly one-eighth of the full sky. The project will generate petabytes (thousands of terabytes) of data that must be painstakingly analyzed by a collaboration of scientists from 27 institutions to find answers about the nature of dark energy, dark matter, and the forces that shape the evolution of the universe.

But analyzing the real data collected by that camera is only a fraction of the work in store for the DES team. As part of the DES Simulation Working Group, Andrey Kravtsov and Matthew Becker of the University of Chicago (in collaboration with researchers at Stanford University and the University of Michigan) are building and running complex computer simulations that model the evolution of the matter distribution in the universe. By the end of the project, these simulations may increase the data analysis demands of the survey by as much as a hundredfold. Why is such a large investment of time and effort in simulations needed? Accuracy, Kravtsov said.
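To give a flavor of what such simulations compute, here is a deliberately tiny sketch of one gravitational N-body step in Python. It is purely illustrative: production cosmological codes like those the DES Simulation Working Group runs use far more sophisticated solvers and billions of particles, and every value below (particle count, time step, softening length) is an assumption invented for the example.

```python
import numpy as np

def leapfrog_step(pos, vel, masses, dt, G=1.0, softening=0.05):
    """Advance particle positions/velocities by one kick-drift-kick step."""
    def accel(p):
        # Pairwise separation vectors; softening avoids singular forces as r -> 0.
        d = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        r2 = (d ** 2).sum(axis=-1) + softening ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)  # no self-gravity
        # a_i = G * sum_j m_j (p_j - p_i) / |r_ij|^3
        return G * (d * inv_r3[..., np.newaxis]
                    * masses[np.newaxis, :, np.newaxis]).sum(axis=1)

    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

# Toy run: 100 unit-mass particles scattered in a box, evolved for 10 steps.
rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(100, 3))
vel = np.zeros((100, 3))
masses = np.ones(100)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, masses, dt=0.01)
```

Direct summation like this scales as O(N²), which is exactly why real cosmological codes rely on tree and particle-mesh methods instead.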

20 Sep 2012

The last decade has seen a statistical revolution in sports, where new, smarter measures of player performance in baseball, football, or soccer are replacing more traditional stats. Often known as “sabermetrics” in tribute to the Society for American Baseball Research, advanced statistics such as VORP, BABIP, and FIP try to quantify a player’s performance more accurately, while forecasting tools such as PECOTA try to predict a player’s future performance. While imperfect, these stats have given general managers new tools for deciding which players to sign to long-term contracts and which to release.
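As a concrete illustration of the genre, FIP (Fielding Independent Pitching) can be computed from a handful of counting stats using its standard published formula, which weights only outcomes a pitcher controls directly. The sample pitching line below is invented for the example, and the league constant (~3.10) varies slightly by season.

```python
def fip(hr: int, bb: int, hbp: int, k: int, ip: float,
        league_constant: float = 3.10) -> float:
    """Fielding Independent Pitching from basic counting stats."""
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + league_constant

# Hypothetical pitcher: 15 HR, 40 BB, 5 HBP, 200 K over 210 innings.
print(round(fip(hr=15, bb=40, hbp=5, k=200, ip=210.0), 2))  # -> 2.77
```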

The scientific community has its own measures of career performance, but the use of these figures in personnel decisions remains controversial. Decisions on hiring or tenure remain largely in the hands of committees, who judge applicants based on their CV, interviews, pedigree, or myriad other potentially subjective factors. Attempts to come up with more objective measures of scientific achievement are handicapped by disagreement over what factors make a “good” scientist and predict a successful career: is it the number of publications, the number of citations, or something else entirely?
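One widely used citation-based measure, offered here purely as an illustration of the genre, is the h-index: the largest h such that a researcher has h papers with at least h citations each. The citation counts in this sketch are made up.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Seven papers with these citation counts yield h = 5:
# five of them have at least 5 citations each.
print(h_index([50, 18, 7, 6, 5, 2, 1]))  # -> 5
```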