LabLit.com


Essay

Deep observation

On how modern cosmologists see the Universe

Henry Joy McCracken 18 September 2011

www.lablit.com/article/684

All photos in this piece by Grégoire Eloy

Until the moment the image is created, these galaxies are unknown in any catalogue, and no name exists for them in any official register

What does it mean to observe something?

The answer is perhaps not as straightforward as it first seems. For most of recorded time, astronomers concerned themselves with studying what could be seen. A star or a galaxy. Planets. Needless to say, the invention of the telescope immeasurably increased our reach: the human eye collects light in instants, each a fraction of a second in length, and on a good night at a dark site one can see stars down to magnitude 6 or so. A few thousand stars scattered across the night sky.

Telescopes changed that, and suddenly the universe that could be observed was measured in hundreds of thousands of objects. With steady progress in instrumentation, our Universe has expanded outwards; the photographic plate is a much better and more faithful instrument than the human eye. In the 20th century, it became possible for the first time to detect radiation from beyond our solar system at wavelengths outside the narrow range of visible light, and now, on the ground or in space, observatories cover every corner of the electromagnetic spectrum, from radio waves to X-rays.

Deep inside the Earth or in the oceans, we are constructing new observatories which can detect ghostly particles like neutrinos created in the cores of stars – either in our own sun, or in distant stellar explosions such as the famous “supernova 1987A”. Such particles interact so weakly with normal matter that kilometers of lead have almost no effect on them, and one must construct these massive underground observatories, filled with thousands of tonnes of liquid linked to sensitive detectors, to have even the smallest chance of detecting the interaction of even one of these particles with normal matter.


In deep images of the distant Universe, astronomers are now able to measure not only the light from individual objects emitted when the Universe was only a fraction of its current age, but also the collective radiation emitted by all those objects which are not directly detected – the so-called extragalactic background light, like the dim glow in the sky one sees kilometers from a large city. In these deep images, we can glimpse objects from a time when the Universe was only a few hundred million years old. In the coming years it is possible we will see the very first galaxies ever formed, at the brink of the beginning of the Universe.

Beyond these galaxies are the photons from the “last scattering surface”, the edge of the epoch when the Universe was an opaque cloud. This light is red-shifted into the microwave bands at the present day and represents the ultimate distance to which it is possible to observe anything; it delineates the absolute scale of the observable Universe. And yet, despite all this, despite the wealth of information that these photons bring us from the beginning of time, we have observed almost nothing. The glowing stars and plasmas from which these photons originated represent less than one percent of the total energy content of the Universe. It turns out that most of the Universe is unseen, and the major challenge facing astronomy today is to find some way to “observe” this unseen content and to understand its properties and origin.
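
To put rough numbers on this (the figures below are standard textbook values, not quoted in the essay): the last scattering surface lies at a redshift of roughly 1100, and the temperature of blackbody radiation falls by the same factor of (1 + z) as the Universe expands, which is what carries that ancient light down into the microwave bands. A minimal sketch of the arithmetic, in Python:

    # Rough illustration using standard textbook values (assumed here, not
    # taken from the essay): light emitted at last scattering, when the
    # Universe was a ~3000 K plasma, is stretched by a factor (1 + z),
    # with z of order 1100, down into the microwave background we see today.
    T_emitted_K = 3000.0        # approximate temperature at last scattering
    z_last_scattering = 1100    # approximate redshift of the last scattering surface

    T_today_K = T_emitted_K / (1 + z_last_scattering)
    peak_wavelength_mm = 2.898 / T_today_K   # Wien's law: lambda_max = 2.898 mm*K / T

    print(f"Observed temperature today: {T_today_K:.2f} K")    # about 2.7 K
    print(f"Peak wavelength: {peak_wavelength_mm:.2f} mm")     # about 1 mm: microwaves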

**********

Gravity touches everything in the Universe, even photons travelling at the speed of light. One of the first hints that our old picture of the Universe – the static universe-on-a-stage of Newtonian mechanics – might not be entirely accurate was a remarkable observation back at the beginning of the 20th century: during a total solar eclipse, the positions of stars near the sun were ever so slightly perturbed. They were not where they should have been; the gravitational field of the sun had bent their light on its way to Earth. Or rather, in the new way of thinking about these things, space itself had been bent around the sun. The displacement of the stars' images was exactly what one would expect from an object the mass of the sun. Or, stated differently, one could deduce the mass of the sun by measuring how far the stars appeared to move.
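
For the curious, the size of that displacement is easy to estimate. General relativity predicts that a light ray passing a mass M at distance b is deflected by an angle of 4GM/(c²b). A minimal sketch of that calculation for a ray grazing the sun, using standard physical constants (none of these numbers appear in the essay itself):

    import math

    # Deflection of a light ray grazing the sun: alpha = 4 G M / (c^2 b).
    # Constants are standard SI values.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # solar mass, kg
    R_sun = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

    alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
    alpha_arcsec = math.degrees(alpha_rad) * 3600

    print(f"Deflection at the solar limb: {alpha_arcsec:.2f} arcseconds")  # ~1.75"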

Of course, at the time, this didn't seem a particularly useful approach to measure the mass of the largest and most luminous object in our solar system; after all the effect of the sun on the motion of the planets was much easier to determine. But Albert Einstein, the architect of this new vision of the Universe, suggested that there might be astrophysical sources outside our solar system whose light was bent by an intervening massive object, a 'gravitational lens'. The effect would be difficult to measure and would require very high-resolution images to measure the small displacements expected. In the 1970s, the first gravitational lens was discovered, a multiply-imaged quasar. In the next ten years, arcs and filaments were observed at the centres of massive galaxy clusters: these were the images of distant background galaxies whose light had been bent by the presence of material inside the foreground clusters. Since the amount of luminous matter in the cluster was insufficient to explain the amplitude of the effect, it had to be an unseen dark component – it had to be “dark matter”. Astronomers could even trace the distribution of this dark matter inside the clusters from the orientation and shapes of the lensed objects. In other words, it became possible to “observe” dark matter.

**********

Scientific theories are advanced by the “truths on the ground”. Getting these ground truths has always been challenging, and surprisingly the enormous advances in instrumentation and the vastly declining cost of computer power have not made things easier. Even if each new set of instruments represents an order-of-magnitude increase in our ability to measure the Universe and its contents, to make an order-of-magnitude leap in understanding still demands the same quantity of time and effort as it did before. This is due in large part to the fact that each new theory must explain not only the latest and newest observations, but also every piece of data gathered in the past. In the 1970s, galaxy distances were measured one by one, things not having changed much since Milton Humason and Edwin Hubble measured the first galaxy distances at Mount Wilson. By the 1980s, one could measure a few tens of galaxies per night, and in the early years of the last decade of the twentieth century the first “redshift surveys” of the distant universe had been made, comprising catalogues of perhaps a thousand objects gathered over the space of several years' observation. Today, on the 8m-class VLT telescopes, new instruments make it possible to collect in a single night as many distance measurements as those deep surveys of the 1990s did in total – but in the meantime, the bar has been raised ever higher: to push the boundaries of our knowledge, a survey at these depths must contain 10,000–20,000 distance measurements, a project which will still take several years to complete.

At each step in the history of science, our vision of the Universe has sharpened and we see more and further. Every so often there is a radical paradigm shift in our understanding of the underlying “reality” of our observations, but this does not invalidate the observations which have been made before. It is worth remembering that any new theory which purports to solve the “dark matter problem” or the “dark energy problem” also has to explain the vast wealth of observational facts on the ground which have accumulated over the last three hundred years of astronomy.

The overarching problem facing cosmology today is to understand the nature and origin of dark matter and dark energy, and how galaxies and stars evolve in the context of a universe dominated by dark matter and dark energy. To do this means combining what we can observe and what we can't observe.

**********

Laplace imagined that it was possible to predict the entire future and past history of the universe if one were to know the positions and velocities of every particle it contained. All that was required was solving the equations of motion for an enormously large number of particles – impossible in practice in an epoch when the most sophisticated aid to calculation was the slide rule. Unfortunately, early twentieth-century physics uncovered unnerving truths concerning the limits of our ability to observe microscopic systems and how deterministic the Universe actually is, and it soon became clear that Laplace's proposition would be impossible to realise in its entirety. But perhaps macroscopic systems – big objects like stars and galaxies – could be followed, provided of course that some way could be found to do all the mathematics required in a reasonable amount of time. And provided, of course, we had an idea how the Universe was created.

By the 1970s, the theoretical foundations were almost in place, and computers were becoming powerful enough to follow the motions of a few tens or hundreds of thousands of particles – enough, just, to simulate the motion of stars in a galaxy. The first remarkable discovery was that the only way in which the beautiful spiral arms seen in many galaxies could ever form was if the galaxy itself was surrounded by an enormous halo of dark, unseen material. Astronomers had seen hints of this unseen mass in other places, for example in the speed with which galaxies rotate or in the motion of galaxies inside galaxy clusters, but the results of these numerical simulations made everyone consider in more detail the possibility of an unseen dark component permeating the Universe.
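
To make concrete what “following the motions of particles” means, here is a minimal, purely illustrative sketch of a direct N-body calculation in Python: a small cloud of particles interacting only through (softened) gravity, advanced with a simple leapfrog integrator. None of the numbers or choices below come from the essay, and real simulations use far more particles and much cleverer algorithms.

    import numpy as np

    # Toy direct-summation N-body code: every particle attracts every other
    # through softened gravity (G = 1, arbitrary units), integrated with a
    # leapfrog (kick-drift-kick) scheme.
    rng = np.random.default_rng(42)
    n, soft, dt, n_steps = 200, 0.05, 1e-3, 1000

    pos = rng.normal(size=(n, 3))      # random initial positions
    vel = np.zeros((n, 3))             # particles start at rest
    mass = np.full(n, 1.0 / n)         # equal masses, total mass 1

    def accelerations(pos):
        """Softened gravitational acceleration acting on every particle."""
        dx = pos[None, :, :] - pos[:, None, :]           # dx[i, j] = pos[j] - pos[i]
        inv_r3 = ((dx**2).sum(axis=-1) + soft**2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                    # no self-force
        return (dx * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

    print("rms radius before:", np.sqrt((pos**2).sum(axis=1).mean()).round(3))
    acc = accelerations(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos)
        vel += 0.5 * dt * acc          # half kick
    # Starting from rest, the cloud begins to collapse under its own gravity.
    print("rms radius after: ", np.sqrt((pos**2).sum(axis=1).mean()).round(3))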

By the next decade, computer power had increased sufficiently to permit, for the first time, simulations of the evolution of particles on large scales throughout the history of the Universe; and we were making the first distance surveys, with the first measurements of the distribution of galaxies in our local Universe. The only computer model which matched observations on large scales was one in which a large component of the Universe consisted of a material which interacted with normal matter only through the action of gravity.

The relentless increase in computer power over the next decades permitted simulations of higher and higher resolution, containing even larger numbers of particles of even lower masses. Hundreds of scientific papers were written describing in detail the properties of dark matter haloes “observed” inside the simulations, and how the number and density of these haloes evolved over the history of the Universe. A new branch of astrophysics slowly came into being: computational cosmology, and along with it a second way to “observe” dark matter. Paradoxically, although the identity of the dark matter particle was unknown, its properties were known in ever-greater detail: it was a “cold” particle which only interacted with normal matter through gravity.

**********

Stars are not scattered randomly across the sky. Take a look at the night sky from a dark site far from cities: that veil of light stretching across the heavens is made up of millions of stars. Understanding this distribution of stars can tell us about the structure of our own galaxy. But look carefully and one can see one or two dim smudges of light: galaxies, clouds of stars beyond the Milky Way. In reality there are as many galaxies in the observable universe as there are stars in our Milky Way. Imprinted in this distribution of galaxies is the whole history of their formation, their evolution and their interaction through gravity with the haloes of dark matter they inhabit. Many research groups are now attempting to make four-dimensional maps of the Universe: thanks to the finite speed of light, observing more distant objects leads us further back into the history of the Universe. With the fourth axis as time, astronomers can use this ancient fossil light to make a series of slices through the Universe, almost like a film, except in this case the actors are different in every scene: the same galaxies don't appear in every slice. Each slice is a different group of objects seen at a different stage in their evolutionary history.
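
To attach numbers to the idea that the fourth axis is time: a cosmology library can convert each slice's redshift into a “lookback time”, how long ago the light we receive was emitted. A minimal sketch, assuming the Python astropy package and its built-in Planck parameters (neither of which is mentioned in the essay):

    from astropy.cosmology import Planck18

    # Lookback time: how far into the Universe's past we are seeing when we
    # observe a galaxy at a given redshift z (Planck 2018 parameters assumed).
    for z in (0.1, 0.5, 1.0, 2.0, 4.0, 7.0):
        t_gyr = Planck18.lookback_time(z).to_value("Gyr")
        print(f"z = {z:4.1f}: light emitted about {t_gyr:4.1f} billion years ago")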

How do galaxies form in the first place? The idea is that haloes of dark matter which have detached themselves from the expansion of the Universe (through their own self-attraction) attract in turn ordinary material, gas and dust, which falls into these deep wells of gravity. This gas becomes heated and compressed, and eventually stars and galaxies light up the haloes. How the distribution of visible material in the Universe “follows” dark matter on small scales depends on how galaxies form. So if we can measure the distribution of visible objects in the sky on small scales over a large range in cosmic time, and add to this our knowledge of how dark matter haloes are distributed in the Universe (which we can learn either from gravitational lensing observations or from computer simulations), we can measure how massive each individual dark matter halo is, and learn how galaxies form. This is our third way to “observe” something which we cannot see.
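
The essay doesn't name the statistic, but one standard way to quantify “the distribution of visible objects on small scales” is the two-point correlation function: count pairs of galaxies as a function of separation and compare with the pairs expected from a purely random catalogue covering the same area. A rough, self-contained sketch of that pair-counting step in Python, on invented positions:

    import numpy as np

    # Crude estimate of an angular two-point correlation function using the
    # simple DD/RR - 1 estimator.  "galaxies" and "randoms" are both invented,
    # uniformly distributed points, so the result should hover around zero;
    # a real galaxy catalogue shows an excess of close pairs (clustering).
    rng = np.random.default_rng(1)
    galaxies = rng.random((1000, 2))   # stand-in catalogue: (x, y) in degrees
    randoms = rng.random((1000, 2))    # unclustered comparison catalogue

    def pair_counts(points, bins):
        """Histogram of all pairwise separations, each pair counted once."""
        d = np.sqrt(((points[:, None, :] - points[None, :, :])**2).sum(-1))
        return np.histogram(d[np.triu_indices_from(d, k=1)], bins=bins)[0]

    bins = np.linspace(0.01, 0.2, 11)
    w = pair_counts(galaxies, bins) / pair_counts(randoms, bins) - 1
    print(np.round(w, 2))              # correlation in each separation bin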

**********

Today, observational cosmology advances on all these fronts simultaneously: advances in instrumentation provide larger and deeper surveys of the sky, comprising greater numbers of dimmer objects seen at larger distances. These surveys probe both the visible and the invisible components of the Universe. At the same time, advances in computing power make it possible to build simulated Universes capable of at least partially reproducing the Universe we observe. Although computers are not powerful enough to simulate the formation of galaxies starting from the Big Bang, simulations containing only dark matter have been perfected in the last decade. One key challenge now is to understand how galaxies fill the haloes.

In the basement of the Institut d'Astrophysique de Paris, where I work, hundreds of computers run around the clock, taking countless observations either from satellites (in the case of Planck) or ground-based observatories (in the case of the TERAPIX project) and combining them into a single image. The deep images produced at TERAPIX comprise hundreds of individual images combined together; one cannot simply point a telescope at a patch of sky and open the shutter for hours at a time: the light from the night sky alone would white-out the image in only a few minutes. Moreover, there are only so many hours in a given night. Astronomers must superimpose observations taken on many different nights with a given telescope, over the course of perhaps several years, representing hundreds of hours of exposure. On a single image, which perhaps results from only a few minutes' exposure, a few tens of galaxies are visible. The final image, representing hundreds of hours of exposure (and for this reason capable of revealing much fainter and more distant objects), is filled with galaxies; there are perhaps a million or more in a region of sky no larger than the full moon. Not a single one of these galaxies is visible to the naked eye, and many of them have never been observed before by any human. Until the moment the image is created and analysed, they are unknown in any catalogue, and no name exists for them in any official register. A survey might contain hundreds of images like this taken at several different wavelengths, all of which have to be precisely aligned before combination. The finished survey might stretch over hundreds of square degrees of the sky.
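
A toy sketch of the final co-addition step described above, assuming the individual exposures have already been aligned onto the same pixel grid and had the sky background subtracted (in reality the alignment and calibration are most of the work, and these numbers are invented):

    import numpy as np

    # Stack many short, noisy exposures of the same patch of sky.  In any
    # single frame the faint source is buried in the noise; in the median of
    # all the frames the noise averages down and the source emerges.
    rng = np.random.default_rng(0)
    n_exposures, shape = 100, (128, 128)

    true_sky = np.zeros(shape)
    true_sky[64, 64] = 5.0                                 # one faint source

    frames = true_sky + rng.normal(0.0, 10.0, (n_exposures,) + shape)
    stack = np.median(frames, axis=0)                      # co-added deep image

    print("source pixel in one exposure:", frames[0, 64, 64].round(1))
    print("source pixel in the stack:   ", stack[64, 64].round(2))
    print("noise level in the stack:    ", stack.std().round(2))  # ~sqrt(n) smaller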

Once images like this are produced, the next step is analysis. This means first cataloguing and classifying each object in each image, a process carried out automatically using computer programs. It is here that we are confronted by one of the most difficult problems facing observational cosmology today. Imagine you are trying to measure a quantity, say for example the average height of a given population of people. If you had only ten or a hundred data points you wouldn't care very much about how you made the measurements, as the largest source of uncertainty would come from scatter in the small number of measurements. But if you had ten thousand or a hundred thousand measurements then it becomes crucial just how those measurements are made – suppose you are unfortunate enough to use a measuring rod that is just slightly too short; averaging together all those measurements will result in a value with a very small “random” error but one which has a “systematic” error. It will be just ever so slightly off.
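
This is easy to see in a toy numerical experiment (all the numbers below are invented for illustration): averaging more and more measurements shrinks the random error as one over the square root of the number of measurements, but the bias from the too-short rod never goes away.

    import numpy as np

    # True mean height 170 cm, random scatter of 10 cm per measurement,
    # and a measuring rod that is slightly too short, so that every reading
    # comes out 0.5 cm too high, say.
    rng = np.random.default_rng(3)
    true_mean, scatter, bias = 170.0, 10.0, 0.5

    for n in (100, 10_000, 1_000_000):
        measurements = true_mean + rng.normal(0.0, scatter, n) + bias
        estimate = measurements.mean()
        random_error = scatter / np.sqrt(n)
        print(f"N = {n:>9,}: estimate = {estimate:8.3f} cm "
              f"(random error ~ {random_error:.3f} cm, bias still {bias} cm)")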

This kind of error is actually one of the dominant sources of error in cosmological surveys. So every step of the measurement chain must be tested, preferably with synthetic images that reproduce every aspect of the real observations. Often, not every source of error is known in advance, and we have to make educated guesses. At every stage in the analysis, we must try to check that the measurements are consistent with what has been measured before. There is no way to “look up the answer in the back of the book” as one might do for a mathematics problem in school. The point of all this is to try to gain confidence in measurements that have never been made before and which cannot be directly verified. The process of image production, measurement and verification is usually the longest in any scientific project. With the increasing complexity of instrumentation and the large volume of data involved, there are fewer and fewer places in the world which can process the amount of data required to carry out a large survey of the sky and control every aspect of the processing chain, from the moment the pixels arrive from the telescope to the moment the measurements are made.

**********

Ultimately, everything can be expressed as symbols. In the past, astronomers would travel to telescopes and spend nights high above the observing floor guiding ancient light onto glass plates. In the light of day, each object on the plate would be carefully measured and counted using a microscope, the results tabulated and analysed. Today, the first thing that happens to a distant photon arriving at a telescope is that it is converted into an electron, a process many times more efficient than darkening film on a photographic plate. And that electron becomes a digital bit, a symbol.

Other symbols describe our model of the Universe, our best guess for how the world works. Does this model provide an accurate representation of the facts as we understand them? The entire scientific process is concerned with confronting these fragments of data with our own picture of the Universe. That is what creates new knowledge. That is what it really means to observe something.