Wednesday, 27 February 2013

The Whirlpool Galaxy: NASA Image

Photonic Space

The Whirlpool Galaxy is a classic spiral galaxy. At a distance of only 30 million light-years and fully 60 thousand light-years across, M51, also known as NGC 5194, is one of the brightest and most picturesque galaxies in the sky. This image is a digital combination of a ground-based image from the 0.9-meter telescope at Kitt Peak National Observatory and a space-based image from the Hubble Space Telescope, highlighting sharp features normally too red to be seen.

Image Credit: NASA/Hubble

Memristors may lead to Artificial Brains

Bielefeld physicist Andy Thomas takes nature as his model

Scientists have long dreamed of building a computer that would work like a brain, because a brain is far more energy-efficient than a computer, can learn by itself, and needs no programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. A year ago, Thomas and his colleagues demonstrated that such imitation is possible: they constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the prestigious Journal of Physics, published by the Institute of Physics in London.
A nanocomponent that is capable of learning: the Bielefeld memristor, built into a chip here, is 600 times thinner than a human hair. Photo: Bielefeld University
Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.

Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve-like) computer is to function. Among these principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.

Dr. Andy Thomas has summarized the technological principles that need to be met when constructing a processor modeled on the brain. Photo: Bielefeld University
Thanks to these properties, synapses can be used to reconstruct the brain process responsible for learning, says Andy Thomas. He takes the classic psychological experiment with Pavlov’s dog as an example. The experiment shows how you can link the natural reaction to a stimulus that elicits a reflex response with what is initially a neutral stimulus – this is how learning takes place. If the dog sees food, it reacts by salivating. If the dog hears a bell ring every time it sees food, this neutral stimulus will become linked to the stimulus eliciting a reflex response. As a result, the dog will also salivate when it hears only the bell ringing and no food is in sight. The reason for this is that the nerve cells in the brain that transport the stimulus eliciting a reflex response have strong synaptic links with the nerve cells that trigger the reaction.

If the neutral bell-ringing stimulus is introduced at the same time as the food stimulus, the dog will learn. The control mechanism in the brain now assumes that the nerve cells transporting the neutral stimulus (bell ringing) are also responsible for the reaction – the link between the actually ‘neutral’ nerve cell and the ‘salivation’ nerve cell also becomes stronger. This link can be trained by repeatedly bringing together the stimulus eliciting a reflex response and the neutral stimulus. ‘You can also construct such a circuit with memristors – this is a first step towards a neuromorphic processor,’ says Andy Thomas.

‘This is all possible because a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
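The contrast drawn here can be illustrated with a toy model. This is not Thomas’s actual device physics: the conductance variable, update rule and rates below are invented for the sketch, which shows only how a continuously adjustable resistance supports gradual learning and forgetting in a way a binary bit cannot.

```python
# Toy model of a memristive synapse. Unlike a bit, which is only ever
# "on" or "off", the conductance g can take any value between 0 and 1
# and moves gradually: repeated pulses strengthen it (learning), idle
# periods weaken it (forgetting). All parameters are invented for this
# illustration, not taken from the Bielefeld device.

class MemristorSynapse:
    def __init__(self, g=0.0, learn_rate=0.2, forget_rate=0.05):
        self.g = g                    # conductance, normalised to [0, 1]
        self.learn_rate = learn_rate
        self.forget_rate = forget_rate

    def pulse(self):
        """An electrical impulse strengthens the connection a little."""
        self.g += self.learn_rate * (1.0 - self.g)

    def rest(self):
        """Without stimulation, the connection slowly weakens."""
        self.g -= self.forget_rate * self.g


syn = MemristorSynapse()
for _ in range(5):                    # repeated stimulation: learning
    syn.pulse()
learned = syn.g
for _ in range(5):                    # idle periods: gradual forgetting
    syn.rest()
print(f"after learning: {learned:.2f}, after forgetting: {syn.g:.2f}")
```

Each pulse moves the conductance only part of the way towards its maximum, so the strength of the connection encodes how often it has been used – the property the article attributes to both synapses and memristors.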

Original publication:
Andy Thomas, ‘Memristor-based neural networks’, Journal of Physics D: Applied Physics, published online on 5 February 2013, in print on 6 March 2013.

For further information, contact:


Dr. Andy Thomas, Bielefeld University
Faculty of Physics
Telephone: 0049 521 106-2540


Sunday, 24 February 2013

NASA Rover Confirms First Drilled Mars Rock Sample

On 20 February 2013, NASA's Mars rover Curiosity relayed new images confirming that it had successfully obtained the first sample ever collected from the interior of a rock on another planet. No rover had ever before drilled into a rock beyond Earth and collected a sample from its interior.

This image from NASA's Curiosity rover shows the first sample of powdered rock extracted by the rover's drill. Image credit: NASA/JPL-Caltech/MSSS 

Transfer of the powdered-rock sample into an open scoop was visible for the first time in images received Wednesday at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
"Seeing the powder from the drill in the scoop allows us to verify for the first time the drill collected a sample as it bore into the rock," said JPL's Scott McCloskey, drill systems engineer for Curiosity. "Many of us have been working toward this day for years. Getting final confirmation of successful drilling is incredibly gratifying. For the sampling team, this is the equivalent of the landing team going crazy after the successful touchdown."
The drill on Curiosity's robotic arm took in the powder as it bored a 2.5-inch (6.4-centimeter) hole into a target on flat Martian bedrock on Feb. 8. The rover team plans to have Curiosity sieve the sample and deliver portions of it to analytical instruments inside the rover.
The scoop now holding the precious sample is part of Curiosity's Collection and Handling for In-Situ Martian Rock Analysis (CHIMRA) device. During the next steps of processing, the powder will be enclosed inside CHIMRA and shaken once or twice over a sieve that screens out particles larger than 0.006 inch (150 microns) across.
Small portions of the sieved sample later will be delivered through inlet ports on top of the rover deck into the Chemistry and Mineralogy (CheMin) instrument and Sample Analysis at Mars (SAM) instrument.
In response to information gained during testing at JPL, the processing and delivery plan has been adjusted to reduce use of mechanical vibration. The 150-micron screen in one of the two test versions of CHIMRA became partially detached after extensive use, although it remained usable. The team has added precautions for use of Curiosity's sampling system while continuing to study the cause and ramifications of the separation.
The sample comes from a fine-grained, veiny sedimentary rock called "John Klein," named in memory of a Mars Science Laboratory deputy project manager who died in 2011. The rock was selected for the first sample drilling because it may hold evidence of wet environmental conditions long ago. The rover's laboratory analysis of the powder may provide information about those conditions.
NASA's Mars Science Laboratory Project is using the Curiosity rover with its 10 science instruments to investigate whether an area within Mars' Gale Crater ever has offered an environment favorable for microbial life. JPL, a division of the California Institute of Technology, Pasadena, manages the project for NASA's Science Mission Directorate in Washington.


Saturday, 23 February 2013

Iconic NASA Image of Mercury

This colorful view of Mercury was produced by using images from the color base map imaging campaign during MESSENGER's primary mission. These colors are not what Mercury would look like to the human eye, but rather the colors enhance the chemical, mineralogical, and physical differences between the rocks that make up Mercury's surface.

Image Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

With Robots, Humans face ‘New Society’

Humanity came one step closer in January to being able to replicate itself, thanks to the EU’s approval of funding for the Human Brain Project. Danica Kragic, a robotics researcher and computer science professor at KTH Royal Institute of Technology in Stockholm, says that while the prospect of living among humanoid robots calls to mind terrifying scenarios from science fiction, the reality of how humans cope with advances in robotics will be more complex and subtle.

“Robots will challenge the way we feel about machines in general,” Kragic says. “A completely different kind of society is on the way.”
The Human Brain Project will involve 87 universities in a simulation of the cells, chemistry and connectivity of the brain in a supercomputer, in order to understand the brain’s architecture, organisation, functions and development. The project will include testing brain-enabled robots. 
“Will we be able to – just by the fact that we can build a brain – build a human? Why not? What would stop you?” Kragic asks.
Nevertheless, consumer-grade robots are a long way from reality, says Kragic, who in addition to serving as Director of KTH’s Centre for Autonomous Systems, is also head of the Computer Vision and Active Perception Lab.
She says that in order for robots to offer some value to households, researchers and developers will have to overcome some daunting technological challenges. Robots will have to multitask and perhaps even have emotional capacities programmed into their logical processes, she says.
“Based on the state of the environment and what is expected of the robot, we want the outcome action to be acceptable to humans,” she says. “Many things that we do are based not just on facts, so should machines somehow have simulated emotions, or not? Either way, it is difficult to predict how that will affect their interaction with humans.”
Kragic sees robots making a largely positive contribution to society. But they will also present some novel problems for which humans have few reference points – for example, what are the social norms for interacting with robots?
“There is a discussion about robot ethics and how we should treat robots,” Kragic says. “It’s difficult to say what’s right and wrong until you are actually in the situation where you need to question yourself and your own feelings about a certain machine – and the big question is how your feelings are conditioned by the fact that you know it’s a machine, or don’t know whether it’s a machine.”
Kragic predicts that one of the most popular consumer applications of robots will be as housekeepers, performing the chores that free up time for their owners. They could also take over jobs that are repetitive, such as operating buses or working in restaurants. On the other hand, the robot industry will expand and create jobs, she predicts.
As for the possibility that one day robots will turn on us – Kragic is skeptical. “A robot rebellion - that’s the ultimate science fiction scenario, right? It’s worth placing some constraints on robots, such as (author Isaac) Asimov’s Three Laws of Robotics. At the same time, we have rules as humans, which we break. No one is 100 percent safe, and the same can happen with machines.”
Human rebellion against robots is far more likely, she says, pointing out that even as society’s attitudes toward automation evolve over generations, the debate over whether humans have the right to “play God” will likely continue. “There will be people for and against it,” she says. “But what is wrong with building a human? We have been raised in a society that thinks this is wrong, that this is playing God.
“Subsequent generations could have a different view.”

By David Callahan

Small groups of brain cells store concepts for memory formation

Concepts in our minds – from Luke Skywalker to our grandmother – are represented by their own distinct groups of neurons, according to new research involving University of Leicester neuroscientists from the UK.

Recent experiments during brain surgeries have shown that small groups of brain cells are responsible for encoding memories of specific people or objects.
These neurons may also represent different variations of one thing – from the name of a person to their appearance from many different viewpoints.
The researchers believe that a single concept may be held by just a few thousand neurons or fewer – a tiny fraction of the billion or so neurons contained in the medial temporal lobe, a memory-related structure within the brain.
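The sparseness of such a code is easy to quantify. In the toy calculation below, the figure of 2,000 neurons per concept is purely illustrative; the billion-neuron count for the medial temporal lobe comes from the text above.

```python
# How sparse is a "concept cell" code? Even a generous few thousand
# neurons per concept is a vanishing fraction of the medial temporal
# lobe's roughly one billion neurons (the per-concept count here is
# an illustrative assumption, not a measured value).
neurons_per_concept = 2_000
mtl_neurons = 1_000_000_000

fraction = neurons_per_concept / mtl_neurons
print(f"{fraction:.6%} of medial temporal lobe neurons per concept")
```

On these assumed numbers, each concept engages only about two neurons in every million, which is why recording from single cells can pick out such strikingly selective responses.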
The group were able to monitor the brain activity of consenting patients undergoing surgery to treat epilepsy. This allowed the team to monitor the activity of single neurons in conscious patients while they looked at images on laptop screens, creating and recalling memories.
In previous experiments, they had found that single neurons would ‘fire’ for specific concepts – such as Luke Skywalker – even when they were viewing images of him from different angles or simply hearing or reading his name.
They have also found that single neurons can also fire to related people and objects – for instance, the neuron that responded to Luke Skywalker also fired to Yoda, another Jedi from Star Wars.
They argue that relatively small groups of neurons hold concepts like Luke Skywalker and that related concepts such as Yoda are held by some but not all of the same neurons. At the same time, a completely separate set of neurons would hold an unrelated concept like Jennifer Aniston.
The group believes these partially overlapping representations of related concepts are the neural underpinnings of encoding associations, a key memory function.
Professor Quian Quiroga said: “After the first thrill when finding neurons in the human hippocampus with such remarkable firing characteristics, converging evidence from experiments we have been carrying out in the last years suggests that we may be hitting one of the key mechanisms of memory formation and recall.
“The abstract representation of concepts provided by these neurons is indeed ideal for representing the meaning of the sensory stimuli around us, the internal representation we use to form and retrieve memories. These concept cells, we believe, are the building blocks of memory functions.”
The research, by neuroscientist Professor Rodrigo Quian Quiroga from the University of Leicester Centre for Systems Neuroscience together with Professor Itzhak Fried, of the UCLA David Geffen School of Medicine, Tel Aviv Sourasky Medical Center and Tel Aviv University, and Professor Christof Koch, of the California Institute of Technology and Allen Institute for Brain Science, Seattle, is featured in a recent article of the prestigious Scientific American magazine.

Monday, 18 February 2013

Photonic Progress: Engineers are Capturing Rainbows


University at Buffalo engineers announce they have created a more efficient way to catch rainbows, an advance in photonics that could lead to technological breakthroughs in solar energy, stealth technology and other areas of research.

Qiaoqiang Gan, assistant professor of electrical engineering, and a team of graduate students described their work in a paper called “Rainbow Trapping in Hyperbolic Metamaterial Waveguide,” published Feb. 13 in the online journal Scientific Reports.

An up-close look at the “hyperbolic metamaterial waveguide,” which catches and ultimately absorbs wavelengths—or color—in a vertical direction. Image Credit: UB

They developed a “hyperbolic metamaterial waveguide,” which is essentially an advanced microchip made of alternate ultra-thin films of metal and semiconductors and/or insulators. The waveguide halts and ultimately absorbs each frequency of light at slightly different places in a vertical direction to catch a “rainbow” of wavelengths.

Gan is a researcher in UB’s new Center of Excellence in Materials Informatics. 

“Electromagnetic absorbers have been studied for many years, especially for military radar systems,” Gan says. “Right now, researchers are developing compact light absorbers based on optically thick semiconductors or carbon nanotubes. However, it is still challenging to realize the perfect absorber in ultra-thin films with tunable absorption band.

“We are developing ultra-thin films that will slow the light and therefore allow much more efficient absorption, which will address the long existing challenge.”

Light is made of photons that, because they move extremely fast (i.e., at the speed of light), are difficult to tame. In their initial attempts to slow light, researchers relied upon cryogenic gases. But because cryogenic gases are very cold—roughly 240 degrees below zero Fahrenheit—they are difficult to work with outside a laboratory.

Before joining the UB faculty, Gan helped pioneer a way to slow light without cryogenic gases. He and other researchers at Lehigh University made nano-scale-sized grooves in metallic surfaces at different depths, a process that altered the optical properties of the metal. While the grooves worked, they had limitations. For example, the energy of the incident light cannot be transferred onto the metal surface efficiently, which hampered its use for practical applications, Gan says.

The hyperbolic metamaterial waveguide solves that problem because it is a large-area patterned film that can collect incident light efficiently. It is an artificial medium with subwavelength features whose frequency surface is a hyperboloid, which allows it to capture a wide range of wavelengths across different frequency bands, including visible, near-infrared, mid-infrared, terahertz and microwave.

This broadband absorption suggests several applications. In electronics, for example, there is a phenomenon known as crosstalk, in which a signal transmitted on one circuit or channel creates an undesired effect in another circuit or channel. The on-chip absorber could potentially prevent this.

The on-chip absorber also may be applied to solar panels and other energy-harvesting devices. It could be especially useful in mid-infrared spectral regions as a thermal absorber for devices that recycle heat after sundown, Gan says.

Technology such as the Stealth bomber involves materials that make planes, ships and other devices invisible to radar, infrared, sonar and other detection methods. Because the on-chip absorber has the potential to absorb different wavelengths at a multitude of frequencies, it could be useful as a stealth coating material.

Additional authors of the paper include Haifeng Hu, Dengxin Ji, Xie Zeng and Kai Liu, all PhD candidates in the Department of Electrical Engineering. The work was sponsored by the National Science Foundation and UB’s electrical engineering department.


The Search for the Oldest Star Ends Close to Home

When we look up at the dark sky at night, we see a vast swath of blackness that has been set on fire by the distant, furious flames of billions and billions of incandescent stars. But where did the first stars come from, and when did they appear on this vast stage of blackness to brighten up a dismal scene?

early cosmos NASA

Indeed, the birth of the first stars in our Universe is one of the most intriguing mysteries haunting today's astronomers. The most ancient stars are thought to have caught fire as early as 100 million years after the inflationary Big Bang birth of the Universe. In January 2013, astronomers announced that they had discovered the oldest star seen so far to be bouncing around in our Universe. It is a mere 186 light-years from our own Solar System, making it a near neighbor, as stars go--and it is estimated to be at least 13.2 billion years old. The Universe itself is about 13.77 billion years old, and so this oldest of all known stars is almost as old as the Universe!
Astronomers now think that the first stars inhabiting the Cosmos were unlike the stars we know and love today. This is because they were born directly from primordial gases churned out in the Big Bang itself. The primordial gases were primarily hydrogen and helium, and these two lightest of all atomic elements are believed to have pulled themselves together to form ever tighter and tighter knots. The cores of the very first protostars to dwell in our Universe first started to ignite within the mysterious dark and very cold hearts of these extremely dense knots of pristine primordial hydrogen and helium--which collapsed under their own heavy gravitational weight. It is thought that the first stars were enormous (compared to the stars dwelling in the Cosmos today), because they did not form in the same way, or from the same elements, as stars do now. The first generation of stars are called Population III stars, and they were likely gigantic megastars. Our Sun is a lovely member of the most youthful generation of stars, and is a so-called Population I star. In between the first and most recent generations of stars are, of course, the Population II stars.
Extremely heavy Population III stars were also dazzlingly bright, and their existence is largely responsible for causing the sea-change of our Universe from what it was to what it now is! These enormous and brilliant stars changed the dynamics of our Universe by heating and thus ionizing the ambient gases.
The metallicity of a star refers to the percentage of its material that is made up of atomic elements heavier than the primordial hydrogen and helium. Because stars, which make up the lion's share of the visible (atomic) matter in the Universe, are composed mainly of hydrogen and helium, astronomers use (for convenience) the all-encompassing designation of metal when describing all of the elements of the Periodic Table that are heavier than hydrogen and helium. Both hydrogen and helium formed in the inflationary Big Bang--the heavier elements, however, were all born in the nuclear-fusing, searing-hot cores of our Universe's vast multitude of incandescent stars--or in their ultimate explosive deaths. Therefore the term metal, in astronomical terminology, possesses a different meaning than the same term has in chemistry, and should not be confused with the chemist's definition. Metallic bonds are impossible in the extremely hot cores of stars, and the very strongest of chemical bonds are only possible in the outer layers of cool "stars", such as brown dwarfs, which are not even stars in the strictest sense because, even though it is thought that they are born in the same way as normal stars, they are far too small for their nuclear-fusing fires to catch flame.
The metallicity of a star provides a valuable tool for astronomers to use, because its determination can reveal the star's age. When the Universe came into being, its "normal" atomic matter was almost entirely hydrogen which, through primordial nucleosynthesis, manufactured a large quantity of helium and small quantities of lithium and beryllium--and no heavier elements. Therefore, older stars (Populations II and III) show lower metallicities than younger stars (Population I), like our lovely bouncing baby of a Sun. Nucleosynthesis refers to the process by which heavier elements are formed out of lighter ones, by way of nuclear fusion--the fusing of atomic nuclei.
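For readers who want the quantitative convention behind this (it is not spelled out in the article): astronomers usually express metallicity as the logarithmic iron-to-hydrogen index [Fe/H], measured relative to the Sun, so that [Fe/H] = 0 is solar and each step of -1 means a tenfold lower iron fraction. A minimal sketch, with an approximate solar ratio assumed purely for illustration:

```python
import math

def fe_h(ratio_star, ratio_sun=2.82e-5):
    """[Fe/H]: log10 of a star's iron-to-hydrogen number ratio,
    relative to the Sun's. The solar ratio used here (~2.8e-5) is an
    approximate literature value, included only for illustration."""
    return math.log10(ratio_star / ratio_sun)

# The Sun by definition has [Fe/H] = 0; a very metal-poor Population II
# star with 1/1000 of the solar iron fraction has [Fe/H] = -3.
print(round(fe_h(2.82e-5), 2))   # 0.0
print(round(fe_h(2.82e-8), 2))   # -3.0
```

On this scale, extreme Population II halo stars sit at strongly negative [Fe/H], which is exactly the "lower metallicities than younger stars" relation described above.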
The stellar Populations I, II, and III, reveal to astronomers a decreasing metal content with increasing age. Therefore, Population I stars, like our Sun, display the greatest metal content. The three stellar populations were named in this somewhat confusing way because they were designated in the order that they were discovered, which is the reverse of the order in which they formed. Therefore, the first stars to catch fire in our Universe (Population III) were depleted of metals. The stars bearing the highest metal content are the Population I stars, the youngest in our Universe.
Population II Stars
Population II stars are very ancient, but not as old as the Population III stars. Population II stars carry the metals manufactured in the searing-hot hearts of the first generation of stars, but they do not possess the higher metal content of stars like our Sun, which contain the metals forged in the hearts of the more ancient Population II stars.
Even though the most ancient stars contain fewer heavy elements than younger stars, the fact that all stars carry at least some scant quantity of metals presents a puzzle. The currently favored explanation for this puzzling observation is that Population III stars must have existed--even though not one Population III star has ever been observed. This line of reasoning suggests that in order for the ancient Population II stars to carry the small quantity of metals that they possess, their metals must have been created in the nuclear-fusing hearts of an earlier generation of stars.
Population II stars also possess very low metallicities, and are the oldest stars to be directly observed by astronomers. However, this must be kept in its proper perspective. Even metal-rich stars, the Population I stars like our Sun, contain only minute quantities of any element heavier than hydrogen or helium. In fact, metals (in the astronomical sense of the term), make up only an extremely small percentage of the overall chemical composition of the Universe. The elderly Population II stars were born during an ancient, remote epoch. So-called Intermediate Population II stars are most common in the bulge near the center of the Milky Way; whereas Population II stars dwelling in the Galactic halo are considerably older and hence even more metal-poor. Globular clusters also harbor a large number of Population II stars.
The Oldest Star
The star, HD 140283, is a Population II star. It dwells near our Solar System, and it is the oldest star ever spotted by astronomers. HD 140283 is at least 13.2 billion years old--but it could be much older!

"We believe this star is the oldest known in the Universe with a well determined age," Dr. Howard Bond told the press in January 2013. Dr. Bond, of Pennsylvania State University in University Park, and his colleagues, announced the discovery of this ancient star on January 10, 2013, at the winter meeting of the American Astronomical Society (AAS) in Long Beach, California.
This very old star dwells a mere 186 light-years from our Solar System, and its close proximity made it a choice target for determining a precise age measurement. The star has been scrutinized by astronomers for over a century.
Astronomers have known for a very long time that HD 140283 is made up almost entirely of hydrogen and helium--the lower the metal content, the older the star. Therefore, it has long been suspected that HD 140283 is quite antiquated--but its precise venerable age had not previously been calculated.
Dr. Bond's team determined that the star is 13.9 billion years old--plus or minus 700 million years. This does not conflict with the 13.77 billion year age of the Universe itself, because the calculation lies within the experimental error bars.
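The consistency argument is simple interval arithmetic: the quoted uncertainty allows ages from 13.2 to 14.6 billion years, a range that contains the Universe's 13.77 billion years.

```python
# Checking that HD 140283's age estimate is consistent with the age of
# the Universe: the error bars, not the central value, decide the question.
star_age, star_err = 13.9, 0.7      # Gyr, Bond et al. (2013)
universe_age = 13.77                # Gyr

lower, upper = star_age - star_err, star_age + star_err
print(f"allowed range: {lower:.1f} to {upper:.1f} Gyr")
print("consistent:", lower <= universe_age <= upper)
```

This is also why the article can quote "at least 13.2 billion years old": that figure is simply the lower edge of the error interval.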
The discovery places some constraints on ancient star formation, however. Population III stars coalesced from primordial hydrogen and helium, and did not contain sizable quantities of elements heavier than helium. This means that as antiquated as the elderly HD 140283 most assuredly is, its composition--which contains scant amounts of metals--means that it must have formed after the first generation of stars in our Universe--the Population III stars.
Therefore, conditions for the formation of Population II stars must have existed very early in the history of the Universe. Astronomers generally think that the first stars were born a few hundred million years after our Universe was born--but that they were massive, lived fast and furiously, and died young, in wild and gigantic supernova blasts that heated the ambient gas and blessed it with all of the elements heavier than hydrogen and helium.
However, before Population II stars could be born, that ambient gas had to cool off. The very old age of HD 140283 suggests that this cooling-off time, that existed between the first and second generations of stars, might have been brief by cosmological standards--a mere few tens of millions of years.
This research was published on January 10, 2013 in the journal Nature.
author: Judith E. Braffman-Miller


Sunday, 17 February 2013

Supernova: the Mysterious Origin of Cosmic Rays

Very detailed new observations with ESO’s Very Large Telescope (VLT) of the remains of a thousand-year-old supernova have revealed clues to the origins of cosmic rays. For the first time the observations suggest the presence of fast-moving particles in the supernova remnant that could be the precursors of such cosmic rays. The results are appearing in the 14 February 2013 issue of the journal Science.
In the year 1006 a new star was seen in the southern skies and widely recorded around the world. It was many times brighter than the planet Venus and may even have rivaled the brightness of the Moon. It was so bright at maximum that it cast shadows and it was visible during the day. More recently astronomers have identified the site of this supernova and named it SN 1006. They have also found a glowing and expanding ring of material in the southern constellation of Lupus (The Wolf) that constitutes the remains of the vast explosion.

This remarkable image was created from pictures taken by different telescopes in space and on the ground. It shows the thousand-year-old remnant of the brilliant SN 1006 supernova, as seen in radio (red), X-ray (blue) and visible light (yellow).
It has long been suspected that such supernova remnants may also be where some cosmic rays — very high energy particles originating outside the Solar System and travelling at close to the speed of light — are formed. But until now the details of how this might happen have been a long-standing mystery.
A team of astronomers led by Sladjana Nikolić (Max Planck Institute for Astronomy, Heidelberg, Germany [1]) has now used the VIMOS instrument on the VLT to look at the one-thousand-year-old SN 1006 remnant in more detail than ever before. They wanted to study what is happening where high-speed material ejected by the supernova is ploughing into the stationary interstellar matter — the shock front. This expanding high-velocity shock front is similar to the sonic boom produced by an aircraft going supersonic and is a natural candidate for a cosmic particle accelerator.
For the first time the team has not just obtained information about the shock material at one point, but also built up a map of the properties of the gas, and how these properties change across the shock front. This has provided vital clues to the mystery.
The results were a surprise — they suggest that there were many very rapidly moving protons in the gas in the shock region [2]. While these are not the sought-for high-energy cosmic rays themselves, they could be the necessary “seed particles”, which then go on to interact with the shock front material to reach the extremely high energies required and fly off into space as cosmic rays.
Nikolić explains: “This is the first time we were able to take a detailed look at what is happening in and around a supernova shock front. We found evidence that there is a region that is being heated in just the way one would expect if there were protons carrying away energy from directly behind the shock front.”
The study was the first to use an integral field spectrograph [3] to probe the properties of the shock fronts of supernova remnants in such detail. The team is now keen to apply this method to other remnants.
Co-author Glenn van de Ven of the Max Planck Institute for Astronomy, concludes: “This kind of novel observational approach could well be the key to solving the puzzle of how cosmic rays are produced in supernova remnants.”
[1] The new evidence emerged during analysis of the data by Sladjana Nikolić (Max Planck Institute for Astronomy) as part of work towards her doctoral degree at the University of Heidelberg.
[2] These protons are called suprathermal as they are moving much quicker than expected simply from the temperature of the material.
[3] This is achieved using a feature of VIMOS called an integral field unit, where the light recorded in each pixel is separately spread out into its component colours and each of these spectra recorded. The spectra can then be subsequently analysed individually and maps of the velocities and chemical properties of each part of the object created.
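The per-pixel spectra described in [3] form a three-dimensional "datacube". A minimal synthetic sketch of that idea (toy Gaussian emission lines, not the actual VIMOS pipeline; the H-alpha rest wavelength is just an illustrative line choice) shows how such a cube yields a velocity map from the Doppler shift of each pixel's line centroid:

```python
import numpy as np

C_KM_S = 299792.458   # speed of light, km/s
LAMBDA_REST = 656.28  # H-alpha rest wavelength, nm (illustrative line choice)

# Synthetic 4x4-spaxel datacube: one Gaussian emission line per spaxel,
# Doppler-shifted by a line-of-sight velocity that varies across the field.
wave = np.linspace(650.0, 662.0, 600)              # wavelength axis, nm
true_v = np.linspace(-400, 400, 16).reshape(4, 4)  # km/s
shifted = LAMBDA_REST * (1 + true_v / C_KM_S)      # line centre per spaxel
cube = np.exp(-0.5 * ((wave[None, None, :] - shifted[..., None]) / 0.3) ** 2)

# Velocity map: locate each spaxel's line centroid (flux-weighted mean
# wavelength) and convert the shift back into a velocity.
centroid = (cube * wave).sum(axis=-1) / cube.sum(axis=-1)
v_map = C_KM_S * (centroid - LAMBDA_REST) / LAMBDA_REST

print(np.round(v_map).astype(int))  # recovers the input velocity field
```

A real reduction would fit line profiles rather than take simple centroids, but the principle — one spectrum per pixel, one velocity per spectrum, hence a map — is the same.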

More information

This research was presented in a paper “An Integral View of Fast Shocks around Supernova 1006” to appear in the journal Science on 14 February 2013.
The team is composed of Sladjana Nikolić (Max Planck Institute for Astronomy [MPIA], Heidelberg, Germany), Glenn van de Ven (MPIA), Kevin Heng (University of Bern, Switzerland), Daniel Kupko (Leibniz Institute for Astrophysics Potsdam [AIP], Potsdam, Germany), Bernd Husemann (AIP), John C. Raymond (Harvard-Smithsonian Center for Astrophysics, Cambridge, USA), John P. Hughes (Rutgers University, Piscataway, USA), Jesús Falcon-Barroso (Instituto de Astrofísica de Canarias, La Laguna, Spain).
ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes: VISTA works in the infrared and is the world’s largest survey telescope, while the VLT Survey Telescope is the largest telescope designed exclusively to survey the skies in visible light. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
link to ESO

Photonic Space

Hubble Sees Cosmic “Flying V” of Merging Galaxies

Photonic Space

This large “flying V” is actually two distinct objects — a pair of interacting galaxies known as IC 2184. Both the galaxies are seen almost edge-on in the large, faint northern constellation of Camelopardalis (The Giraffe), and can be seen as bright streaks of light surrounded by the ghostly shapes of their tidal tails.

These tidal tails are thin, elongated streams of gas, dust and stars that extend away from a galaxy into space. They occur when galaxies gravitationally interact with one another, and material is sheared from the outer edges of each body and flung out into space in opposite directions, forming two tails. They almost always appear curved, so when they are seen to be relatively straight, as in this image, it is clear that we are viewing the galaxies side-on.

Also visible in this image are bursts of bright blue, pinpointing hot regions where the colliding gas clouds stir up vigorous star formation. The image consists of visible and infrared observations from Hubble’s Wide Field and Planetary Camera 2.

Image Credit: ESA/Hubble & NASA

Editor: any similarity to the aliens in the TX series is coincidental.

Photonic Space

Saturday, 16 February 2013

First Smartphone Satellite to Launch

Photonic Space

A UK mission, jointly developed by the University of Surrey’s Surrey Space Centre (SSC) and Surrey Satellite Technology Limited (SSTL), to send the world’s first smartphone satellite into orbit is due to launch on 25th February.

The unique and innovative satellite, called STRaND-1 (the Surrey Training, Research and Nanosatellite Demonstrator), is a 30cm CubeSat weighing 4.3kg.
It will launch into a 785km sun-synchronous orbit on ISRO’s Polar Satellite Launch Vehicle (PSLV) from Sriharikota, India.
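For reference, the period of such an orbit follows from Kepler’s third law. A quick sketch, using standard values for Earth’s gravitational parameter and mean radius, shows the satellite will circle the Earth in roughly 100 minutes:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3           # mean Earth radius, m

def circular_period(altitude_m):
    """Period of a circular orbit at the given altitude, in seconds."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH)

T = circular_period(785e3)
print(f"Period at 785 km altitude: {T / 60:.1f} minutes")
```

The sun-synchronous character of the orbit comes from its inclination (chosen so the orbital plane precesses once per year), which this simple period calculation does not capture.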
STRaND-1 will also be the first UK CubeSat to be launched and has been developed by talented space engineers and researchers at Surrey, with the majority of the design and development work carried out in their spare time. The build and test phase of the project was completed in just three months.
At the heart of STRaND-1 is a Google Nexus One smartphone with an Android operating system. Smartphones contain highly advanced technologies and incorporate several key features that are integral to a satellite – such as cameras, radio links, accelerometers and high performance computer processors – almost everything except the solar panels and propulsion.
During the first phase of the mission, STRaND-1 will use a number of experimental ‘Apps’ to collect data whilst a new high-speed Linux-based CubeSat computer developed by SSC takes care of the satellite. During phase two, the STRaND-1 team plan to switch the satellite’s in-orbit operations to the smartphone, thereby testing the capabilities of a number of standard smartphone components in a space environment.
The satellite will be commissioned and operated from the Surrey Space Centre’s ground station at the University of Surrey.
Being the first smartphone satellite in orbit is just one of many ‘firsts’ that STRaND-1 is hoping to achieve. It will also fly innovative new technologies such as a ‘WARP DRiVE’ (Water Alcohol Resistojet Propulsion Deorbit Re-entry Velocity Experiment) and electric Pulsed Plasma Thrusters (PPTs); both ‘firsts’ to fly on a nanosatellite. It is also flying a 3D printed part – believed to be the first to fly in space.
Dr Chris Bridges, SSC’s lead engineer on the project, says: “A smartphone on a satellite like this has never been launched before but our tests have been pretty thorough, subjecting the phone to oven and freezer temperatures, to a vacuum and blasting it with radiation.
“It has a good chance of working as it should, but you can never make true design evolutions or foster innovation without taking a few risks: STRaND is cool because it allows us to do just that.”
SSTL’s Head of Science, Doug Liddle, said: “We’ve deliberately asked this enthusiastic and talented young team to do something very non-standard in terms of the timescales, processes and the technologies used to put the satellite together because we want to maximise what we learn from this research programme. I can’t wait to see what happens next.”
Photonic Space