KurzweilAI.net

Accelerating Intelligence

Making hydrogen fuel from water and visible light at 100 times higher efficiency

Mon, 08/24/2015 - 02:38

Test unit schematic for temperature-induced photocatalytic hydrogen production from H2O with methanol as a sacrificial agent: (1) thermocouple (temperature sensor), (2) black Pt/TiO2 on SiO2 substrate, (3) quartz wool, (4) quartz tube reactor, (5) electrical tube furnace; (GC) gas chromatograph (analyzes gas components) (credit: Bing Han and Yun Hang Hu/Journal of Physical Chemistry)

Researchers at Michigan Technological University have found a way to convert light to hydrogen fuel more efficiently — a big step closer to mimicking photosynthesis.

Current methods for creating hydrogen fuel use electrodes made from titanium dioxide (TiO2), which acts as a catalyst to drive the light → water → hydrogen chemical reaction. This works well with ultraviolet (UV) light, but UV makes up only about 4% of total solar energy, which leaves the overall process highly inefficient.*

The ideal would be to use visible light, since it constitutes about 45 percent of solar energy. Now two Michigan Tech scientists — Yun Hang Hu, the Charles and Carroll McArthur Professor of Materials Science and Engineering, and his PhD student, Bing Han — have developed a way to do exactly that.

They report in the Journal of Physical Chemistry that by absorbing the entire visible-light spectrum, they have increased the yield and energy efficiency of hydrogen fuel production by up to two orders of magnitude (100 times) over previously reported results.**

As described in the paper, they used three new techniques to achieve that:

  • “Black titanium dioxide” (with 1 percent platinum) on a silicon dioxide substrate;
  • A “light-diffuse-reflected surface” to trap light;
  • An elevated reaction temperature (280 degrees Celsius).

In addition, the new setup is “convenient for scaling up commercially,” said Hu.

* TiO2 has a relatively large band gap energy (3.0−3.2 eV) and thus it can absorb only ultraviolet (UV) light (about 4% of the total solar energy), leading to a low photoconversion efficiency (less than 2% under AM 1.5 global sunlight illumination).

** The new method achieves a photo hydrogen yield of 497 mmol/h/g and an apparent quantum efficiency of 65.7% for the entire visible light range at 280 °C.
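The band-gap claim in the first footnote is easy to check numerically. Here is a minimal sketch (our own back-of-the-envelope arithmetic, not the researchers’ code) converting a band gap in electronvolts to the longest wavelength a photon can have and still excite an electron across the gap:

```python
# Back-of-the-envelope check of the footnote: a semiconductor can only absorb
# photons with energy >= its band gap, i.e., wavelength <= h*c / E_gap.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest absorbable wavelength (nm) for a given band gap (eV)."""
    return H * C / (band_gap_ev * EV) * 1e9

for gap in (3.0, 3.2):
    print(f"E_gap = {gap} eV -> cutoff ~ {cutoff_wavelength_nm(gap):.0f} nm")
# Prints ~413 nm and ~387 nm: at or below the ~400 nm violet edge of visible
# light, which is why plain TiO2 responds essentially only to UV.
```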

Abstract of Highly Efficient Temperature-Induced Visible Light Photocatalytic Hydrogen Production from Water

Intensive effort has led to numerous breakthroughs for photoprocesses. So far, however, energy conversion efficiency for the visible-light photocatalytic splitting of water is still very low. In this paper, we demonstrate (1) surface-diffuse-reflected-light can be 2 orders of magnitude more efficient than incident light for photocatalysis, (2) the inefficiency of absorbed visible light for the photocatalytic H2 production from water with a sacrificial agent is due to its kinetic limitation, and (3) the dispersion of black Pt/TiO2 catalyst on the light-diffuse-reflection-surface of a SiO2 substrate provides a possibility for exploiting a temperature higher than H2O boiling point to overcome the kinetic limitation of visible light photocatalytic hydrogen production. Those findings create a novel temperature-induced visible light photocatalytic H2 production from water steam with a sacrificial agent, which exhibits a high photohydrogen yield of 497 mmol/h/g-cat with a large apparent quantum efficiency (QE) of 65.7% for the entire visible light range at 280 °C. The QE and yield are one and 2 orders of magnitude larger than most reported results, respectively.

Why you’re smarter than a chicken

Sat, 08/22/2015 - 02:10

Sorry, wrong protein — you’re dinner (credit: Johnathan Nightingale via Flickr)

A single molecular event in a protein called PTBP1 in our cells could hold the key to how we evolved to become the smartest animal on the planet, University of Toronto researchers have discovered.

The conundrum: Humans and frogs, for example, have been evolving separately for 350 million years and use a remarkably similar repertoire of genes to build organs in the body. So what accounts for the vast range of organ size and complexity?

Benjamin Blencowe, a professor in the University of Toronto’s Donnelly Centre and Banbury Chair in Medical Research, and his team believe they now have the key: alternative splicing (AS).

With alternative splicing, the same gene can generate three different types of protein molecules, as in this example (credit: Wikipedia)

Here’s how alternative splicing works: specific sections of a gene, called exons, may be included or excluded from the final messenger RNA (mRNA) from which the gene’s protein is made. That changes the protein’s amino acid sequence.
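To make the mechanism concrete, here is a toy sketch (the exon sequences are invented for illustration, not real PTBP1 data) showing how keeping or skipping one exon yields different mature mRNAs, and hence different proteins, from the same gene:

```python
# Toy model of alternative splicing. The exon sequences below are invented;
# real exons are typically hundreds of bases long.
exons = ["ATGGCT", "GGATCC", "TTCAAG", "CCGTAA"]  # hypothetical exons 1-4

def splice(keep):
    """Join the retained exons (0-based indices) into a mature mRNA string."""
    return "".join(exons[i] for i in keep)

isoform_all = splice([0, 1, 2, 3])  # every exon included
isoform_skip = splice([0, 2, 3])    # exon 2 skipped, analogous to the
                                    # mammalian skipping of PTBP1 exon 9
print(isoform_all)   # ATGGCTGGATCCTTCAAGCCGTAA
print(isoform_skip)  # ATGGCTTTCAAGCCGTAA -> a different protein
```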

This image shows a frog and human brain, brought to scale. Although the brain-building genes are similar in both, alternative splicing ensures greater protein diversity in human cells, which fuels organ complexity. (credit: Jovana Drinjakovic)

There are two forms of PTBP1: one that is common in all vertebrates, and another in mammals. The researchers showed that in mammalian cells, the presence of the mammalian version of PTBP1 unleashes a cascade of alternative splicing events that lead to a cell becoming a neuron instead of a skin cell, for example.

To prove that, they engineered chicken cells to make mammalian-like PTBP1, and this triggered alternative splicing events that are found in mammals, creating a smart chicken (no relation to the eponymous brand). Also, it turns out that alternative splicing prevalence increases with vertebrate complexity.

The end result: all those small accidental changes across specific genes have fueled the evolution of mammalian brains.

The study is published in the August 20 issue of Science.

Abstract of An alternative splicing event amplifies evolutionary differences between vertebrates

Alternative splicing (AS) generates extensive transcriptomic and proteomic complexity. However, the functions of species- and lineage-specific splice variants are largely unknown. Here we show that mammalian-specific skipping of polypyrimidine tract–binding protein 1 (PTBP1) exon 9 alters the splicing regulatory activities of PTBP1 and affects the inclusion levels of numerous exons. During neurogenesis, skipping of exon 9 reduces PTBP1 repressive activity so as to facilitate activation of a brain-specific AS program. Engineered skipping of the orthologous exon in chicken cells induces a large number of mammalian-like AS changes in PTBP1 target exons. These results thus reveal that a single exon-skipping event in an RNA binding regulator directs numerous AS changes between species. Our results further suggest that these changes contributed to evolutionary differences in the formation of vertebrate nervous systems.

MIT researchers invent process for 3D-printing complex transparent glass forms

Sat, 08/22/2015 - 00:20

Printing molten glass (credit: John Klein et al./3D Printing and Additive Manufacturing)

An additive-manufacturing glass-printing process called G3DP (Glass 3D Printing) has been developed by researchers in the Mediated Matter Group at the MIT Media Lab in collaboration with the Glass Lab at MIT.

Glass-printing platform. (1) Crucible, (2) heating elements, (3) nozzle, (4) thermocouple, (5) removable feed access lid. (credit: John Klein et al./3D Printing and Additive Manufacturing)

The platform is based on a dual heated-chamber concept. The upper chamber acts as a Kiln Cartridge (a thermally insulated heater) operating at about 1900°F (roughly 1040°C) to melt the glass, while the lower chamber anneals the printed structures, cooling them slowly to relieve internal stress. The molten material is funneled through an alumina-zircon-silica nozzle, which extrudes it onto a build platform, where it cools and hardens. By tuning the form, transparency, and color variation, the process can drive, limit, or control light transmission, reflection, and refraction in the final material.

Detail of a colored printed object (credit: John Klein et al./3D Printing and Additive Manufacturing)

The G3DP project was created in collaboration between the Mediated Matter group at the MIT Media Lab, the Mechanical Engineering Department, the MIT Glass Lab, and the Wyss Institute.

A selection of glass pieces will appear in an exhibition at the Cooper Hewitt, Smithsonian Design Museum in New York City in 2016. An “Additive Manufacturing of Optically Transparent Glass” patent application was filed on April 25, 2014.


Mediated Matter Group | GLASS

The dilemma of human enhancement

Fri, 08/21/2015 - 01:22

How far can science push the limits of human life?

That was the theme of a Crosstalks webcast today, “The dilemma of human enhancement,” available for download.

The show addressed questions like “Can we prevent people from dying? With implants, nanotechnology, artificial body parts and smart drugs we can enhance human physiology beyond our current limitations. But should we really pursue this? And can we do it responsibly?”

Participants

Maria Konovalenko, Molecular biophysicist, Program Coordinator for the Science for Life Extension Foundation.

Zoltan Istvan, American writer, philosopher, futurist, and 2016 presidential candidate for the newly formed Transhumanist Party.

Gustav Nilsonne, MD, PhD, researcher in cognitive neuroscience at Stockholm University.

Karim Jebari, PhD in analytic philosophy at KTH Royal Institute of Technology and postdoc at the Institute for Futures Studies.

Mats Nilsson, Lecturer and researcher at KTH Royal Institute of Technology.

Crosstalks is an international academic talk show broadcast once a month as a joint venture produced by Stockholm University and KTH Royal Institute of Technology, moderated by journalist Johanna Koljonen.

‘Diamonds from the sky’ approach to turn CO2 into valuable carbon nanofibers

Thu, 08/20/2015 - 05:46

Researchers are removing a greenhouse gas from the air while generating carbon nanofibers like these (credit: Stuart Licht, Ph.D)

A research team of chemists at George Washington University has developed a technology that can economically convert atmospheric CO2 directly into highly valued carbon nanofibers for industrial and consumer products — converting an anthropogenic greenhouse gas from a climate change problem to a valuable commodity, they say.

The team presented their research today (Aug. 19) at the 250th National Meeting & Exposition of the American Chemical Society (ACS).

“Such nanofibers are used to make strong carbon composites, such as those used in the Boeing Dreamliner, as well as in high-end sports equipment, wind turbine blades and a host of other products,” said Stuart Licht, Ph.D., team leader.

Previously, the researchers reported making fertilizer and cement without emitting CO2. Now the team, which includes postdoctoral fellow Jiawen Ren, Ph.D., and graduate student Jessica Stuart, says their research could shift CO2 from a global-warming problem to a feedstock for the manufacture of in-demand carbon nanofibers.

Licht calls his approach “diamonds from the sky.” That refers to carbon being the material that diamonds are made of, and also hints at the high value of the products, such as carbon nanofibers.

A low-energy, high-efficiency process

The researchers claim this low-energy process can be run efficiently, using only a few volts of electricity, sunlight, and a whole lot of carbon dioxide. The system uses electrolytic syntheses to make the nanofibers. Here’s how:

  1. To power the syntheses, heat and electricity are produced through a hybrid and extremely efficient concentrating solar-energy system. The system focuses the sun’s rays on a photovoltaic solar cell to generate electricity and on a second system to generate heat and thermal energy, which raises the temperature of an electrolytic cell.
  2. CO2 is broken down in a high-temperature electrolytic bath of molten carbonates at 1,380 degrees F (750 degrees C).
  3. Atmospheric air is added to an electrolytic cell.
  4. The CO2 dissolves when subjected to the heat and direct current through electrodes of nickel and steel.
  5. The carbon nanofibers build up on the steel electrode, where they can be removed.

Licht estimates electrical energy costs of this “solar thermal electrochemical process” to be around $1,000 per ton of carbon nanofiber product. That means the cost of running the system is hundreds of times less than the value of product output, he says.

Decreasing CO2 to pre-industrial-revolution levels

“We calculate that with a physical area less than 10 percent the size of the Sahara Desert, our process could remove enough CO2 to decrease atmospheric levels to those of the pre-industrial revolution within 10 years,” he says.

At this time, the system is experimental. Licht’s biggest challenge will be to ramp up the process and gain experience to make consistently sized nanofibers. “We are scaling up quickly,” he adds, “and soon should be in range of making tens of grams of nanofibers an hour.”

Licht explains that one advance the group has recently achieved is the ability to synthesize carbon fibers using even less energy than when the process was initially developed. “Carbon nanofiber growth can occur at less than 1 volt at 750 degrees C, which for example is much less than the 3–5 volts used in the 1,000 degree C industrial formation of aluminum,” he says.
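Licht’s roughly $1,000-per-ton electricity estimate can be sanity-checked with Faraday’s law. Here is a minimal sketch (our own arithmetic with an assumed electricity price, not the team’s published cost model):

```python
# Sanity check of the ~$1,000/ton energy-cost estimate via Faraday's law.
# The CO2 -> C reduction transfers 4 electrons per carbon atom.
FARADAY = 96485.0       # coulombs per mole of electrons
M_CARBON = 12.011       # g/mol
VOLTS = 1.0             # cell voltage; the team reports growth below 1 V
PRICE_PER_KWH = 0.10    # assumed electricity price in $/kWh (our assumption)

moles_c = 1e6 / M_CARBON             # moles of carbon in one metric ton
charge = moles_c * 4 * FARADAY       # total charge, coulombs
energy_kwh = charge * VOLTS / 3.6e6  # joules -> kWh
print(f"{energy_kwh:,.0f} kWh/ton -> ~${energy_kwh * PRICE_PER_KWH:,.0f}/ton")
# ~8,900 kWh/ton -> ~$890 of electricity per ton of carbon, consistent with
# the ~$1,000/ton figure quoted above.
```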

No published details on overall energy costs and efficiency are yet available (to be updated).

Abstract of New approach to carbon dioxide utilization: The carbon molten air battery

As the levels of carbon dioxide (CO2) increase in the Earth’s atmosphere, the effects on climate change become increasingly apparent. As the demand to reduce our dependence on fossil fuels and lower our carbon emissions increases, a transition to renewable energy sources is necessary. Cost-effective large-scale electrical energy storage must be established for renewable energy to become a sustainable option for the future. We’ve previously shown that carbon dioxide can be captured directly from the air at solar efficiencies as high as 50%, and that carbon dioxide associated with cement formation and the production of other commodities can be electrochemically avoided in the STEP process [1-3].

The carbon molten air battery, presented by our group in late 2013, is attractive due to its scalability, location flexibility, and construction from readily available resources, providing a battery that can be useful for large-scale applications, such as the storage of renewable electricity [4].

Uncommonly, the carbon molten air battery can utilize carbon dioxide directly from the air:

(1) charging: CO2(g) → C(solid) + O2(g)
(2) discharging: C(solid) + O2(g) → CO2(g)

More specifically, in a molten carbonate electrolyte containing added oxide, such as lithium carbonate with lithium oxide, the 4-electron charging reaction (eq. 1) approaches 100% faradaic efficiency and can be described as the following two equations:

(1a) O^2−(dissolved) + CO2(g) → CO3^2−(molten)
(1b) CO3^2−(molten) → C(solid) + O2(g) + O^2−(dissolved)
Thus, powered by carbon formed directly from the CO2 in our earth’s atmosphere, the carbon molten air battery is a viable system to provide large-scale energy storage.

[1] S. Licht, “Efficient Solar-Driven Synthesis, Carbon Capture, and Desalinization, STEP: Solar Thermal Electrochemical Production of Fuels, Metals, Bleach,” Advanced Materials, 47, 5592 (2011).
[2] S. Licht, H. Wu, C. Hettige, B. Wang, J. Lau, J. Asercion, J. Stuart, “STEP Cement: Solar Thermal Electrochemical Production of CaO without CO2 emission,” Chemical Communications, 48, 6019 (2012).
[3] S. Licht, B. Cui, B. Wang, F.-F. Li, J. Lau, S. Liu, “Ammonia synthesis by N2 and steam electrolysis in molten hydroxide suspensions of nanoscale Fe2O3,” Science, 345, 637 (2014).
[4] S. Licht, B. Cui, J. Stuart, B. Wang, J. Lau, “Molten Air Batteries – A new, highest energy class of rechargeable batteries,” Energy & Environmental Science, 6, 3646 (2013).

‘I think I know that person … or do I?’

Thu, 08/20/2015 - 05:20

A cross-section of a rat’s brain, showing where the brain decides whether a memory is new or old and familiar (credit: Johns Hopkins University)

Know that feeling when you see someone and realize you may know them (or not)? Thanks to Johns Hopkins University neuroscientists, we now know where in the brain that happens: the CA3 region of the hippocampus, the seat of memory.

“You see a familiar face and say to yourself, ‘I think I’ve seen that face.’ But is this someone I met five years ago, maybe with thinner hair or different glasses — or is it someone else entirely?” said James J. Knierim, a professor of neuroscience at the university’s Zanvyl Krieger Mind/Brain Institute who led the research, described in the current issue of the journal Neuron.

Is that you under that beard? Oops, excuse me. “That’s one of the biggest problems our memory system has to solve,” Knierim said. “The final job of the CA3 region is to make the decision: Is it the same or is it different? Usually you are correct in remembering that this person is a slightly different version of the person you met years ago.

“But when you are wrong, and it embarrassingly turns out that this is a complete stranger, you want to create a memory of this new person that is absolutely distinct from the memory of your familiar friend, so you don’t make the mistake again.”

Would you like chocolate sprinkles on that cheese? Knierim and associates implanted electrodes in the hippocampus of rats and monitored them as they got to know an environment and as that environment changed. They trained the rats to run around a track, eating chocolate sprinkles. The track floor had four different textures — sandpaper, carpet padding, duct tape and a rubber mat.

The rat could see, feel and smell the differences in the textures. Meanwhile, a black curtain surrounding the track had various objects attached to it. Over 10 days, the rats built mental maps of that environment.

Messing with rat minds for fun and science. Then the experimenters changed things up. They rotated the track counterclockwise while rotating the curtain clockwise, creating a perceptual mismatch in the rats’ minds. The effect, Knierim said, was like opening the door of your home to find all of your pictures hanging on different walls and your furniture moved.

“Would you recognize it as your home or think you are lost?” he said. “It’s a very disorienting experience and a very uncomfortable feeling.”

Even when the perceptual mismatch between the track and curtain was small, the “pattern-separating” part of CA3 almost completely changed its activity patterns, creating a new memory of the altered environment. But the “pattern-completing” part of CA3 tended to retrieve a similar activity pattern used to encode the original memory, even when the perceptual mismatch increased.

The findings, which validate models about how memory works, could help explain what goes wrong with memory in diseases like Alzheimer’s and could help to preserve people’s memories as they age.

This research was supported by National Institutes of Health grants and by the Johns Hopkins University Brain Sciences Institute.

Abstract of Neural population evidence of functional heterogeneity along the CA3 transverse axis: Pattern completion vs. pattern separation

Classical theories of associative memory model CA3 as a homogeneous attractor network because of its strong recurrent circuitry. However, anatomical gradients suggest a functional diversity along the CA3 transverse axis. We examined the neural population coherence along this axis, when the local and global spatial reference frames were put in conflict with each other. Proximal CA3 (near the dentate gyrus), where the recurrent collaterals are the weakest, showed degraded representations, similar to the pattern separation shown by the dentate gyrus. Distal CA3 (near CA2), where the recurrent collaterals are the strongest, maintained coherent representations in the conflict situation, resembling the classic attractor network system. CA2 also maintained coherent representations. This dissociation between proximal and distal CA3 provides strong evidence that the recurrent collateral system underlies the associative network functions of CA3, with a separate role of proximal CA3 in pattern separation.

‘Armchair nanoribbon’ design makes graphene a wafer-scalable semiconductor

Thu, 08/20/2015 - 04:31

Progressively zoomed-in images of graphene nanoribbons grown on germanium (gray area). The ribbons automatically align perpendicularly and naturally grow in “armchair” edge configuration. (credit: Arnold Research Group and Guisinger Research Group)

University of Wisconsin-Madison engineers have discovered a way to grow graphene nanoribbons with semiconducting properties — and directly on a conventional germanium semiconductor wafer.

Graphene, an atom-thick material with extraordinary properties, normally functions as a conductor of electricity, but not as a semiconductor. This advance is significant because it could allow manufacturers to easily use graphene nanoribbons in hybrid integrated circuits, which promise to significantly boost the performance of next-generation electronic devices.

The technology could also have specific uses in high-performance industrial and military applications, such as sensors that detect specific chemical and biological species and photonic devices that manipulate light. More importantly, the technique promises to be easily scaled up for mass production and is compatible with the prevailing fab infrastructure used in semiconductor processing.

The development was announced in an open-access paper published Aug. 10 in the journal Nature Communications by Michael Arnold, an associate professor of materials science and engineering at UW-Madison, Ph.D. student Robert Jacobberger, and their collaborators.

How to create ultra-thin “armchair” graphene nanoribbons

Armchair shape in graphene sheet (credit: Rajaram Narayanan/Jacobs School of Engineering/UC San Diego)

“Graphene nanoribbons that can be grown directly on the surface of a semiconductor like germanium are more compatible with planar processing that’s used in the semiconductor industry, and so there would be less of a barrier to integrating these really excellent materials into electronics in the future,” Arnold says.

Graphene, a sheet of carbon atoms that is only one atom in thickness, conducts electricity and dissipates heat much more efficiently than silicon, the material most commonly found in today’s computer chips.

But to exploit graphene’s remarkable electronic properties in semiconductor applications where current must be switched on and off, graphene nanoribbons need to be less than 10 nanometers wide. In addition, the nanoribbons must have smooth, well-defined “armchair” edges in which the carbon-carbon bonds are parallel to the length of the ribbon.

Researchers have typically fabricated nanoribbons by using lithographic techniques to cut larger sheets of graphene into ribbons. However, this “top-down” fabrication approach lacks precision and produces nanoribbons with very rough edges.

Another strategy for making nanoribbons is to use a “bottom-up” approach such as surface-assisted organic synthesis, where molecular precursors react on a surface to polymerize nanoribbons. Arnold says surface-assisted synthesis can produce beautiful nanoribbons with precise, smooth edges, but this method only works on metal substrates and the resulting nanoribbons are thus far too short for use in electronics.

Chemical vapor deposition process breakthrough

To overcome these hurdles, the UW-Madison researchers pioneered a bottom-up technique in which they grow ultra-narrow nanoribbons with smooth, straight edges directly on germanium wafers using a process called chemical vapor deposition. In this process, the researchers start with methane, which adsorbs to the germanium surface and decomposes to form various hydrocarbons. These hydrocarbons react with each other on the surface, where they form graphene.

Arnold’s team made its discovery when it explored dramatically slowing the growth rate of the graphene crystals by decreasing the amount of methane in the chemical vapor deposition chamber. They found that at a very slow growth rate, the graphene crystals naturally grow into long nanoribbons on a specific crystal facet of germanium. By simply controlling the growth rate and growth time, the researchers can easily tune the nanoribbon width to be less than 10 nanometers.
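The width control follows from simple kinetics. A tiny illustrative sketch using the abstract’s sub-5-nm/h bound on the width-direction growth rate (our arithmetic, not a growth recipe):

```python
# Width ~ (width-direction growth rate) x (growth time), so a slow rate turns
# growth time into a fine-grained knob for the ribbon width.
RATE_NM_PER_HOUR = 5.0  # upper bound on width-direction growth rate (abstract)

def width_after(hours: float, rate: float = RATE_NM_PER_HOUR) -> float:
    """Ribbon width (nm) after a given growth time, assuming a constant rate."""
    return rate * hours

print(width_after(1.0))  # 5.0 nm
print(width_after(2.0))  # 10.0 nm: the threshold below which graphene
                         # nanoribbons behave as semiconductors
```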

“What we’ve discovered is that when graphene grows on germanium, it naturally forms nanoribbons with these very smooth, armchair edges,” Arnold says. “The widths can be very, very narrow and the lengths of the ribbons can be very long, so all the desirable features we want in graphene nanoribbons are happening automatically with this technique.”

The nanoribbons produced with this technique start nucleating, or growing, at seemingly random spots on the germanium and are oriented in two different directions on the surface. Arnold says the team’s future work will include controlling where the ribbons start growing and aligning them all in the same direction.

The researchers are patenting their technology through the Wisconsin Alumni Research Foundation. The research was primarily supported by the Department of Energy’s Basic Energy Sciences program.

Abstract of Direct oriented growth of armchair graphene nanoribbons on germanium

Graphene can be transformed from a semimetal into a semiconductor if it is confined into nanoribbons narrower than 10 nm with controlled crystallographic orientation and well-defined armchair edges. However, the scalable synthesis of nanoribbons with this precision directly on insulating or semiconducting substrates has not been possible. Here we demonstrate the synthesis of graphene nanoribbons on Ge(001) via chemical vapour deposition. The nanoribbons are self-aligning 3° from the Ge⟨110⟩ directions, are self-defining with predominantly smooth armchair edges, and have tunable width to <10 nm and aspect ratio to >70. In order to realize highly anisotropic ribbons, it is critical to operate in a regime in which the growth rate in the width direction is especially slow, <5 nm h−1. This directional and anisotropic growth enables nanoribbon fabrication directly on conventional semiconductor wafer platforms and, therefore, promises to allow the integration of nanoribbons into future hybrid integrated circuits.

Paper-based test can quickly diagnose Ebola in remote areas

Thu, 08/20/2015 - 02:57


American Chemical Society | A simple, cheap test for Ebola, dengue and yellow fever

MIT researchers have developed a low-cost, paper-based device that changes color, depending on whether the patient has Ebola, dengue, or yellow fever. The test is designed to facilitate diagnosis in remote, low-resource settings, takes minutes, and does not need electricity to read out results.

The team described their approach Tuesday (Aug. 18) at the 250th National Meeting & Exposition of the American Chemical Society (ACS), updating the MIT announcement in February.

Standard approaches for diagnosing viral infections require technical expertise and expensive equipment, says MIT researcher Kimberly Hamad-Schifferli, Ph.D. “Typically, people perform PCR and ELISA, which are highly accurate, but they need a controlled lab environment.” Polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) are bioassays that detect pathogens directly or indirectly, respectively.

Color-changing paper devices that work like over-the-counter pregnancy tests offer a possible solution. “These are not meant to replace PCR and ELISA [lab tests], because we can’t match their accuracy,” Hamad-Schifferli says. “This is a complementary technique for places with no running water or electricity.”

Hamad-Schifferli and her team at the Massachusetts Institute of Technology, Harvard Medical School and the U.S. FDA use silver nanoparticles in a rainbow of colors. The sizes of the nanoparticles determine their colors.

When a fever strikes in a developing area, the immediate concern may be: Is it the common flu or something much worse that requires quarantine? A paper-based diagnostic test that distinguishes between yellow fever virus, Ebola, and dengue, using different colored nanoparticles tagged with virus-specific antibodies (credit: Chunwan Yen)

The researchers attached red, green, or orange nanoparticles to antibodies that specifically bind to proteins from the viruses that cause Ebola, dengue, or yellow fever, respectively. They introduced the antibody-tagged nanoparticles onto the end of a small strip of paper. In the paper’s middle, the researchers affixed “capture” antibodies to three test lines at different locations, one for each disease.

To test the device, the researchers spiked blood samples with the viral proteins and then dropped small volumes onto the end of the paper device. If a sample contained dengue proteins, for example, the dengue antibody, which was attached to a green nanoparticle, latched onto one of those proteins. This complex then migrated through the paper, until reaching the dengue fever test line, where a second dengue-specific antibody captured it. That stopped the complex from going farther down the strip, and the test line turned green. When the researchers tested samples with proteins from Ebola or yellow fever, the antibody complexes migrated to different places on the strip and turned red or orange.
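A toy sketch of that readout logic (the color-to-virus mapping comes from the paper; the code itself is only illustrative):

```python
# Toy readout model of the multiplexed strip: each virus's detection antibody
# carries a distinct nanoparticle color, and each capture line binds only one
# virus, so whichever line develops color names the pathogen.
TAG_COLOR = {"Ebola": "red", "dengue": "green", "yellow fever": "orange"}

def read_strip(sample_proteins):
    """Map each detected viral protein to the color its test line develops."""
    return {virus: TAG_COLOR[virus]
            for virus in sample_proteins if virus in TAG_COLOR}

print(read_strip({"dengue"}))            # {'dengue': 'green'}
print(read_strip({"Ebola", "dengue"}))   # two lines develop: red and green
```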

“Using other laboratory tests, we know the typical concentrations of yellow fever or dengue virus in patient blood. We know that the paper-based test is sensitive enough to detect concentrations well below that range,” says Hamad-Schifferli. “It’s hard to get that information for Ebola, but we can detect down to tens of nanograms per milliliter — that’s pretty sensitive and might work with patient samples.”

Next, the researchers plan to produce kits for free distribution. “We’re giving people the components so they can build the devices themselves,” says Hamad-Schifferli. The kits will provide a flexible platform for making paper devices that can detect any disease of interest, given the right antibody. “We are trying to move this into the field and put it in the hands of the people who need it,” she says.


American Chemical Society | Paper-based test can quickly diagnose Ebola in remote areas (press conference)

Abstract of Multicolored silver nanoparticles for multiplexed disease diagnostics: Distinguishing dengue, Yellow Fever, and Ebola viruses

Rapid point-of-care (POC) diagnostic devices are needed for field-forward screening of severe acute systemic febrile illnesses. Multiplexed rapid lateral flow diagnostics have the potential to distinguish among multiple pathogens, thereby facilitating diagnosis and improving patient care. Here, we present a platform for multiplexed pathogen detection using multi-colored prism-shaped silver nanoparticles (AgNPs). We exploit the size-dependent optical properties of AgNPs to construct a multiplexed paperfluidic lateral flow POC sensor. AgNPs of different sizes were conjugated to antibodies that bind to specific biomarkers. Red AgNPs were conjugated to antibodies that could recognize the glycoprotein for Ebola virus, green AgNPs to those that could recognize nonstructural protein 1 for dengue virus, and orange AgNPs for nonstructural protein 1 for yellow fever virus. Presence of each of the biomarkers resulted in a different colored band on the test line in the lateral flow test. Thus, we were able to use NP color to distinguish among three pathogens that cause a febrile illness. Because positive test lines can be imaged by eye or a mobile phone camera, the approach is adaptable to low-resource, widely deployable settings. This design requires no external excitation source and permits multiplexed analysis in a single channel, facilitating integration and manufacturing.

A brain-computer interface for controlling an exoskeleton

Wed, 08/19/2015 - 04:58

A volunteer calibrating the exoskeleton brain-computer interface (credit: Korea University/TU Berlin)

Scientists at Korea University and TU Berlin have developed a brain-computer interface (BCI) for a lower limb exoskeleton used for gait assistance by decoding specific signals from the user’s brain.

LEDs flickering at five different frequencies code for five different commands (credit: Korea University/TU Berlin)

Using an electroencephalogram (EEG) cap, the system allows users to move forward, turn left and right, sit, and stand, simply by staring at one of five flickering light emitting diodes (LEDs).

Each of the five LEDs flickers at a different frequency, corresponding to five types of movements. When the user focuses their attention on a specific LED, the flickering light generates a visual evoked potential in the EEG signal, which is then identified by a computer and used to control the exoskeleton to move in the appropriate manner (forward, left, right, stand, sit).
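Here is a minimal sketch of that decoding step, assuming a canonical-correlation approach like the one described in the paper (the sampling rate and flicker frequencies below are placeholders, and the paper additionally runs a k-nearest-neighbors classifier on the CCA features; this sketch simply takes the strongest correlation):

```python
# Sketch of SSVEP decoding: correlate a window of multichannel EEG against
# sin/cos references at each LED's flicker frequency via CCA; the frequency
# with the strongest canonical correlation is the command the user looked at.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256.0                                  # assumed EEG sampling rate, Hz
LED_FREQS = [9.0, 11.0, 13.0, 15.0, 17.0]   # placeholder flicker frequencies
COMMANDS = ["forward", "left", "right", "sit", "stand"]

def references(freq: float, n_samples: int, n_harmonics: int = 2) -> np.ndarray:
    """Sin/cos reference signals at the flicker frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def decode(eeg: np.ndarray) -> str:
    """eeg: (n_samples, n_channels) window. Returns the likeliest command."""
    scores = []
    for f in LED_FREQS:
        refs = references(f, len(eeg))
        cca = CCA(n_components=1)
        cca.fit(eeg, refs)
        x, y = cca.transform(eeg, refs)
        scores.append(abs(np.corrcoef(x[:, 0], y[:, 0])[0, 1]))
    return COMMANDS[int(np.argmax(scores))]
```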


Korea University/TU Berlin | A brain-computer interface for controlling an exoskeleton

The results are published in an open-access paper today (August 18) in the Journal of Neural Engineering.

“A key problem in designing such a system is that exoskeletons create lots of electrical ‘noise,’” explains Klaus Muller, an author of the paper. “The EEG signal [from the brain] gets buried under all this noise, but our system is able to separate out the EEG signal and the frequency of the flickering LED within this signal.”

“People with amyotrophic lateral sclerosis (ALS) (motor neuron disease) or spinal cord injuries face difficulties communicating or using their limbs,” he said. This system could let them walk again, he believes. He suggests that the control system could be added on to existing BCI devices, such as OpenBCI devices.

In experiments with 11 volunteers, it took only a few minutes for them to be trained to operate the system. Because of the flickering LEDs, volunteers were carefully screened for epilepsy before taking part in the research. The researchers are now working to reduce the “visual fatigue” associated with longer-term use.

Abstract of A lower limb exoskeleton control system based on steady state visual evoked potentials

Objective. We have developed an asynchronous brain–machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs). 

Approach. By decoding electroencephalography signals in real-time, users are able to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit, consisting of five light emitting diodes fixed to the exoskeleton. A canonical correlation analysis (CCA) method for the extraction of frequency information associated with the SSVEP was used in combination with k-nearest neighbors.

Main results. Overall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results exhibit accuracies of 91.3 ± 5.73%, a response time of 3.28 ± 1.82 s, an information transfer rate of 32.9 ± 9.13 bits/min, and a completion time of 1100 ± 154.92 s for the experimental parcour studied. 

Significance. The ability to achieve such high quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.

Most complete functioning human-brain model to date, according to researchers

Wed, 08/19/2015 - 04:05

This image of the lab-grown brain is labeled to show identifiable structures: the cerebral hemisphere, the optic stalk, and the cephalic flexure, a bend in the mid-brain region, all characteristic of the human fetal brain (credit: The Ohio State University)

Scientists at The Ohio State University have developed a miniature human brain in a dish with the equivalent brain maturity of a five-week-old fetus.

The brain organoid, engineered from adult human skin cells, is the most complete human brain model yet developed, said Rene Anand, a professor of biological chemistry and pharmacology at Ohio State.

The lab-grown brain, about the size of a pencil eraser, has an identifiable structure and contains 99 percent of the genes present in the human fetal brain. Such a system will enable ethical and more rapid, accurate testing of experimental drugs before the clinical trial stage. It is intended to advance studies of genetic and environmental causes of central nervous system disorders.

“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” Anand said. “The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than [using] rodents.”

Anand reported on his lab-grown brain today (August 18) at the 2015 Military Health System Research Symposium in Ft. Lauderdale, Florida.

The main thing missing in this model is a vascular system. But what is there — a spinal cord, all major regions of the brain, multiple cell types, signaling circuitry and even a retina — has the potential to dramatically accelerate the pace of neuroscience research, said Anand, who is also a professor of neuroscience.

Organoid derivation and development (credit: Rene Anand and Susan McKay)

Created from pluripotent stem cells

“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” he said. According to genomic science, “there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient in themselves to identify causation. You need an experimental system — you need a human brain.”

Anand’s method is proprietary, and he has filed an invention disclosure with the university. He said he used techniques to differentiate pluripotent stem cells into cells that are designed to become neural tissue, components of the central nervous system, or other brain regions.

High-resolution imaging of the organoid identifies functioning neurons and their signal-carrying extensions — axons and dendrites — as well as astrocytes, oligodendrocytes and microglia. The model also activates markers for cells that have the classic excitatory and inhibitory functions in the brain, and that enable chemical signals to travel throughout the structure.

It takes about 15 weeks of growth for the model system to match the maturity of a 5-week-old fetal human brain. Anand and colleague Susan McKay, a research associate in biological chemistry and pharmacology, let the model continue to grow to the 12-week point, observing expected maturation changes along the way.

“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” he said.

Models of brain disorders and injury with civilian and military uses

He and McKay have already used the platform to create brain organoid models of Alzheimer’s and Parkinson’s diseases and autism in a dish. They hope that with further development and the addition of a pumping blood supply, the model could be used for stroke therapy studies. For military purposes, the system offers a new platform for the study of Gulf War illness, traumatic brain injury, and post-traumatic stress disorder.

Anand hopes his brain model could be incorporated into the Microphysiological Systems program, a platform the Defense Advanced Research Projects Agency is developing by using engineered human tissue to mimic human physiological systems.

Support for the work came from the Marci and Bill Ingram Research Fund for Autism Spectrum Disorders and the Ohio State University Wexner Medical Center Research Fund.

Anand and McKay are co-founders of a Columbus-based start-up company, NeurXstem, to commercialize the brain organoid platform, and have applied for funding from the federal Small Business Technology Transfer program to accelerate its drug discovery applications.

Surprising results from brain and cognitive studies of a 93-year-old woman athlete

Wed, 08/19/2015 - 02:50

Olga Kotelko’s brain “does not look like a 90-plus-year-old” — Beckman Institute director Art Kramer

Brain scans and cognitive tests of Olga Kotelko, a 93-year-old Canadian track-and-field athlete with more than 30 world records in her age group, may support the potential beneficial effects of exercise on cognition in the “oldest old.”

In the summer of 2012, researchers at the Beckman Institute for Advanced Science and Technology at the University of Illinois invited her to visit for an in-depth analysis of her brain. The resulting study was reported in the journal Neurocase.

A retired teacher and mother of two, Kotelko started her athletic career late in life. She began with slow-pitch softball at age 65, and at 77 switched to track-and-field events, later enlisting the help of a coach. By the time of her death in 2014, she had won 750 gold medals in her age group in World Masters Athletics events, and had set new world records in the 100-meter, 200-meter, high jump, long jump, javelin, discus, shot put and hammer events.


Beckman Institute | Senior Olympian: 93-Year-Old Track Star Shows Physical & Mental Fitness

Lacking a peer group of reasonably healthy nonagenarians for comparison, the researchers decided to compare Kotelko with a group of 58 healthy, low-active women who were 60 to 78 years old.

“In our studies, we often collect data from adults who are between 60 and 80 years old, and we have trouble finding participants who are 75 to 80 and relatively healthy,” said U. of I. postdoctoral researcher Agnieszka Burzynska, who led the new analysis. As a result, very few studies have focused on the “oldest old,” she said.

“Although it is tough to generalize from a single study participant to other individuals, we felt very fortunate to have an opportunity to study the brain and cognition of such an exceptional individual,” said Beckman Institute director Art Kramer, an author of the new study.

Aging processes in the brain

The researchers wanted to determine whether Kotelko’s late-life athleticism had slowed — or perhaps even reversed — some of the processes of aging in her brain.

“In general, the brain shrinks with age,” Burzynska said. Fluid-filled spaces appear between the brain and the skull, and the ventricles enlarge, she said.

“The cortex, the outermost layer of cells where all of our thinking takes place, that also gets thinner,” she said. White matter tracts, which carry nerve signals between brain regions, tend to lose their structural and functional integrity over time. And the hippocampus, which is important to memory, usually shrinks with age, Burzynska said.

Previous studies have shown that regular aerobic exercise can enhance cognition and boost brain function in older adults, and can even increase the volume of specific brain regions like the hippocampus, Kramer said.

Surprising test results

In one long day at the lab, Kotelko submitted to an MRI brain scan, a cardiorespiratory fitness test on a treadmill, and cognitive tests. (All of the data are available at XNAT, a public repository; Kotelko and her daughter agreed to make her data public.) The women in the comparison group underwent the same tests and scans.

Afterwards, Kramer asked Olga if she was tired. She replied, “I rarely get tired.” The decades-younger graduate students who tested her, however, looked exhausted.

Kotelko’s brain offered some intriguing first clues about the potentially beneficial effects of her active lifestyle.

White-matter tracts remarkably intact. “Her brain did not seem to be, in general, very shrunken, and her ventricles did not seem to be enlarged,” Burzynska said. On the other hand, she had obvious signs of advanced aging in the white-matter tracts of some brain regions, Burzynska said.

“Olga had quite a lot of white-matter hyperintensities, which are markers of unspecific white-matter damage,” she said. These are common in people over age 65, and tend to increase with age, she said.

As a whole, however, Kotelko’s white-matter tracts were remarkably intact — comparable to those of women decades younger, the researchers found. And the white-matter tracts in one region of her brain — the genu of the corpus callosum, which connects the right and left hemispheres at the very front of the brain — were in great shape, Burzynska said.

“Olga had the highest measure of white-matter integrity in that part of the brain, even higher than those younger females, which was very surprising,” she said. These white-matter tracts serve a region of the brain that is engaged in tasks known to decline fastest in aging, such as reasoning, planning and self-control, Burzynska said.

Better on cognitive tests than other adults her own age. Kotelko performed worse on cognitive tests than the younger women, but better than other adults her own age who had been tested in an independent study. “She was quicker at responding to the cognitive tasks than other adults in their 90s,” Burzynska said. “And on memory, she was much better than they were.”

Hippocampus larger than expected given her age. Her hippocampus was smaller than those of the younger participants, but larger than expected given her age, Burzynska said.

The new findings are only a very limited, first step toward calculating the effects of exercise on cognition in the oldest old, she said. “We have only one Olga and only at one time point, so it’s difficult to arrive at very solid conclusions,” Burzynska said.

“But I think it’s very exciting to see someone who is highly functioning at 93, possessing numerous world records in the athletic field and actually having very high integrity in a brain region that is very sensitive to aging. I hope it will encourage people that even as we age, our brains remain plastic. We have more and more evidence for that.”

The Robert Bosch Foundation and the National Institute on Aging at the National Institutes of Health supported this research, as did Abbott Nutrition, through the Center for Nutrition, Learning and Memory at the U. of I.

Kotelko biographer Bruce Grierson prompted researchers at the Beckman Institute to study Kotelko’s brain.

Abstract of White matter integrity, hippocampal volume, and cognitive performance of a world-famous nonagenarian track-and-field athlete

Physical activity (PA) and cardiorespiratory fitness (CRF) are associated with successful brain and cognitive aging. However, little is known about the effects of PA, CRF, and exercise on the brain in the oldest-old. Here we examined white matter (WM) integrity, measured as fractional anisotropy (FA) and WM hyperintensity (WMH) burden, and hippocampal (HIPP) volume of Olga Kotelko (1919–2014). Olga began training for competitions at age of 77 and as of June 2014 held over 30 world records in her age category in track-and-field. We found that Olga’s WMH burden was larger and the HIPP was smaller than in the reference sample (58 healthy low-active women 60–78 years old), and her FA was consistently lower in the regions overlapping with WMH. Olga’s FA in many normal-appearing WM regions, however, did not differ or was greater than in the reference sample. In particular, FA in her genu corpus callosum was higher than any FA value observed in the reference sample. We speculate that her relatively high FA may be related to both successful aging and the beneficial effects of exercise in old age. In addition, Olga had lower scores on memory, reasoning and speed tasks than the younger reference sample, but outperformed typical adults of age 90–95 on speed and memory. Together, our findings open the possibility of old-age benefits of increasing PA on WM microstructure and cognition despite age-related increase in WMH burden and HIPP shrinkage, and add to the still scarce neuroimaging data of the healthy oldest-old (>90 years) adults.

‘Information sabotage’ on Wikipedia claimed

Tue, 08/18/2015 - 01:23


Research has moved online, with more than 80 percent of U.S. students using Wikipedia for research papers, but entries on controversial science topics contain egregious errors, researchers claim (credit: Pixabay)

Wikipedia entries on politically controversial scientific topics can be unreliable due to “information sabotage,” according to an open-access paper published today in the journal PLOS One.

The authors (Gene E. Likens* and Adam M. Wilson*) analyzed Wikipedia edit histories for three politically controversial scientific topics (acid rain, evolution, and global warming), and four non-controversial scientific topics (the standard model in physics, heliocentrism, general relativity, and continental drift).

“Egregious errors and a distortion of consensus science”

Using nearly a decade of data, the authors teased out daily edit rates, the mean size of edits (words added, deleted, or edited), and the mean number of page views per day. Across the board, politically controversial scientific topics were edited more heavily and viewed more often.

“Wikipedia’s global warming entry sees 2–3 edits a day, with more than 100 words altered, while the standard model in physics has around 10 words changed every few weeks,” Wilson notes. “The high rate of change observed in politically controversial scientific topics makes it difficult for experts to monitor their accuracy and contribute time-consuming corrections.”
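Edit rates like these are straightforward to reproduce from public data. Here is a minimal sketch using the public MediaWiki revisions API (the article titles and one-year window are our choices, and this counts edits only, not words changed, so it is cruder than the paper’s analysis):

```python
# Estimate an article's daily edit rate by counting revisions over a window,
# using the public MediaWiki API (action=query, prop=revisions).
import datetime as dt
import requests

API = "https://en.wikipedia.org/w/api.php"

def edits_per_day(title: str, days: int = 365) -> float:
    """Average edits/day for `title` over the last `days` days."""
    now = dt.datetime.utcnow()
    params = {
        "action": "query", "format": "json",
        "prop": "revisions", "titles": title,
        "rvprop": "timestamp", "rvlimit": "max",
        "rvstart": now.strftime("%Y-%m-%dT%H:%M:%SZ"),  # newest bound
        "rvend": (now - dt.timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    count, cont = 0, {}
    while True:
        data = requests.get(API, params={**params, **cont}, timeout=30).json()
        page = next(iter(data["query"]["pages"].values()))
        count += len(page.get("revisions", []))
        cont = data.get("continue", {})
        if not cont:
            return count / days

print(edits_per_day("Global warming"))   # controversial: edited most days
print(edits_per_day("Standard Model"))   # noncontroversial: far less often
```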

While the edit rate of the acid rain article was less than the edit rate of the evolution and global warming articles, it was significantly higher than the non-controversial topics. “In the scientific community, acid rain is not a controversial topic,” said professor Likens. “Its mechanics have been well understood for decades. Yet, despite having ‘semi-protected’ status to prevent anonymous changes, Wikipedia’s acid rain entry receives near-daily edits, some of which result in egregious errors and a distortion of consensus science.”

Wikipedia’s limitations

Likens adds, “As society turns to Wikipedia for answers, students, educators, and citizens should understand its limitations for researching scientific topics that are politically charged. On entries subject to edit-wars, like acid rain, evolution, and global change, one can obtain — within seconds — diametrically different information on the same topic.”

However, the authors note that as Wikipedia matures, there is evidence that the breadth of its scientific content is increasingly based on source material from established scientific journals. They also note that Wikipedia employs algorithms to help identify and correct blatantly malicious edits, such as profanity. But in their view, it remains to be seen how Wikipedia will manage the dynamic, changing content that typifies politically charged science topics.

To help readers critically evaluate Wikipedia content, Likens and Wilson suggest identifying entries that are known to have significant controversy or edit wars. They also recommend quantifying the reputation of individual editors. In the meantime, users are urged to cast a critical eye on Wikipedia source material, which is found at the bottom of each entry.

Wikipedia editors not impressed

In the Wikipedia “User_talk:Jimbo_Wales” page, several Wikipedia editors questioned the PLOS One authors’ statistical accuracy and conclusions, and noted that the data is three years out of date. “I don’t think this dataset can make any claim about controversial subjects at all,” one editor said. “It simply looks at too few articles, and there are too many explanations.”

“It has long been a source of bewilderment to me that we allow climate change denialists to run riot on Wikipedia,” said another.

* Dr. Gene E. Likens is President Emeritus of the Cary Institute of Ecosystem Studies and a Distinguished Research Professor at the University of Connecticut, Storrs. Likens co-discovered acid rain in North America, and counts among his accolades a National Medal of Science, a Tyler Prize, and elected membership in the National Academy of Sciences. Dr. Adam M. Wilson is a geographer at the University at Buffalo.

Abstract of Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale

Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.

Scientists discover atomic-resolution secret of high-speed brain signaling

Mon, 08/17/2015 - 23:57

This illustration shows a protein complex at work in brain signaling. Its structure, which contains joined protein complexes known as SNARE (shown in blue, red, and green) and synaptotagmin-1 (orange), is shown in the foreground. This complex is responsible for the calcium-triggered release of neurotransmitters from our brain’s nerve cells in a process called synaptic vesicle fusion. The background image shows electrical signals traveling through a neuron. (credit: SLAC National Accelerator Laboratory)

Stanford School of Medicine scientists have mapped the 3D atomic structure of a two-part protein complex that controls the release of signaling chemicals, called neurotransmitters, from brain cells in less than one-thousandth of a second.

The experiments were reported today (August 17) in the journal Nature. Performed at the Linac Coherent Light Source (LCLS) X-ray laser at the Department of Energy’s SLAC National Accelerator Laboratory, the experiments were built on decades of previous research at Stanford University, Stanford School of Medicine, and SLAC.

“This is a very important, exciting advance that may open up possibilities for targeting new drugs to control neurotransmitter release,” said Axel Brunger, the study’s principal investigator — a professor at Stanford School of Medicine and SLAC and a Howard Hughes Medical Institute investigator. “Many mental disorders, including depression, schizophrenia and anxiety, affect neurotransmitter systems.”

The two protein parts are known as neuronal SNAREs and synaptotagmin-1. “Both parts of this protein complex are essential,” Brunger said, “but until now it was unclear how its two pieces fit and work together.” Earlier X-ray studies, including experiments at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) nearly two decades ago, shed light on the structure of the SNARE complex, a helical protein bundle found in yeasts and mammals.

SNAREs play a key role in the brain’s chemical signaling by joining, or “fusing,” little packets of neurotransmitters to the outer edges of neurons, where they are released and then dock with chemical receptors in another neuron to trigger a response.

Explains rapid triggering of brain signaling

In this latest research, the scientists found that when the SNAREs and synaptotagmin-1 join up, they act as an amplifier for a slight increase in calcium concentration, triggering a gunshot-like release of neurotransmitters from one neuron to another. They also learned that the proteins join together before they arrive at a neuron’s membrane, which helps to explain how they trigger brain signaling so rapidly.

The team speculates that several of the joined protein complexes may group together and simultaneously interact with the same vesicle to efficiently trigger neurotransmitter release, an exciting area for further studies. “The structure of the SNARE-synaptotagmin-1 complex is a milestone that the field has awaited for a long time, and it sets the framework for a better understanding of the system,” said James Rothman, a professor at Yale University who discovered the SNARE proteins and shared the 2013 Nobel Prize in Physiology or Medicine.

Thomas C. Südhof, a professor at the Stanford School of Medicine and Howard Hughes Medical Institute investigator who shared that 2013 Nobel Prize with Rothman, discovered synaptotagmin-1 and showed that it plays an important role as a calcium sensor and calcium-dependent trigger for neurotransmitter release.

“The new structure has identified unanticipated interfaces between synaptotagmin-1 and the neuronal SNARE complex that change how we think about their interaction by revealing, in atomic detail, exactly where they bind together,” Südhof said. “This is a new concept that goes much beyond previous general models of how synaptotagmin-1 functions.”

Using crystals, robotics and X-rays to advance neuroscience

To study the joined protein structure, researchers in Brunger’s laboratory at the Stanford School of Medicine found a way to grow crystals of the complex. They used a robotic system developed at SSRL to study the crystals at SLAC’s LCLS, an X-ray laser that is one of the brightest sources of X-rays on the planet. The researchers combined and analyzed hundreds of X-ray images from about 150 protein crystals to reveal the atomic-scale details of the joined structure.

According to SSRL’s Aina Cohen, who oversaw the development of the highly automated platform used for the neuroscience experiment, “This experiment was the first to use this robotic platform at LCLS to determine a previously unsolved structure of a large, challenging multi-protein complex.” The study was also supported by X-ray experiments at SSRL and at Argonne National Laboratory’s Advanced Photon Source.

Brunger said future studies will explore other protein interactions relevant to neurotransmitter release. “What we studied is only a subset,” he said. “There are many other factors interacting with this system and we want to know what these look like.”

Other contributing scientists were from Lawrence Berkeley National Laboratory. The research was supported by the Howard Hughes Medical Institute, the National Institutes of Health (NIH), the DOE Office of Science, and the SSRL Structural Molecular Biology Program, which is also supported by the DOE Office of Science and the NIH’s National Institute of General Medical Sciences.

Abstract of Architecture of the synaptotagmin–SNARE machinery for neuronal exocytosis

Synaptotagmin-1 and neuronal SNARE proteins have central roles in evoked synchronous neurotransmitter release; however, it is unknown how they cooperate to trigger synaptic vesicle fusion. Here we report atomic-resolution crystal structures of Ca2+- and Mg2+-bound complexes between synaptotagmin-1 and the neuronal SNARE complex, one of which was determined with diffraction data from an X-ray free-electron laser, leading to an atomic-resolution structure with accurate rotamer assignments for many side chains. The structures reveal several interfaces, including a large, specific, Ca2+-independent and conserved interface. Tests of this interface by mutagenesis suggest that it is essential for Ca2+-triggered neurotransmitter release in mouse hippocampal neuronal synapses and for Ca2+-triggered vesicle fusion in a reconstituted system. We propose that this interface forms before Ca2+ triggering, moves en bloc as Ca2+ influx promotes the interactions between synaptotagmin-1 and the plasma membrane, and consequently remodels the membrane to promote fusion, possibly in conjunction with other interfaces.

Koko the gorilla shows signs of early speech

Mon, 08/17/2015 - 22:50


UW-Madison Campus Connection | Koko the Gorilla Coughs

Koko the gorilla has learned vocal and breathing behaviors that may change the perception that humans are the only primates with the capacity for speech.

In 2010, Marcus Perlman started research work at The Gorilla Foundation in California, where Koko has spent more than 40 years living immersed with humans — interacting for many hours each day with psychologist Penny Patterson and biologist Ron Cohn.

“I went there with the idea of studying Koko’s gestures, but as I got into watching videos of her, I saw her performing all these amazing vocal behaviors,” says Perlman, now a postdoctoral researcher at the University of Wisconsin-Madison.

“Decades ago, in the 1930s and ’40s, a couple of husband-and-wife teams of psychologists tried to raise chimpanzees as much as possible like human children and teach them to speak. Their efforts were deemed a total failure,” Perlman says. “Since then, there is an idea that apes are not able to voluntarily control their vocalizations or even their breathing.”

Instead, the thinking went, the calls apes make pop out almost reflexively in response to their environment — the appearance of a dangerous snake, for example. And the vocal repertoire of each ape species was thought to be fixed, with no capacity to learn new vocal and breathing-related behaviors.


UW-Madison Campus Connection | Koko the gorilla blows her nose

These limits fit a theory on the evolution of language. “This idea says there’s nothing that apes can do that is remotely similar to speech,” Perlman says. “And, therefore, speech essentially evolved — completely new — along the human line since our last common ancestor with chimpanzees.”

Learned vocalization and breathing

However, in a study published online in July in the journal Animal Cognition, Perlman and collaborator Nathaniel Clark of the University of California, Santa Cruz, sifted through 71 hours of video of Koko interacting with Patterson, Cohn, and others, and found repeated examples of Koko performing nine different voluntary behaviors that required control over her vocalization and breathing. These were learned behaviors, not part of the typical gorilla repertoire.

Among other things, Perlman and Clark watched Koko blow a raspberry (or blow into her hand) when she wanted a treat, blow her nose into a tissue, play wind instruments, huff moisture onto a pair of glasses before wiping them with a cloth and mimic phone conversations by chattering wordlessly into a telephone cradled between her ear and the crook of an elbow.


UW-Madison Campus Connection | Koko the gorilla plays an instrument

“She doesn’t produce a pretty, periodic sound when she performs these behaviors, like we do when we speak,” Perlman says. “But she can control her larynx enough to produce a controlled grunting sound.” Koko can also cough on command — impressive for a gorilla because it requires her to close off her larynx.

These behaviors are all learned, Perlman figures, and the result of living with humans since Koko was just six months old.

This suggests that some of the evolutionary groundwork for the human ability to speak was in place at least by the time of our last common ancestor with gorillas, estimated to be around 10 million years ago.

“Koko bridges a gap,” Perlman says. “She shows the potential under the right environmental conditions for apes to develop quite a bit of flexible control over their vocal tract. It’s not as fine as human control, but it is certainly control.”

Orangutans have also demonstrated some impressive vocal and breathing-related behavior, according to Perlman, indicating the whole great ape family may share the abilities Koko has learned to tap.

Abstract of Learned vocal and breathing behavior in an enculturated gorilla

We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes.

Glass paint could keep metal roofs and other structures cool even on sunny days

Mon, 08/17/2015 - 02:46

Silica-based paint (credit: American Chemical Society/Johns Hopkins University Applied Physics Lab)

Scientists at the Johns Hopkins University Applied Physics Lab have developed a new, environmentally friendly paint made from glass that bounces sunlight off metal surfaces — keeping them cool and durable.

“Most paints you use on your car or house are based on polymers, which degrade in the ultraviolet light rays of the sun,” says Jason J. Benkoski, Ph.D. “So over time you’ll have chalking and yellowing. Polymers also tend to give off volatile organic compounds, which can harm the environment. That’s why I wanted to move away from traditional polymer coatings to inorganic glass ones.”

Glass, which is made out of silica, would be an ideal coating. It’s hard, durable and has the right optical properties. But it’s very brittle.

To address that brittleness in a new coating, Benkoski started with silica, one of the most abundant materials in the earth’s crust. He modified one version of it, potassium silicate, which normally dissolves in water. His tweaks transformed the compound so that when it’s sprayed onto a surface and dries, it becomes water-resistant.

Unlike acrylic, polyurethane or epoxy paints, Benkoski’s paint is almost completely inorganic, which should make it last far longer than its counterparts that contain organic compounds. His paint is also designed to expand and contract with metal surfaces to prevent cracking.

Mixing pigments with the silicate gives the coating an additional property: the ability to reflect all sunlight and passively radiate heat. Since it doesn’t absorb sunlight, any surface coated with the paint will remain at air temperature, or even slightly cooler. That’s key to protecting structures from the sun.

“When you raise the temperature of any material, any device, it almost always by definition ages much more quickly than it normally would,” Benkoski says. “It’s not uncommon for aluminum in direct sunlight to heat 70 degrees Fahrenheit above ambient temperature. If you make a paint that can keep an outdoor surface close to air temperature, then you can slow down corrosion and other types of degradation.”
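Benkoski’s point follows from a simple steady-state energy balance: a surface settles at the temperature where absorbed sunlight equals the heat it sheds by radiation and convection. A rough Python sketch (the solar flux, convective coefficient, and optical properties below are illustrative assumptions; sky radiation is crudely lumped with ambient, so this toy model cannot reproduce the sub-ambient cooling the abstract mentions):

from scipy.optimize import brentq

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 1000.0    # assumed clear-sky solar flux, W/m^2
T_AIR = 300.0     # assumed ambient air temperature, K

def surface_temp(absorptance, emissivity, h_conv=10.0):
    """Steady state: absorbed solar = radiated + convected heat.
    h_conv is an assumed convective coefficient (W/m^2/K)."""
    def balance(t):
        absorbed = absorptance * SOLAR
        radiated = emissivity * SIGMA * (t**4 - T_AIR**4)
        convected = h_conv * (t - T_AIR)
        return absorbed - radiated - convected
    return brentq(balance, 200.0, 500.0)

# Ordinary dark paint vs. a highly reflective, emissive silicate coating
print(surface_temp(0.9, 0.9) - T_AIR)    # roughly 50 K above ambient
print(surface_temp(0.05, 0.9) - T_AIR)   # only a few K above ambient

Cutting solar absorptance from 0.9 to 0.05 in this toy model drops the excess surface temperature from tens of degrees to nearly nothing, which is the effect the coating is designed to exploit.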


American Chemical Society | Glass Paint That Can Keep Structures Cool

The paint Benkoski’s lab is developing is intended for use on naval ships (with funding from the U.S. Office of Naval Research), but has many potential commercial applications.

“You might want to paint something like this on your roof to keep heat out and lower your air-conditioning bill in the summer,” he says. It could even go on metal playground slides or bleachers. And it would be affordable: the materials needed to make the coating are abundant and inexpensive.

Benkoski says he expects his lab will start field-testing the material in about two years.

The researchers presented their work today at the 250th National Meeting & Exposition of the American Chemical Society (ACS), held in Boston through Thursday. It features more than 9,000 presentations on a wide range of science topics.

Abstract of Passive cooling with UV-resistant siloxane coatings in direct sunlight

Solar exposure is a leading cause of material degradation in outdoor use. Polymers and other organic materials photo-oxidize due to ultraviolet (UV) exposure. Even in metals, solar heating can cause unwanted property changes through precipitation and Ostwald ripening. In more complex systems, cyclic temperature changes cause fatigue failure wherever thermal expansion mismatch occurs. Most protective coatings designed to prevent these effects inevitably succumb to the same phenomena because of their polymeric matrix. In contrast, siloxane coatings have the potential to provide indefinite solar protection because they do not undergo photo-oxidation. This study therefore demonstrates UV-reflective siloxane coatings with low solar absorptance and high thermal emissivity that prevent any increase in temperature above ambient conditions in direct sunlight. Mathematical modeling suggests that even sub-ambient cooling is possible for ZnO-filled potassium silicate. What has prevented widespread adoption of potassium silicates until now is their tendency to crack at large thicknesses, dissolve in water, and delaminate from untreated surfaces. This investigation has successfully addressed these limitations by formulating potassium silicates to behave more like a flexible siloxane polymer than a brittle inorganic glass. The addition of plasticizers (potassium, glycerol), gelling agents (polyethylenimine), and water-insoluble precipitates (zinc silicates, cerium silicates, organosilanes) makes it possible to form thick, water-resistant coatings that exhibit excellent adhesion even to untreated aluminum surfaces.

Trans fats, but not saturated fats, linked to greater risk of death and heart disease

Sat, 08/15/2015 - 05:40

Which of these is a killer fat: cheese or margarine? (credit: Wikimedia Commons)

A study led by researchers at McMaster University has found that trans fats are associated with a greater risk of death and coronary heart disease. Saturated fats are not, and they are also not associated with an increased risk of stroke or Type 2 diabetes.

The findings were published in an open-access paper August 12 by the British Medical Journal (BMJ).

Trans vs. saturated fats

“For years everyone has been advised to cut out fats,” said lead author Russell de Souza, an assistant professor in the Department of Clinical Epidemiology and Biostatistics with the Michael G. DeGroote School of Medicine. But there are different “fats.”

Saturated fats come mainly from animal products, such as butter, cows’ milk, meat, salmon, and egg yolks, and some plant products such as chocolate and palm oils. Trans unsaturated fats (trans fats) are mainly produced industrially from plant oils (a process known as hydrogenation) for use in margarine, snack foods and packaged baked goods.

“Trans fats have no health benefits and pose a significant risk for heart disease, but the case for saturated fat is less clear,” said de Souza. “That said, we aren’t advocating an increase of the allowance for saturated fats in dietary guidelines, as we don’t see evidence that higher limits would be specifically beneficial to health.”

To reduce the risk of heart disease and stroke, guidelines cited in the BMJ paper (citations 14 to 19) currently recommend limiting saturated fats to less than 10 per cent of energy and trans fats to less than one per cent of energy.

No cardio risk from saturated fats, unlike trans

Contrary to prevailing dietary advice, a recent evidence review found no excess cardiovascular risk associated with intake of saturated fat. In contrast, research suggests that industrial trans fats may increase the risk of coronary heart disease.

To help clarify these controversies, de Souza and colleagues analyzed the results of 50 observational studies assessing the association between saturated and/or trans fats and health outcomes in adults.

Study design and quality were taken into account to minimize bias, and the certainty of associations was assessed using a recognized scoring method developed at McMaster.

The team found no clear association between higher intake of saturated fats and death for any reason, coronary heart disease (CHD), cardiovascular disease (CVD), ischemic stroke or type 2 diabetes.

Killer fats

However, consumption of industrial trans fats was associated with a 34 per cent increase in death for any reason, a 28 per cent increased risk of CHD mortality, and a 21 per cent increase in the risk of CHD.

Inconsistencies in the studies analyzed meant that the researchers could not confirm an association between trans fats and type 2 diabetes. And, they found no clear association between trans fats and ischemic stroke.
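The percentages above are pooled relative risks: an RR of 1.34 corresponds to a “34 per cent increase.” As a generic illustration of how a meta-analysis pools such estimates, here is a Python sketch of standard inverse-variance pooling on the log scale, with made-up study values; it is not a reproduction of the BMJ paper’s exact model:

import numpy as np

def pool_relative_risks(rrs, ci_lows, ci_highs):
    """Fixed-effect inverse-variance pooling of relative risks on the
    log scale -- the generic meta-analytic recipe, not necessarily the
    exact model used in the BMJ paper."""
    log_rr = np.log(rrs)
    # Recover each study's standard error from its 95% CI width
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1 / se**2                        # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])

# Hypothetical per-study relative risks (illustrative numbers only)
rr, lo, hi = pool_relative_risks([1.2, 1.5, 1.3],
                                 [1.0, 1.1, 0.9],
                                 [1.44, 2.05, 1.88])
print(f"pooled RR {rr:.2f} ({lo:.2f} to {hi:.2f})")

Larger studies (narrower confidence intervals) get proportionally more weight, which is why a single big cohort can dominate a pooled estimate.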

The researchers stress that their results are based on observational studies, so no definitive conclusions can be drawn about cause and effect. However, the authors write that their analysis “confirms the findings of five previous systematic reviews of saturated and trans fats and CHD.”

De Souza, who is also a registered dietitian, added that dietary guidelines for saturated and trans fatty acids must carefully consider the effect of replacement foods.

“If we tell people to eat less saturated or trans fats, we need to offer a better choice. Unfortunately, in our review, we were not able to find as much evidence as we would have liked for a best replacement choice, but ours and other studies suggest replacing foods high in these fats — such as high-fat or processed meats and donuts — with vegetable oils, nuts, and whole grains.”

Abstract of Intake of saturated and trans unsaturated fatty acids and risk of all cause mortality, cardiovascular disease, and type 2 diabetes: systematic review and meta-analysis of observational studies

Objective: To systematically review associations between intake of saturated fat and trans unsaturated fat and all cause mortality, cardiovascular disease (CVD) and associated mortality, coronary heart disease (CHD) and associated mortality, ischemic stroke, and type 2 diabetes.

Design: Systematic review and meta-analysis.

Data sources: Medline, Embase, Cochrane Central Registry of Controlled Trials, Evidence-Based Medicine Reviews, and CINAHL from inception to 1 May 2015, supplemented by bibliographies of retrieved articles and previous reviews.

Eligibility criteria for selecting studies: Observational studies reporting associations of saturated fat and/or trans unsaturated fat (total, industrially manufactured, or from ruminant animals) with all cause mortality, CHD/CVD mortality, total CHD, ischemic stroke, or type 2 diabetes.

Data extraction and synthesis: Two reviewers independently extracted data and assessed study risks of bias. Multivariable relative risks were pooled. Heterogeneity was assessed and quantified. Potential publication bias was assessed and subgroup analyses were undertaken. The GRADE approach was used to evaluate quality of evidence and certainty of conclusions.

Results: For saturated fat, three to 12 prospective cohort studies for each association were pooled (five to 17 comparisons with 90 501–339 090 participants). Saturated fat intake was not associated with all cause mortality (relative risk 0.99, 95% confidence interval 0.91 to 1.09), CVD mortality (0.97, 0.84 to 1.12), total CHD (1.06, 0.95 to 1.17), ischemic stroke (1.02, 0.90 to 1.15), or type 2 diabetes (0.95, 0.88 to 1.03). There was no convincing lack of association between saturated fat and CHD mortality (1.15, 0.97 to 1.36; P=0.10). For trans fats, one to six prospective cohort studies for each association were pooled (two to seven comparisons with 12 942–230 135 participants). Total trans fat intake was associated with all cause mortality (1.34, 1.16 to 1.56), CHD mortality (1.28, 1.09 to 1.50), and total CHD (1.21, 1.10 to 1.33) but not ischemic stroke (1.07, 0.88 to 1.28) or type 2 diabetes (1.10, 0.95 to 1.27). Industrial, but not ruminant, trans fats were associated with CHD mortality (1.18 (1.04 to 1.33) v 1.01 (0.71 to 1.43)) and CHD (1.42 (1.05 to 1.92) v 0.93 (0.73 to 1.18)). Ruminant trans-palmitoleic acid was inversely associated with type 2 diabetes (0.58, 0.46 to 0.74). The certainty of associations between saturated fat and all outcomes was “very low.” The certainty of associations of trans fat with CHD outcomes was “moderate” and “very low” to “low” for other associations.

Conclusions: Saturated fats are not associated with all cause mortality, CVD, CHD, ischemic stroke, or type 2 diabetes, but the evidence is heterogeneous with methodological limitations. Trans fats are associated with all cause mortality, total CHD, and CHD mortality, probably because of higher levels of intake of industrial trans fats than ruminant trans fats. Dietary guidelines must carefully consider the health effects of recommendations for alternative macronutrients to replace trans fats and saturated fats.

Newly discovered brain network recognizes what’s new, what’s familiar

Sat, 08/15/2015 - 04:11

The Parietal Memory Network, a newly discovered memory and learning network, shows consistent patterns of activation and deactivation in three distinct regions of the parietal cortex in the brain’s left hemisphere — the precuneus, the mid-cingulate cortex, and the dorsal angular gyrus (credit: Image adapted from Creative Commons original by Patrick J. Lynch, medical illustrator; C. Carl Jaffe, MD, cardiologist)

New research from Washington University in St. Louis has identified a novel learning and memory brain network, dubbed the Parietal Memory Network (PMN), that processes incoming information based on whether it’s something we’ve experienced previously or appears to be new and unknown — helping us recognize, for instance, whether a face is that of a familiar friend or a complete stranger.

The study pulls together evidence from multiple neuroimaging studies and methods to demonstrate the existence of this previously unknown and distinct functional brain network, one that appears to have broad involvement in human memory processing.

“When an individual sees a novel stimulus, this network shows a marked decrease in activity,” said Adrian Gilmore, first author of the study and a fifth-year psychology doctoral student at Washington University. “When an individual sees a familiar stimulus, this network shows a marked increase in activity.”

The new memory and learning network shows consistent patterns of activation and deactivation in three distinct regions of the parietal cortex in the brain’s left hemisphere — the precuneus, the mid-cingulate cortex, and the dorsal angular gyrus.

Activity within the PMN during the processing of incoming information (encoding) can be used to predict how well that information will be stored in memory and later made available for successful retrieval.
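That predictive claim is the classic “subsequent memory” analysis: per-trial activity at encoding is used to classify which items will later be remembered. A synthetic-data Python sketch of the idea (simulated activity in the three PMN regions; nothing here comes from the study itself):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated per-trial encoding activity in the three PMN regions
# (precuneus, mid-cingulate, dorsal angular gyrus) -- synthetic data,
# purely to illustrate the subsequent-memory analysis.
n_trials = 200
remembered = rng.integers(0, 2, n_trials)            # 1 = later recalled
activity = rng.normal(size=(n_trials, 3)) + 0.8 * remembered[:, None]

# Cross-validated accuracy of predicting later memory from encoding activity
clf = LogisticRegression()
acc = cross_val_score(clf, activity, remembered, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level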

Researchers identified interesting characteristics of the PMN by analyzing data from a range of previously published neuroimaging studies. Using converging bits of evidence from dozens of fMRI brain experiments, their study shows how activity in the PMN changes during the completion of specific mental tasks and how the regions interact during resting states when the brain is involved in no particular activity or mental challenge.

This study builds on research by Marcus Raichle, MD, the Alan A. and Edith L. Wolff Distinguished Professor of Medicine, and other neuroscience researchers at Washington University, which established the existence of another functional brain network that remains surprisingly active when the brain is not involved in a specific activity, a system known as the Default Mode Network.

Like the Default Mode Network, key regions of the PMN were shown to hum in a similar unison while the brain is in relative periods of rest. And while key regions of the PMN are located close to the Default Mode Network, the PMN appears to be its own distinct and separate functional network, preliminary findings suggest.

A broad role in learning and recall

Another characteristic that sets the PMN apart from other functional networks is that its activity patterns remain consistent regardless of the type of mental challenge being processed.

Many regions of the cortex jump into action only during the processing of a very specific task, such as learning a list of words, but remain relatively inactive during very similar tasks, such as learning a group of faces. The PMN, on the other hand, exhibits activity across a wide range of mental tasks, with levels rising and falling based on how much a task’s novelty or familiarity captures our attention.

“It seems like the amount of change relies heavily on how much a given stimulus captures our attention,” Gilmore said. “If something really stands out as old or new, you see much larger changes in the network’s activity than if it doesn’t stand out as much.”

The consistency of these patterns across various types of processing tasks suggests that the PMN plays a broad role in many different learning and recall processes, the research team suggests.

“A really cool feature of the PMN is that it seems to show its response patterns regardless of what you’re doing,” Gilmore said. “The PMN doesn’t seem to care what it is that you’re trying to do. It deactivates when we encounter something new, and activates when we encounter something that we’ve seen before. This makes it a really promising target for future research in areas such as education or Alzheimer’s research, where we want to foster or improve memory performance broadly, rather than focusing on specific tasks.”

The study is forthcoming in the September issue of the journal Trends in Cognitive Sciences.

Abstract of A parietal memory network revealed by multiple MRI methods

The manner by which the human brain learns and recognizes stimuli is a matter of ongoing investigation. Through examination of meta-analyses of task-based functional MRI and resting state functional connectivity MRI, we identified a novel network strongly related to learning and memory. Activity within this network at encoding predicts subsequent item memory, and at retrieval differs for recognized and unrecognized items. The direction of activity flips as a function of recent history: from deactivation for novel stimuli to activation for stimuli that are familiar due to recent exposure. We term this network the ‘parietal memory network’ (PMN) to reflect its broad involvement in human memory processing. We provide a preliminary framework for understanding the key functional properties of the network.

Helping Siri hear you at a party

Sat, 08/15/2015 - 03:32

This prototype sensor can separate out simultaneous sounds coming from different directions (credit: Steve Cummer, Duke University)

Duke University engineers have invented a device that emulates the “cocktail party effect” — the remarkable ability of the brain to home in on a single voice in a room with voices coming from multiple directions.

The device uses plastic metamaterials — the combination of natural materials in repeating patterns to achieve unnatural properties — to determine the direction of a sound and extract it from the surrounding background noise.

“We think this could improve the performance of voice-activated devices like smartphones and game consoles while also reducing the complexity of the system,” said Abel Xie, a PhD student in electrical and computer engineering at Duke and lead author of the paper.

Metamaterial and one fan-like waveguide section, showing the varying resonator cavity depths. For each person speaking, these unique cavities modify the distribution of sound strength across the frequency spectrum, creating a unique directional signature (credit: Yangbo Xie et al./PNAS)

How it works

The 3D-printed proof-of-concept plastic device looks like a pie-shaped honeycomb split into dozens of slices. The depth of the resonator cavities varies within each slice, giving each slice of the honeycomb pie a unique sonic pattern.

“The cavities behave like soda bottles when you blow across their tops,” said Steve Cummer, professor of electrical and computer engineering at Duke. “The amount of soda left in the bottle, or the depth of the cavities in our case, affects the pitch of the sound they make, and this changes the incoming sound in a subtle but detectable way.”

When a sound wave reaches the device, it is slightly distorted by the cavities, and that distortion has a specific signature depending on which slice of the pie it passed over. After being picked up by a microphone, the sound is transmitted to a computer that separates the jumble of noises based on these unique distortions.
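In signal-processing terms, the honeycomb turns a hard source-separation problem into a linear inversion: each direction applies a known spectral signature, so a single recording becomes a solvable system of equations. A toy Python sketch of that idea (made-up signatures and sources; the actual device uses compressive sensing over measured signatures):

import numpy as np

rng = np.random.default_rng(1)
n_bins, n_dirs, win = 240, 3, 4

# Each direction's honeycomb slice imprints a known spectral signature
# (made-up numbers; the real device measures these once, up front)
H = 0.5 + rng.random((n_bins, n_dirs))

# Toy source spectra, held constant over each 4-bin window so the
# per-window linear system below is exactly solvable
S = np.repeat(rng.random((n_bins // win, n_dirs)), win, axis=0)

# Single-microphone recording: the sum of signature-filtered sources
y = (H * S).sum(axis=1)

# Per window: 4 equations (bins) in 3 unknowns (one level per direction)
S_hat = np.empty_like(S)
for i in range(0, n_bins, win):
    x, *_ = np.linalg.lstsq(H[i:i+win], y[i:i+win], rcond=None)
    S_hat[i:i+win] = x

print(np.allclose(S, S_hat))  # True: the three sources separate exactly

The crucial ingredient is that the signatures are known and direction-specific; without them, one microphone recording three overlapping sources would be hopelessly underdetermined.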

The researchers tested their invention in multiple trials in an anechoic chamber by simultaneously sending three identical sounds at the sensor from three different directions. It was able to distinguish between them with a 96.7 percent accuracy rate.

Uses in medical imaging, other applications

While the prototype is six inches wide, the researchers believe it could be scaled down and incorporated into the devices we use on a regular basis.

Once miniaturized, the device could have applications in voice-command electronics and medical sensing devices that use sound waves, like ultrasound imaging, said Xie. “It should also be possible to improve the sound fidelity and increase functionalities for applications like hearing aids and cochlear implants.”

The work was supported by a Multidisciplinary University Research Initiative grant from the Office of Naval Research. Conceivably, the design concept could be used in hydrophone-based systems to separate out underwater sounds, or to distinguish gunshots and other overlapping sounds on battlefields and in urban settings.

The research appeared in the Proceedings of the National Academy of Sciences on August 11.

Abstract of Single-sensor “cocktail party listening” with acoustic metamaterials

Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications.

Optical chip allows for reprogramming quantum computer in seconds

Fri, 08/14/2015 - 23:53

Linear optics processor (credit: University of Bristol)

A fully reprogrammable optical chip that can process photons in quantum computers in an infinite number of ways has been developed by researchers from the University of Bristol in the UK and Nippon Telegraph and Telephone (NTT) in Japan.

The universal “linear optics processor” (LPU) chip is a major step forward in creating a quantum computer able to solve problems such as designing new drugs, running superfast database searches, and performing otherwise intractable mathematics beyond the reach of supercomputers — marking a new era of research for quantum scientists and engineers at the cutting edge of quantum technologies, the researchers say.

The chip solves a major barrier in testing new theories for quantum science and quantum computing: the time and resources needed to build new experiments, which are typically extremely demanding due to the notoriously fragile nature of quantum systems.

DIY photonics

“A whole field of research has essentially been put onto a single optical chip that is easily controlled,” said University of Bristol research associate Anthony Laing, PhD, project leader and senior author of a paper on the research in the journal Science today (August 14).

“The implications of the work go beyond the huge resource savings. Now anybody can run their own experiments with photons, much like they operate any other piece of software on a computer. They no longer need to convince a physicist to devote many months of their life to painstakingly build and conduct a new experiment.”

Linear optics processing system (credit: J. Carolan et al./Science)

The team demonstrated the chip’s versatility by reprogramming it to rapidly perform a number of different experiments, each of which would previously have taken many months to build.

“Once we wrote the code for each circuit, it took seconds to reprogram the chip, and milliseconds for the chip to switch to the new experiment,” explained Bristol PhD student Jacques Carolan, one of the researchers. “We carried out a year’s worth of experiments in a matter of hours. What we’re really excited about is using these chips to discover new science that we haven’t even thought of yet.”
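Per the abstract, “writing the code” for a circuit amounts to choosing settings for the chip’s 30 phase shifters, which fix the 6×6 unitary implemented by its cascade of 15 Mach-Zehnder interferometers. A Python sketch of that mapping (the 2×2 MZI parameterization and mesh layout below are one common convention, assumed here rather than taken from the paper):

import numpy as np

def mzi(n_modes, m, theta, phi):
    """One Mach-Zehnder interferometer with phases (theta, phi) acting on
    adjacent modes m and m+1, embedded in an n-mode identity. This is one
    common unitary parameterization, not necessarily the paper's."""
    t = np.eye(n_modes, dtype=complex)
    t[m, m] = np.exp(1j * phi) * np.sin(theta / 2)
    t[m, m + 1] = np.cos(theta / 2)
    t[m + 1, m] = np.exp(1j * phi) * np.cos(theta / 2)
    t[m + 1, m + 1] = -np.sin(theta / 2)
    return t

def mesh_unitary(n_modes, params):
    """Multiply MZI blocks in cascade order; params holds one
    (mode_index, theta, phi) triple per interferometer."""
    u = np.eye(n_modes, dtype=complex)
    for m, theta, phi in params:
        u = mzi(n_modes, m, theta, phi) @ u
    return u

# A 6-mode mesh with 15 MZIs / 30 phases, as on the Bristol-NTT chip;
# random settings here stand in for a programmed experiment.
rng = np.random.default_rng(7)
layout = [m for layer in range(6) for m in range(layer % 2, 5, 2)]  # 15 sites
params = [(m, *rng.uniform(0, 2 * np.pi, 2)) for m in layout]
U = mesh_unitary(6, params)
print(np.allclose(U.conj().T @ U, np.eye(6)))  # True: the mesh is unitary

Reprogramming the chip means updating those 30 phases, which is why switching experiments takes milliseconds rather than months of optical-bench construction.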

The University of Bristol’s pioneering Quantum in the Cloud is the first service to make a quantum processor publicly accessible. They plan to add more chips like the LPU to the service “so others can discover the quantum world for themselves.”

Abstract of Universal linear optics

Linear optics underpins fundamental tests of quantum mechanics and quantum technologies. We demonstrate a single reprogrammable optical circuit that is sufficient to implement all possible linear optical protocols up to the size of that circuit. Our six-mode universal system consists of a cascade of 15 Mach-Zehnder interferometers with 30 thermo-optic phase shifters integrated into a single photonic chip that is electrically and optically interfaced for arbitrary setting of all phase shifters, input of up to six photons, and their measurement with a 12-single-photon detector system. We programmed this system to implement heralded quantum logic and entangling gates, boson sampling with verification tests, and six-dimensional complex Hadamards. We implemented 100 Haar random unitaries with an average fidelity of 0.999 ± 0.001. Our system can be rapidly reprogrammed to implement these and any other linear optical protocol, pointing the way to applications across fundamental science and quantum technologies.

The origin of the robot species

Thu, 08/13/2015 - 05:48

A “mother robot” (A) is used for automatic assembly of candidate agents from active and passive modules. For the construction process, the robotic manipulator is equipped with a gripper and a glue supplier. Each agent is represented by the information stored in its genome (B). It contains one gene per module, and each gene contains information about the module types, construction parameters and motor control of the agent. A construction sequence encoded by one gene is shown in (C). First, the part of the robot which was encoded by the previous genes is rotated (C1 to C2). Second, the new module (here active) is picked from stock, rotated (C3), and eventually attached on top of the agent (C4). (credit: Luzius Brodbeck et al./PLOS ONE)

Researchers led by the University of Cambridge have built a mother robot that can build its own children, test which one does best, and automatically use the results to inform the design of the next generation — passing down preferential traits automatically.

Without any human intervention or computer simulation, beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.

In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next.

The results, reported in an open-access paper in the journal PLOS One, showed that the “fittest” individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.

Natural selection

Natural selection is “essentially what this robot is doing — we can actually watch the improvement and diversification of the species,” said lead researcher Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich.

For each robot child, there is a unique “genome” made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands.

As in nature, the evolution takes place through “mutation,” where components of one gene are modified or single genes are added or deleted, and “crossover,” where a new genome is formed by merging genes from two individuals.

To allow the mother to determine which children were the fittest, each child was tested on how far it traveled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation to preserve their abilities, while mutation and crossover were introduced in the less successful children.
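Strip away the physical assembly and the mother robot is running a textbook genetic algorithm: elitism, crossover, and mutation over variable-length genomes, with distance traveled as the fitness score. A compact Python sketch of that loop (the genome fields, rates, elite count, and the stand-in fitness function are made up for illustration, since there is no physical robot here to test):

import random

random.seed(42)

# Each gene encodes one cube module: (type, orientation, motor command).
# These fields are stand-ins; the paper's genome stores construction and
# motor-control parameters per module.
def random_gene():
    return (random.choice("AP"),         # active or passive module
            random.randrange(4),         # rotation before attachment
            random.uniform(-1, 1))       # motor amplitude (active only)

def random_genome():
    return [random_gene() for _ in range(random.randint(1, 5))]

# Stand-in for the real fitness test (distance traveled in fixed time):
# a made-up function of the genome so the loop runs end to end.
def fitness(genome):
    return sum(abs(cmd) for kind, _, cmd in genome if kind == "A")

def mutate(genome):
    g = [random_gene() if random.random() < 0.2 else gene for gene in genome]
    if random.random() < 0.1 and len(g) < 5:
        g.append(random_gene())          # gene addition
    if random.random() < 0.1 and len(g) > 1:
        g.pop(random.randrange(len(g)))  # gene deletion
    return g

def crossover(a, b):
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:cut_a] + b[cut_b:])[:5] or [random_gene()]

population = [random_genome() for _ in range(10)]
for generation in range(5):
    population.sort(key=fitness, reverse=True)
    elite = population[:3]               # fittest survive unchanged
    rest = [mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(7)]
    population = elite + rest
print(round(fitness(max(population, key=fitness)), 2))

The expensive step in the real system is the fitness call: each evaluation means physically gluing cubes together and timing the result, which is why each generation of ten children took hours rather than milliseconds.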

The increase in performance was due to both the fine-tuning of design parameters and the fact that the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.


Cambridge University | Fumiya Iida’s research looks at how robotics can be improved by taking inspiration from nature, whether that’s learning about intelligence, or finding ways to improve robotic locomotion. Iida’s lab is filled with a wide array of hopping robots, which may take their inspiration from grasshoppers, humans or even dinosaurs. One of his group’s developments, the “Chairless Chair,” is a wearable device that allows users to “sit” anywhere, without the need for a real chair.

Creative machines

“One of the big questions in biology is how intelligence came about — we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customization, but we want to see robots that are capable of innovation and creativity.”

In nature, organisms are able to adapt their physical characteristics to their environment over time. These adaptations allow biological organisms to survive in a wide variety of different environments — allowing animals to make the move from living in the water to living on land, for instance.

But machines are not adaptable in the same way. They are essentially stuck in one shape for their entire “lives,” and it’s uncertain whether changing their shape would make them more adaptable to changing environments.

Using a computer simulation to study artificial evolution generates thousands, or even millions, of possibilities in a short amount of time, but the researchers found that having the robot generate its own possibilities, without any computer simulation, resulted in more successful children. The disadvantage is that it takes time: each child took the robot about 10 minutes to design, build and test. A robot also requires between ten and 100 times more energy than an animal to do the same thing.

According to Iida, in the future they might use a computer simulation to pre-select the most promising candidates, and use real-world models for actual testing.


Cambridge University | Researchers have observed the process of evolution by natural selection at work in robots, by constructing a “mother” robot that can design, build and test its own “children,” and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.

Abstract of Morphological Evolution of Physical Robots through Model-Free Phenotype Development

Artificial evolution of physical systems is a stochastic optimization method in which physical machines are iteratively adapted to a target function. The key for a meaningful design optimization is the capability to build variations of physical machines through the course of the evolutionary process. The optimization in turn no longer relies on complex physics models that are prone to the reality gap, a mismatch between simulated and real-world behavior. We report model-free development and evaluation of phenotypes in the artificial evolution of physical systems, in which a mother robot autonomously designs and assembles locomotion agents. The locomotion agents are automatically placed in the testing environment and their locomotion behavior is analyzed in the real world. This feedback is used for the design of the next iteration. Through experiments with a total of 500 autonomously built locomotion agents, this article shows diversification of morphology and behavior of physical robots for the improvement of functionality with limited resources.