Volumental's 3D Scan-to-Print web app will be the world's first browser-based, 3D-printable model creator. The idea is to scan (and later, print) a 3D object as easily as printing a document, using just a depth camera (such as a Kinect) and a web browser.
The team has just launched a Kickstarter. “We need $20,000 in funding to hire a dedicated developer for this project as well as to fund upgrades needed in server and processing hardware. We estimate that the app will take 3 months to develop.”
How can I use this?
- 3D print a chess set with your family's faces on the pieces
- Kids fighting over a toy? Replicate it!
- Easily replicate your dog, horse, cat, lizard, ferret, hamster, or boa constrictor
- Make a dollhouse miniature of your sofa
- 3D scan and print amazing, hard-to-find props for your next indie movie
Durham University astronomers have found a new way of measuring the spin of supermassive black holes, which could lead to a better understanding of how they drive the growth of galaxies.
The astronomers observed a black hole — with a mass 10 million times that of our Sun — at the center of a spiral galaxy 500 million light years from Earth while it was feeding on the surrounding disc of material that fuels its growth and powers its activity.
By viewing the optical, ultraviolet and soft X-rays generated by heat as the black hole fed, they were able to measure how far the disc was from the black hole.
This distance depends on the black hole's spin: a fast-spinning black hole pulls the disc in closer to itself, the researchers said. From the distance between the black hole and the disc, the scientists were able to estimate the black hole's spin.
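The claim that spin sets the disc's inner edge can be illustrated with textbook relativity (an illustrative sketch, not the study's own calculation): the innermost stable circular orbit of a non-spinning black hole lies at six gravitational radii, shrinking toward one gravitational radius for a maximally spinning prograde hole. For the article's 10-million-solar-mass black hole:

```python
# Illustrative sketch (not from the article): how the disc's inner edge
# scales with black-hole spin, in units of the gravitational radius
# r_g = GM/c^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def gravitational_radius(mass_kg: float) -> float:
    """r_g = GM/c^2, the characteristic length scale of a black hole."""
    return G * mass_kg / C**2

m = 1e7 * M_SUN                  # the 10-million-solar-mass hole in the article
r_g = gravitational_radius(m)

isco_no_spin = 6 * r_g           # non-spinning (Schwarzschild) inner edge
isco_max_spin = 1 * r_g          # maximal prograde (Kerr) spin

print(f"r_g ≈ {r_g:.2e} m")      # roughly 1.5e10 m, about a tenth of an AU
print(f"ISCO, zero spin: {isco_no_spin:.2e} m")
print(f"ISCO, max spin:  {isco_max_spin:.2e} m")
```

A sixfold shrinkage of the inner edge is what makes the disc's distance a usable spin gauge.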
The scientists said that understanding spin could lead to greater understanding of galaxy growth over billions of years.
Black holes lie at the centers of almost all galaxies, and can spit out incredibly hot particles at high energies that prevent intergalactic gases from cooling and forming new stars in the outer galaxy. Scientists don’t yet understand why the jets are ejected into space, but the Durham experts believe that their power could be linked to the spin of the black hole. This spin is difficult to measure, as it only affects the behavior of material really close to the black hole.
Lead researcher Professor Chris Done, in the Department of Physics at Durham University, said: “We know the black hole in the center of each galaxy is linked to the galaxy as a whole, which is strange because black holes are tiny in relation to the size of a galaxy. This would be like something the size of a large boulder (10 m) influencing something the size of the Earth.
“Understanding this connection between stars in a galaxy and the growth of a black hole, and vice-versa, is the key to understanding how galaxies form throughout cosmic time.
“If a black hole is spinning it drags space and time with it and that drags the accretion disc, containing the black hole’s food, closer towards it. This makes the black hole spin faster, a bit like an ice skater doing a pirouette.
“By being able to measure the distance between the black hole and the accretion disc, we believe we can more effectively measure the spin of black holes.
“Because of this, we hope to be able to understand more about the link between black holes and their galaxies.”
University of Nottingham scientists have developed a new technology that could enable all of the world's crops to take nitrogen from the air, instead of requiring expensive and environmentally damaging fertilizers.
Nitrogen fixation, the process by which nitrogen is converted to ammonia, is vital for plants to survive and grow. However, only a very small number of plants, most notably legumes (such as peas, beans and lentils) have the ability to fix (use) nitrogen from the atmosphere, with the help of nitrogen fixing bacteria.
The vast majority of plants have to obtain nitrogen from the soil, and for most crops currently being grown across the world, this also means reliance on synthetic nitrogen fertilizer.
Adding nitrogen-fixing bacteria to roots
Professor Edward Cocking, Director of The University of Nottingham’s Centre for Crop Nitrogen Fixation, has developed a unique method of putting nitrogen-fixing bacteria into the cells of plant roots.
His major breakthrough came when he discovered that a specific strain of nitrogen-fixing bacteria from sugar cane, G. diazotrophicus, could intracellularly colonize all major crop plants.
This ground-breaking development potentially provides every cell in the plant with the ability to fix atmospheric nitrogen. The implications for agriculture are enormous, as this new technology can provide much of the plant’s nitrogen needs, he suggests.
Known as N-Fix, the method is neither genetic modification nor bioengineering. It uses naturally occurring nitrogen-fixing bacteria that take up and use nitrogen from the air.
Applied to the cells of plants via the seed, it provides every cell in the plant with the ability to fix nitrogen. Plant seeds are coated with these bacteria to create a symbiotic, mutually beneficial relationship and naturally produce nitrogen.
N-Fix is a natural nitrogen seed coating that provides a sustainable solution to fertilizer overuse and nitrogen pollution. It is environmentally friendly and can be applied to all crops.
Over the last 10 years, The University of Nottingham has conducted a series of extensive research programs which have established proof of principle of the technology in the laboratory, growth rooms and glasshouses.
A leading world expert in nitrogen and plant science, Professor Cocking has long recognized the critical need to reduce nitrogen pollution caused by nitrogen-based fertilizers. Nitrate pollution is a major problem, as is pollution of the atmosphere by ammonia and nitrogen oxides.
In addition, nitrate pollution is a health hazard and causes oxygen-depleted “dead zones” in our waterways and oceans. A recent study estimates that the annual cost of damage caused by nitrogen pollution across Europe is £60 billion to £280 billion.
Professor Cocking said: “Helping plants to naturally obtain the nitrogen they need is a key aspect of world food security.
“The world needs to unhook itself from its ever increasing reliance on synthetic nitrogen fertilizers produced from fossil fuels with its high economic costs, its pollution of the environment and its high energy costs.”
Making N-Fix available worldwide
The N-Fix technology has been licensed by The University of Nottingham to Azotic Technologies Ltd to develop and commercialise N-Fix globally on its behalf for all crop species.
Peter Blezard, CEO of Azotic Technologies added: “Agriculture has to change and N-Fix can make a real and positive contribution to that change.
It has enormous potential to help feed more people in many of the poorer parts of the world, while at the same time, dramatically reducing the amount of synthetic nitrogen produced in the world.”
Azotic is now working on field trials to produce robust efficacy data. This will be followed by seeking regulatory approval for N-Fix initially in the UK, Europe, USA, Canada and Brazil, with more countries to follow. It is anticipated that the N-Fix technology will be commercially available within the next two to three years.
The University of Nottingham’s Plant and Crop Sciences Division is internationally acclaimed as a centre for fundamental and applied research, underpinning its understanding of agriculture, food production and quality, and the natural environment. It also has one of the largest communities of plant scientists in the UK.
AGL Energy, one of the big three power utilities in Australia, says that 9,000 MW of fossil-fuel baseload capacity needs to be taken out of the national electricity market (NEM) to bring it back into balance, RenewEconomy reports.
That assessment of 9,000 MW equates to nearly one-third of the country's baseload generation — a sure sign that renewables, and in particular rooftop solar, are changing the dynamics of the market, says RenewEconomy.
Germany, much further advanced than Australia on its renewables path, is witnessing the forced closure of fossil fuel assets as the market dynamics change — and despite the withdrawal of nuclear capacity.
A top secret National Security Agency program allows analysts to search, with no prior authorization, through vast databases containing the emails, online chats and browsing histories of millions of individuals — the NSA's “widest-reaching” system for developing intelligence from the Internet — according to documents provided by whistleblower Edward Snowden, Guardian columnist Glenn Greenwald reported Wednesday.
The latest revelations come as senior intelligence officials testify to the Senate judiciary committee on Wednesday, releasing classified documents in response to the Guardian’s earlier stories on bulk collection of phone records and Fisa surveillance court oversight.
Training materials for XKeyscore detail how analysts can use it and other systems to mine enormous agency databases by filling in a simple on-screen form giving only a broad justification for the search, including for searches involving U.S. persons. The request is not reviewed by a court or any NSA personnel before it is processed.
The purpose of XKeyscore is to allow analysts to search the metadata as well as the content of emails and other internet activity, such as browser history, even when there is no known email account (a “selector” in NSA parlance) associated with the individual being targeted.
Analysts can also search by name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used. An NSA tool called DNI Presenter, used to read the content of stored emails, also enables an analyst using XKeyscore to read the content of Facebook chats or private messages.
The XKeyscore program also allows an analyst to learn the IP addresses of every person who visits any website the analyst specifies. As one slide indicates, the ability to search HTTP activity by keyword permits the analyst access to what the NSA calls “nearly everything a typical user does on the internet”.
William Binney, a former NSA mathematician, said last year that the agency had “assembled on the order of 20 trillion transactions about U.S. citizens with other U.S. citizens,” an estimate, he said, that “only was involving phone calls and emails.” A 2010 Washington Post article reported that “every day, collection systems at the [NSA] intercept and store 1.7 billion emails, phone calls and other types of communications.”
One document explains: “At some sites, the amount of data we receive per day (20+ terabytes) can only be stored for as little as 24 hours.” In 2012, there were at least 41 billion total records collected and stored in XKeyscore for a single 30-day period.
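The scale of these figures is easy to sanity-check (a back-of-envelope sketch using only the numbers quoted above):

```python
# Back-of-envelope check of the collection figures quoted above.
records_30_days = 41e9              # records stored in XKeyscore over 30 days
records_per_day = records_30_days / 30

intercepted_per_day = 1.7e9         # the Washington Post's 2010 daily figure
ratio = records_per_day / intercepted_per_day

print(f"~{records_per_day / 1e9:.2f} billion XKeyscore records per day")
# The two figures count different things (stored records vs. intercepted
# communications), so this is only a rough comparison of magnitudes.
print(f"~{ratio:.0%} of the Post's reported daily interception volume")
```

The stored-record rate lands within the same order of magnitude as the Post's interception figure, roughly 1.4 billion per day.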
The ability to pay attention to relevant information while ignoring distractions is a core brain function. Without the ability to focus and filter out “noise,” we could not effectively interact with our environment.
But despite much study of attention in the brain, the cellular mechanisms responsible for the effects of attention have remained a mystery.
Researchers from Dartmouth’s Geisel School of Medicine and the University of California Davis studied communications between synaptically connected neurons under conditions where subjects shifted their attention toward or away from visual stimuli that activated the recorded neurons.
Using this highly sensitive measure of attention’s influence on neuron-to-neuron communication, they were able to demonstrate that attention operates at the level of the synapse to improve sensitivity to incoming signals, sharpen the precision of these signals, and selectively boost the transmission of attention-grabbing information while reducing the level of noisy or attention-disrupting information.
The results point to a novel mechanism by which attention shapes perception by selectively altering presynaptic weights to highlight sensory features among all the noisy sensory input.
“While our findings are consistent with other reported changes in neuronal firing rates with attention, they go far beyond such descriptions, revealing never-before tested mechanisms at the synaptic level,” said study co-author Farran Briggs, PhD, assistant professor of Physiology and Neurobiology at the Geisel School of Medicine.
In addition to expanding our understanding of the brain, this study could help people with attention deficits resulting from brain injury or disease, possibly leading to improved screening and new treatments.
The research was supported by grants from the National Institute of Health’s (NIH) National Eye Institute (grant #s EY18683 to F.B., EY013588 to W.M.U.) and from NIH’s National Institute of Mental Health (grant # MH055714 to G.R.M.). Funding support also provided to G.R.M. and W.M.U from the National Science Foundation (grant #s BCS-0727115 and 1228535).
Online psychotherapy for depression appears to be at least as effective as conventional face-to-face treatment. Three months after the end of the therapy, patients given online treatment even displayed fewer symptoms.
Six therapists treated 62 patients, the majority of whom were suffering from moderate depression. The patients were divided into two equal groups and randomly assigned to one of the therapeutic forms.
The treatment consisted of eight sessions with different established techniques that stem from cognitive behavior therapy and could be carried out both orally and in writing. Patients treated online had to perform one predetermined written task per therapy unit — such as querying their own negative self-image.
Online therapy even more effective in the medium term
“In both groups, the depression values fell significantly,” says Professor Andreas Maercker, summing up the results of the study. At the end of the treatment, no more depression could be diagnosed in 53 percent of the patients who underwent online therapy — compared to 50 percent for face-to-face therapy.
Three months after completing the treatment, the depression in patients treated online had continued to decrease, whereas patients treated conventionally displayed only a minimal decline: no depression could be detected in 57 percent of the patients from online therapy, compared to 42 percent with conventional therapy.
For both patient groups, the degree of satisfaction with the treatment and therapists was more or less equally high: 96 percent of the patients given online therapy and 91 percent of the recipients of conventional treatment rated the contact with their therapist as “personal.”
In the case of online therapy, the patients tended to use the therapy contacts and subsequent homework very intensively to progress personally. For instance, they indicated that they had re-read the correspondence with their therapist from time to time.
“In the medium term, online psychotherapy even yields better results. Our study is evidence that psychotherapeutic services on the Internet are an effective supplement to therapeutic care,” concludes Maercker.
NASA has begun studying how remotely-operated vehicles may one day help astronauts explore other worlds.
NASA tested the Surface Telerobotics exploration concept, in which an astronaut in an orbiting spacecraft remotely operates a robot on a planetary surface. In the future, astronauts orbiting other planetary bodies, such as Mars, asteroids or the moon, could use this approach to perform work on the surface using robotic avatars.
“The initial test was notable for achieving a number of firsts for NASA and the field of human-robotic exploration,” said Terry Fong, Human Exploration Telerobotics project manager and director of the Intelligent Robotics Group at NASA’s Ames Research Center, Moffett Field, Calif., which designed and manages the tests. “Specifically, this project represents the first fully-interactive remote operation of a planetary rover by an astronaut in space.”
During the June 17 test, Expedition 36 Flight Engineer Chris Cassidy of NASA, from his post aboard the International Space Station (ISS), remotely operated the K10 planetary rover hundreds of miles below in the Roverscape, an outdoor robotic test area the size of two football fields at NASA Ames. For more than three hours, Cassidy used the robot to perform a survey of the Roverscape's rocky, lunar-like terrain.
The July 26 test picked up where Cassidy left off. Fellow Expedition 36 Flight Engineer Luca Parmitano of the European Space Agency remotely controlled the rover and began deploying a simulated Kapton film-based radio antenna.
These tests represent the first time NASA’s open-source Robot Application Programming Interface Delegate (RAPID) robot data messaging system was used to control a robot from space. RAPID originally was developed by NASA’s Human-Robotic Systems project and is a set of software data structures and routines that simplify the process of communicating information between different robots and their command and control systems. RAPID has been used with a wide variety of systems including rovers, walking robots, free-flying robots and robotic cranes.
The test also is the first time the NASA Ensemble-based software — jointly developed at Ames and NASA’s Jet Propulsion Laboratory in Pasadena, Calif. — was used in space for telerobotics. Ensemble is an open architecture for the development, integration and deployment of mission operations software.
“Whereas it is common practice in undersea exploration to use a joystick and have direct control of remote submarines, the K10 robots are more intelligent,” said Fong. “Astronauts interact with the robots at a higher level, telling them where to go, and then the robot itself independently and intelligently figures out how to safely get there.”
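The difference Fong describes, goal-level commands instead of joystick streams, can be sketched as follows. All of the names below are hypothetical illustrations; they are not the actual K10 or RAPID interfaces.

```python
# Hypothetical sketch of supervisory control vs. direct teleoperation.
# None of these names come from NASA's actual K10 or RAPID software.
from dataclasses import dataclass

@dataclass
class JoystickCommand:
    """Direct control: the operator streams low-level motion inputs."""
    forward: float   # m/s
    turn: float      # rad/s

@dataclass
class WaypointGoal:
    """Supervisory control: the operator names a destination; the robot
    plans and executes a safe path on its own."""
    x: float         # metres, site frame
    y: float
    tolerance: float = 0.5

def drive_to(goal: WaypointGoal) -> list[str]:
    """Stand-in for the rover's onboard autonomy: plan, then execute."""
    return [
        "scan terrain",
        f"plan path to ({goal.x}, {goal.y})",
        "avoid obstacles",
        "report arrival",
    ]

steps = drive_to(WaypointGoal(x=42.0, y=-7.5))
print(steps)
```

The point of the higher-level interface is latency tolerance: a single goal survives the multi-second round trip to orbit, whereas a joystick stream would not.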
The primary objective of the Surface Telerobotics testing is to collect engineering data from astronauts aboard the space station, the K10 robot and data communication links. This will allow engineers to characterize the system and validate previous ground tests.
“During future missions beyond low-Earth orbit, some work will not be feasible for humans to do manually,” said Fong. “Robots will complement human explorers, allowing astronauts to perform work via remote control from a space station, spacecraft or other habitat.”
The primary goal of the Human Exploration Telerobotics project is to understand how human and robot activities, such as Surface Telerobotics, can be coordinated to improve crew safety, enhance science activities and increase mission success while also reducing cost, risk and consumables, such as fuel and oxygen, during future exploration missions.
The K10 robot is a four-wheel drive, four-wheel steer robot that stands about 4.5 feet tall, weighs about 220 pounds and can travel about three feet per second (a little slower than the average person’s walking pace). For the Surface Telerobotics tests, K10 is equipped with multiple cameras and a 3-D scanning laser system to perform survey work, as well as a mechanism to deploy the simulated radio antenna.
This year’s Surface Telerobotics tests simulate a possible future mission involving astronauts aboard NASA’s Orion spacecraft traveling to the L2 Earth-moon Lagrange point. The L2 point is where the combined gravity of the Earth and moon allows a spacecraft to easily maintain a stationary orbit and is located 40,000 miles above the far side of the moon. From L2, astronauts would remotely operate a robot to perform surface science work, such as deploying a radio telescope. This mission concept was developed by the Lunar University Network for Astrophysics Research (LUNAR), which is based at the University of Colorado, Boulder (CU).
“Deploying a radio telescope on the far side of the moon would allow us to make observations of the early universe free from the radio noise of Earth,” said Jack Burns, a professor at CU, director of LUNAR and co-investigator at NASA’s Lunar Science Institute. “The Surface Telerobotics test represents a next step in new modes of exploration that will bring together humans and robots, as well as science and exploration. Such telerobotics technology will be needed for exploration of the moon, asteroids and eventually the surface of Mars.”
Students from several universities assisted with the development of Surface Telerobotics. Industrial design students from the Academy of Art University in San Francisco collaborated with NASA engineers to create the user interface for remotely operating the K10 rover. Undergraduates from CU and the University of Idaho helped design the Kapton film deployer, which is mounted on K10.
“These surface telerobotics tests, in collaboration with astronauts aboard the ISS, offer exciting opportunities for our students to have hands-on engineering and mission operations experiences with realistic simulations of future human-robot missions to planetary bodies,” said Burns. “Such experiences inspire our students to careers in the aerospace sciences. These students are destined for bright futures as part of NASA’s exploration of the solar system.”
“This work really tests the notion that robots can project human presence to other planetary surfaces,” said Fong. “Ultimately, this will allow us to discover and explore dangerous and remote places, whether they’re at the bottom of the ocean or at the far reaches of our solar system.”
Future quantum computers with machine learning could attack larger sets of data than classical computers
Seth Lloyd of MIT and his collaborators have developed a quantum version of machine learning — a type of AI in which programs can learn from previous experience to become progressively better at finding patterns in data. It would take advantage of quantum computations to speed up machine-learning tasks exponentially, Nature News reports.
Data can be split into groups — a task at the core of handwriting- and speech-recognition software — or searched for patterns. Because n qubits can encode 2^n states, massive amounts of information could be manipulated with a relatively small number of qubits.
“We could map the whole Universe — all of the information that has existed since the Big Bang — onto 300 qubits,” Lloyd says.
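Lloyd's 300-qubit remark rests on that exponential scaling: an n-qubit register spans 2^n amplitudes, and 2^300 is on the order of 10^90, comparable to common estimates of the number of elementary particles in the observable universe. A quick check:

```python
# Why 300 qubits: an n-qubit register has 2**n complex amplitudes.
# 2**300 is on the order of 10**90, comparable to estimates of the
# number of elementary particles in the observable universe.
import math

n_qubits = 300
amplitudes = 2 ** n_qubits
order_of_magnitude = math.log10(amplitudes)

print(f"2^{n_qubits} ≈ 10^{order_of_magnitude:.0f}")
```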
Such quantum AI techniques could dramatically speed up tasks such as image recognition for comparing photos on the web or for enabling cars to drive themselves — fields in which companies such as Google have invested considerable resources.
Putting quantum machine learning into practice will be more difficult. Lloyd estimates that a dozen qubits would be needed for a small-scale demonstration.
The ideas are explored in a series of five (open-access) arXiv papers. [...]
The chemical components crucial to the start of life on Earth may have primed and protected each other in never-before-realized ways, according to new research led by University of Washington scientists.
It could mean a simpler scenario for how that first spark of life came about on the planet, according to Sarah Keller, UW professor of chemistry, and Roy Black, UW affiliate professor of bioengineering, co-authors of a paper published online July 29 in the Proceedings of the National Academy of Sciences.
Scientists have long thought that life started when the right combination of bases and sugars produced self-replicating ribonucleic acid, or RNA, inside a rudimentary “cell” composed of fatty acids. Under the right conditions, fatty acids naturally form into bag-like structures similar to today’s cell membranes.
In testing one of the fatty acids representative of those found before life began — decanoic acid — the scientists discovered that the four bases in RNA bound more readily to the decanoic acid than did the other seven bases tested.
By concentrating more of the bases and sugar that are the building blocks of RNA, the system would have been primed for the next steps, reactions that led to RNA inside a vesicle (bag).
“The bag is the easy part. Making RNA from scratch is very hard,” Keller said. “If the parts that come together to make RNA happen to preferentially stick to the surfaces of bags, then everything gets easier.”
Protecting vesicles from effects of salty seawater
The scientists also discovered a second, mutually reinforcing mechanism: The same bases of RNA that preferentially stuck to the fatty acid also protected the bags from disruptive effects of salty seawater. Salt causes the fatty acid bags to clump together instead of remaining as individual “cells.”
The researchers found that several sugars also give protective benefit but the sugar from RNA, ribose, is more effective than glucose or even xylose, a sugar remarkably similar to ribose, except its components are arranged differently.
The ability of the building blocks of RNA to stabilize the fatty acid bags simplifies one part of the puzzle of how life started, Keller said.
“Taken together, these findings yield mutually reinforcing mechanisms of adsorption, concentration and stabilization that could have driven the emergence of primitive cells,” she said.
Researchers from Augsburg College in Minneapolis, the University of Minnesota, and the University of California, Santa Cruz were also involved.
“For the average American consumer, 3D printing is ready for showtime,” said Associate Professor Joshua Pearce.
The reason is financial: the typical family can already save a great deal of money by making things with a 3D printer instead of buying them off the shelf.
Savings for consumers
Pearce drew that conclusion after conducting a lifecycle economic analysis on 3D printing in an average American household.
In the study, Pearce and his team chose 20 common household items listed on Thingiverse (a catalog of free designs).
Then they used Google Shopping to determine the maximum and minimum cost of buying those 20 items online, shipping charges not included.
Next, they calculated the cost of making them with 3D printers. The conclusion: it would cost the typical consumer from $312 to $1,944 to buy those 20 things compared to $18 to make them in a weekend.
Open-source 3D printers for home use have price tags ranging from about $350 to $2,000. Making the very conservative assumption that a family would only make 20 items a year, Pearce’s group calculated that the printers would pay for themselves quickly, in a few months to a few years.
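The payback claim follows directly from the quoted figures (a sketch using only the numbers from the study as reported above):

```python
# Payback-period sketch using the article's figures.
buy_cost_low, buy_cost_high = 312.0, 1944.0   # retail cost of the 20 items
print_cost = 18.0                              # material cost to 3D print them
printer_low, printer_high = 350.0, 2000.0      # open-source printer prices

savings_low = buy_cost_low - print_cost        # worst-case annual savings
savings_high = buy_cost_high - print_cost      # best-case annual savings

best_case_years = printer_low / savings_high   # cheap printer, big savings
worst_case_years = printer_high / savings_low  # pricey printer, small savings

print(f"payback: {best_case_years * 12:.1f} months "
      f"to {worst_case_years:.1f} years")
```

The result, roughly two months at one extreme to about seven years at the other, matches the "few months to a few years" range Pearce's group reports.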
The group chose relatively inexpensive items for their study: cellphone accessories, a garlic press, a showerhead, a spoon holder, and the like. 3D printers can save consumers even more money on high-end items like customized orthotics and photographic equipment.
3D printing isn’t quite as simple as 2D printing a document from your home computer — yet. “But you don’t need to be an engineer or a professional technician to set up a 3D printer,” Pearce said. “Some can be set up in under half an hour, and even the RepRap can be built in a weekend by a reasonably handy do-it-yourselfer.”
It’s not just about the money. 3D printing may herald a new world that offers consumers many more choices, as everything can be customized. “With the exponential growth of free designs and expansion of 3D printing, we are creating enormous potential wealth for everyone,” explains Pearce.
Small-scale custom manufacturing
Before 3D printers become as ubiquitous as cellphones, they could form the basis of small-scale manufacturing concerns and have huge potential both here and for developing countries, where access to many products is limited.
“Say you are in the camping supply business and you don’t want to keep glow-in-the-dark tent stakes in stock,” Pearce said. “Just keep glow-in-the-dark plastic on hand, and if somebody needs those tent stakes, you can print them.”
“It would be a different kind of capitalism, where you don’t need a lot of money to create wealth for yourself or even start a business,” Pearce said.
The study is described in the article “Life-Cycle Economic Analysis of Distributed Manufacturing with Open-Source 3D Printers,” published in the journal Mechatronics (see Reference below).
Measurements tell us that global average sea level is currently rising by about 1 inch per decade. But in an invisible shadow process, our long-term sea level rise commitment or “lock-in” — the sea level rise we don’t see now, but which carbon emissions and warming have locked in for later years — is growing 10 times faster, and this growth rate is accelerating, writes Ben Strauss, vice president of Climate Central.
An international team of scientists led by Anders Levermann recently published a study that found for every degree Fahrenheit of global warming due to carbon pollution, global average sea level will rise by about 4.2 feet in the long run.
When multiplied by the current rate of carbon emissions, and the best estimate of global temperature sensitivity to pollution, this translates to a long-term sea level rise commitment that is now growing at about 1 foot per decade.
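The 4.2-feet-per-degree-Fahrenheit figure is a unit conversion of the roughly 2.3 metres per degree Celsius that Levermann's team reported; converting it back is a one-liner (the 2.3 m/°C value is my assumption of the study's headline number, not stated in this article):

```python
# Unit-conversion check: ~2.3 metres of long-term sea level rise per
# degree Celsius of warming, expressed in feet per degree Fahrenheit.
M_PER_DEG_C = 2.3                # assumed headline value from Levermann et al.
FT_PER_M = 3.28084
DEG_C_PER_DEG_F = 5 / 9          # a 1 °F change equals 5/9 °C

ft_per_deg_f = M_PER_DEG_C * FT_PER_M * DEG_C_PER_DEG_F
print(f"{ft_per_deg_f:.1f} ft per °F")
```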
We have two sea levels: the sea level of today, and the far higher sea level that is already being locked in for some distant tomorrow.
In a new paper published Monday in the Proceedings of the National Academy of Sciences (PNAS), I analyze the growth of the locked-in amount of sea level rise and other implications of Levermann and colleagues’ work. …
To begin with, it appears that the amount of carbon pollution to date has already locked in more than 4 feet of sea level rise past today’s levels. That is enough, at high tide, to submerge more than half of today’s population in 316 coastal cities and towns (home to 3.6 million) in the lower 48 states.
By the end of this century, if global climate emissions continue to increase, that may lock in 23 feet of sea level rise, and threaten 1,429 municipalities that would be mostly submerged at high tide. Those cities have a total population of 18 million. [...]
Engineers at the California Institute of Technology (Caltech) have devised a method to convert a relatively inexpensive conventional microscope into a billion-pixel imaging system that significantly outperforms the best available standard microscope.
Such a system could greatly improve the efficiency of digital pathology, in which specialists need to review large numbers of tissue samples. By making it possible to produce robust microscopes at low cost, the approach also has the potential to bring high-performance microscopy capabilities to medical clinics in developing countries.
“In my view, what we’ve come up with is very exciting because it changes the way we tackle high-performance microscopy,” says Changhuei Yang, professor of electrical engineering, bioengineering and medical engineering at Caltech.
Physical limitations have forced researchers to decide between high resolution and a small field of view on the one hand, or low resolution and a large field of view on the other.
That has meant that scientists have either been able to see a lot of detail very clearly but only in a small area, or they have gotten a coarser view of a much larger area.
“We found a way to actually have the best of both worlds,” says Guoan Zheng, lead author on the new paper and the initiator of this new microscopy approach from Yang’s lab.
“We used a computational approach to bypass the limitations of the optics. The optical performance of the objective lens is rendered almost irrelevant, as we can improve the resolution and correct for aberrations computationally.”
Indeed, using the new approach, the researchers were able to improve the resolution of a conventional 2X objective lens to the level of a 20X objective lens. The images produced by the new system contain 100 times more information than those produced by conventional microscope platforms. And building upon a conventional microscope, the new system costs only about $200 to implement.
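The "100 times more information" figure follows from a simple scaling argument (illustrative, not the authors' derivation): FPM keeps the 2X lens's wide field of view while delivering 20X-class resolution, so the number of resolvable pixels grows with the square of the magnification ratio.

```python
# Space-bandwidth scaling sketch: resolving power improves linearly with
# effective magnification, so resolvable pixels per area grow with its square.
native_mag = 2      # the physical objective actually used
effective_mag = 20  # resolution level achieved after FPM reconstruction

info_gain = (effective_mag / native_mag) ** 2
print(info_gain)    # 100.0, matching the "100 times more information" claim
```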
“One big advantage of this new approach is the hardware compatibility,” Zheng says. “You only need to add an LED array to an existing microscope. No other hardware modification is needed. The rest of the job is done by the computer.”
How it works
The new system acquires about 150 low-resolution images of a sample, each corresponding to one element in the LED array, so each image captures the sample illuminated from a different, known direction. A novel computational approach, termed Fourier ptychographic microscopy (FPM), then stitches these low-resolution images together into high-resolution intensity and phase information — a much more complete picture of the entire light field of the sample.
Yang explains that when we look at light from an object, we are only able to sense variations in intensity. But light varies in terms of both its intensity and its phase, which is related to the angle at which light is traveling.
“What this project has developed is a means of taking low-resolution images and managing to tease out both the intensity and the phase of the light field of the target sample,” Yang says. “Using that information, you can actually correct for optical aberration issues that otherwise confound your ability to resolve objects well.”
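The reconstruction behind FPM can be sketched as an alternating-projection loop: each low-resolution image constrains the amplitude of one angle-dependent sub-region of the sample's Fourier spectrum, while the estimated phase is retained between passes. The toy below is a hedged illustration under simplifying assumptions (square grids, an ideal circular pupil, made-up function and variable names), not the authors' implementation:

```python
import numpy as np

def fpm_reconstruct(low_res_imgs, k_shifts, hr_size, pupil_radius, n_iters=10):
    """Toy Fourier-ptychography loop: each measured low-res intensity image
    updates the amplitude of one sub-region of the high-res spectrum."""
    lr = low_res_imgs[0].shape[0]
    # Initial guess: flat object, so its spectrum is a centered delta.
    obj_spectrum = np.fft.fftshift(np.fft.fft2(np.ones((hr_size, hr_size), dtype=complex)))
    yy, xx = np.mgrid[:lr, :lr]
    pupil = ((yy - lr // 2) ** 2 + (xx - lr // 2) ** 2 <= pupil_radius ** 2).astype(float)
    c = hr_size // 2
    for _ in range(n_iters):
        for img, (ky, kx) in zip(low_res_imgs, k_shifts):
            # Sub-region of the spectrum selected by this LED's illumination angle.
            y0, x0 = c + ky - lr // 2, c + kx - lr // 2
            patch = obj_spectrum[y0:y0 + lr, x0:x0 + lr] * pupil
            field = np.fft.ifft2(np.fft.ifftshift(patch))
            # Enforce the measured amplitude; keep the estimated phase.
            field = np.sqrt(img) * np.exp(1j * np.angle(field))
            new_patch = np.fft.fftshift(np.fft.fft2(field))
            obj_spectrum[y0:y0 + lr, x0:x0 + lr] = (
                obj_spectrum[y0:y0 + lr, x0:x0 + lr] * (1 - pupil) + new_patch * pupil)
    return np.fft.ifft2(np.fft.ifftshift(obj_spectrum))
```

Each LED shift (ky, kx) selects which patch of the high-resolution spectrum a given image updates; overlap between neighboring patches is what makes the phase recoverable.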
Digital pathology uses
The very large field of view that the new system can image could be particularly useful for digital pathology applications, where the typical process of using a microscope to scan the entirety of a sample can take tens of minutes. Using FPM, a microscope does not need to scan over the various parts of a sample — the whole thing can be imaged all at once. Furthermore, because the system acquires a complete set of data about the light field, it can computationally correct errors — such as out-of-focus images — so samples do not need to be rescanned.
“It will take the same data and allow you to perform refocusing computationally,” Yang says.
The researchers say that the new method could also have wide applications in everything from hematology to wafer inspection to forensic photography. Zheng says the strategy could also be extended to other imaging methodologies, such as X-ray imaging and electron microscopy.
The work was supported by a grant from the National Institutes of Health.
A new technique for developing more targeted drugs with reduced side effects by using “molecular automata” — a mixture of antibodies and short strands of DNA — has been demonstrated by Hospital for Special Surgery (HSS) and Columbia University researchers.
These short DNA strands, also known as oligonucleotides, can be manufactured by researchers in a laboratory to any user-specified sequence.
How it works
All cells have many receptors on their cell surface. When antibodies or drugs bind to a receptor, a cell is triggered to perform a certain function or behave in a certain manner. Drugs can target disease-causing cells by binding to a receptor, but in some cases, disease-causing cells do not have unique receptors and therefore drugs also bind to healthy cells and cause “off-target” side effects.
The researchers conducted their experiments using white blood cells. All white blood cells have CD45 receptors, but only subsets have other receptors such as CD20, CD3, and CD8. In one experiment, HSS researchers created three different molecular robots. Each one had an antibody component of either CD45, CD3 or CD8 and a DNA component.
The DNA components of the robots were created to have a high affinity for the DNA components of another robot. DNA can be thought of as a double-stranded helix containing two strands of coded letters, and certain strands have a higher affinity for particular strands than others.
The researchers mixed human blood from healthy donors with their molecular robots. When a molecular robot carrying a CD45 antibody latched on to a CD45 receptor of a cell and a molecular robot carrying a CD3 antibody latched on to a CD3 receptor on the same cell, the close proximity of the DNA strands from the two robots triggered a cascade reaction.
Certain strands were ripped apart and more complementary strands joined together. The result was a unique, single strand of DNA that was displayed only on a cell that had these two receptors. The addition of a molecular robot carrying a CD8 antibody docking on a cell that expressed CD45, CD3 and CD8 caused this strand to grow.
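In effect, the cascade computes a logical AND over surface receptors: only a cell displaying all of CD45, CD3, and CD8 ends up carrying the fully grown reporter strand. A minimal sketch of that logic (the cell names and receptor sets here are illustrative, not data from the study):

```python
def label_cells(cells, required):
    """Toy AND-gate over surface receptors: the reporter strand is
    displayed only on cells carrying every receptor in the cascade."""
    return [cell for cell in cells if required <= cell["receptors"]]

blood = [
    {"name": "B cell", "receptors": {"CD45", "CD20"}},
    {"name": "helper T cell", "receptors": {"CD45", "CD3"}},
    {"name": "cytotoxic T cell", "receptors": {"CD45", "CD3", "CD8"}},
]

# Only the cytotoxic T cell carries all three receptors, so only it is labeled.
labeled = label_cells(blood, {"CD45", "CD3", "CD8"})
```

Adding more robots to the cascade simply grows the required set, which is how the researchers propose zeroing in on ever more specific cell subpopulations.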
The researchers also showed that the strand could be programmed to fluoresce when exposed to a solution. The robots can essentially label a subpopulation of cells allowing for more targeted therapy. The researchers say the use of increasing numbers of molecular robots will allow researchers to zero in on more and more specific subsets of cell populations.
“The automata trigger the growth of more strongly complementary oligonucleotides. The reactions occur fast. In about 15 minutes, we can label cells,” said Maria Rudchenko, M.S., the first author of the paper and a research associate at Hospital for Special Surgery. In terms of clinical applications, researchers could either label cells that they want to target or cells they want to avoid.
“This is a proof of concept study that it works in human whole blood,” said Dr. Rudchenko. “The next step is to test it in animals.”
If molecular robots work in studies with mice and eventually in human clinical trials, the researchers say there is a wide range of possible clinical applications. For example, cancer patients could benefit from more targeted chemotherapeutics. Drugs for autoimmune diseases could be more specifically tailored to impact disease-causing autoimmune cells and not the immune cells that people need to fight infection.
The study was funded, in part, by the National Institutes of Health, National Science Foundation, and the Lymphoma and Leukemia Foundation.
Researchers have developed a new method that can look at a specific segment of DNA and pinpoint a single mutation, which could help diagnose and treat diseases such as cancer and tuberculosis.
Modern genomics has shown that just one mutation can be the difference between successfully treating a disease and having it spread rampantly throughout the body.
These small mutations can be the root of a disease or the reason some infectious diseases resist certain antibiotics.
“We’ve really improved on previous approaches because our solution doesn’t require any complicated reactions or added enzymes, it just uses DNA,” said lead author Georg Seelig, a University of Washington assistant professor of electrical engineering and of computer science and engineering.
“This means that the method is robust to changes in temperature and other environmental variables, making it well-suited for diagnostic applications in low-resource settings.”
Detecting mutations in a single DNA base pair
Seelig, along with David Zhang of Rice University and Sherry Chen, a UW doctoral student in electrical engineering, designed probes that can pick out mutations in a single base pair in a target stretch of DNA. The probes allow researchers to look in much more detail for variations in long sequences — up to 200 base pairs — while current methods can detect mutations in stretches of up to only 20.
“In terms of specificity, our research suggests that we can do quadratically better, meaning that whatever the best level of specificity, our best will be that number squared,” said Zhang, an assistant professor of bioengineering at Rice University.
The testing probes are designed to bind with a sequence of DNA that is suspected of having a mutation. The researchers do this by creating a complementary sequence of DNA to the double-helix strand in question. Then, they allow molecules containing both sequences to mix in a test tube in salt water, where they naturally will match up to one another if the base pairs are intact.
Unlike previous technologies, the probe molecule checks both strands of the target double helix for mutations rather than just one, which explains the increased specificity.
The probe is engineered to emit a fluorescent glow if there’s a perfect match between it and the target. If it doesn’t illuminate, that means the strands didn’t match and there was in fact a mutation in the target strand of DNA.
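At its core, the readout is a perfect-match test between the probe and the complement of the target. A toy sketch of that logic, ignoring strand orientation and binding thermodynamics for simplicity (the sequences are made up):

```python
# Watson-Crick base pairing used to build a complementary probe.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    """Return the base-by-base complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in seq)

def probe_fluoresces(probe, target):
    """The probe lights up only on a perfect complementary match;
    a single-base mismatch (a mutation) leaves it dark."""
    return probe == complement(target)
```

A probe built as `complement(target)` fluoresces against the intact target but not against a copy with a single substituted base, which is the single-mutation sensitivity the method relies on.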
The researchers have filed a patent on the technology and are working with the UW Center for Commercialization. They hope to integrate it into a paper-based diagnostic test for diseases that could be used in parts of the world with few medical resources.
The research was funded by the National Institutes of Health, the National Science Foundation and the Defense Advanced Research Projects Agency.
MIT researchers, in collaboration with physicist Yves Couder at the Université Paris Diderot and his colleagues, report that they have produced the fluidic analogue of a classic quantum experiment, in which electrons are confined to a circular “corral” by a ring of ions.
In the new experiments, reported in the latest issue of the journal Physical Review E (PRE), bouncing drops of fluid mimicked the electrons’ statistical behavior with remarkable accuracy.
In the early days of quantum physics, in an attempt to explain the wavelike behavior of quantum particles, the French physicist Louis de Broglie proposed what he called a “pilot wave” theory.
According to de Broglie, moving particles — such as electrons, or the photons in a beam of light — are borne along on waves of some type, like driftwood on a tide.
Physicists’ inability to detect de Broglie’s posited waves led them, for the most part, to abandon pilot-wave theory. Recently, however, a real pilot-wave system has been discovered, in which a drop of fluid bounces across a vibrating fluid bath, propelled by waves produced by its own collisions.
In 2006, Yves Couder and Emmanuel Fort, at Université Paris Diderot, used this system to reproduce one of the most famous experiments in quantum physics: the so-called “double-slit” experiment, in which particles are fired at a screen through a barrier with two holes in it.
“This hydrodynamic system is subtle, and extraordinarily rich in terms of mathematical modeling,” says John Bush, a professor of applied mathematics at MIT and corresponding author on the new paper. “It’s the first pilot-wave system discovered and gives insight into how rational quantum dynamics might work, were such a thing to exist.”
Joining Bush on the PRE paper are lead author Daniel Harris, a graduate student in mathematics at MIT; Couder and Fort; and Julien Moukhtar, also of Université Paris Diderot. In a separate pair of papers, appearing this month in the Journal of Fluid Mechanics, Bush and Jan Molacek, another MIT graduate student in mathematics, explain the fluid mechanics that underlie the system’s behavior.
The double-slit experiment is seminal because it offers the clearest demonstration of wave-particle duality:
As the theoretical physicist Richard Feynman once put it, “Any other situation in quantum mechanics, it turns out, can always be explained by saying, ‘You remember the case of the experiment with the two holes? It’s the same thing.’”
If a wave traveling on the surface of water strikes a barrier with two slits in it, two waves will emerge on the other side. Where the crests of those waves intersect, they form a larger wave; where a crest intersects with a trough, the fluid is still.
A bank of pressure sensors struck by the waves would register an “interference pattern” — a series of alternating strong and weak responses indicating where the waves reinforced or canceled each other.
Photons fired through a screen with two holes in it produce a similar interference pattern — even when they’re fired one at a time. That’s wave-particle duality: the mathematics of wave mechanics explains the statistical behavior of moving particles.
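The alternating bands follow from simple geometry: where the path difference between the two slits is a whole number of wavelengths the waves reinforce, and where it is an odd half-wavelength they cancel. A minimal sketch under the usual small-angle assumption (parameter values are illustrative):

```python
import math

def two_slit_intensity(x, slit_sep, wavelength, screen_dist, i0=1.0):
    """Idealized two-slit interference intensity at screen position x.
    Uses the small-angle path difference slit_sep * x / screen_dist."""
    path_diff = slit_sep * x / screen_dist
    phase = math.pi * path_diff / wavelength  # half the phase difference
    return 4 * i0 * math.cos(phase) ** 2
```

The central maximum at x = 0 is four times the single-slit intensity, and the first dark fringe falls where the path difference equals half a wavelength.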
Oil particle behaves like electron in quantum corral
In the experiments reported in PRE, the researchers mounted a shallow tray with a circular depression in it on a vibrating stand. They filled the tray with a silicone oil and began vibrating it at a rate just below that required to produce surface waves.
They then dropped a single droplet of the same oil into the bath. The droplet bounced up and down, producing waves that pushed it along the surface.
The waves generated by the bouncing droplet reflected off the corral walls, confining the droplet within the circle and interfering with each other to create complicated patterns. As the droplet bounced off the waves, its motion appeared to be entirely random, but over time, it proved to favor certain regions of the bath over others.
It was found most frequently near the center of the circle, then, with slowly diminishing frequency, in concentric rings whose distance from each other was determined by the wavelength of the pilot wave.
The statistical description of the droplet’s location is analogous to that of an electron confined to a circular quantum corral and has a similar, wavelike form.
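The concentric-ring statistics have the shape of a circular standing-wave mode; for a circular corral such a mode is commonly described by the zeroth-order Bessel function J0, with probability density roughly proportional to J0(kr)^2: highest at the center and falling off in rings spaced by the pilot-wave wavelength. The sketch below is a rough illustration of that profile only; no quantitative match to the paper's measured histogram is claimed:

```python
import math

def j0(x, terms=30):
    """Bessel function of the first kind, order zero, via its power series."""
    return sum((-1) ** m / math.factorial(m) ** 2 * (x / 2) ** (2 * m)
               for m in range(terms))

def radial_density(r, k):
    """Toy radial probability profile for a circular corral: J0(k*r)^2."""
    return j0(k * r) ** 2
```

The density peaks at r = 0 and first vanishes near kr ≈ 2.405, the first zero of J0, which sets the spacing of the rings.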
“It’s a great result,” says Paul Milewski, a math professor at the University of Bath, in England, who specializes in fluid mechanics. “Given the number of quantum-mechanical analogues of this mechanical system already shown, it’s not an enormous surprise that the corral experiment also behaves like quantum mechanics. But they’ve done an amazingly careful job, because it takes very accurate measurements over a very long time of this droplet bouncing to get this probability distribution.”
“If you have a system that is deterministic and is what we call in the business ‘chaotic,’ or sensitive to initial conditions, sensitive to perturbations, then it can behave probabilistically,” Milewski continues. “Experiments like this weren’t available to the giants of quantum mechanics. They also didn’t know anything about chaos.
“Suppose these guys — who were puzzled by why the world behaves in this strange probabilistic way — actually had access to experiments like this and had the knowledge of chaos, would they have come up with an equivalent, deterministic theory of quantum mechanics, which is not the current one? That’s what I find exciting from the quantum perspective.”
A substitute for traditional zircaloy fuel-rod cladding could greatly reduce the danger of hydrogen explosions.
Tests of the new cladding material, a ceramic compound called silicon carbide (SiC), are described in a series of papers appearing in the journal Nuclear Technology.
In the aftermath of Japan’s earthquake and tsunami in March 2011, the Fukushima Daiichi nuclear plant was initially driven into shutdown by the magnitude 9.0 quake; its emergency generators then failed because they were inundated by the tsunami.
Hydrogen explosions caused greatest damage
But the greatest damage to the complex, and the greatest release of radiation, may have been caused by explosions of hydrogen gas that built up inside some of the reactors.
That hydrogen buildup was the result of hot steam coming into contact with overheated nuclear fuel rods covered by a cladding of zirconium alloy, or “zircaloy” — the material used as fuel-rod cladding in all water-cooled nuclear reactors, which constitute more than 90 percent of the world’s power reactors.
When it gets hot enough, zircaloy reacts with steam to produce hydrogen, a hazard in any loss-of-coolant nuclear accident.
“We are looking at all sides of the issue, regarding replacing the metallic cladding with ceramic,” says Mujid Kazimi, the TEPCO Professor of Nuclear Engineering at MIT, who is senior author of the papers. Because of the harsh environment fuel rods are exposed to — heat, steam, and neutrons that emanate from nuclear reactions — extensive further testing will be needed on any new cladding for use in commercial reactors, Kazimi says.
SiC is “very promising, but not at the moment ready for adoption” by the nuclear industry, he adds.
Other groups have suggested the use of SiC for cladding, but the material had never been subjected to the detailed tests and simulations that the MIT team carried out.
Kazimi and his colleagues not only tested the material’s response under normal operating conditions, with temperatures of 300 degrees Celsius (572 degrees Fahrenheit), but also under the more extreme conditions of an accident, with temperatures up to 1500 C (2732 F).
Nuclear fuel rods are made of hundreds of small pellets of enriched uranium placed end-to-end inside hollow tubes of zircaloy that are about a half-inch across.
The tubes are filled with inert helium gas to improve the heat conduction from the pellets to cladding that is cooled by the water that circulates outside the tubes. These tubes are then packed together in bundles that are inserted into the reactor core, where they heat water to produce steam to drive a turbine generator to produce electricity.
To test SiC cladding under normal operating conditions, the MIT team used a three-layer cladding design that features a middle layer made of a composite of SiC fibers reinforced with more SiC. The tubes were tested inside MIT’s research reactor in special loops that replicate the coolant temperature and chemistry conditions in large power reactors.
The irradiation apparatus was designed by MIT research scientist David Carpenter and research engineer Gordon Kohse. The effects of irradiation were studied by graduate student John Stempien and others, working with Kazimi. The results showed good strength retention during mechanical testing, Stempien says.
Graduate student Youho Lee and research scientist Tom McKrell conducted high-temperature oxidation studies on SiC. Under the extreme conditions of an accident, the corrosion rate was 100 to 1,000 times less than that of zircaloy. While zircaloy loses strength as temperature increases — becoming 2 percent weaker for every 10 C increase in temperature and losing all strength at about 1300 C, Stempien says — the strength of the SiC ceramic remains essentially constant to temperatures well above 1500 C.
The potential advantages of SiC cladding extend beyond reducing the risks in an accident, Kazimi explains. Because SiC reacts slowly with water, even under normal conditions it degrades less and can remain in a reactor core longer. That could allow reactor operators to squeeze extra energy out of fuel rods before refueling: The rods are typically replaced after four or five years in a reactor, and degradation of the cladding is a major limitation on their longevity.
In addition, the ability to leave fuel rods in place longer would reduce the spent fuel produced by each reactor, resulting in less volume for disposal, Kazimi says.
There are still further tests to be done: In particular, while zircaloy tubes can have their ends capped by welding a metal disk onto each end, ceramic can’t be welded, so a suitable bonding agent will need to be found. “We need to join the ceramic to ceramic in a way that can withstand the conditions in the nuclear core,” Kazimi says. “That’s not as perfected a science as it is for metals.” Other details, such as the optimal thickness of the tubes for durability and for heat transfer, also need to be determined.
In addition, the material needs to be tested further to determine its response to various stresses. “The fracture behavior is different,” co-author Lee says. In particular, while metal deforms predictably under pressure, a ceramic tends to fracture in a way that is “more statistical,” he says: It can only be predicted as a statistical likelihood of certain failure modes.
Regis Matzie, a former vice president and chief technology officer at Westinghouse, says that while SiC cladding has been investigated previously, such research “has only increased in importance after the core-melting and hydrogen explosions at the Fukushima site.” The three-layer design developed by the MIT team “appears to be the most promising of the new reactor fuel materials being proposed and investigated,” he says, adding that this is “very important research to the eventual implementation of the new cladding material.”
Silk has walked straight off the runway and into the lab. According to a new study published in the Journal of Clinical Investigation, silk implants placed in the brain of laboratory animals and designed to release a specific chemical, adenosine, may help stop the progression of epilepsy.
The research was supported by the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB), which are part of the National Institutes of Health.
The epilepsies are a group of neurological disorders associated with recurring seizures that tend to become more frequent and severe over time. Adenosine decreases neuronal excitability and helps stop seizures. Earlier studies have suggested abnormally low levels of adenosine may be linked to epilepsy.
Rebecca L. Williams-Karnesky, Ph.D. and her colleagues from Legacy Research Institute, Portland, Ore., Oregon Health and Sciences University (OHSU), Portland, and Tufts University, Boston, looked at long-term effects of an adenosine-releasing silk-implant therapy in rats and examined the role of adenosine in causing epigenetic changes that may be associated with the development of epilepsy.
The investigators argue that adenosine’s beneficial effects are due to epigenetic modifications (chemical reactions that change the way genes are turned on or off without altering the DNA sequence itself). Specifically, these changes happen when a molecule known as a methyl group blocks a portion of DNA, affecting which genes are accessible and can be turned on. If methyl groups have been removed (demethylation), genes are more likely to turn on.
The results reported in the paper provided evidence that changing adenosine levels affects DNA methylation in the brain. Specifically, greater amounts of adenosine were associated with lower levels of DNA methylation. The investigators also demonstrated that rats induced to develop epilepsy have higher levels of methylated DNA. Of particular note, epileptic rat brains that had received the adenosine-releasing silk implants exhibited DNA methylation levels close to those of normal rat brains, and this significantly lessened the worsening of the epilepsy over time.
“We know that there are mutations that are associated with epilepsy. However, there are few people such as Dr. Detlev Boison who are doing this type of work, focusing not just on genetic mutations but how the genes are regulated,” said Vicky Whittemore, Ph.D., program director at NINDS.
One mechanism involved in a specific type of epilepsy is an increase in mossy fiber sprouting — the formation of new excitatory circuits in the part of the brain where seizures commonly originate. At the end of the experiment, animals that had been treated with the adenosine-releasing silk implant showed less sprouting than animals that were not given the drug. “Based on our findings that 10 days of adenosine delivery prevented the sprouting of mossy fibers for at least three months in rats, we predict that the benefits of our adenosine therapy may extend even longer. However, this assumption needs to be validated in long-term experiments that go beyond three months,” said Dr. Boison, senior author of the paper from Legacy Research Institute and OHSU.
The rats did not receive the implants until they had experienced a number of seizures. The researchers noted that many studies investigating anti-epileptic drugs often test the treatments too early. “If the therapy interferes with the trigger for epilepsy development then the trigger is weakened and subsequent epilepsy is less severe. However, this is not necessarily indicative of a stop in the progression of the disease,” said Dr. Boison. They found that the adenosine-releasing silk did not completely abolish seizures in their animal model but reduced them four-fold.
“To avoid interference with the epilepsy-triggering mechanisms, we waited until all animals developed an early stage of epilepsy. In this model, the disease is life-long: seizures become more frequent and worsen with time. Therefore, we challenged ourselves to attempt treatment at a stage where epilepsy had already been established,” Dr. Boison continued.
The findings show that the implants are safe to use in rats and suggest that they may one day be used in the clinic. “Adenosine-releasing silk is a biodegradable implant. The release of adenosine occurs for 10 days and then the silk will completely dissolve. This is an ideal set-up for a transient preventative treatment,” said Dr. Boison. “Clinical applications could be the prevention of epilepsy following head trauma or the prevention of seizures that often — in about 50 percent of patients — follow conventional epilepsy surgery. In this case, adenosine-releasing silk might be placed into the resection cavity in order to prevent future seizures.”
However, before the silk implants are ready for their close-up, future studies will need to determine their optimal use and safety in humans. According to Dr. Boison, “We need to look into the efficacy of different doses of adenosine, the duration of adenosine release, and various time points of intervention.”
Future studies also need to demonstrate how long the effects of the adenosine-releasing silk implant will last.
“This work is important because 25-30 percent of people with epilepsy do not have effective therapies. This research may help us to prevent epilepsy in people who suffer some event that places them at risk for the disorder, such as individuals who have experienced head trauma,” said Dr. Whittemore.
This study was supported by grants from NINDS (NS061844, NS070359), NIBIB (EB002520), and the U.S. Department of Defense (W81XWH-12-1-0283).
NINDS is the nation’s leading funder of research on the brain and nervous system. The NINDS mission is to reduce the burden of neurological disease, a burden borne by every age group, by every segment of society, by people all over the world.
“See-saw” molecule may offer clues to potential therapies in the long-term.
More than 11,000 Americans suffer spinal cord injuries each year, and since over a quarter of those injuries are due to falls, the number is likely to rise as the population ages.
The reason so many of those injuries are permanently disabling is that the human body lacks the capacity to regenerate nerve fibers. The best our bodies can do is route the surviving tissue around the injury site.
“It’s like a detour after an earthquake,” says Kuo-Fen Lee, the Salk Institute’s Helen McLoraine Chair in Molecular Neurobiology. “If the freeway is down, but you can still take the side-streets, traffic can still move. So your strategy has to be to find a way to preserve as much tissue as possible, to give yourself a chance for that rerouting.”
In an open access paper published in PLOS ONE, Lee and his colleagues describe how a protein named P45 may yield insight into a possible molecular mechanism to promote rerouting for spinal cord healing and functional recovery. Because injured mice can recover more fully than human beings, Lee sought the source of the difference. He discovered that P45 had a previously unknown neuroprotective effect.
“As a biochemist and neurobiologist, this discovery gives me hope that we can find a potential target molecule for drug treatments,” says Lee. “Nevertheless, I must caution that this is only the first step in knowing what to look for.”
In a human or a mouse, the success of an attempted rerouting after a spinal cord injury depends on how much healthy tissue is left. But wounds set off a cascade of reactions within cells, which if not stopped in time will result in more dead and dying tissue extending beyond the injury site. Nerve traction from the injury site leads to disconnection of the network required for normal sensory and motor functions. Lee found that P45 is the key factor determining whether the cascade continues on to its destructive end.
A complex of proteins, by sequentially interacting with each other, induces this cascade of cell death. Lee discovered that P45 is a natural antagonist to this process. Antagonists are molecules, some naturally occurring, some made in pharmaceutical laboratories, that work essentially like sticking gum in a lock. Because the antagonist is in place, no other molecule can get in. In this case, P45 prevents two other proteins in the death cascade from connecting, rendering their actions harmless and stopping cell death.
But there’s more to how P45 works that gives Lee hope that he may be on to a unique approach to finding new ways to treat spinal cord injuries. In other recent findings, which are being prepared for publication, his team saw P45 also yield positive effects, specifically the encouragement of healthy tissue growth. Thus, Lee concludes its real role may be as a sort of “see-saw” molecule that tips the balance in the cascade from negative to positive.
“The great thing about P45 is that it can both inhibit the negative by blocking the conformational change that would lead to more cell death, while promoting the positive (the survival and growth of tissue), thus making it easier to foster recovery following spinal cord injury,” Lee explains.
“If you can understand where you could tilt the balance of positive/negative signal, it would give you less damage while helping to promote healing,” says Lee. “It could be combinatorial: maybe one molecule can do both, or maybe it’s a combination of two molecules, one to negate, one to promote. The hope is if such a control switch could be found, more tissue could be preserved at the site of injury, thus increasing the chances that movement might someday be restored.”
The next step for Lee’s laboratory will be to seek a gene or process that works in a similar see-saw way in humans, or that can be made to work that way through therapeutic intervention. Still, Lee cautions, this remains a proof-of-concept experiment in mice. Even if such a mechanism were found in humans, clinical applications would be years away.
Other researchers on the study were Tsung-Chang Sung, Zhijiang Chen, Sandrine Thuret, Marçal Vilar, Fred H. Gage and Roland Riek of the Salk Institute.
This work was supported by the National Institutes of Health, the National Institute on Aging, MDA, the Clayton Foundation, the Paralyzed Veterans of America Spinal Cord Research Foundation, the Paralysis Project of America, the Christopher and Dana Reeve Foundation, the Ministerio de Economia y Competitividad and the Institute of Health Carlos III.
Study may advance fundamental understanding of how brain cells communicate.
Brain cells talk to each other in a variety of tones. Sometimes they speak loudly but other times struggle to be heard.
For many years scientists have asked why and how brain cells change tones so frequently. National Institutes of Health researchers showed that brief bursts of chemical energy coming from rapidly moving power plants, called mitochondria, may tune brain cell communication.
“We may have answered a long-standing, fundamental question about how brain cells communicate with each other in a variety of voice tones,” said Zu-Hang Sheng, Ph.D., a senior principal investigator and the chief of the Synaptic Functions Section at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS).
The network of nerve cells throughout the body typically controls thoughts, movements and senses by sending thousands of neurotransmitters, or brain chemicals, across communication points between the cells called synapses. Neurotransmitters are sent from tiny protrusions on nerve cells, called presynaptic boutons. Boutons are aligned, like beads on a string, along long, thin structures called axons. They help control the strength of the signals sent by regulating how much transmitter nerve cells release, and in what manner.
Mitochondria are known as the cell’s power plants because they use oxygen to convert many of the chemicals cells use as food into adenosine triphosphate (ATP), the main energy source that powers cells. This energy is essential for nerve cell survival and communication. Previous studies showed that mitochondria can rapidly move along axons, dancing from one bouton to another.
In this study, published in Cell Reports, Dr. Sheng and his colleagues show that these moving power plants may control the strength of the signals sent from boutons.
“This is the first demonstration that links the movement of mitochondria along axons to a wide variety of nerve cell signals sent during synaptic transmission,” said Dr. Sheng.
The researchers used advanced microscopic techniques to watch mitochondria move among boutons while they released neurotransmitters. They found that boutons sent consistent signals when mitochondria were nearby.
“It’s as if the presence of mitochondria causes a bouton to talk in a monotone voice,” said Tao Sun, Ph.D., a researcher in Dr. Sheng’s laboratory and the first author of the study.
Surprisingly, when mitochondria were absent or moving away from boutons, the signal strength fluctuated. The results suggested that stationary power plants at synapses stabilize the strength of nerve signals.
To test this idea further, the researchers manipulated mitochondrial movement in axons by changing levels of syntaphilin, a protein that helps anchor mitochondria to the nerve cell’s internal skeleton inside axons. Removing syntaphilin produced faster-moving mitochondria, and electrical recordings from these neurons showed that the signals they sent fluctuated greatly. Conversely, elevating syntaphilin levels in nerve cells arrested mitochondrial movement and resulted in boutons that spoke in monotones, sending signals of the same strength.
“It’s known that about one third of all mitochondria in axons move. Our results show that brain cell communication is tightly controlled by highly dynamic events occurring at numerous tiny cell-to-cell connection points,” said Dr. Sheng.
In separate experiments the researchers watched ATP energy levels in these tiny boutons as they sent nerve messages.
“The levels fluctuated more in boutons that did not have mitochondria nearby,” said Dr. Sun.
The researchers also found that blocking ATP production in mitochondria with the drug oligomycin reduced the size of the signals boutons sent even if a mitochondrial power plant was nearby.
“Our results suggest that local ATP production by nearby mitochondria is critical for consistent neurotransmitter release,” said Dr. Sheng. “It appears that variability in synaptic transmission is controlled by rapidly moving mitochondria which provide brief bursts of energy to the boutons they pass through.”
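The mechanism described above can be illustrated with a toy numerical sketch. This is purely an illustrative model written for this article, not code or parameters from the study: a bouton spends ATP on each release event, a nearby stationary mitochondrion refills the ATP pool quickly and reliably, and a bouton without one refills slowly and erratically, so its release strength fluctuates.

```python
import random

def simulate_bouton(mito_present, n_events=2000, seed=0):
    """Toy model (illustrative only): track a bouton's normalized ATP level
    across release events. A nearby mitochondrion refills ATP almost fully
    between events; without one, refill is slow and noisy."""
    rng = random.Random(seed)
    atp = 1.0  # normalized ATP level
    strengths = []
    for _ in range(n_events):
        # Refill toward full: fast and steady with a mitochondrion,
        # slow and random without one.
        if mito_present:
            atp += 0.9 * (1.0 - atp)
        else:
            atp += rng.uniform(0.0, 0.4) * (1.0 - atp)
        strengths.append(atp)  # release strength tracks available ATP
        atp *= 0.5             # each release event consumes ATP
    return strengths

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

stable = simulate_bouton(mito_present=True)
mobile = simulate_bouton(mito_present=False)
```

In this sketch, release strength in the bouton with a stationary mitochondrion settles to a near-constant value, while the bouton without one shows much larger event-to-event variance, loosely mirroring the "monotone" versus fluctuating signals the researchers observed.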
Problems with mitochondrial energy production and movement throughout nerve cells have been implicated in Alzheimer’s disease, Parkinson’s disease, amyotrophic lateral sclerosis, and other major neurodegenerative disorders. Dr. Sheng thinks these results will ultimately help scientists understand how these problems can lead to disorders in brain cell communication.
“Our findings reveal the cellular mechanisms that tune brain communication by regulating mitochondrial mobility, thus advancing our understanding of human neurological disorders,” said Dr. Sheng.
This study was funded by the NINDS’ Division of Intramural Research.