New solar steam technology developed at Rice University uses nanoparticles so effective at turning sunlight into heat that it can produce steam from icy-cold water. (Credit: Jeff Fitlow/Rice University)
Rice University scientists have unveiled a revolutionary new technology that uses nanoparticles to convert solar energy directly into steam. The new “solar steam” method from Rice’s Laboratory for Nanophotonics (LANP) is so effective it can even produce steam from icy cold water.
Details of the solar steam method were published online today in ACS Nano. The technology has an overall energy efficiency of 24 percent. Photovoltaic solar panels, by comparison, typically have an overall energy efficiency around 15 percent. However, the inventors of solar steam said they expect the first uses of the new technology will not be for electricity generation but rather for sanitation and water purification in developing countries.
“This is about a lot more than electricity,” said LANP Director Naomi Halas, the lead scientist on the project. “With this technology, we are beginning to think about solar thermal power in a completely different way.”
The efficiency of solar steam is due to the light-capturing nanoparticles that convert sunlight into heat. When submerged in water and exposed to sunlight, the particles heat up so quickly they instantly vaporize water and create steam. Halas said the solar steam’s overall energy efficiency can probably be increased as the technology is refined.
“We’re going from heating water on the macro scale to heating it at the nanoscale,” Halas said. “Our particles are very small — even smaller than a wavelength of light — which means they have an extremely small surface area to dissipate heat. This intense heating allows us to generate steam locally, right at the surface of the particle, and the idea of generating steam locally is really counterintuitive.”
To show just how counterintuitive, Rice graduate student Oara Neumann videotaped a solar steam demonstration in which a test tube of water containing light-activated nanoparticles was submerged into a bath of ice water. Using a lens to concentrate sunlight onto the near-freezing mixture in the tube, Neumann showed she could create steam from nearly frozen water.
Steam is one of the world’s most-used industrial fluids. About 90 percent of electricity is produced from steam, and steam is also used to sterilize medical waste and surgical instruments, to prepare food and to purify water.
Most industrial steam is produced in large boilers, and Halas said solar steam’s efficiency could allow steam to become economical on a much smaller scale.
People in developing countries will be among the first to see the benefits of solar steam. Rice engineering undergraduates have already created a solar steam-powered autoclave that’s capable of sterilizing medical and dental instruments at clinics that lack electricity. Halas also won a Grand Challenges grant from the Bill and Melinda Gates Foundation to create an ultra-small-scale system for treating human waste in areas without sewer systems or electricity.
“Solar steam is remarkable because of its efficiency,” said Neumann, the lead co-author on the paper. “It does not require acres of mirrors or solar panels. In fact, the footprint can be very small. For example, the light window in our demonstration autoclave was just a few square centimeters.”
Another potential use could be in powering hybrid air-conditioning and heating systems that run off of sunlight during the day and electricity at night. Halas, Neumann and colleagues have also conducted distillation experiments and found that solar steam is about two-and-a-half times more efficient than existing distillation columns.
Halas, the Stanley C. Moore Professor in Electrical and Computer Engineering, professor of physics, professor of chemistry and professor of biomedical engineering, is one of the world’s most-cited chemists. Her lab specializes in creating and studying light-activated particles. One of her creations, gold nanoshells, is the subject of several clinical trials for cancer treatment.
For the cancer treatment technology and many other applications, Halas’ team chooses particles that interact with just a few wavelengths of light. For the solar steam project, Halas and Neumann set out to design a particle that would interact with the widest possible spectrum of sunlight energy. Their new nanoparticles are activated by both visible sunlight and shorter wavelengths that humans cannot see.
“We’re not changing any of the laws of thermodynamics,” Halas said. “We’re just boiling water in a radically different way.”
The structure of the universe and the laws that govern its growth may be more similar than previously thought to the structure and growth of the human brain and other complex networks, such as the Internet or a social network of trust relationships between people, according to a new paper published in the Nature journal Scientific Reports.
“By no means do we claim that the universe is a global brain or a computer,” said Dmitri Krioukov, co-author of the paper and a researcher at the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. “But the discovered equivalence between the growth of the universe and complex networks strongly suggests that unexpectedly similar laws govern the dynamics of these very different complex systems.”
Having the ability to predict – let alone trying to control – the dynamics of complex networks remains a central challenge throughout network science. Structural and dynamical similarities among different real networks suggest that some universal laws might be in action, although the nature and common origin of such laws remain elusive.
By performing complex supercomputer simulations of the universe and using a variety of other calculations, researchers have now proven that the causal network representing the large-scale structure of space and time in our accelerating universe is a graph that shows remarkable similarity to many complex networks such as the Internet, social, or even biological networks.
“These findings have key implications for both network science and cosmology,” noted Krioukov. “We discovered that the large-scale growth dynamics of complex networks and causal networks are asymptotically (at large times) the same, explaining the structural similarity between these networks.”
“This is a perfect example of interdisciplinary research combining math, physics, and computer science in totally unexpected ways,” said SDSC Director Michael Norman. “Who would have guessed that the emergence of our universe’s four-dimensional spacetime from the quantum vacuum would have anything to do with the growth of the Internet? Causality is at the heart of both, so perhaps the similarity Krioukov and his collaborators found is to be expected.”
Of course the network representing the structure of the universe is astronomically huge – in fact it can be infinite. But even if it is finite, researchers’ best guess is that it is no smaller than 10^250 atoms of space and time. (That’s the digit 1 followed by 250 zeros.) For comparison, the number of water molecules in all the oceans in the world has been estimated to be 4.4 x 10^46.
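As a quick sanity check on those magnitudes (a throwaway Python sketch using only the two figures quoted above), the universe’s causal network outstrips the oceans’ water-molecule count by roughly 200 orders of magnitude:

```python
import math

# Figures quoted in the article above.
atoms_of_spacetime = 10**250      # lower bound on the universe's causal network
ocean_water_molecules = 4.4e46    # estimated water molecules in all oceans

# How many orders of magnitude larger is the causal network?
orders_larger = math.log10(atoms_of_spacetime) - math.log10(ocean_water_molecules)
print(round(orders_larger))  # -> 203
```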
Yet the researchers found a way to downscale this humongous network while preserving its vital properties, by proving mathematically that these properties do not depend on the network size in a certain range of parameters, such as the curvature and age of our universe.
After the downscaling, the research team turned to Trestles, one of SDSC’s data-intensive supercomputers, to perform simulations of the universe’s growing causal network. By parallelizing and optimizing the application, Robert Sinkovits, a computational scientist with SDSC, was able to complete in just over one day a computation that was originally projected to require three to four years.
“In addition to being able to complete these simulations much faster than previously ever imagined, the results perfectly matched the theoretical predictions of the researchers,” said Sinkovits.
“The most frequent question that people may ask is whether the discovered asymptotic equivalence between complex networks and the universe could be a coincidence,” said Krioukov. “Of course it could be, but the probability of such a coincidence is extremely low. Coincidences in physics are extremely rare, and almost never happen. There is always an explanation, which may not be immediately obvious.”
“Such an explanation could one day lead to a discovery of common fundamental laws whose two different consequences or limiting regimes are the laws of gravity (Einstein’s equations in general relativity) describing the dynamics of the universe, and some yet-unknown equations describing the dynamics of complex networks,” added Marián Boguñá, a member of the research team from the Departament de Física Fonamental at the Universitat de Barcelona, Spain.
The other researchers who worked on this project are Maksim Kitsak, CAIDA/SDSC/UC San Diego; and David Rideout and David Meyer, Department of Mathematics at UC San Diego.
This research was supported by multiple grants, including Defense Advanced Research Projects Agency (DARPA) grant number HR0011-12-1-0012; NSF grants number CNS-0964236 and CNS-1039646; Cisco Systems; Foundational Questions Institute grant number FQXi-RFP3-1018; George W. and Carol A. Lattimer Campus Professorship at UC San Diego; Office of the Ministry of Economy and Competitiveness, Spain (MICINN) project number FIS2010-21781-C02-02; Generalitat de Catalunya grant number 2009SGR838; and by the Catalan Institution for Research and Advanced Studies (ICREA) Academia Prize funded by the Generalitat de Catalunya, Spain.
The adverse side effects of certain hepatitis C medications can now be replicated and observed in Petri dishes and test tubes, thanks to a research team led by Craig Cameron, the Paul Berg Professor of Biochemistry and Molecular Biology at Penn State. “The new method not only will help us to understand the recent failures of hepatitis C antiviral drugs in some patients in clinical trials,” said Cameron. “It also could help to identify medications that eliminate all adverse effects.” The team’s findings, published in the current issue of the journal PLOS Pathogens, may help pave the way toward the development of safer and more-effective treatments for hepatitis C, as well as for other pathogens such as the SARS coronavirus and West Nile virus.
First author Jamie Arnold, a research associate in Cameron’s lab at Penn State, explained that the hepatitis C virus (HCV), which affects more than 170,000,000 people worldwide, is the leading cause of liver disease and, although antiviral treatments are effective in many patients, they cause serious side effects in others. “Many antiviral medications for treating HCV are chemical analogs for the building blocks of RNA that are used to assemble new copies of the virus’ genome, enabling it to replicate,” he said. “These medications are close enough to the virus’ natural building blocks that they get incorporated into the virus’ genome. But they also are different in ways that lead to the virus’s incomplete replication. The problem, however, is that the medication not only mimics the virus’s genetic material but also the genetic material of the patient. So, while the drug causes damage to the virus, it also may affect the patient’s own healthy tissues.”
A method to reveal these adverse side effects in the safety of a laboratory setting, rather than in clinical trials where patients may be placed at risk, has been developed by the research team, which includes Cameron; Arnold; Suresh Sharma, a research associate in Cameron’s lab; other scientists at Penn State; and researchers from other academic, government and corporate labs. “We have taken anti-HCV medications and, in Petri dishes and test tubes, we have shown that these drugs affect functions within a cell’s mitochondria,” Cameron explained. “The cellular mitochondria — a tiny structure known as ‘the powerhouse of the cell’ that is responsible for making energy known as ATP — is affected by these compounds and is likely a major reason why we see adverse effects.” Cameron noted that scientists have known for some time that certain individuals have “sick” mitochondria. Such individuals are likely more sensitive to the mitochondrial side effects of antiviral drugs.
“We know that antiviral drugs, including the ones used to treat HCV, affect even normal, healthy mitochondria by slowing ATP output,” Arnold added. “While a person with normal mitochondria will experience some ATP and mitochondrial effects, a person who is already predisposed to mitochondrial dysfunction will be pushed over the ‘not enough cellular energy’ threshold by the antiviral drug. The person’s mitochondria simply won’t be able to keep up.”
One of the problems with clinical trials, Arnold explained, is that a drug may be shown to be quite effective but, if even a minuscule percentage of patients have side effects, the U.S. Food and Drug Administration is obligated to put the trial on hold or stop it altogether. This possibility makes drug companies reluctant to invest money in drug trials after an adverse event has been observed, even when the drugs could still help millions of people. The researchers hope that their methods eventually will become a part of the pre-clinical development process for this class of antiviral drugs. “If we can show, in the lab, that a drug will cause side effects, then these compounds will not enter lengthy, expensive clinical trials and cause harm to patients,” he said. “What’s more, a drug company can invest its money more wisely and carefully in drug research that will produce safe and effective products. Better and more-willing investments by drug companies ultimately will help patients because resources will be spent developing drugs that not only work, but that are safe for all patients.”
Cameron added that the next step for his team is to identify the genes that make some individuals respond poorly to these particular antiviral treatments. “By taking blood samples from various patients and using the new method to test for toxicity in the different samples, we hope to discover which individuals will respond well and which will experience mitochondrial reactions, based on their genetic profiles,” he said. “That is, we hope to use this method as a step toward truly personalized medicine, opening the door to pre-screening of patients so that those with mitochondrial diseases can be treated with different regimens from the start.”
The team members also hope their method will be a means to study toxicity and side effects in other diseases. “Specifically, our technology will illuminate toxicity of a particular class of compounds that interrupts viral RNA synthesis,” Cameron said. “While this class of compounds currently is being developed for treatment of HCV, a wide range of other RNA viruses, including West Nile virus, Dengue virus, SARS coronavirus, and perhaps even the Ebola virus, could be treated using this class of compounds as well.”
In addition to Cameron, Arnold and Sharma, other researchers who contributed to this study include Eric D. Smidansky from Penn State; Joy Y. Feng, Adrian S. Ray, Aesop Cho, Jason Perry, Jennifer E. Vela, Yeojin Park, Yili Xu, Yang Tian, Darius Babusis, Ona Barauskus and Weidong Zhong from Gilead Sciences Inc.; Maria L. Kireeva and Mikhail Kashlev from the Frederick National Laboratory for Cancer Research; Blake R. Peterson from the University of Kansas; and Averell Gnatt from the University of Maryland School of Medicine.
The research was funded by the National Institutes of Health and a Penn State Paul Berg Endowment.
Low antioxidant levels contribute to increased blood pressure during exercise for people with peripheral arterial disease, according to researchers at Penn State Hershey Heart and Vascular Institute.
Peripheral arterial disease, or PAD, affects an estimated 10 million Americans and increases the chance of death from a cardiovascular event. Reduced blood flow causes pain in the legs and increases blood pressure in people who have PAD. However, the causes of the disease are unknown.
“Past studies have shown that having low antioxidant levels and increased reactive oxygen species — chemical products that bind to body cells and cause damage — is related to more severe PAD,” said Matthew Muller, postdoctoral fellow in Larry Sinoway’s lab at Penn State College of Medicine, and lead author of the study.
Antioxidants prevent the reactive oxygen species from damaging cells.
“This study shows that blood pressure increases more with exercise in more severe PAD cases. By infusing the antioxidant vitamin C into the blood, we were able to lessen the increase in blood pressure during exercise,” said Muller.
Vitamin C does not, however, reduce the blood pressure increase of PAD patients all the way to the level seen in healthy people. And as the intensity of exercise increases, the effects of vitamin C diminish, though they are still seen. The researchers report their findings in the Journal of Physiology.
Penn State Hershey researchers looked at three groups of PAD patients to study the blood pressure increase. A group of 13 PAD patients was compared to people without PAD to see the effects of doing low-intensity exercise on blood pressure. From that group, a second group of nine patients was used to measure the effects of vitamin C. A third group of five PAD patients and five without PAD had their leg muscles electrically stimulated to remove the brain’s role in raising blood pressure during muscle contraction in this disease.
Increased blood pressure during exercise occurs in both legs, before pain begins, and relates to the severity of the disease. By using electrical stimulation, the scientists show that the blood pressure increase comes from the muscle itself, since the brain is not telling the leg to contract and the pressure still increases.
“This indicates that during normal, everyday activities such as walking, an impaired antioxidant system — as well as other factors — plays a role in the increased blood pressure response to exercise,” Muller said. “Therefore, supplementing the diet with antioxidants may help these patients, but more studies are needed to confirm this concept.”
Other researchers are Rachel C. Drew, postdoctoral fellow; Cheryl A. Blaha, research coordinator; Jessica L. Mast, research coordinator; Jian Cui, associate professor of medicine; and Amy B. Reed, associate professor of surgery, all of Penn State College of Medicine.
The study was funded by the National Institutes of Health.
An ingredient in green tea that helps reduce blood sugar spikes in mice may lead to new diet strategies for people, according to Penn State food scientists.
Mice fed an antioxidant found in green tea — epigallocatechin-3-gallate, or EGCG — along with corn starch showed a significantly smaller rise in their blood sugar — blood glucose — levels than mice that were not fed the compound, according to Joshua Lambert, assistant professor of food science in agricultural sciences.
“The spike in blood glucose level is about 50 percent lower than the increase in the blood glucose level of mice that were not fed EGCG,” Lambert said.
The dose of EGCG fed to the mice was equivalent to about one and a half cups of green tea for a human.
Lambert, who worked with Sarah C. Forester, postdoctoral fellow, and Yeyi Gu, graduate student, both in food science, said EGCG was most effective when the compound was fed to the mice simultaneously with corn starch. For humans, this may mean that green tea could help them control the typical blood sugar increases that are brought on when they eat starchy foods, like breads and bagels that are often a part of typical breakfasts.
“If what you are eating with your tea has starch in it then you might see that beneficial effect,” Lambert said. “So, for example, if you have green tea with your bagel for breakfast, it may reduce the spike in blood glucose levels that you would normally get from that food.”
The EGCG had no significant effect on blood sugar spikes in mice that were fed glucose or maltose, according to the researchers who released their findings in the online version of Molecular Nutrition and Food Research. Lambert said that the reason blood sugar spikes are reduced when the mice ate starch, but not these sugars, may be related to the way the body converts starch into sugar.
An enzyme called alpha-amylase, which is produced both in the mouth and by the pancreas, helps break down starch into maltose and glucose. EGCG may inhibit the enzyme’s ability to break down the starch, the researchers indicated, since they also found that EGCG reduced the activity of alpha-amylase in the pancreas by 34 percent.
If the mechanism holds in humans, this may mean that people who want to limit the blood sugar spike should skip adding sugar to their cup of green tea.
“That may mean that if you add sugar into your green tea, that might negate the effect that the green tea will have on limiting the rise in blood glucose level,” Lambert said.
Lambert added that the green tea and the starch would need to be consumed simultaneously. For example, drinking a cup of tea well after eating a piece of toast would probably not change the blood sugar spike.
For the study, researchers separated mice into several groups based on body weight. After a fasting period, the mice were given common corn starch, maltose, or sucrose. One group of mice received EGCG along with the feed, while a control group was not fed the compound.
The researchers then tested the blood sugar levels of both groups.
Lambert said the researchers’ next step is to test the compound on people.
“The relatively low effective dose of EGCG makes a compelling case for studies in human subjects,” the researchers said.
With its deeply embedded roots, sturdy trunk and dense profusion of branches, the Tree of Life is a structure of nearly unfathomable complexity and beauty. While major strides have been made to establish the evolutionary hierarchy encompassing every living species, the project is still in its infancy.
At Arizona State University’s Biodesign Institute, Sudhir Kumar has been filling in the Tree of Life by developing sophisticated methods and bioinformatics tools. His latest research, which appeared in the advance online edition of the Proceedings of the National Academy of Sciences, will uniquely enable scientists to analyze very large datasets to assign times to the multitude of branching points (nodes) on the tree, each representing a point of species divergence from a common ancestor. The new method differs significantly from currently used techniques and delivers results of equal or greater accuracy at speeds 1,000 times faster or more.
For the proper study of evolutionary history, two components are key: the relationships between organisms (known as phylogeny) and their times of divergence. As Kumar explains, the powerful technique for estimating the time of divergence between species was initially realized over four decades ago, when the concept of molecular clocks was introduced. Initially the idea rested on the assumption that alterations in either the amino acid sequences of proteins or the nucleotide sequences of DNA between various species accumulate at a uniform rate over time and can be used to evaluate divergence times. The resulting phylogenetic structure is known as a “TimeTree,” that is, a tree of life scaled to time.
Prior to the use of molecular clocks, morphological changes between species were the primary means of identifying divergence times. Since then, molecular clocks have proved a vital tool for evolutionary biologists, supplementing the fossil record and providing a powerful means to time the divergence of species.
But there is a complication. The rate of change measured by molecular clocks can vary—sometimes radically—between groups of species. Rather than an ordered world running on a universal clock time, the Tree of Life is more like an antiques shop where clocks run at different speeds in different species.
Many approaches for dealing with this conundrum have been applied successfully, but their complexity rises exponentially with the number of species involved. Often such calculations swallow vast amounts of computing time, even for data sets of modest size.
By contrast, the new simplified method (known as RelTime) produces rapid results. Its main purpose is to estimate relative times of divergence. This avoids the need to use the fossil record, which is otherwise required in order to obtain absolute times.
“If, for example, we can establish that human and chimp divergence is 5 times younger than the human and monkey divergence, that would be very useful,” Kumar says. “What our method can do is to generate such relative time information for every divergence in the Tree of Life — without using the fossil record or other complicated model parameters.” Once relative times for all the nodes on the tree of life are established, fossil calibration points for which a high degree of confidence exists can be applied post hoc to add the absolute time dimension.
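That post hoc calibration step can be sketched in a few lines of Python. The node names and numbers below are hypothetical placeholders, not values from the paper; the point is only that a single well-dated fossil node converts every relative divergence time to an absolute one:

```python
# Hypothetical relative divergence times, as a RelTime-style analysis
# might produce them (the youngest split is taken as the unit of time).
relative_times = {
    "human-chimp": 1.0,
    "human-monkey": 5.0,   # "5 times older" than the human-chimp split
    "human-mouse": 13.0,
}

# One trusted fossil calibration, applied post hoc (hypothetical value):
# suppose the human-monkey split is dated to 30 million years ago.
calibration_node = "human-monkey"
calibration_mya = 30.0

# Million years per unit of relative time.
scale = calibration_mya / relative_times[calibration_node]

# Scaling every node by the same factor yields absolute dates.
absolute_times = {node: t * scale for node, t in relative_times.items()}
for node, mya in sorted(absolute_times.items(), key=lambda kv: kv[1]):
    print(f"{node}: {mya:.1f} Mya")
```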
Kumar points out that rapid DNA sequencing has allowed huge datasets of comparative molecular sequences to be generated. Analyzing even a few hundred sequences with current methods, however, can severely strain computer resources, and the far more massive data sets now being generated cannot be handled in reasonable time, so a fresh approach was needed.
Using RelTime and restricting the analysis to relative divergence times produces results for large phylogenetic trees in hours rather than days. It can also deliver better accuracy, particularly when datasets are enormous and species of interest are from vastly different groups.
“The uses of such a technique are limited only by one’s imagination. They can be used to estimate the origin of familiar species, the emergence of human pathogens, and so forth,” Kumar says. “The method is applicable wherever you work with sequences and trees.”
RelTime may also help sort out troubling disparities between divergence times based on the fossil record versus those established through the use of molecular data. Examples of dramatic discrepancies between fossils and sequence change measurements have provoked spirited debate, particularly concerning the adaptive radiation of mammals posited to have occurred at the time of dinosaur extinction some 65 million years ago and the divergence of specific animal phyla believed to date to the beginning of the Cambrian period (~500–600 Mya). In both cases, for example, the molecular dates are about 50 percent older than fossil dates.
The ongoing Timetree of Life project will have important ramifications for many fields of research, providing deep insights into comparative biology, as well as generating data of relevance for paleontologists, geologists, geochemists, and climatologists. Establishing a comparative biological timeline synchronized with Earth history will enable scientists working in diverse areas to explore the long-term development of the biosphere and investigate the evolutionary underpinnings of all life.
Every six seconds, for millions of years, comets have been colliding with one another near a star in the constellation Cetus called 49 CETI, which is visible to the naked eye.
Over the past three decades, astronomers have discovered hundreds of dusty disks around stars, but only two — 49 CETI is one — have been found that also have large amounts of gas orbiting them.
Young stars, about a million years old, have a disk of both dust and gas orbiting them, but the gas tends to dissipate within a few million years and almost always within about 10 million years. Yet 49 CETI, which is thought to be considerably older, is still being orbited by a tremendous quantity of gas in the form of carbon monoxide molecules, long after that gas should have dissipated.
“We now believe that 49 CETI is 40 million years old, and the mystery is how in the world can there be this much gas around an otherwise ordinary star that is this old,” said Benjamin Zuckerman, a UCLA professor of physics and astronomy and co-author of the research, which was recently published in the Astrophysical Journal. “This is the oldest star we know of with so much gas.”
Zuckerman and his co-author Inseok Song, a University of Georgia assistant professor of physics and astronomy, propose that the mysterious gas comes from a very massive disk-shaped region around 49 CETI that is similar to the sun’s Kuiper Belt, which lies beyond the orbit of Neptune.
The total mass of the various objects that make up the Kuiper Belt, including the dwarf planet Pluto, is about one-tenth the mass of the Earth. But back when the Earth was forming, astronomers say, the Kuiper Belt likely had a mass that was approximately 40 times larger than the Earth’s; most of that initial mass has been lost in the last 4.5 billion years.
By contrast, the Kuiper Belt analogue that orbits around 49 CETI now has a mass of about 400 Earth masses — 4,000 times the current mass of the Kuiper Belt.
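The mass ratios quoted above are easy to verify with a throwaway Python check, using only the article’s own figures (all in Earth masses):

```python
# Masses in Earth masses, as quoted in the article.
kuiper_belt_today = 0.1      # current Kuiper Belt: ~1/10 of an Earth mass
kuiper_belt_early = 40.0     # Kuiper Belt when the Earth was forming
ceti_disk = 400.0            # Kuiper Belt analogue around 49 CETI

# 49 CETI's disk versus today's Kuiper Belt: the "4,000 times" figure.
print(round(ceti_disk / kuiper_belt_today))          # -> 4000

# The early Kuiper Belt was ~400 times more massive than it is today.
print(round(kuiper_belt_early / kuiper_belt_today))  # -> 400
```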
“Hundreds of trillions of comets orbit around 49 CETI and one other star whose age is about 30 million years. Imagine so many trillions of comets, each the size of the UCLA campus — approximately 1 mile in diameter — orbiting around 49 CETI and bashing into one another,” Zuckerman said. “These young comets likely contain more carbon monoxide than typical comets in our solar system. When they collide, the carbon monoxide escapes as a gas. The gas seen around these two stars is the result of the incredible number of collisions among these comets.
“We calculate that comets collide around these two stars about every six seconds,” he said. “I was absolutely amazed when we calculated this rapid rate. I would not have dreamt it in a million years. We think these collisions have been occurring for 10 million years or so.”
Using a radio telescope in the Sierra Nevada mountains of southern Spain in 1995, Zuckerman and two colleagues discovered the gas that orbits 49 CETI, but the origin of the gas had remained unexplained for 17 years, until now.
In a new study appearing this month in the Journal of Neuroscience, researchers have unlocked the complex cellular mechanics that instruct specific brain cells to continue to divide. This discovery overcomes a significant technical hurdle to potential human stem cell therapies, ensuring that an abundant supply of cells is available to study and, ultimately, to treat people with disease.
“One of the major factors that will determine the viability of stem cell therapies is access to a safe and reliable supply of cells,” said University of Rochester Medical Center (URMC) neurologist Steve Goldman, M.D., Ph.D., lead author of the study. “This study demonstrates that – in the case of certain populations of brain cells – we now understand the cell biology and the mechanisms necessary to control cell division and generate an almost endless supply of cells.”
The study focuses on cells called glial progenitor cells (GPCs) that are found in the white matter of the human brain. These stem cells give rise to two cells found in the central nervous system: oligodendrocytes, which produce myelin, the fatty tissue that insulates the connections between cells; and astrocytes, cells that are critical to the health and signaling function of oligodendrocytes as well as neurons.
Damage to myelin lies at the root of a long list of diseases, such as multiple sclerosis, cerebral palsy, and a family of deadly childhood diseases called pediatric leukodystrophies. The scientific community believes that regenerative medicine – in the form of cell transplantation – holds great promise for treating myelin disorders. Goldman and his colleagues, for example, have demonstrated in numerous animal model studies that transplanted GPCs can proliferate in the brain and repair damaged myelin.
However, one of the barriers to moving forward with human treatments for myelin disease has been the difficulty of creating a plentiful supply of the necessary cells, in this case GPCs. Scientists have been successful at getting these cells to divide and multiply in the lab, but only for limited periods of time, yielding only limited numbers of usable cells.
“After a period of time, the cells stop dividing or, more typically, begin to specialize and form astrocytes, which are not useful for myelin repair,” said Goldman. “These cells could go either way but they essentially choose the wrong direction.”
Overcoming this problem required that Goldman’s lab master the precise chemical symphony that occurs within stem cells, and which instructs them when to divide and multiply, and when to stop this process and become oligodendrocytes and astrocytes.
One of the key players in cell division is a protein called beta-catenin, which is regulated by another protein in the cell called glycogen synthase kinase 3 beta (GSK3B). GSK3B alters beta-catenin by adding a phosphate group to its structure, essentially giving it a barcode that the cell then uses to sort the protein and send it off to be destroyed. During development, when cell division is necessary, this process is interrupted by another signal that blocks GSK3B. When this occurs, the beta-catenin protein is spared destruction and eventually makes its way to the cell’s nucleus, where it starts a chemical chain reaction that ultimately instructs the cell to divide.

However, after a period of time this process slows and, instead of replicating, the cells begin to commit to becoming one type or another. The challenge for scientists was to find a way to essentially trick these cells into continuing to divide, and to do so without risking the uncontrolled growth that could otherwise result in tumor formation.
The new discovery hinges on a receptor called protein tyrosine phosphatase beta/zeta (PTPRZ1). Goldman and his team long suspected that PTPRZ1 played an important role in cell division; the receptor shows up prominently in molecular profiles of GPCs. After a six-year effort to discern the receptor’s function, they found that it works in concert with GSK3B and helps “label” the beta-catenin protein for either destruction or nuclear activity. The breakthrough was the identification of a molecule – called pleiotrophin – that essentially blocks the function of the PTPRZ1 receptor. They found that by regulating the levels of pleiotrophin, they were able to essentially “short circuit” PTPRZ1’s normal influence on cell division, allowing the cells to continue dividing.
While the experiments were performed on cells derived from human brain tissue, the authors contend that the same process could also be applied to GPCs derived from embryos or from “reprogrammed” skin cells. This would greatly expand the number of cells potentially derived from single patient samples, whether for transplantation back to those same individuals or for use in other patients.
Additional authors on the paper include its first author, URMC graduate student Crystal McClain, Ph.D., and Fraser Sim, Ph.D., a member of Goldman’s lab and now an assistant professor at the University at Buffalo. The study was supported by the National Institute of Neurological Disorders and Stroke, the Department of Defense, the Adelson Medical Research Foundation, and the National Multiple Sclerosis Society.
Predicting how atherosclerosis, osteoporosis or cancer will progress or respond to drugs in individual patients is difficult. In a new study, researchers took another step toward that goal by developing a technique able to predict from a blood sample the amount of cathepsins—protein-degrading enzymes known to accelerate these diseases—a specific person would produce.
This patient-specific information may be helpful in developing personalized approaches to treat these tissue-destructive diseases.
“We measured significant variability in the amount of cathepsins produced by blood samples we collected from healthy individuals, which may indicate that a one-size-fits-all approach of administering cathepsin inhibitors may not be the best strategy for all patients with these conditions,” said Manu Platt, an assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
The study was published online on Oct. 19, 2012 in the journal Integrative Biology. This work was supported by the National Institutes of Health, Georgia Cancer Coalition, Atlanta Clinical and Translational Science Institute, and the Emory/Georgia Tech Regenerative Engineering and Medicine Center.
Platt and graduate student Keon-Young Park collected blood samples from 14 healthy individuals, removed white blood cells called monocytes from the samples and stimulated those cells with certain molecules so that they would become macrophages or osteoclasts in the laboratory. By doing this, the researchers recreated what happens in the body—monocytes receive these cues from damaged tissue, leave the blood, and become macrophages or osteoclasts, which are known to contribute to tissue changes that occur in atherosclerosis, cancer and osteoporosis.
Then the researchers developed a model that used patient-varying kinase signals collected from the macrophages or osteoclasts to predict patient-specific activity of four cathepsins: K, L, S and V.
“Kinases are enzymes that integrate stimuli from different soluble, cellular and physical cues to generate specific cellular responses,” explained Platt, who is also a Georgia Cancer Coalition Distinguished Cancer Scholar. “By using a systems biology approach to link cell differentiation cues and responses through integration of signals at the kinase level, we were able to mathematically predict relative amounts of cathepsin activity and distinguish which blood donors exhibited greater cathepsin activity compared to others.”
Predictability for all cathepsins ranged from 90 to 95 percent for both macrophages and osteoclasts, despite a range in the level of each cathepsin among the blood samples tested.
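The cue-signal-response approach described above can be sketched in miniature: treat each donor’s kinase measurements as a feature vector, fit a map from those signals to measured cathepsin activity, and test the fit on held-out donors. The sketch below uses ordinary least squares on synthetic data; the donor count (14) and the four cathepsins come from the article, but the data, the number of kinase signals, and the regression method are illustrative assumptions, not the study’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 14 donors (as in the study) x 6 kinase signals
# (the number 6 and the values themselves are invented for illustration).
n_donors, n_kinases = 14, 6
X = rng.normal(size=(n_donors, n_kinases))

# Assume cathepsin activity is roughly a linear combination of kinase
# signals plus noise -- the premise of a cue-signal-response model.
true_w = rng.normal(size=(n_kinases, 4))           # 4 cathepsins: K, L, S, V
Y = X @ true_w + 0.05 * rng.normal(size=(n_donors, 4))

# Fit on 10 donors with ordinary least squares, predict the held-out 4
W, *_ = np.linalg.lstsq(X[:10], Y[:10], rcond=None)
pred = X[10:] @ W

# Per-cathepsin correlation between predicted and "measured" activity
for name, p, y in zip("KLSV", pred.T, Y[10:].T):
    print(f"cathepsin {name}: r = {np.corrcoef(p, y)[0, 1]:.2f}")
```

The point of the sketch is only the shape of the pipeline — per-donor signals in, per-donor enzyme activity out, scored on donors the model never saw — which is what the 90 to 95 percent predictability figures refer to.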
“We were pleased with the results because our model achieved very high predictability from a simple blood draw and overcame the challenge of incorporating the complex, unknown cues from individual patients’ unique genetic and biochemical backgrounds,” said Platt.
According to Platt, the next step will be to assess the model’s ability to predict cathepsin activity using blood samples from individuals with the diseases of interest: atherosclerosis, osteoporosis or cancer.
“Our ultimate goal is to create an assay that will inform a clinician whether an individual’s case of cancer or other tissue-destructive disease will be very aggressive from the moment that individual is diagnosed, which will enable the clinician to develop and begin the best personalized treatment plan immediately,” added Platt.
“Singing” sand dunes, such as these in Al-Askharah, Oman, often hum at multiple frequencies simultaneously. By sieving sand from the Omani dunes, scientists managed to narrow down the frequencies at which the sands sing. (Credit: Simon Dagois-Bohy, Université Paris Diderot)
What does Elvis Presley have in common with a sand dune? No, it’s not that people sometimes spot both in the vicinity of Las Vegas. Instead, some sand dunes, like The King, can sing. And new research looking for clues to how streams of sand can sing may explain why some dunes croon in more than one pitch at the same time.
Sand dunes only sing in a few areas across the globe, and their songs – always a low, droning sound — have been an object of curiosity for centuries. Marco Polo encountered their haunting drone during his travels and Charles Darwin, in his book “The Voyage of the Beagle,” wrote of testimonials from Chileans about the sound of a sandy hill they called the “bellower.”
The song of the sands is a low hum at a frequency within the bottom half of a cello’s musical range. These dunes only sing when the sand is sliding down their sides. People can set the sand in motion themselves or, more eerily, the wind can create sand avalanches, creating a sudden, booming chorus.
Scientists previously thought the sound arose because avalanching sand created vibrations in the more stable underlayers of the dunes. But evidence that the avalanche of sand itself sings, not the dunes, emerged from experiments in 2009 by researchers who got a shallow pile of sand to sing while spilling down a laboratory incline. Now, the same research team has investigated a deeper mystery of the dunes — how multiple notes can sound simultaneously from one dune.
To study this question, physicist Simon Dagois-Bohy and his fellow researchers at Paris Diderot University in France recorded two different dunes: one near Tarfaya, a port town in southwestern Morocco, and one near Al-Askharah, a coastal town in southeastern Oman. No matter where recordings were made near the Moroccan dune, the sands sang consistently at about 105 hertz, in the neighborhood of G-sharp two octaves below middle C. The Omani sands also sang powerfully, but sometimes unleashed a cacophony of almost every possible frequency from 90 to 150 hertz, or about F-sharp to D, a range of nine notes.
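The note names quoted above follow from equal temperament, in which each semitone multiplies frequency by 2^(1/12) and A4 is tuned to 440 hertz. A short sketch (the helper function is ours) maps the measured frequencies to their nearest notes:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Return the equal-tempered note name nearest to a frequency in Hz."""
    # Semitone offset from A4, which is MIDI note 69
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1          # MIDI octave convention: C4 = 60
    return f"{name}{octave}"

for f in (105, 90, 150):
    print(f, "Hz ->", nearest_note(f))
```

This reproduces the article’s identifications: 105 hertz lands nearest G-sharp two octaves below middle C (G#2), and the Omani range of 90 to 150 hertz spans F#2 to D3 — nine notes, counting both endpoints.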
The research will be published this Friday in the American Geophysical Union journal Geophysical Research Letters.
Even though the Omani dunes are somewhat sloppy singers, the researchers identified some tones that were slightly stronger than others. But with all the sand avalanching at once, those prominent frequencies were often buried in a sea of notes. The scientists also observed that sand grains from the Omani dune came in a much wider range of sizes than their Moroccan counterparts. The Omani dune’s grains were 150 to 310 microns, while the Moroccan dune’s grains were only 150 to 170 microns.
So Dagois-Bohy and his colleagues brought grains from the Omani dune back to the lab. First, they ran the mix of the Omani sands down a constructed incline, recording its sound with microphones and measuring the sand’s vibrations with sensors that floated on the surface. Then, they used a sieve to isolate the sand grains that were between 200 and 250 microns, and ran those sands down the same slope.
The researchers then compared the sound of the isolated sands with the sound of the mixed-size control. They found that while the grains of a broad size range sang noisily, the sands of a narrow size range sang a clear note at about 90 hertz, much like the Moroccan sands do naturally. This suggested that grain size is an important factor in what tone the dunes sing, Dagois-Bohy said.
“The sound we hear is correlated to the size of the grains,” he said. “So we can start to say that the size of the grains is important.”
The research team suggests the grain size affects the purity of tones generated by the dunes. When grain size varies, the streams of sand flow at varied speeds, producing a wider range of notes. When the grains of sand are all about the same size, the streams of sand within the avalanche move at more consistent speeds, causing the sound to narrow in on specific tones. But scientists still don’t know how the erratic motion of flowing grains translates into sounds coherent enough to resemble musical notes, Dagois-Bohy said.
His team’s hypothesis is that the vibrations of flowing sand grains synchronize, causing stretches of the sand grains to vibrate in unison. Their thousands of meager vibrations combine to push the air together, like the diaphragm of a loud speaker, Dagois-Bohy said. “But why do they synchronize with each other?” he noted. “That’s still not resolved.”
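The generic effect behind this hypothesis — oscillators with slightly different natural frequencies locking to a common rhythm once coupled — can be illustrated with the classic Kuramoto model of coupled phase oscillators. This is a standard physics toy model, not the team’s own model of the sand; all parameter values below are arbitrary:

```python
import math, random

random.seed(1)

# Kuramoto model: N oscillators with phases theta_i and natural
# frequencies omega_i, coupled through the mean field:
#   d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
N, K, dt, steps = 50, 2.0, 0.01, 4000
omega = [random.gauss(10.0, 0.5) for _ in range(N)]   # similar, not identical
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """Coherence |r| in [0, 1]: 0 = incoherent, 1 = perfectly in sync."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

r0 = order_parameter(theta)
for _ in range(steps):
    re = sum(math.cos(t) for t in theta) / N
    im = sum(math.sin(t) for t in theta) / N
    # Euler step; the mean-field form of the coupling term uses
    # sin(theta_j - theta_i) averaged over j = im*cos(t) - re*sin(t)
    theta = [t + dt * (w + K * (im * math.cos(t) - re * math.sin(t)))
             for t, w in zip(theta, omega)]
r1 = order_parameter(theta)
print(f"coherence before: {r0:.2f}, after: {r1:.2f}")
```

With coupling strong enough, the initially scattered phases pull into near-unison. Whether and how such coupling arises between colliding sand grains is exactly the open question Dagois-Bohy points to.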
“The study attempts, and I think succeeds in many ways, to solve the problem of what’s the mechanism” that translates tumbling sand into a song, said Tom Patitsas, a theoretical physicist at Laurentian University in Sudbury, Ontario, who did not participate in the study. Patitsas said the theory behind the sound still requires more elaboration to explain why, for example, the flowing sand still needs a thin layer of stationary sand underneath it to make a sound. He suggests the sliding sands resonate with similar-sized grains beneath the avalanche. Those buried grains may lie in chain-like patterns that intensify the resonance. “Once you have this resonance, the amplitude of the vibration will be large,” Patitsas said.