The water flowing through Venice’s famous canals laps at buildings a little higher every year – and not only because of a rising sea level. Although previous studies had found that Venice has stabilized, new measurements indicate that the historic city continues to slowly sink, and even to tilt slightly to the east.
“Venice appears to be continuing to subside, at a rate of about 2 millimeters (0.08 inches) a year,” said Yehuda Bock, a research geodesist with Scripps Institution of Oceanography at UC San Diego and the lead author of the new research paper on the city’s downward drift. “It’s a small effect, but it’s important,” he added. Given that sea level in the Venetian lagoon is also rising at 2 millimeters per year, the subsidence doubles the rate at which the surrounding waters are rising relative to the elevation of the city, he noted. If Venice and its immediate surroundings subsided steadily at the current rate, researchers would expect the land to sink as much as 80 millimeters (3.1 inches) relative to the sea over the next 20 years.
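The arithmetic behind that projection is simple enough to check: subsidence and local sea-level rise add together into a relative rate, which is then extrapolated over 20 years. A quick sanity check using the rates quoted above, assuming both stay constant:

```python
# Back-of-envelope check of the rates quoted in the article.
MM_PER_INCH = 25.4

subsidence_mm_per_yr = 2.0  # land sinking
sea_rise_mm_per_yr = 2.0    # local sea-level rise in the lagoon

# Relative sea-level change is the sum of the two effects.
relative_mm_per_yr = subsidence_mm_per_yr + sea_rise_mm_per_yr

years = 20
total_mm = relative_mm_per_yr * years
print(f"{total_mm:.0f} mm over {years} years "
      f"({total_mm / MM_PER_INCH:.1f} inches)")
```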
Bock worked with colleagues from the University of Miami in Florida and Italy’s Tele-Rilevamento Europa, a company that measures ground deformation, to analyze data on Venice and its lagoon collected by GPS and space-borne radar (InSAR) instruments. The GPS measurements provide absolute elevations, while the InSAR data are used to calculate elevations relative to other points. By combining the two datasets from the decade between 2000 and 2010, Bock and his colleagues found that the city of Venice was subsiding at an average rate of 1 to 2 millimeters (0.04 to 0.08 inches) a year. The patches of land in Venice’s lagoon (117 islands in all) are also sinking, they found, with northern sections of the lagoon dropping at a rate of 2 to 3 millimeters (0.08 to 0.12 inches) per year and the southern lagoon subsiding at 3 to 4 millimeters (0.12 to 0.16 inches) per year.
The findings will be published March 28 in Geochemistry, Geophysics, Geosystems, a journal of the American Geophysical Union.
“Our combined GPS and InSAR analysis clearly captured the movements over the last decade that neither GPS nor InSAR could sense alone,” said Shimon Wdowinski, associate research professor of Marine Geology and Geophysics at the University of Miami.
In the new study, using the GPS instruments, Bock and his colleagues were able to take absolute readings of the city and its surrounding lagoon. Not only did they detect the sinking, they also found that the area is tilting slightly, by about a millimeter or two per year to the east. That means the western part – where the city of Venice is – is higher than the eastern sections. Prior satellite analyses didn’t pick up on the tilt, Bock said, possibly because the scientists had been taking measurements using InSAR alone, which only provides the change in elevation relative to other sites.
The relative nature of InSAR measurements might also explain why the new study detected Venice’s subsidence, while other recent studies did not, Bock conjectured.
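The combination the study relies on can be sketched in a few lines: InSAR yields dense velocities that are only relative to an arbitrary reference point, while a co-located GPS station supplies the absolute rate that pins the whole scene down. The site names and numbers below are purely illustrative, not values from the paper:

```python
# Tie relative InSAR velocities to an absolute frame using one GPS station.
# All numbers here are hypothetical, for illustration only.

# InSAR: vertical velocities relative to an arbitrary reference (mm/yr).
insar_relative = {"SiteA": -0.5, "SiteB": -1.5, "SiteC": -2.5, "GPS_site": 0.0}

# GPS at the co-located site gives the absolute vertical velocity there.
gps_absolute_at_site = -1.0  # mm/yr, i.e. subsiding 1 mm/yr

# The offset between the two frames is constant across the scene.
offset = gps_absolute_at_site - insar_relative["GPS_site"]

insar_absolute = {name: v + offset for name, v in insar_relative.items()}
for name, v in sorted(insar_absolute.items()):
    print(f"{name}: {v:+.1f} mm/yr")
```

In practice the processing involves many GPS stations, error modeling, and careful co-registration; this only shows why the two data types complement each other.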
Venice’s subsidence was recognized as a major issue decades ago, he noted, when scientists realized that pumping groundwater from beneath the city, combined with the ground’s compaction from centuries of building, was causing the city to settle. But officials put a stop to the groundwater pumping, and subsequent studies in the 2000s indicated that the subsidence had stopped, he said.
“It’s possible that it was stable in that decade, and started subsiding since then, but this is unlikely,” Bock said. The current subsidence is due to natural causes, which probably have been affecting the area for a long time.
A significant part of those natural causes is plate tectonics. The Adriatic plate, which includes Venice, subducts beneath the Apennine Mountains, causing the city and its environs to drop slightly in elevation. And although the groundwater pumping has stopped, the compaction of the sediments beneath Venice remains a factor.
The frequency of floods in Venice is increasing, Bock said, and now about four or five times a year residents have to walk on wooden planks to stay above the floodwaters in large parts of the city. A multi-billion-dollar effort to install flood-protection walls that can be raised to block incoming tides is nearing completion, he said. The adjustable barriers were designed to protect the city from tides that are coming in higher as overall sea levels are rising in response to climate change.
To ensure that the gates can hold back sufficient tidal water in the long run, their builders “have to take into account not only the rising of sea level, but also the subsidence,” Wdowinski said. The land on which those gates are being built is descending and taking the barriers down with it relative to the rising seas, he said, effectively doubling the amount of elevation change in store for Venice.
The patchy land in the surrounding lagoon could need shoring up as well. Over the next 40 years, the natural barriers that protect the Venice lagoon and city could drop by 150-200 millimeters, Wdowinski said, so officials may need to reinforce those sinking sediments as well.
Chemists at the University of California, San Diego have produced the first high-resolution structure of a molecule that, when attached to the genetic material of the hepatitis C virus, prevents it from reproducing.
Hepatitis C is a chronic infectious disease that affects some 170 million people worldwide and causes chronic liver disease and liver cancer. According to the Centers for Disease Control and Prevention, hepatitis C now kills more Americans each year than HIV.
The structure of the molecule, which was published in a paper in this week’s early online edition of the journal Proceedings of the National Academy of Sciences, provides a detailed blueprint for the design of drugs that can inhibit the replication of the hepatitis C virus, which proliferates by hijacking the cellular machinery in humans to manufacture duplicate viral particles.
Finding a way to stop that process could effectively treat viral infections of hepatitis C, for which no vaccine is currently available. But until now scientists have identified few inhibiting compounds that directly act on the virus’s ribonucleic acid (RNA) genome—the organism’s full complement of genetic material.
“This lack of detailed information on how inhibitors lock onto the viral genome target has hampered the development of better drugs,” said Thomas Hermann, an associate professor of chemistry and biochemistry at UC San Diego who headed the research team, which also included scientists from San Diego State University. The team detailed the structure of a molecule that induces the viral RNA to open up a portion of its hinge-like structure and encapsulate the inhibitor like a perfectly fit glove, blocking the ability of the hepatitis C virus to replicate.
The molecule is from a class of compounds called benzimidazoles, known to stop the production of viral proteins in infected human cells. Its three-dimensional atomic structure was determined by X-ray crystallography, a method of mapping the arrangement of atoms within a crystal, in which a beam of X-rays strikes the crystal and is diffracted into many specific directions. The angles and intensities of the diffracted beams allowed the scientists to calculate the structure of the viral RNA-inhibitor complex.
The molecule prompts the hepatitis C viral RNA to open up a portion of its hinge-like structure and encapsulate the inhibitor like a perfectly fit glove. (Credit: Image courtesy of University of California – San Diego)
“This structure will guide approaches to rationally design better drug candidates and improve the known benzimidazole inhibitors,” said Hermann. “Also, the crystal structure demonstrates that the binding pocket for the inhibitors in the hepatitis C virus RNA resembles drug-binding pockets in proteins. This is important to help overcome the notion that RNA targets are so unlike traditional protein targets that drug discovery approaches with small molecule inhibitors are difficult to achieve for RNA.”
The study was supported by the National Institutes of Health and National Science Foundation.
UT Southwestern Medical Center investigators have identified a genetic manipulation that increases the development of neurons in the brain during aging and enhances the effect of antidepressant drugs.
The research finds that deleting the Nf1 gene in mice results in long-lasting improvements in neurogenesis, which in turn makes those in the test group more sensitive to the effects of antidepressants.
“The significant implication of this work is that enhancing neurogenesis sensitizes mice to antidepressants – meaning they needed lower doses of the drugs to affect ‘mood’ – and also appears to have anti-depressive and anti-anxiety effects of its own that continue over time,” said Dr. Luis Parada, director of the Kent Waldrep Center for Basic Research on Nerve Growth and Regeneration and senior author of the study published in the Journal of Neuroscience.
Just as in people, mice produce new neurons throughout adulthood, although the rate declines with age and stress, said Dr. Parada, chairman of developmental biology at UT Southwestern. Studies have shown that learning, exercise, electroconvulsive therapy and some antidepressants can increase neurogenesis. The steps in the process are well known but the cellular mechanisms behind those steps are not.
“In neurogenesis, stem cells in the brain’s hippocampus give rise to neuronal precursor cells that eventually become young neurons, which continue on to become full-fledged neurons that integrate into the brain’s synapses,” said Dr. Parada, an elected member of the prestigious National Academy of Sciences, its Institute of Medicine, and the American Academy of Arts and Sciences.
The researchers used a sophisticated process to delete the gene that codes for the Nf1 protein only in the brains of mice, while production in other tissues continued normally. After showing that mice lacking Nf1 protein in the brain had greater neurogenesis than controls, the researchers administered behavioral tests designed to mimic situations that would spark a subdued mood or anxiety, such as observing grooming behavior in response to a small splash of sugar water.
The researchers found that the test group mice formed more neurons over time compared to controls, and that young mice lacking the Nf1 protein required much lower doses of antidepressants to counteract the effects of stress. Behavioral differences between the groups persisted at three months, six months and nine months. “Older mice lacking the protein responded as if they had been taking antidepressants all their lives,” said Dr. Parada.
“In summary, this work suggests that activating neural precursor cells could directly improve depression- and anxiety-like behaviors, and it provides a proof-of-principle regarding the feasibility of regulating behavior via direct manipulation of adult neurogenesis,” Dr. Parada said.
Dr. Parada’s laboratory has published a series of studies that link the Nf1 gene – best known for mutations that cause tumors to grow around nerves – to wide-ranging effects in several major tissues. For instance, in one study researchers identified ways that the body’s immune system promotes the growth of tumors, and in another study, they described how loss of the Nf1 protein in the circulatory system leads to hypertension and congenital heart disease.
The current study’s lead author is former graduate student Dr. Yun Li, now a postdoctoral researcher at the Massachusetts Institute of Technology. Other co-authors include Yanjiao Li, a research associate of developmental biology, Dr. Renée McKay, assistant professor of developmental biology, both of UT Southwestern, and Dr. Dieter Riethmacher of the University of Southampton in the United Kingdom.
The study was supported by the National Institutes of Health’s National Institute of Neurological Disorders and Stroke, and National Institute of Mental Health. Dr. Parada is an American Cancer Society Research Professor.
UT Southwestern Medical Center cardiologists have uncovered how a specific protein’s previously unsuspected role contributes to the deterioration of heart muscle in patients with diabetes. Investigators in the mouse study also have found a way to reverse the damage caused by this protein.
The new research, available online and published in the March 1 issue of the Journal of Clinical Investigation, was carried out in the laboratory of Dr. Joseph Hill, director of the Harry S. Moss Heart Center at UT Southwestern.
“If we can protect the heart of diabetic patients, it would be a significant breakthrough,” said Dr. Hill, the study’s senior author who also serves as chief of cardiology at the medical center. “These are fundamental research findings that can be applied to a patient’s bedside.”
Cardiovascular disease is the leading cause of illness and death in patients with diabetes, which affects more than 180 million people around the world, according to the American Heart Association. Diabetes puts additional stress on the heart – above and beyond that provoked by risk factors such as high blood pressure or coronary artery disease, Dr. Hill said.
“Elevated glucose and the insulin-resistant diabetic state are both toxic to the heart,” he said.
Dr. Hill and his colleagues in this study were able to maintain heart function in mice exposed to a high fat diet by inactivating a protein called FoxO1. Previous investigations from Dr. Hill’s laboratory demonstrated that FoxO proteins, a class of proteins that govern gene expression and regulate cell size, viability and metabolism, are tightly linked to the development of heart disease in mice with type 2 diabetes.
“If you eliminate FoxO1, the heart is protected from the stress of diabetes and continues to function normally,” Dr. Hill said. “If we can prevent FoxO1 from being overactive, then there is a chance that we can protect the hearts of patients with diabetes.”
Other UT Southwestern investigators participating in the study were lead author Dr. Pavan Battiprolu, and Drs. Zhao Wang and Myriam Iglewski, all postdoctoral researchers in internal medicine; Dr. Berdymammet Hojayev, postdoctoral researcher in pathology; Nan Jiang and John Shelton, senior research scientists in internal medicine; Dr. Xiang Luo, instructor in internal medicine; Dr. Robert Gerard, associate professor of internal medicine and molecular biology; Dr. Beverly Rothermel, assistant professor of internal medicine and molecular biology; Dr. Thomas Gillette, assistant professor of internal medicine; and Dr. Sergio Lavandero, visiting professor of internal medicine.
The research was supported by grants from the National Institutes of Health, the American Heart Association, the American Diabetes Association and the Jon Holden DeHaan Foundation.
Astronomers at The Ohio State University have calculated the odds that, sometime during the next 50 years, a supernova occurring in our home galaxy will be visible from Earth.
The good news: they’ve calculated the odds to be nearly 100 percent that such a supernova would be visible to telescopes in the form of infrared radiation.
The bad news: the odds are much lower—dipping to 20 percent or less—that the shining stellar spectacle would be visible to the naked eye in the nighttime sky.
Yet, all this is great news to astronomers, who, unlike the rest of us, have high-powered infrared cameras to point at the sky at a moment’s notice. For them, this study suggests that they have a solid chance of doing something that’s never been done before: detect a supernova fast enough to witness what happens at the very beginning of a star’s demise. A massive star “goes supernova” at the moment when it’s used up all its nuclear fuel and its core collapses, just before it explodes violently and throws off most of its mass into space.
“We see all these stars go supernova in other galaxies, and we don’t fully understand how it happens. We think we know, we say we know, but that’s not actually 100 percent true,” said Christopher Kochanek, professor of astronomy at Ohio State and the Ohio Eminent Scholar in Observational Cosmology. “Today, technologies have advanced to the point that we can learn enormously more about supernovae if we can catch the next one in our galaxy and study it with all our available tools.”
The results will appear in an upcoming issue of The Astrophysical Journal.
First through calculations and then through computer models, generations of astronomers have worked out the physics of supernovae based on all available data, and today’s best models appear to match what they see in the skies. But actually witnessing a supernova—that is, for instance, actually measuring the changes in infrared radiation from start to finish while one was happening—could prove or disprove those ideas.
Kochanek explained how technology is making the study of Milky Way supernovae possible. Astronomers now have sensitive detectors for neutrinos (particles emitted from the core of a collapsing star) and gravitational waves (created by the vibrations of the star’s core) which can find any supernova occurring in our galaxy. The question is whether we can actually see light from the supernova because we live in a galaxy filled with dust—soot particles that Kochanek likened to those seen in diesel truck exhaust—that absorb the light and might hide a supernova from our view.
“Every few days, we have the chance to observe supernovae happening outside of our galaxy,” said doctoral student Scott Adams. “But there’s only so much you can learn from those, whereas a galactic supernova would show us so much more. Our neutrino detectors and gravitational wave detectors are only sensitive enough to take measurements inside our galaxy, where we believe that a supernova happens only once or twice a century.”
Adams continued: “Despite the ease with which astronomers find supernovae occurring outside our galaxy, it wasn’t obvious before that it would be possible to get complete observations of a supernova occurring within our galaxy. Soot dims the optical light from stars near the center of the galaxy by a factor of nearly a trillion by the time it gets to us. Fortunately, infrared light is not affected by this soot as much and is only dimmed by a factor of 20.”
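Those two dimming factors can be restated in the magnitude units astronomers use, via the standard conversion m = 2.5 · log10(dimming factor); only the factors themselves come from the article:

```python
import math

def dimming_to_magnitudes(factor):
    """Convert a flux dimming factor into magnitudes of extinction."""
    return 2.5 * math.log10(factor)

optical = dimming_to_magnitudes(1e12)  # "a factor of nearly a trillion"
infrared = dimming_to_magnitudes(20)   # "dimmed by a factor of 20"
print(f"optical extinction: {optical:.0f} mag")
print(f"infrared extinction: {infrared:.1f} mag")
```

A factor of a trillion corresponds to about 30 magnitudes of extinction, versus roughly 3.3 magnitudes in the infrared, which is why infrared cameras can see supernovae through the galactic dust that hides them from optical telescopes.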
By balancing all these factors, the astronomers determined that they have nearly a 100 percent chance of catching a prized Milky Way supernova during the next 50 years. Adams summarized the findings in an online video.
The astronomers’ plan takes advantage of the fact that supernovae issue neutrinos immediately after the explosion starts, but don’t brighten in infrared or visible light until minutes, hours, or even days later.
So, in the ideal scenario, neutrino detectors such as Super-Kamiokande (Super-K) in Japan would sound the alert the moment they detect neutrinos, and indicate the direction the particles were coming from. Then infrared detectors could target the location almost immediately, thus catching the supernova before the brightening begins. Gravitational wave observatories would do the same.
But given that not all neutrinos come from supernovae—some come from nuclear reactors, Earth’s atmosphere or the sun—how could a detector know the difference? A supernova would cause short bursts of neutrinos to be detected within a few seconds of each other. But rare glitches in the electronics can do the same thing, explained John Beacom, professor of physics and astronomy and director of the Center for Cosmology and Astro-Particle Physics at Ohio State.
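A toy version of the burst criterion described above: flag a candidate whenever enough events pile up within a short window. The window length and event counts are illustrative choices of ours, not Super-K's actual trigger parameters:

```python
def find_bursts(event_times, window=10.0, min_events=8):
    """Return the start times of any window <= `window` seconds
    that contains at least `min_events` events (times sorted)."""
    bursts = set()
    start = 0
    for end in range(len(event_times)):
        # Slide the left edge until the window spans <= `window` seconds.
        while event_times[end] - event_times[start] > window:
            start += 1
        if end - start + 1 >= min_events:
            bursts.add(event_times[start])
    return sorted(bursts)

# Sparse background singles versus a supernova-like pileup of 12 events in ~3 s.
background = [37.0 * i for i in range(10)]
supernova = [200.0 + 0.3 * i for i in range(12)]
print(find_bursts(sorted(background + supernova)))
```

As the article notes, electronics glitches can mimic such a pileup, which is why a second, physics-based signature (described below for EGADS) is valuable.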
“We need some way to tell immediately that a burst is due to a supernova,” Beacom said.
He and colleague Mark Vagins, an American neutrino expert working at Super-K, pointed out a decade ago how this could be done. Now Vagins and others have built a scale model of a special kind of neutrino detector in a new underground cave in Japan.
As coauthors on the Astrophysical Journal paper, Vagins and Beacom described the new detector, which they call EGADS for “Evaluating Gadolinium’s Action on Detector Systems.” At 200 tons, EGADS is much smaller than the 50,000-ton Super-K, but both consist of a tank of ultra-pure water.
In the case of EGADS, the water is spiked with a tiny amount of the element gadolinium, which helps register supernova neutrinos in a special way. When a neutrino from a Milky Way supernova enters the tank, it can collide with the water molecules and release energy, along with some neutrons. Gadolinium has a great affinity for neutrons, and will absorb them and then re-emit energy of its own. The result would be one detection signal followed by another a tiny fraction of a second later—a “heartbeat” signal inside the tank for each detected neutrino.
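That two-part signal amounts to a delayed coincidence: pair each prompt detection with a capture signal that follows it within a short time. The 0.2-second window and the timestamps below are illustrative choices, not the detector's actual timing:

```python
def match_heartbeats(prompt_times, capture_times, max_delay=0.2):
    """Pair each prompt signal with the next capture signal that
    follows it within `max_delay` seconds (both lists sorted)."""
    pairs = []
    captures = iter(capture_times)
    capture = next(captures, None)
    for prompt in prompt_times:
        # Discard captures that happened at or before this prompt.
        while capture is not None and capture <= prompt:
            capture = next(captures, None)
        if capture is not None and capture - prompt <= max_delay:
            pairs.append((prompt, capture))
            capture = next(captures, None)
    return pairs

prompts = [1.00, 2.50, 4.00]
captures = [1.03, 3.90, 4.05]  # the 2.50 s prompt has no capture in time
print(match_heartbeats(prompts, captures))
```

Only the prompt signals with a matching delayed capture count as "heartbeats," which is what lets the gadolinium signature reject isolated glitches.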
Vagins and Beacom hope that EGADS’ unmistakable heartbeat signal will enable neutrino detector teams to make more timely and confident announcements about supernova neutrino detections.
Vagins said that the experiment is going well so far, and he and the rest of the Super-K scientists may decide to add gadolinium to the tank as early as 2016. Because of its larger size, Super-K would also be able to measure the direction of the neutrinos. So the possibility of using Super-K to pinpoint a Milky Way supernova is on the rise.
For those of us who would hope to see a Milky Way supernova with our own eyes, however, the chances are lower and depend on our latitude on Earth. The last time it happened was in 1604, when Johannes Kepler spotted one some 20,000 light years away in the constellation Ophiuchus. He was in Prague at the time.
Could such a sighting happen again in the next half-century?
Adams did the math: the probability of a galactic supernova being visible with the unaided eye from somewhere on Earth within the next 50 years is approximately 20-50 percent, with people in the southern hemisphere getting the best of those odds, since they can see more of our galaxy in the night sky. The odds worsen as you go north; in Columbus, Ohio, for example, the chance could dip as low as 10 percent.
And Adams placed the odds that Ohioans would spy a truly dazzling supernova—like the one in 1604 that outshone all other stars in the sky—at only around 5 percent.
“The odds of seeing a spectacular display aren’t in our favor, but it is still an exciting possibility!” he concluded.
“With only one or two happening a century, the chance of a Milky Way supernova is small, but it would be a tragedy to miss it, and this work is designed to improve the chances of being ready for the scientific event of a lifetime,” Beacom concluded.
Schematic: HIV TAT (blue) permeates membrane, interacts with cytoskeleton (green). (Credit: Image courtesy of University of California – Los Angeles)
Further, these cell-penetrating peptides, or CPPs, can facilitate the cellular transfer of various molecular cargoes, from small chemical molecules to nano-sized particles and large fragments of DNA. Because of this ability, CPPs hold great potential as in vitro and in vivo delivery vehicles for use in research and for the targeted delivery of therapeutics to individual cells.
But exactly how cell-penetrating peptides — and particularly the HIV TAT peptide — accomplish these tasks has so far been a mystery.
“The HIV TAT peptide is special. People discovered that one can attach almost anything to this peptide and it could drag it across the cell,” said Gerard Wong, a professor of bioengineering and of chemistry and biochemistry at the UCLA Henry Samueli School of Engineering and Applied Science and the California NanoSystems Institute at UCLA. “So there are obvious beneficial drug-delivery and biotechnology applications.”
In a new study published in Proceedings of the National Academy of Sciences, UCLA Engineering researchers, including Wong and bioengineering professors Timothy Deming and Daniel Kamei, identify how HIV TAT peptides can have multiple interactions with the cell membrane, the actin cytoskeleton and specific cell-surface receptors to produce multiple pathways of translocation under different conditions.
Moreover, because the researchers now understand how cell-penetrating peptides work, they say it is possible to formulate a general recipe for reprograming normal peptides into CPPs.
“Prior to this, people didn’t really know how it all worked, but we found that the HIV TAT peptide is really kind of like a Swiss Army Knife molecule, in that it can interact very strongly with membranes, as well as with the cytoskeletons of cells,” said Wong, the study’s lead author. “The second part wasn’t well appreciated by the field.”
In addition to the membrane activity, the researchers discovered that the HIV TAT peptide also creates its own binding site out of the membrane. This means the peptide can actually go through the membrane and directly induce the cytoskeleton to trigger an endocytotic event.
“We found that there are two channels of activity,” Wong said. “Because of the peculiar sequence of HIV TAT, it’s very good at being able to interact with membranes. Further, with the high-density packing of charged amino acids in the peptide, it can also interact very strongly with the cell’s cytoskeleton, as well as its receptors.”
In addition, the researchers noticed that small cargoes can be transferred directly, while cargoes larger than a few nanometers needed to be anchored to the membrane by the TAT peptide.
Deming, who specializes in synthetic methods, prepared the polypeptide samples for use in the experiments. Kamei, an expert in cellular trafficking, performed cell-based endocytosis experiments using inhibitor drugs and confocal microscopy to identify dominant mechanisms of endocytosis.
“This research is exciting because cell-penetrating peptides have been used in the area of drug delivery for some time,” Kamei said. “Gaining any additional understanding of these delivery agents will help in future drug-carrier designs.”
It is the group’s hope that the new understanding gained from their study will be used to engineer new molecules that are more effective in delivering therapeutic agents.
“This collaboration was important because it combined expertise in the areas of synthesis, characterization and cellular trafficking to address a very relevant problem,” Kamei said. “I definitely see more opportunity for combining these areas to tackle other problems in the growing field of biomaterials.”
The study was funded by the National Science Foundation and the National Institutes of Health.
The image shows a neuron with a tree trunk-like dendrite. Each triangular shape touching the dendrite represents a synapse, where inputs from other neurons, called spikes, arrive (the squiggly shapes). Synapses that are further away on the dendritic tree from the cell body require a higher spike frequency (spikes that come closer together in time) and spikes that arrive with perfect timing to generate maximal learning. (Credit: Image courtesy of University of California – Los Angeles)
Contrary to what was previously assumed, Mehta and Kumar found that when it comes to stimulating synapses with naturally occurring spike patterns, stimulating the neurons at the highest frequencies was not the best way to increase synaptic strength.
When, for example, a synapse was stimulated with just 10 spikes at a frequency of 30 spikes per second, it induced a far greater increase in strength than stimulating that synapse with 10 spikes at 100 spikes per second.
“The expectation, based on previous studies, was that if you drove the synapse at a higher frequency, the effect on synaptic strengthening, or learning, would be at least as good as, if not better than, the naturally occurring lower frequency,” Mehta said. “To our surprise, we found that beyond the optimal frequency, synaptic strengthening actually declined as the frequencies got higher.”
The knowledge that a synapse has a preferred frequency for maximal learning led the researchers to compare optimal frequencies based on the location of the synapse on a neuron. Neurons are shaped like trees, with the nucleus being the base of the tree, the dendrites resembling the extensive branches and the synapses resembling the leaves on those branches.
When Mehta and Kumar compared synaptic learning based on where synapses were located on the dendritic branches, what they found was significant: The optimal frequency for inducing synaptic learning changed depending on where the synapse was located. The farther the synapse was from the neuron’s cell body, the higher its optimal frequency.
“Incredibly, when it comes to learning, the neuron behaves like a giant antenna, with different branches of dendrites tuned to different frequencies for maximal learning,” Mehta said.
The researchers found that not only does each synapse have a preferred frequency for achieving optimal learning, but for the best effect, the frequency needs to be perfectly rhythmic — timed at exact intervals. Even at the optimal frequency, if the rhythm was thrown off, synaptic learning was substantially diminished.
Their research also showed that once a synapse learns, its optimal frequency changes. In other words, if the optimal frequency for a naïve synapse — one that has not learned anything yet — was, say, 30 spikes per second, after learning, that very same synapse would learn optimally at a lower frequency, say 24 spikes per second. Thus, learning itself changes the optimal frequency for a synapse.
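The qualitative picture above can be illustrated with a toy tuning curve: strengthening peaks at an optimal frequency, falls off on both sides, and learning shifts that optimum downward. The bell-shaped form and all numbers here are our illustration, not the authors' model:

```python
import math

def strengthening(freq_hz, optimal_hz, width=15.0):
    """Toy plasticity curve: maximal at optimal_hz, declining at
    both lower and higher stimulation frequencies."""
    return math.exp(-((freq_hz - optimal_hz) / width) ** 2)

naive_optimum = 30.0    # spikes/s before learning (illustrative)
trained_optimum = 24.0  # the optimum drops after learning

# Driving a naive synapse well above its optimum weakens the effect:
assert strengthening(100, naive_optimum) < strengthening(30, naive_optimum)

# After learning, the same synapse responds best at the lower frequency:
assert strengthening(24, trained_optimum) > strengthening(30, trained_optimum)
```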
This learning-induced “detuning” process has important implications for treating disorders related to forgetting, such as post-traumatic stress disorder, the researchers said.
Although much more research is needed, the findings raise the possibility that drugs could be developed to “retune” the brain rhythms of people with learning or memory disorders, or that many more of us could become Einstein or Mozart if the optimal brain rhythm was delivered to each synapse.
“We already know there are drugs and electrical stimuli that can alter brain rhythms,” Mehta said. “Our findings suggest that we can use these tools to deliver the optimal brain rhythm to targeted connections to enhance learning.”
Funding for the study was provided by the National Science Foundation, the National Institutes of Health, the Whitehall Foundation, and the W.M. Keck Foundation.
Each day, many students cross the Fifth Street Bridge without thinking much of the downtown connector that exhales exhaust below, but a few are working to electrify the cars that pass beneath.
In a competition hosted by the City of Atlanta and Emory University’s Goizueta Business School, a team of Georgia Tech students earned first prize and a monetary award for proposing a system for electric vehicle adoption in Atlanta.
Undergraduate students Corbin Klett, Matt Jacobson, Logan Marett, Kevin Miron and Andrew Vaziri earned $5,000 for their proposal of how to drive demand for 50,000 electric cars on Atlanta’s roads during a two-year period. The students represent both Solar Jackets, Georgia Tech’s student group dedicated to the design, creation and expansion of solar technology, and the College of Management’s Technology and Management Program.
“Our approach was to devise creative and unique solutions to electric vehicle adoption, emphasizing ways of reducing the cost to the city government,” said Jacobson. “We stressed branding and education, creating a new ‘EV Brand’ we dubbed ChargeATL, and a website mockup to go along with it.”
The City will use funding received from the Department of Energy to implement ideas generated from the competition, with the goal of the Atlanta area being the first region in the country to have 50,000 electric vehicles on its roads. The Mayor’s office wanted to utilize the creativity of Georgia students to find ways to make the state competitive in this market.
“The Solar Jackets were incredible, coming up with as much as they did on their own,” said Jules Toraya, program manager in the City of Atlanta Mayor’s Office of Sustainability. “They stood out over the rest because they had answers — answers to tough questions, how to get budgets — and you could tell they had scoped out their ideas and had conviction about them.” Execution of these ideas will begin with an effort to pass electric vehicle-related legislation in the fall.
Four other teams presented at the competition on Sept. 13, including three from Tech and one from Emory. The groups were chosen from a pool of nearly 30 team applications spanning many Georgia universities, including Tech, Emory and the University of Georgia.
“It was an exciting opportunity to be able to tackle a problem the City of Atlanta is facing and feel like we could have an impact,” said Melissa McCoy, who participated on another Georgia Tech team. “The Solar Jackets team did a truly amazing job.”
Researchers have shown they can reverse the aging process for human adult stem cells, which are responsible for helping old or damaged tissues regenerate. The findings could lead to medical treatments that may repair a host of ailments that occur because of tissue damage as people age. A research group led by the Buck Institute for Research on Aging and the Georgia Institute of Technology conducted the study in cell culture; it appears in the September 1, 2011 edition of the journal Cell Cycle.
The regenerative power of tissues and organs declines as we age. The modern-day stem cell hypothesis of aging suggests that living organisms are only as old as their tissue-specific, or adult, stem cells. Therefore, an understanding of the molecules and processes that enable human adult stem cells to initiate self-renewal and to divide, proliferate and then differentiate in order to rejuvenate damaged tissue might be the key to regenerative medicine and an eventual cure for many age-related diseases. The research group, led by the Buck Institute for Research on Aging in collaboration with the Georgia Institute of Technology, conducted the study to pinpoint what goes wrong with the biological clock underlying the limited division of human adult stem cells as they age.
“We demonstrated that we were able to reverse the process of aging for human adult stem cells by intervening with the activity of non-protein coding RNAs originated from genomic regions once dismissed as non-functional ‘genomic junk’,” said Victoria Lunyak, associate professor at the Buck Institute for Research on Aging.
Adult stem cells are important because they help keep human tissues healthy by replacing cells that have gotten old or damaged. They’re also multipotent, which means that an adult stem cell can grow and replace any number of body cells in the tissue or organ it belongs to. However, just as the cells in the liver, or any other organ, can get damaged over time, adult stem cells undergo age-related damage. And when this happens, the body can’t replace damaged tissue as well as it once could, leading to a host of diseases and conditions. But if scientists can find a way to keep these adult stem cells young, they could possibly use these cells to repair damaged heart tissue after a heart attack; heal wounds; correct metabolic syndromes; produce insulin for patients with type 1 diabetes; cure arthritis and osteoporosis; and regenerate bone.
The team began by hypothesizing that DNA damage in the genome of adult stem cells would look very different from the age-related damage that occurs in regular body cells. Regular body cells are known to experience a shortening of telomeres, the protective caps at the ends of chromosomes, and much of the damage of aging is widely thought to result from this telomere loss. Adult stem cells, however, are known to maintain their telomeres, so different mechanisms must be at work to explain how aging occurs in these cells, the researchers reasoned.
Researchers used adult stem cells from humans and combined experimental techniques with computational approaches to study the changes in the genome associated with aging. They compared freshly isolated human adult stem cells from young individuals, which can self-renew, to cells from the same individuals that were subjected to prolonged passaging in culture. This accelerated model of adult stem cell aging exhausts the regenerative capacity of the adult stem cells. Researchers looked at the changes in genomic sites that accumulate DNA damage in both groups.
“We found the majority of DNA damage and associated chromatin changes that occurred with adult stem cell aging were due to parts of the genome known as retrotransposons,” said King Jordan, associate professor in the School of Biology at Georgia Tech.
“Retrotransposons were previously thought to be non-functional and were even labeled as ‘junk DNA’, but accumulating evidence indicates these elements play an important role in genome regulation,” he added.
While the young adult stem cells were able to suppress transcriptional activity of these genomic elements and deal with the damage to the DNA, older adult stem cells were not. The new findings suggest that this failure is deleterious to the regenerative ability of stem cells and triggers a process known as cellular senescence.
“By suppressing the accumulation of toxic transcripts from retrotransposons, we were able to reverse the process of human adult stem cell aging in culture,” said Lunyak.
“Furthermore, by rewinding the cellular clock in this way, we were not only able to rejuvenate ‘aged’ human stem cells, but to our surprise we were able to reset them to an earlier developmental stage, by up-regulating the ‘pluripotency factors’ – the proteins that are critically involved in the self-renewal of undifferentiated embryonic stem cells,” she said.
Next the team plans to use further analysis to validate the extent to which the rejuvenated stem cells may be suitable for clinical tissue regenerative applications.
The study was conducted by a team with members from the Buck Institute for Research on Aging, the Georgia Institute of Technology, the University of California, San Diego, Howard Hughes Medical Institute, Memorial Sloan-Kettering Cancer Center, International Computer Science Institute, Applied Biosystems and Tel-Aviv University.
Civil engineering professor Ernest “Chip” R. Blatchley III inspects a parabolic reflector for a prototype water-disinfection system he built as part of an effort to help provide safe drinking water to a large segment of the world’s population in developing nations. The system uses ultraviolet radiation from the sun to kill waterborne pathogens. Sunlight is captured by the reflector and focused onto a UV-transparent pipe through which water flows continuously. (Purdue University photo/Andrew Hancock)
A team of Purdue University researchers has invented a prototype water-disinfection system that could help the world’s 800 million people who lack safe drinking water.
The system uses the sun’s ultraviolet radiation to inactivate waterborne pathogens. Sunlight is captured by a parabolic reflector and focused onto a UV-transparent pipe through which water flows continuously.
“We’ve been working on UV disinfection for about 20 years,” said Ernest “Chip” R. Blatchley III, a professor of civil engineering. “All of our work up until a couple years ago dealt with UV systems based on an artificial UV source. What we are working on more recently is using ultraviolet radiation from the sun.”
Motivating the research is the need to develop practical, inexpensive water-treatment technologies for a large segment of the world’s population in developing nations.
“More than 800 million people lack access to what we consider to be ‘improved’ water,” Blatchley said. “The water available for people to drink in many developing countries hasn’t been treated to remove contaminants, including pathogenic microorganisms. As a result, thousands of children die daily from diarrhea and its consequences, including dehydration. Half of the world’s hospital beds are occupied by people who are sickened by the water they drink.”
Blatchley built the parabolic reflector in his garage. The team, including an undergraduate student supported by a National Science Foundation program, finished the prototype in the lab, lining it with aluminum foil. The system was then tested on the roof of Purdue’s Civil Engineering Building.
“It turns out that the solar radiation we receive in Indiana at some times of year is intense enough to inactivate some waterborne microorganisms with this type of system,” he said. “We demonstrated that we can disinfect water using sunlight. The reactor was very inexpensive to build, less than $100 for the materials.”
The natural UV system inactivated E. coli bacteria. However, the system must also be shown to kill more dangerous pathogens such as Vibrio cholerae, which causes cholera; Salmonella typhi, which causes typhoid; and Cryptosporidium parvum, which causes cryptosporidiosis, a parasitic disease marked by diarrhea.
“In the future we want to prove that our solar-UV system is going to work against these other pathogens,” said Blatchley, who has worked on the project with doctoral student Eric Gentil Mbonimpa, who is from Rwanda, and Bryan Vadheim, an undergraduate from Montana State University. “We also want to automate it and build sensors for it so that we know how fast the water should be pumped through the system, depending on how sunny it is at any particular time.”
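The flow-rate control Blatchley describes rests on a simple relationship: the UV dose a microbe receives is the average fluence rate multiplied by its residence time in the pipe, and residence time is the reactor volume divided by the flow rate. A minimal sizing sketch of that arithmetic, with every number assumed for illustration (none are measurements from the Purdue prototype):

```python
import math

# UV dose (mJ/cm^2) = fluence rate (mW/cm^2) x residence time (s).
# Residence time (s) = pipe volume (L) / flow rate (L/s).
# Solving for the fastest flow that still meets a target dose:

def max_flow_rate_lpm(fluence_rate_mw_cm2, target_dose_mj_cm2,
                      pipe_length_m, pipe_diameter_cm):
    """Highest flow rate (L/min) that still delivers the target UV dose."""
    residence_s = target_dose_mj_cm2 / fluence_rate_mw_cm2  # mJ / mW = s
    radius_cm = pipe_diameter_cm / 2
    volume_l = math.pi * radius_cm**2 * (pipe_length_m * 100) / 1000  # cm^3 -> L
    return volume_l / residence_s * 60

# Assumed numbers: 2 mW/cm^2 of focused UV, a 40 mJ/cm^2 target dose
# (a common drinking-water benchmark), and 2 m of 2.5 cm pipe.
flow = max_flow_rate_lpm(2.0, 40.0, 2.0, 2.5)
print(f"Max flow for target dose: {flow:.2f} L/min")
```

A sunlight sensor feeding the measured fluence rate into a calculation like this is, in effect, what an automated pump controller for such a system would do: less sun means a lower fluence rate, a longer required residence time, and hence a slower pump.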
The NSF funded Vadheim’s work through its Research Experiences for Undergraduates program.
The parabolic reflector is made out of a wood called paulownia.
“That material was selected because the tree grows very rapidly in regions near the equator, where many people lack safe drinking water,” Blatchley said. “It is very light, strong and stable, so it’s not going to twist or warp or bend or crack in a climate that’s alternating between humid and dry.”
Natural UV has a longer wavelength than most artificial UV sources, which means it has less energy. Blatchley’s hypothesis, however, is that UV from sunlight will inactivate pathogens via the same mechanism as artificial UV: The radiation damages the genetic material of microbes, preventing them from reproducing.
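The wavelength-energy relation behind that point is the Planck relation, E = hc/λ: longer wavelength, less energy per photon. A quick sketch (the 254 nm and 320 nm wavelengths are illustrative stand-ins for a germicidal mercury-lamp line and ground-level solar UV, not figures from the study):

```python
# Planck relation E = hc / wavelength, comparing photon energies.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s
EV = 1.602e-19         # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in electron-volts."""
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9) / EV

# 254 nm: typical germicidal (artificial UV-C) lamp line.
# 320 nm: roughly where ground-level solar UV becomes significant.
print(f"254 nm photon: {photon_energy_ev(254):.2f} eV")
print(f"320 nm photon: {photon_energy_ev(320):.2f} eV")
```

The longer-wavelength solar photon carries noticeably less energy, which is why confirming that sunlight damages microbial DNA by the same mechanism as artificial UV-C is a hypothesis to be tested rather than a given.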
“We are looking at other inexpensive reflecting materials, for example metalized plastic,” Blatchley said. “It’s similar to the material that’s used to make potato chip bags. We’ve done measurements, and some of these materials are about twice as reflective as aluminum foil.”
Improving water quality in developing countries is one of 14 “grand challenges” established by the National Academy of Engineering and also has been named a “millennium development goal” by the United Nations.
Blatchley also is working on an inexpensive filtration system that uses layers of sand and gravel to clean water. The filters were developed by Aqua Clara International, a Michigan-based non-profit corporation. Purdue and Aqua Clara are teaming up with Moi University in Kenya on that project. Purdue students tested the behavior of the filters in a Global Design Team project in Africa through Purdue’s Global Engineering Program.
Water flows slowly through the filter, allowing a bacterial film to form near the top of the filter that removes organic contaminants, while certain pathogens are also removed by attachment to the sand.