About Us

ScienceUpdates features breaking news about the latest discoveries in science, health, the environment, technology, and more — from major news services and leading universities, scientific journals, and research organizations.

Visitors can browse individual topics, grouped into main sections (listed under the top navigation menu), covering the medical sciences and health; physical sciences and technology; biological sciences and the environment; and social sciences, business and education. Each topic page provides headlines and summaries of relevant news stories, along with links to topic-specific RSS feeds and email newsletters.

Updated several times a day with breaking news and feature articles, seven days a week, the site covers discoveries in all fields of the physical, biological, earth and applied sciences. Stories are integrated with photographs and illustrations, links to journals and academic studies, related research and topics, and encyclopedic terms, to provide a wealth of relevant information on almost every science topic imaginable – from astrophysics to zoology. And thanks to a custom search function, readers can do their own research using the site’s extensive archive of stories.

New Species of Bird-Like Dinosaur Identified in Canada

Albertavenator curriei, as the paleontologists call the new dinosaur species, belongs to Troodontidae, a family of bird-like theropod dinosaurs.

It lived about 71 million years ago (Cretaceous period) in what is now Alberta, Canada.

Its specific name, curriei, honors the renowned Canadian paleontologist Dr. Philip J. Currie.

The bones of Albertavenator curriei were found in the badlands surrounding the Royal Tyrrell Museum, which Dr. Currie played a key role in establishing in the early 1980s.

Scientists initially thought that the dinosaur’s bones belonged to its close relative, Troodon inequalis, which lived around 5 million years earlier.

Both bird-like creatures walked on two legs, were covered in feathers, and were about the size of a person.

New comparisons of bones forming the top of the head reveal that Albertavenator curriei had a distinctively shorter and more robust skull than Troodon.

“The delicate bones of these feathered dinosaurs are very rare,” said Dr. David Evans, Temerty Chair and Senior Curator of Vertebrate Paleontology at the Royal Ontario Museum and lead author of a new paper in the Canadian Journal of Earth Sciences describing the discovery.

“We were lucky to have a critical piece of the skull that allowed us to distinguish Albertavenator curriei as a new species.”

“We hope to find a more complete skeleton of Albertavenator curriei in the future, as this would tell us so much more about this fascinating animal.”

“It was only through our detailed anatomical and statistical comparisons of the skull bones that we were able to distinguish between Albertavenator curriei and Troodon,” added co-author Thomas Cullen, a Ph.D. student at the University of Toronto.

“This discovery really highlights the importance of finding and examining skeletal material from these rare dinosaurs,” said co-author Dr. Derek Larson, Assistant Curator of the Philip J. Currie Dinosaur Museum.

These baby fish exercise to change the shape of their faces

Humans aren’t the only animals who exercise. Baby Lake Malawi cichlids—a group of 10-centimeter-long striped fish native to East Africa—open and close their mouths up to 260 times per minute to develop a short jaw and a long retroarticular process, a critical bone for jaw opening, researchers report today in the Proceedings of the Royal Society B. Both of those features are an advantage for scraping algae from rocks. Some species of young cichlids “exercise” less, gaping only about 180 times per minute. They develop a long jaw and a short retroarticular process, which are advantageous for feeding by sucking prey into the mouth. When researchers manipulated the baby fish’s gaping behavior, they produced changes in bone shape similar to those driven by genes, suggesting that the fishes’ environment can influence development as much as their DNA does.

Wildfires continue to beleaguer Western Canada

Wildfires in British Columbia are common at this time of year due to rising temperatures; even so, this is the region’s third-worst year for forest fires, with 840 fires having broken out since April 1. Although the season started slowly, 2017 is shaping up to be record-breaking, if not for the number of fires then for the sheer number of hectares burned. In an area where rainfall is the norm, going days and weeks without rain is unusual, and it helps create a hot, dry environment with plenty of underbrush for fires to use as fuel.

Firefighting costs for the 426,000 hectares (1,052,668 acres) that have burned this fire season have hit $172.5 million. Close to 4,000 personnel are working these fires across the province and ground crews are supported by 200 aircraft.

Besides the fires themselves, smoke becomes an issue when so many fires burn in one area. An information bulletin from the BC Wildfire Service is calling for smoky skies on the coast as the wind is expected to shift, and these conditions could remain for the better part of this week. Smoke has long been known to be hazardous to health, but a new study from researchers at Georgia Tech found that particle pollution from wildfires is much worse than previously thought. Burning timber and brush release soot and other fine particles, long known to be dangerous to human health, into the air at a rate three times as high as the levels estimated by the EPA. The study also found that wildfires spew methanol, benzene, ozone and other noxious chemicals.

NASA’s Terra satellite collected this natural-color image with the Moderate Resolution Imaging Spectroradiometer, MODIS, instrument on July 31, 2017. Actively burning areas, detected by MODIS’s thermal bands, are outlined in red. NASA image courtesy Jeff Schmaltz LANCE/EOSDIS MODIS Rapid Response Team, GSFC. Caption by Lynn Jenner with information from the BC Wildfire Service, and the Georgia Tech study.

‘Omnipresent’ effects of human impact on England’s landscape revealed

Concrete structures forming a new, human-made rock type; ash particles in the landscape; and plastic debris are just a few of the new materials irreversibly changing England’s landscape and providing evidence of the effects of the Anthropocene, the research suggests.

The research, which is published in the journal Proceedings of the Geologists’ Association, has been conducted by geologists Jan Zalasiewicz, Colin Waters, Mark Williams and Ian Wilkinson at the University of Leicester, working together with zoologist David Aldridge at Cambridge University, as part of a major review of the geological history of England organised by the Geologists’ Association.

Professor Jan Zalasiewicz, from the University of Leicester’s Department of Geology, said: “We are realising that the Anthropocene is a phenomenon on a massive scale — it is the transformation of our planet by human impact, in ways that have no precedent in the 4.54 billion years of Earth history. Our paper explores how these changes appear when seen locally, on a more modest scale, amid the familiar landscapes of England.”

Professor Mark Williams, from the University of Leicester’s Department of Geology, said: “These changes taken together are now virtually omnipresent as the mark of the English Anthropocene. They are only a small part of the Anthropocene changes that have taken place globally. But, to see them on one’s own doorstep brings home the sheer scale of these planetary changes — and the realisation that geological change does not recognise national boundaries.”

The Anthropocene — the concept that humans have so transformed geological processes at Earth’s surface that we are living in a new epoch — was formulated by Nobel Laureate Paul Crutzen in 2000.

The research suggests that some of the changes surround us in the most obvious and visible way, though we rarely think of them as geology.

Examples include the concrete structures of our cities, which have almost all been built since the Second World War and are just one small part of a steep rise in the global prominence of this new, human-made rock type.

Other changes need a microscope to see, such as the fly ash particles sprinkled over the landscape — a fossil signal of the smoke that belched out during industrialisation — or the skeletons of tiny algae in ponds and lakes across England, whose types changed dramatically as the waters acidified.

Larger future fossils include the shells of highly successful biological invaders such as the zebra mussel and Asian clam, which now dominate large parts of the Thames and other river systems.

There are subterranean rock changes too, as coal mines, metro systems and boreholes have riddled the subsurface with holes and caverns.

The research also shows how the chemistry of soils and sediments has been marked by an influx of lead, copper and cadmium pollution — and by plastic debris, pesticide residues and radioactive plutonium.

Two degrees of warming already baked in

“This ‘committed warming’ is critical to understand because it can tell us and policy makers how long we have, at current emission rates, before the planet will warm to certain thresholds,” said co-author Robert Pincus, a scientist with CIRES at the University of Colorado Boulder and NOAA’s Physical Sciences Division. “The window of opportunity on a 1.5-degree [C] target is closing.”

During United Nations meetings in Paris last year, 195 countries including the United States signed an agreement to keep global temperature rise to less than 3.6 degrees F (2 C) above pre-industrial levels, and to pursue efforts that would limit it further, to less than 2.7 degrees Fahrenheit (1.5 C), by 2100.
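
Since these targets are temperature differences, they convert between scales without the 32-degree offset. Here is a minimal Python check of the conversions used in this article (illustrative only):

```python
def warming_c_to_f(delta_c):
    # A temperature *difference* scales by 9/5; the +32 offset applies
    # only to absolute temperatures, not to amounts of warming.
    return delta_c * 9.0 / 5.0

for delta_c in (2.0, 1.5, 1.3):
    print(f"{delta_c} C of warming = {warming_c_to_f(delta_c):.1f} F")
# 2.0 C -> 3.6 F, 1.5 C -> 2.7 F, 1.3 C -> 2.3 F
```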

The new assessment by Pincus and lead author Thorsten Mauritsen, from the Max Planck Institute for Meteorology, is unique in that it does not rely on computer model simulations, but rather on observations of the climate system, to calculate Earth’s climate commitment. Their work accounts for the capacity of oceans to absorb carbon, detailed data on the planet’s energy imbalance, the climate-relevant behavior of fine particles in the atmosphere, and other factors.

Among Pincus’ and Mauritsen’s findings:

  • Even if all fossil fuel emissions stopped in 2017, warming by 2100 is very likely to reach about 2.3 degrees F (range: 1.6-4.1) or 1.3 degrees C (range: 0.9-2.3).
  • Oceans could reduce that figure a bit. Carbon naturally captured and stored in the deep ocean could cut committed warming by 0.4 degrees F (0.2 C).
  • There is some risk that warming this century cannot be kept to 1.5 degrees C beyond pre-industrial temperatures. In fact, there is a 13 percent chance we are already committed to 1.5-C warming by 2100.

“Our estimates are based on things that have already happened, things we can observe, and they point to the part of future warming that is already committed to by past emissions,” said Mauritsen. “Future carbon dioxide emissions will then add extra warming on top of that commitment.”

The research was funded by the Max-Planck-Gesellschaft, the U.S. Department of Energy and the National Science Foundation.

Tiny Electronic Tags Could Fit Inside Cells

Electronics small enough to fit inside cells may one day help scientists track individual cells and monitor their behavior in real time, a new study finds. These new devices could help analyze diseases from their origins in single cells, researchers said.

The new electronics are microscopic radio-frequency identification tags, which are essentially bar codes that can be read from a distance.

An RFID tag usually consists of an antenna connected to a microchip. A nearby reader known as a transceiver can emit electromagnetic signals at the tags, and the tags can respond with whatever data they have stored, such as their identity, when and where they were made, how best to store and handle them, and so on. Many RFID tags do not have batteries — instead, they rely on the energy in the signals from the transceivers.
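
To make the "wireless bar code" idea concrete, here is a small, purely illustrative Python sketch of a passive tag answering a reader's query. The record fields and names are invented for illustration and are not taken from the study or from any RFID standard:

```python
from dataclasses import dataclass

@dataclass
class PassiveTag:
    # Hypothetical fields standing in for the kinds of data a tag can store.
    tag_id: str        # the tag's identity
    made: str          # when and where the tagged item was made
    handling: str      # how best to store and handle it

def interrogate(tag: PassiveTag) -> dict:
    """Simulate a transceiver query: the tag answers with its stored data,
    drawing its power from the reader's signal rather than a battery."""
    return {"id": tag.tag_id, "made": tag.made, "handling": tag.handling}

print(interrogate(PassiveTag("cell-0042", "2017, Stanford", "keep in buffer")))
```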

These tags are already used in many applications today, including key cards, toll passes, library books and many other items, but typical RFID tags are millimeters to centimeters in size. The new microscopic tags, by comparison, are only 22 microns wide each, or roughly one-fifth the average diameter of a human hair, making them the smallest known RFID tags, the researchers said. They detailed their findings online July 26 in the journal Physical Review Applied.

The microscopic tags are each made of two metal layers — one made of a 5-nanometer-thick titanium and 200-nanometer-thick gold film, the other of a 1,000-nanometer-wide aluminum sheet — sandwiching a 16-nanometer-thick electrically insulating layer of hafnium dioxide.

Each tag is octagonal in shape. This is the closest the scientists can get to a circular shape, which is ideal for interacting with the magnetic fields from transceivers, said study lead author Jasmine Xiaolin Hu at Stanford University in California. Finally, the devices are fully encapsulated in silicon dioxide, the same material found in sand, to make them safe for biological applications.

Conventional RFID readers used to communicate with the tags have just one antenna. Instead, the researchers used two antennas, each roughly twice the tag’s diameter. Doing so boosted the magnitude of the tag signals more than tenfold, which can make the difference between detecting a moving tagged cell in a complex biological setting and losing track of it just a few microns away.

Although these new microscopic tags are still larger than many cells, they do “fit into a variety of cells of great interest,” Hu said. The researchers found that this includes mouse melanoma cells, human melanoma cells, human breast cancer cells, human colorectal cancer cells and healthy human connective tissue cells, she said.

The researchers soon plan to monitor tagged living cells flowing within microscopic silicone rubber channels from a range of a few microns. Future research can explore developing smaller tags and finding ways to keep track of them, Hu added.

“This is step one towards sending signals within the cell to the outside world without probing through or perturbing the cell membrane and risking damaging and destroying the cell in due process,” said Stephen Wong, a bioengineer and systems biologist at the Houston Methodist Research Institute, who did not take part in this research. “It opens up a whole new world of live-cell studies.”

Sensors and other devices could get coupled with these microscopic tags “to measure and perform a variety of things,” Hu said. “We will have a measure of control within a cell that has not been achieved before.”

The ability to embed electronics into cells could help researchers understand and manipulate cell activities to an unprecedented degree. “Most disease processes start at a single- to few-cell level, but currently we have no technology to monitor a few cells inside the living body of a person,” Hu said. “Tracking and monitoring single cells may enable the early detection of diseases and allow for the start of treatments as soon as possible so that treatments can be more successful.”

For example, a pH sensor within a cell could help measure its acidity, “which indicates the healthiness of a cell,” Wong said. “We can also measure glucose to measure a cell’s metabolism, as well as many other molecules in cells.”

Future research should also focus on extending the range at which the researchers can scan the tags, Wong said. “Currently, the wireless receiver has to be very close to the cells, which is not ideal,” Wong said. “Still, what they’ve shown is a good step forward.”

Boosting the Sensitivity of Bio/Chemical Sensing with Nanogap Metasurfaces

In this regard, one of the most promising features of metallic nanostructures is their ability to confine optical fields and realize significant localized-field enhancement to produce plasmonic “hot spots”. This enables extremely high sensitivities for spectroscopic techniques such as surface-enhanced Raman spectroscopy (SERS) and surface-enhanced infrared absorption spectroscopy (SEIRA). These two techniques are complementary, i.e., Raman scattering peaks in SERS correspond to absorption peaks of SEIRA. However, due to differing dependence on field enhancement, the signal enhancement factors for SEIRA are typically orders of magnitude less than those for SERS. One option for boosting SEIRA signals is to strengthen the plasmonic field enhancement by reducing the gap size between surface nanostructures, thereby confining light to volumes on the order of nanometers. However, due to the conventional diffraction limit, it is challenging to squeeze incident light into these extreme dimensions with high efficiency, particularly in the mid-infrared (IR) wavelengths that are of interest in SEIRA spectroscopy.

Researchers in Qiaoqiang Gan’s team at the University at Buffalo have experimentally demonstrated a metamaterial superabsorber structure with sub-5-nanometer gaps that can trap mid-IR light within these extreme volumes with efficiencies up to 81%, significantly enhancing light–matter interaction on the nanoscale. By using these structures as a substrate for chemical/biological molecule analysis with SEIRA spectroscopy, they demonstrate an enhancement factor for molecular fingerprinting of chemical molecules of up to ca. 10^6–10^7, which approaches the enhancement factors of SERS. In addition, the methods used to produce these metasurfaces are amenable to large-area fabrication techniques such as optical interference patterning and nanoimprint lithography, making this a promising material for biochemical infrared absorption spectroscopy.

How to ‘Film’ Firing Neurons

Muybridge’s high-speed, stop-action photographs — of which the horse is just one famous example — captured detailed motions of humans and animals that the human eye alone could not observe. More than 130 years later, scientists are using a similar approach to reveal new insights about life at an even faster and much tinier scale: firing neurons.

Shigeki Watanabe, a cell biologist at the Johns Hopkins School of Medicine in Baltimore, and colleagues have developed a method to take flipbook-like images of brain cells in action.

The scientists start by cultivating modified mouse neurons that have been designed to fire in response to light. When the researchers hit the cells with a flash of light, it acts like a starter gun, sending electrical signals shooting down the neurons like runners in a race. In about one-thousandth of a second the “runners” reach the end of the cells, where they trigger the release of chemicals called neurotransmitters that pass the signal to other cells. After a set period — ranging from milliseconds to seconds — a high-pressure cooling system quickly douses the neurons with liquid nitrogen, literally freezing the moment in time.

The cold bath kills the cells, so the researchers can’t capture the continuous action of any individual neuron. But by looking with an electron microscope at thousands of cells frozen at various times, they can piece together key steps in the signaling process. They can see cell components that are hundreds of times smaller than a speck of dust and movements that happen faster than the blink of an eye.

A similar freezing experiment was performed with frog nerves in 1979 by lead scientists John Heuser and Thomas Reese. They dropped the tissue past an electrical switch that stimulated the nerve, then slammed it into an ultra-cold metal block to freeze it. The approach was both simple and effective, but Watanabe and his colleagues’ new method is far more flexible, said Graeme Davis, a neuroscientist at the University of California, San Francisco.

Comparing the experimental apparatuses, “Heuser and Reese had the Model-T, and Shigeki’s driving the Tesla,” he said.

Watanabe and his colleagues have been using their technique to study what happens at the synapse, or junction between neurons. The cells store neurotransmitters near the synapse in membrane-enclosed containers called vesicles. The vesicles merge with the outer membrane of the neuron to release their contents, but because there’s a limited number of vesicles at each synapse, the cell needs to regenerate the containers locally to communicate for longer than a few seconds.

The flipbook images have already illustrated one major discovery: a new, ultrafast way that neurons recycle the vesicles. Less than one-tenth of a second after the vesicles merge with the outer cell membrane, the membrane folds back in on itself, creating a large container that is later divided into multiple new vesicles. The process is similar to recycling beer bottles by melting the glass, Watanabe said. “[The neurons] are re-making the vesicles, but they are doing it in bulk, so that’s why it’s much faster,” he said.

The results are an excellent example of how new technology often drives new discoveries, said Alberto Pereda, a neuroscientist at the Albert Einstein College of Medicine in New York who was not involved in the study.

Most recently, the team has identified key proteins that make the fast recycling possible, and whose absence may be linked to neurological diseases. Watanabe presented the technique and the new findings at a meeting of the American Crystallographic Association in late May in New Orleans, in a session devoted in part to cryo-electron microscopy, an increasingly popular approach to studying biological materials by freezing them and examining them with an electron microscope.

Traditionally, electron microscopy was best suited to seeing membranes in cells, but scientists are now figuring out ways to use the technique to see proteins — the cellular machines that “make things happen,” Davis said. Watanabe’s technique can add exquisite time resolution to the detailed static images that electron microscopy provides, he said.

Davis is so enthusiastic about Watanabe’s methods, in fact, that the two scientists recently started a collaboration to use the flash-and-freeze technique to study how neurons can work steadily for decades, even as all their component parts are replaced over time.

For anyone who’s interested in how life works inside of cells, “this is going to be one very powerful way to go forward,” he said.

Laboratory In A Needle Promises Rapid Diagnosis

Researchers in the U.S. and Singapore have designed a miniature chemistry laboratory inside a needle that could yield almost instantaneous results from routine laboratory tests, potentially accelerating the diagnosis and treatment of medical conditions.

The prototype device, created by miniaturizing existing “lab on a chip” technology, has shown its capability in studies of mice with liver toxicity, a common side effect of cancer chemotherapy in humans.

“It really integrates the whole laboratory process in one testing without any human in between,” said Stephen Wong of Houston Methodist Research Institute and Weill Cornell Medical Center, who conceived the idea for the new technology.

Diagnosis of medical conditions depends on the results of blood tests to identify toxicity and potential reactions to drugs. Obtaining the results of the tests can typically take a week. However, Wong said, “Using our approach, it takes less than an hour.”

The patented design combines individual components from a chemistry laboratory into a single small package attached to a conventional 32-gauge needle, a size commonly used for simple injections.

“This is a change in paradigm – a really disruptive technology,” Wong said. “You are no [longer] tied down to the lab” to carry out diagnostic procedures. “You can have a wireless device attached to your cell phone.”

Medical specialists could use the technology in healthcare offices, patients’ homes, or even remote locations to carry out diagnoses normally performed in hospitals.

“It’s a point-of-care mobile device,” Wong said. “But it can also be a device that you can use during the surgery to get instant results.” That would permit doctors and patients to discuss treatment options as early as possible.

“I found it very exciting,” said Shari Rubin, an internist at Houston Methodist Hospital. She was not involved in the research.

“Many of our patients travel really far to come to the medical center for blood work,” Rubin added. “If you can have them do things at home, that would be incredibly helpful. Anything that can keep patients away from the hospital is wonderful.”

She noted, however, that developers of the technology would need to persuade patients to use it at home and to convince insurers to cover it.

The technology stems from the “lab on a chip” approach.

“This is basically a device that includes one of several functions on a single chip, measuring square millimeters to a few square centimeters,” Wong said. The approach combines microfluidics, a technology that deals with minuscule volumes of liquids, and semiconductors, the gizmos at the heart of all modern computers and communications methods.

The lab on a needle is designed to carry out several steps in testing a patient’s tissue sample for any particular medical condition. It extracts the sample; prepares it; amplifies the material in it called messenger ribonucleic acid, or mRNA, a carrier of genetic material; and runs a process called the polymerase chain reaction, or PCR, to detect the existence and concentration of the gene or genes related to the sought-after disease.

The prototype needle uses two chips. The first carries out the initial three tasks while the second contains the chemicals that perform the PCR process.

“The prototype puts the two chips together and obtains a readout,” Wong said. “We’ve proved that the two can be put together in one package.”

To test the prototype needle, the team hit on liver toxicity, which has the advantage of needing only two genetic markers to identify it.

Wong’s group induced liver toxicity in mice and used the needle to identify the markers for it, and the lack of the markers in untreated mice. They reported the results in the online publication Lab on a Chip.

The researchers emphasize that their lab in a needle is still in the development stage.

Wong’s team, along with collaborators at Singapore’s Nanyang Technological University and the Singapore Institute for Manufacturing Technology (SIMTech), is now engineering a practical version of the technology.

The teams also plan to develop the necessary procedures for testing the needle in humans. Those tests will seek the same genetic markers as the studies involving mice. But they will have to comply with much tighter government regulation.

The research team also aims to apply the technique to various medical conditions.

“We are planning to test other tissues or body fluids based on respective testing protocols for other human disease detection and diagnosis beyond liver toxicity,” said Zhiping Wang, director of research programs at SIMTech, in an e-mail message.

“The concept is working; the rest will be engineering,” Wong said.

If it passes the trials, the device could yield significant improvements in clinical practice.

“It’s less risky, faster, and cheaper than current methods,” Wong said. It also puts the lab on a chip concept at the service of medical personnel in addition to life scientists in the laboratory.

“In the long run if it’s successful it can deal with everything,” Wong said. “We try to bring the hospital to the patient, not the patient to the hospital.”

X-ray Science Gets New ‘Glasses’

X-rays are short wavelengths of light with a long list of scientific accomplishments. Now researchers have made a simple quartz plate that could help take X-ray-powered science to new heights, such as uncovering how chemical reactions happen and creating fundamental particles of matter from colliding beams of light.

X-rays can famously penetrate soft tissue, revealing broken bones. But their resume isn’t limited to the medical field. Cutting-edge scientific instruments like the Linac Coherent Light Source, located at the SLAC National Accelerator Laboratory in Menlo Park, California, generate ultra-short, intense X-ray pulses to probe matter at the scale of atoms and molecules.

Manufacturing the lenses and mirrors to steer the X-rays in such machines is a big technical challenge, and imperfections inevitably creep in that make it difficult to perfectly focus the beam. An international team of scientists has developed “glasses” that correct for the defects.

They demonstrated the technique for a stack of 20 X-ray lenses — the type of optics equipment that might be used for experiments requiring an intense beam of X-rays focused down to an extremely small area. The corrective plate helped focus most of the X-rays onto a spot just 250 nanometers across, tripling the intensity in that center area. The results were published in the journal Nature Communications.

The team plans to install corrective plates at the Linac Coherent Light Source and at the PETRA III X-ray source in Hamburg, Germany. The plates have the potential to push the capabilities of many X-ray instruments to new levels, the team said, which means X-rays may soon reveal even more scientific secrets.

Artificial Intelligence Predicts a Picture’s Future

Given a still image, a new artificial intelligence system can generate videos that simulate the future of that scene to predict what might happen next. Currently, these videos are less than two seconds long and can make people look like blobs. But researchers hope that in the future, more powerful versions of this system could help robots navigate homes and offices and also lead to safer self-driving cars.

Computers have grown steadily better at recognizing faces and other items within images. However, they still have major problems envisioning how the scenes they see might change, given the virtually limitless number of ways that items within images can interact.

To confront this challenge, computer scientist Carl Vondrick at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Lab in Cambridge and his colleagues explore machine learning, a branch of artificial intelligence devoted to developing computers that can improve with experience. Specifically, they research “deep learning,” where machine learning algorithms are run on advanced artificial neural networks designed to mimic the human brain.

In an artificial neural network, software or hardware components known as artificial neurons receive data, then cooperate to solve a problem such as reading handwriting or recognizing an image. The network can then alter the pattern of connections between those neurons to change the way they interact, after which the network attempts to solve the problem again. Over time, the network learns which patterns are best at computing solutions.

The scientists first trained their system to generate videos by having it analyze more than 2 million videos downloaded from the image and video hosting website Flickr. Next, they took images of beaches, train stations, hospitals and golf courses and had their system generate videos predicting what the next few seconds of each scene might look like. For instance, beach scenes had crashing waves, while golf scenes had people walking on grass.

Vondrick and his colleagues used a deep-learning technique called “adversarial learning” that involves two competing neural networks. One network generates videos, while the other attempts to discriminate between real videos and the fakes its rival creates. Over time, the generator learns to fool the discriminator. A key trick for generating more realistic videos involved simulating moving foregrounds and stationary backgrounds.
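
For readers curious what adversarial learning looks like mechanically, here is a minimal, generic sketch in Python/PyTorch. It pits a tiny generator against a tiny discriminator on toy one-dimensional data; it illustrates the training loop only and is not the authors' video model (the architectures, sizes, and data are invented):

```python
import torch
import torch.nn as nn

def real_batch(n=64):
    # Toy "real" data: samples the generator must learn to mimic.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and fakes 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generated distribution should drift toward the real one (mean ~2.0).
print(G(torch.randn(1000, 8)).mean().item())
```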

Flexible Graphene Energy Storage Membrane

“Free of conductive additives, binders, commercial separators, and current collectors.” This claim, from researchers at Tsinghua University, China, reads like the health claims on my box of afternoon cereal. More seriously, it reads like the recipe for a highly simplified, low cost energy storage device, which they have produced using a TiO2-assisted UV reduction of sandwiched graphene components.

The sandwich structure consists of two active layers of reduced graphene oxide hybridised with TiO2, with a graphene oxide separator (rGO-TiO2/rGO/rGO-TiO2). In the completed device, the separator layer also acts as a reservoir for the electrolyte, which aids ion diffusion — a known problem for layered membrane devices — and improves both the capacity and the rate performance.

A step-by-step vacuum filtration process is used to form the membrane structure, and the amount of graphene oxide used in the filtration solutions can be adjusted to precisely tune the thickness of each layer. Irradiation of the dried membrane with UV light then reduces the graphene oxide to rGO with assistance from the TiO2.

The electrochemical performance of the hybrid active layer was clearly affected by the reduction time, with anything less than 40 minutes being too short to completely reduce the graphene oxide, leading to lower electrical conductivity and, therefore, reduced capacitance of the membrane. Going beyond 40 minutes of UV irradiation, suggest the researchers, strips the functional groups from the rGO surface, leading to a lower pseudocapacitance.

The membrane supercapacitor also demonstrated good mechanical stability, with an essentially unchanged electrochemical performance when tested at bending angles of 90 and 180 degrees.

The method used by these researchers to generate compact, thin-film, energy storage structures offers good control over the synthetic parameters while being very easy and user-friendly, and is not limited to the production of supercapacitors.

Researchers Are Developing Shape-Shifting Fluid Robots

By using fluids similar to Silly Putty that can behave as both liquids and solids, researchers say they have created fluid robots that might one day perform tasks that conventional machines cannot.

Conventional robots are made of rigid parts that are vulnerable to bumps, scrapes, twists and falls. In contrast, researchers worldwide are increasingly developing robots made from soft, elastic plastic and rubber that are inspired by worms, starfish and octopuses. These soft robots can resist many of the kinds of damage, and can squirm past many of the obstacles, that can impede hard robots.

However, even soft robots and the living organisms they are inspired by are limited by their solidity — for example, they remain vulnerable to cutting. Instead, researcher Ido Bachelet of Bar-Ilan University in Israel and his colleagues have now created what they call fluid robots that they say could operate better than solid robots in chaotic, hostile environments. They detailed their findings online Jan. 22 in the journal Artificial Life.

The researchers experimented with so-called non-Newtonian fluids. Water acts mostly like a Newtonian fluid, meaning the degree to which it resists flowing — its viscosity — generally stays constant regardless of the mechanical force applied against it. In contrast, the viscosity of a non-Newtonian fluid can vary depending on the rate that mechanical force is applied against it. For instance, the non-Newtonian fluid Silly Putty can flow like a viscous liquid but also snap or bounce like an elastic solid.

Suspensions — that is, liquids with particles mixed into them — are often non-Newtonian fluids. For example, when water is filled with starch particles, it becomes a doughy substance known as oobleck that acts solid if you run across it but liquid if you stand still on it.
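
A standard way to capture this behavior is the power-law (Ostwald-de Waele) fluid model, in which apparent viscosity varies with shear rate as K * rate**(n - 1). The sketch below is generic, with invented constants, and is not drawn from the paper:

```python
def apparent_viscosity(shear_rate, K=1.0, n=1.0):
    """Power-law fluid model: viscosity = K * shear_rate**(n - 1).
    n == 1: Newtonian (constant viscosity, like water)
    n > 1:  shear-thickening (stiffens under force, like oobleck)
    n < 1:  shear-thinning (flows more easily when pushed)"""
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0):  # standing still vs. running, schematically
    newtonian = apparent_viscosity(rate, n=1.0)
    oobleck = apparent_viscosity(rate, n=1.8)
    print(f"shear rate {rate:5.1f}: newtonian {newtonian:.2f}, oobleck-like {oobleck:.2f}")
```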

After testing a variety of non-Newtonian fluids, Bachelet and his colleagues developed prototype fluid robots made of blobs of starch grains suspended in a sugary solution. Sound waves from audio speakers underneath the surface where the blobs rested helped control their mechanical properties, and depending on the volumes and frequencies of the sounds, the researchers could make the blobs move.

The scientists could make the fluid robots drag metal items more than five times their weight. The blobs could also change shape, split into smaller blobs that could be controlled individually, merge to form larger blobs, and drip through gratings. These qualities suggest that fluid robots might find use in search and rescue missions, dripping into otherwise unreachable places and merging at their destinations to carry weights and perform work, the researchers noted.

“It’s really novel — it’s a robot that basically doesn’t have any parts,” said mechanical engineer David Hu at the Georgia Institute of Technology, who did not take part in this research.

The blobs could even be made to “count” up to three, Bachelet and his colleagues said. When they absorbed aluminum oxide, they became more rigid, and after they engulfed three dough packets laced with aluminum oxide, they became too stiff to move.

The scientists suggest that fluid robots carrying chemical payloads could interact and perform chemical reactions. Potential applications might include multiple fluid robots working together in an assembly line to synthesize compounds or break down waste, they said.

The fact that these blobs need to rest on sound-generating platforms to move is an obvious limitation, Bachelet and his colleagues admitted. However, the researchers said that future research could likely extend the concept to new control methods. For instance, sound beams could steer these blobs from a distance, they said. Moreover, using magnetic or electrically charged fluids could lead to fluid robots that could also be steered with magnetic or electric fields. Combining multiple techniques of control might lead to very elaborate and capable designs, they added.

Bachelet and his colleagues suggested that fluid robots could be given coatings much like those protecting cells, which could prevent incidental mixing and reduce water loss from evaporation. “In this regard, fluid robots could show unexpected similarities to primitive life forms,” they wrote in their paper.

The researchers developed their fluid robot designs through trial and error, since the physics underlying non-Newtonian fluids is still not well understood. They suggested further research into fluid robots could in turn help scientists better understand non-Newtonian fluid behavior.

“Overall, I think the concept of fluid robots is exciting,” said roboticist Michael Tolley at the University of California, San Diego, who did not participate in this work. However, he noted that in order to classify a machine as a robot, most researchers would require it to have the ability to make decisions by itself. “We are a long way off from addressing the tough challenge of designing a fluid that is able to think and act autonomously,” Tolley said.

The New Age in Clinical Digital Pathology

The compound microscope was invented in the 1590s. The 17th and 18th centuries were productive for medical science thanks to the prolific careers of Antonie van Leeuwenhoek, Giovanni Battista Morgagni, and Marcello Malpighi – the “fathers” of microbiology, modern anatomical pathology, and microscopic anatomy, respectively. Microscopes have been reliable companions of clinical pathologists ever since. With the emergence of high-resolution scanners, cameras and computer software, the field is now opening up to a new endeavor – digital pathology.

The computer-aided image analysis of clinical pathology samples can either be fully automated on the whole slide image, or the pathologist can select regions of interest while scanning the field (see the sketch after this list):

  • The first option provides immediate, preset automated processing following the scan. However, it requires the software to adjust fully automatically, e.g. to different staining intensities, and to set the processing parameters on its own.
  • The interactive option allows input of critical parameters and direct monitoring and tuning of the data analysis by the pathologist in real time. However, this approach must produce fast results to expedite the diagnosis.
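
As a schematic of these two modes only (a toy Python/NumPy sketch with invented images and thresholds, not any real pathology package):

```python
import numpy as np

def count_stained(slide, roi=None, threshold=None):
    """Count 'positively stained' pixels in a scanned slide image.

    Fully automated mode: call with no roi/threshold; the whole slide is
    processed and the staining threshold is set from the image itself.
    Interactive mode: the pathologist picks a region of interest and
    supplies (and can re-tune) the threshold in real time."""
    region = slide if roi is None else slide[roi]
    if threshold is None:
        threshold = region.mean() + region.std()  # crude automatic setting
    mask = region > threshold
    return int(mask.sum()), mask.size

slide = np.random.rand(512, 512)  # stand-in for a whole-slide image
print(count_stained(slide))                                              # automated
print(count_stained(slide, roi=np.s_[100:200, 100:200], threshold=0.8))  # interactive
```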

This Special Issue includes an editorial by the guest editors and seven articles that report the latest advances in both approaches, used to identify cancerous and/or metastatic tissues based on their morphology, cell proliferation frequency, and cancer-specific surface receptors that can serve as biomarkers in ER-positive breast cancer, follicular lymphoma, and several other types of cancer.

Moths’ Eyes Inspire New Tech

Moths’ eyes sport nanoscale structures on their surfaces that minimize light reflection. This feature helps the insects to see better in the dark by reducing glare, and also makes it more difficult for predators to spot them by looking for the twinkle in their eyes.

Inspired by moths, engineers and materials scientists have been developing anti-reflection films to increase efficiencies for solar panels, to lower battery consumption for smartphone displays, or even just to improve the appearance of highway billboards. Moth lovers, don’t be alarmed — we are not harvesting the insects for their eyeballs. Instead, the researchers are experimenting with materials and fabrication techniques to recreate this nanoscale structure in the lab.

Just this week, a research group at the University of Central Florida in Orlando published a paper in the journal Optica that introduces a new anti-reflection film. They claim to have optimized their product specifically for smartphone screens, and they also provide a model that other researchers can use to optimize their own films in the future.

How it’s made

The scientists, part of the optics and photonics research group led by Shin-Tson Wu, use a technique similar to stamping to create the special new film. They first deposit a solution containing nanoscale silicon oxide spheres onto a surface. The nanospheres are only about 100 nanometers across, or roughly one-thousandth the width of a human hair. Then they spin the surface to spread out the nanospheres — the faster the spin, the farther apart the particles. The surface, with the tiny spheres now embedded, is then dried and used as a stamp to imprint tiny dimples onto the final product, creating the nanoscale structure that mimics the surface of moths’ eyes.

The technique still has some issues to iron out. According to Wu, some of the silicon oxide nanoparticles would come loose during the imprinting process and get stuck to the film, which makes the stamp nonreusable. In the future the researchers hope to develop a reusable stamp using a mold instead of nanoparticles.

“Whenever you talk about applications where you have to fabricate things with large areas, imprinting is usually a good choice,” said Dietmar Knipp, a materials scientist from Stanford University in California who was not involved in the study. “But the way that they are doing it, the stamp with the nanostructure is destroyed in the end, so that if you want to do it again, you’d have to start the process all over again.”

The road from lab to market is often lined with such obstacles. In the case of anti-reflection films, manufacturing cost is a big one. The consumer electronics company Sharp has been talking about making a TV with the technology since at least 2012, and Philips has actually made one, but with a $3,000 price tag.

“There are already some commercial products that use this kind of anti-reflection surface, but there are still some issues,” said Guanjun Tan, a graduate student in Wu’s research group at the University of Central Florida.

For instance, the anti-reflection film used in the Philips TV is known to stain easily. If it coated smartphone screens, the constant touching from people’s fingertips would damage it. Tan said that after experimenting with several materials, they have found a film that is considerably more resistant to scratches and staining from water and oil. They also considered flexibility to be an important feature, since scientists and engineers are working to develop foldable displays for the near future.

“For this paper, we optimized the material to have more flexibility, but then we lose some surface hardness, and less anti-scratching,” said Tan. Such tradeoffs, it seems, are common in the world of engineering. Nevertheless, the researchers claim their film can provide a four-fold improvement in color contrast for a smartphone screen viewed under sunlight.

More to learn from nature

Another part of the study may help researchers develop different applications for the anti-glare technology.

“We have also developed a simulation model that other people can use to optimize the nanoparticle’s shape, depth [and] diameter, for the optimal anti-reflection,” said Wu.

For example, an anti-reflection film for solar cells or highway billboards might not require the same resolution as a smartphone screen, so researchers can choose to sacrifice certain attributes for other ones.

For now, the researchers’ model is based on their approach of imprinting films with tiny spherical dimples — but there might be better nanostructures out there. Other researchers have tried nanoscale pillars and even cones, but according to a 2011 paper by Knipp, the ideal nanostructure for anti-reflection may be tiny, parabolically curved domes.

“Ultimately, that is probably the closest to optimum — after all, that’s what’s found in nature, in moth eyes,” said Knipp.

However, compared to pillars, cones and dimples, a surface with tiny parabolic domes is even trickier to make. So, when it comes to perfecting anti-reflection, we are still chasing after the moths.

Researchers Find Giant Helium Gas Field in Tanzania

Helium is an odorless, tasteless and colorless gas that has unique properties.

It is the first of a group of elements often referred to as the noble gases.

Helium is a critical component in many fields of scientific research and is needed in a number of high-technology processes. However, known reserves are quickly running out.

Until now, helium has never been found intentionally; it has only been discovered accidentally, in small quantities, during oil and gas drilling.

Now, researchers from Norway and the UK have developed a brand-new exploration approach. The first use of this method has resulted in the discovery of a world-class helium gas field in Tanzania.

Their research, presented in Yokohama, Japan at the Goldschmidt Geochemistry Conference, shows that volcanic activity provides the intense heat necessary to release the gas from ancient, helium-bearing rocks.

“The high concentrations of helium in the region are likely related to the heating and fracturing of the Archean Tanzanian Craton and Proterozoic Mozambique Belt by the younger arms of the East African Rift System,” the scientists said.

“The distribution of high helium seeps along active faults shows increased communication between the shallow and deep crust. This combined with the presence of gas traps in the area suggests that there may be a significant helium resource.”

“We show that volcanoes in the Rift play an important role in the formation of viable helium reserves,” said lead author Dr. Diveena Danabalan, from Durham University.

“Volcanic activity likely provides the heat necessary to release the helium accumulated in ancient crustal rocks.”

“However, if gas traps are located too close to a given volcano, they run the risk of helium being heavily diluted by volcanic gases such as carbon dioxide, just as we see in thermal springs from the region.”

“We are now working to identify the goldilocks-zone between the ancient crust and the modern volcanoes where the balance between helium release and volcanic dilution is just right.”

“We sampled helium gas (and nitrogen) just bubbling out of the ground in the Tanzanian East African Rift Valley,” added co-author Prof. Chris Ballentine, from the University of Oxford.

“By combining our understanding of helium geochemistry with seismic images of gas trapping structures, independent experts have calculated a probable resource of 54 Billion Cubic Feet (BCf) in just one part of the Rift Valley.”

To put this discovery into perspective, global consumption of helium is about 8 BCf per year and the U.S. Federal Helium Reserve, which is the world’s largest supplier, has a current reserve of just 24.2 BCf.
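
A back-of-the-envelope comparison of the figures quoted above (a sketch only; the numbers are as reported, in billions of cubic feet):

```python
tanzania_probable = 54.0   # probable resource in one part of the Rift Valley
global_use_per_year = 8.0  # approximate annual global consumption
us_federal_reserve = 24.2  # current US Federal Helium Reserve

print(f"~{tanzania_probable / global_use_per_year:.1f} years of global demand")  # ~6.8
print(f"~{tanzania_probable / us_federal_reserve:.1f}x the US Federal Reserve")   # ~2.2x
```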

“Total known reserves in the USA are around 153 BCf. This is a game changer for the future security of society’s helium needs and similar finds in the future may not be far away,” Prof. Ballentine said.

Distant earthquakes can cause underwater landslides

Researchers analyzing data from ocean bottom seismometers off the Washington-Oregon coast tied a series of underwater landslides on the Cascadia Subduction Zone, 80 to 161 kilometers (50 to 100 miles) off the Pacific Northwest coast, to a 2012 magnitude-8.6 earthquake in the Indian Ocean — more than 13,500 kilometers (8,390 miles) away. These underwater landslides occurred intermittently for nearly four months after the April earthquake.

Previous research has shown earthquakes can trigger additional earthquakes on other faults across the globe, but the new study shows earthquakes can also initiate submarine landslides far away from the quake.

“The basic assumption … is that these marine landslides are generated by the local earthquakes,” said Paul Johnson, an oceanographer at the University of Washington in Seattle and lead author of the new study published in the Journal of Geophysical Research: Solid Earth, a journal of the American Geophysical Union. “But what our paper said is, ‘No, you can generate them from earthquakes anywhere on the globe.'”

The new findings could complicate sediment records used to estimate earthquake risk. If underwater landslides could be triggered by earthquakes far away, not just ones close by, scientists may have to consider whether a local or a distant earthquake generated the deposits before using them to date local events and estimate earthquake risk, according to the study’s authors.

The submarine landslides observed in the study are smaller and more localized than widespread landslides generated by a great earthquake directly on the Cascadia margin itself, but these underwater landslides generated by distant earthquakes may still be capable of generating local tsunamis and damaging underwater communications cables, according to the study authors.

A happy accident

The discovery that the Cascadia landslides were caused by a distant earthquake was an accident, Johnson said.

Scientists had placed ocean bottom seismometers off the Washington-Oregon coast to detect tiny earthquakes, and also to measure ocean temperature and pressure at the same locations. When Johnson found out about the seismometers at a scientific meeting, he decided to analyze the data the instruments had collected to see if he could detect evidence of thermal processes affecting seafloor temperatures, such as methane hydrate formation.

Johnson and his team combined the seafloor temperature data with pressure and seismometer data and video stills of sediment-covered instruments from 2011-2015. Small variations in temperature occurred for several months, followed by large spikes in temperature over periods of two to 10 days. They concluded these temperature changes could only be signs of multiple underwater landslides shedding sediment into the water. Following the magnitude-8.6 Indian Ocean earthquake on April 11, 2012, these landslides caused warm, shallow water to become denser and flow downhill along the Cascadia margin, producing the temperature spikes.

The Cascadia margin runs for more than 1,100 kilometers (684 miles) off the Pacific Northwest coastline from north to south, encompassing the area above the underlying subduction zone, where one tectonic plate slides beneath another.

Steep underwater slopes hundreds of feet high line the margin. Sediment accumulates on top of these steep slopes. When the seismic waves from the Indian Ocean earthquake reached these steep underwater slopes, they jostled the thick sediments piled on top of the slopes. This shaking caused areas of sediment to break off and slide down the slope, creating a cascade of landslides all along the slope. The sediment did not fall all at once so the landslides occurred for up to four months after the earthquake, according to the authors.

The steeper-than-average slopes off the Washington-Oregon coast, such as those of Quinault Canyon, which descends 1,420 meters (4,660 feet) at up to 40-degree angles, make the area particularly susceptible to submarine landslides. The thick sediment deposits also amplify seismic waves from distant earthquakes. Small sediment particles move like ripples suspended in fluid, amplifying the waves.

“So these things are all primed, ready to collapse, if there is an earthquake somewhere,” Johnson said.

Disrupting the sediment record

The new finding could have implications for tsunamis in the region and may complicate estimations of earthquake risk, according to the study’s authors.

Subduction zones like the Cascadia margin are at risk for tsunamis. As one tectonic plate slides under the other, they become locked together, storing energy. When the plates finally slip, they release that energy and cause an earthquake. Not only does this sudden motion give any water above the fault a huge shove upward, it also lowers the coastal land next to it as the overlying plate flattens out, making the shoreline more vulnerable to the waves of displaced water.

Submarine landslides increase this risk. They also push ocean water out of the way when they occur, which could spark a tsunami on the local coast, Johnson said.

Scientists also use underwater sediment records to estimate earthquake risk. By drilling sediment cores offshore and calculating the age between landslide deposits, scientists can create a timeline of past earthquakes used to predict how often an earthquake might occur in the region in the future and how intense it could be.
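
In outline, the arithmetic behind such timelines is simple: the mean recurrence interval is the age span of the dated deposits divided by the number of intervals between them. A toy Python sketch with invented ages:

```python
# Hypothetical landslide-deposit ages from one core, in years before present.
deposit_ages = [300, 850, 1400, 1900, 2500]

intervals = [later - earlier for earlier, later in zip(deposit_ages, deposit_ages[1:])]
mean_interval = sum(intervals) / len(intervals)
print(f"mean recurrence interval: {mean_interval:.0f} years")  # (2500 - 300) / 4 = 550
```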

An earthquake off the Pacific Northwest would create submarine landslides all along the coast from British Columbia to California. But the new study found that a distant earthquake might only result in landslides up to 20 or 30 kilometers (12 to 19 miles) wide. That means when scientists take sediment cores to determine how frequently local earthquakes occur, they may not be able to tell whether the sediment layers arrived on the seafloor as a result of a distant or a local earthquake.

Johnson says more core sampling over a wider range of the margin would be needed to determine a more accurate reading of the geologic record and to update estimates of earthquake risk.

Putnisite: New Mineral Discovered in Australia

The new mineral is named putnisite after Drs Christine and Andrew Putnis from the University of Münster, Germany, for their outstanding contributions to mineralogy.

Putnisite occurs as isolated pseudocubic crystals, up to 0.5 mm in diameter, and is associated with quartz and a near amorphous Cr silicate.

It is translucent, with a pink streak and vitreous lustre. It is brittle and shows one excellent and two good cleavages parallel to {100}, {010} and {001}.

“What defines a mineral is its chemistry and crystallography. By x-raying a single crystal of mineral you are able to determine its crystal structure and this, in conjunction with chemical analysis, tells you everything you need to know about the mineral,” explained Dr Elliott, who, along with colleagues, described putnisite in the Mineralogical Magazine.

“Most minerals belong to a family or small group of related minerals, or if they aren’t related to other minerals they often are to a synthetic compound – but putnisite is completely unique and unrelated to anything.”

Putnisite combines the elements strontium, calcium, chromium, sulfur, carbon, oxygen and hydrogen:

SrCa₄Cr₈³⁺(CO₃)₈SO₄(OH)₁₆·25H₂O

The mineral has a Mohs hardness of 1.5–2, a measured density of 2.20 g/cm3 and a calculated density of 2.23 g/cm3. It was discovered during prospecting by a mining company in Western Australia.

“Nature seems to be far cleverer at dreaming up new chemicals than any researcher in a laboratory,” Dr Elliott concluded.

New map highlights sinking Louisiana coast

The map, published in GSA Today, provides what researchers and policy makers have long considered the “holy grail” as they look for solutions to the coastal wetland loss crisis, the researchers said.

“The novel aspect of this study is that it provides a map that shows subsidence rates as observed at the land surface,” said Torbjörn Törnqvist, professor of geology and chair of the Department of Earth and Environmental Sciences at Tulane University.

“This sets it apart from previous attempts to map subsidence rates.”

Jaap Nienhuis, a postdoctoral fellow in earth and environmental sciences, is the lead author of the study. He said that while the present-day subsidence rate averages about nine millimeters (just over a third of an inch) per year, there is plenty of variability among specific sites along the coast.

“This information will be valuable for policy decisions about coastal restoration, such as planning of large sediment diversions that are intended to make portions of Louisiana’s coast more sustainable,” Nienhuis said.

The researchers used data obtained by a network of hundreds of instruments known as surface-elevation tables, scattered along the Louisiana coast. These instruments enabled the Tulane team to calculate subsidence rates in the shallow subsurface (down to about 10 meters, or 33 feet), where most of the subsidence happens. Because this large network of surface-elevation tables was installed after Hurricane Katrina, determining subsidence rates with this method has only recently become possible.

20 Ancient Supervolcanoes Discovered in Utah and Nevada

Supervolcanoes are giant volcanoes that blast out more than 1,000 cubic km of volcanic material when they erupt. They are different from the more familiar stratovolcanoes because they aren’t as obvious to the naked eye and affect enormous areas.

“Supervolcanoes, as we’ve seen, are some of Earth’s largest volcanic edifices, and yet they don’t stand as high cones. At the heart of a supervolcano, instead, is a large collapse. Those collapses in supervolcanoes occur with the eruption and form enormous holes in the ground in plateaus, known as calderas,” said Dr Eric Christiansen of Brigham Young University, who is a co-author of two papers published in the journal Geosphere (paper 1 & paper 2).

The newly discovered supervolcanoes aren’t active today, but 30 million years ago more than 5,500 cubic km of magma erupted during a one-week period near a place called Wah Wah Springs.

“In southern Utah, deposits from this single eruption are 4 km thick. Imagine the devastation – it would have been catastrophic to anything living within hundreds of miles,” Dr Christiansen said.

Dinosaurs were already extinct during this time period, but what many people don’t know is that 25-30 million years ago, North America was home to rhinos, camels, tortoises and even palm trees.

Dr Christiansen and colleagues measured the thickness of the pyroclastic flow deposits. They used radiometric dating, X-ray fluorescence spectrometry, and chemical analysis of the minerals to verify that the volcanic ash was all from the same ancient super-eruption.

The scientists found that the Wah Wah Springs eruption buried a vast region extending from central Utah to central Nevada, and from Fillmore in the north to Cedar City in the south. They even found traces of ash as far away as Nebraska.

The team also found evidence of 15 super-eruptions and 20 large calderas – the so-called Indian Peak-Caliente caldera complex.

These supervolcanoes have diameters up to 60 km and are filled with intracaldera tuff and breccias. They have been hidden in plain sight for millions of years despite their enormous size.

“The ravages of erosion and later deformation have largely erased them from the landscape, but our careful work has revealed their details. The sheer magnitude of this required years of work and involvement of dozens of students in putting this story together,” Dr Christiansen said.

‘Bulges’ in volcanoes could be used to predict eruptions

Using a technique called ‘seismic noise interferometry’ combined with geophysical measurements, the researchers measured the energy moving through a volcano. They found that there is a good correlation between the speed at which the energy travelled and the amount of bulging and shrinking observed in the rock. The technique could be used to predict more accurately when a volcano will erupt. Their results are reported in the journal Science Advances.

Data was collected by the US Geological Survey across Kīlauea in Hawaii, a very active volcano with a lake of bubbling lava just beneath its summit. During a four-year period, the researchers used sensors to measure relative changes in the velocity of seismic waves moving through the volcano over time. They then compared their results with a second set of data which measured tiny changes in the angle of the volcano over the same time period.

As Kīlauea is such an active volcano, it is constantly bulging and shrinking as pressure in the magma chamber beneath the summit increases and decreases. Kīlauea’s current eruption started in 1983, and it spews and sputters lava almost constantly. Earlier this year, a large part of the volcano fell away, opening up a huge ‘waterfall’ of lava that poured into the ocean below. Due to this high volume of activity, Kīlauea is also one of the most-studied volcanoes on Earth.

The Cambridge researchers used seismic noise to detect what was controlling Kīlauea’s movement. Seismic noise is a persistent low-level vibration in the Earth, caused by everything from earthquakes to waves in the ocean, and it often reads on a single sensor as random noise. But by pairing sensors together, the researchers were able to observe energy passing between the two, allowing them to isolate the seismic noise coming from the volcano.
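
A minimal sketch of that cross-correlation idea, with synthetic records and an assumed sampling rate (none of these numbers come from the study):

```python
import numpy as np

# Two sensors record the same ambient noise field, one with a propagation
# delay; cross-correlating the records recovers the travel time between
# them. Tracking that travel time over months reveals velocity changes.
rng = np.random.default_rng(1)
fs = 100.0                          # sampling rate in Hz (assumed)
noise = rng.normal(size=10_000)     # shared ambient noise field

delay = 250                         # i.e. 2.5 s of travel time
sensor_a = noise[:-delay]
sensor_b = noise[delay:]            # the same field, shifted by the delay

xcorr = np.correlate(sensor_a, sensor_b, mode="full")
lag = xcorr.argmax() - (len(sensor_b) - 1)
print(f"recovered travel time: {abs(lag) / fs:.2f} s")
```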

“We were interested in how the energy travelling between the sensors changes, whether it’s getting faster or slower,” said Clare Donaldson, a PhD student in Cambridge’s Department of Earth Sciences, and the paper’s first author. “We want to know whether the seismic velocity changes reflect increasing pressure in the volcano, as volcanoes bulge out before an eruption. This is crucial for eruption forecasting.”

One to two kilometres below Kīlauea’s lava lake, there is a reservoir of magma. As the amount of magma changes in this underground reservoir, the whole summit of the volcano bulges and shrinks. At the same time, the seismic velocity changes. As the magma chamber fills up, it causes an increase in pressure, which leads to cracks closing in the surrounding rock and producing faster seismic waves — and vice versa.

“This is the first time that we’ve been able to compare seismic noise with deformation over such a long period, and the strong correlation between the two shows that this could be a new way of predicting volcanic eruptions,” said Donaldson.

Volcano seismology has traditionally measured small earthquakes at volcanoes. When magma moves underground, it often sets off tiny earthquakes, as it cracks its way through solid rock. Detecting these earthquakes is therefore very useful for eruption prediction. But sometimes magma can flow silently, through pre-existing pathways, and no earthquakes may occur. This new technique will still detect the changes caused by the magma flow.

Seismic noise occurs continuously, and is sensitive to changes that would otherwise have been missed. The researchers anticipate that this new research will allow the method to be used at the hundreds of active volcanoes around the world.

Diamonds and Chocolate: New Volcanic Process Discovered

The team studied how a process called ‘fluidized spray granulation’ can occur during kimberlite eruptions to produce well-rounded particles containing fragments from the Earth’s mantle, most notably diamonds. This physical process is similar to the gas injection and spraying process used to form smooth coatings on confectionery, and layered and delayed-release coatings in the manufacture of pharmaceuticals and fertilizers.

Kimberlite volcanoes are the primary source of diamonds on Earth, and are formed by gas-rich magmas from mantle depths of over 150 km. Kimberlite volcanism involves high-intensity explosive eruptions, forming diverging pipes or ‘diatremes’, which can be several hundred meters wide and several kilometers deep. A conspicuous and previously mysterious feature of these pipes is ‘pelletal lapilli’ – well-rounded, magma-coated fragments of rock consisting of an inner ‘seed’ particle with a complex rim, thought to represent quenched magma.

These pelletal lapilli form by spray granulation when kimberlite magma intrudes into earlier volcaniclastic infill close to the diatreme root zone. Intensive degassing produces a gas jet in which the seed particles are simultaneously fluidized and coated by a spray of low-viscosity melt.

In kimberlites, the occurrence of pelletal lapilli is linked to diamond grade (carats per tonne), size and quality, and therefore has economic as well as academic significance.

“The origin of pelletal lapilli is important for understanding how magmatic pyroclasts are transported to the surface during explosive eruptions, offering fundamental new insights into eruption dynamics and constraints on vent conditions, notably gas velocity,” said Dr. Thomas Gernon, a lecturer in earth science at the University of Southampton and a lead author of the study published in the journal Nature Communications.

“The ability to tightly constrain gas velocities is significant, as it enables estimation of the maximum diamond size transported in the flow. Gas fluidisation and magma-coating processes are also likely to affect the diamond surface properties.”

The scientists studied two of the world’s largest diamond mines in South Africa and Lesotho. In the Letseng pipe in Lesotho, pelletal lapilli have been found in association with concentrations of large diamonds (up to 215 carat), which individually can fetch up to tens of millions of pounds. Knowledge of flow dynamics will inform models of mineral transport, and ultimately could improve resource assessments.

“This multidisciplinary research, incorporating Earth sciences, chemical and mechanical engineering, provides evidence for fluidized granulation in natural systems which will be of considerable interest to engineers and chemical, pharmaceutical and food scientists who use this process routinely. The scale and complexity of this granulation process is unique, as it has not previously been recognized in natural systems,” Dr. Gernon concluded.

Early Earth’s Atmosphere was Similar to Present-Day One

Scientists have used the oldest minerals on Earth to reconstruct early Earth’s atmospheric conditions. The findings, published in the journal Nature, show that the atmosphere of early Earth was dominated by the oxygen-rich compounds found within our current atmosphere – including water, carbon dioxide, and sulfur dioxide.

“We can now say with some certainty that many scientists studying the origins of life on Earth simply picked the wrong atmosphere,” said Bruce Watson, Professor of Science at Rensselaer Polytechnic Institute. The findings rest on the widely held theory that Earth’s atmosphere was formed by gases released from volcanic activity on its surface. Today, as during the earliest days of the Earth, magma flowing from deep in the Earth contains dissolved gases. When that magma nears the surface, those gases are released into the surrounding air.

“Most scientists would argue that this outgassing from magma was the main input to the atmosphere,” Watson said. “To understand the nature of the atmosphere ‘in the beginning,’ we needed to determine what gas species were in the magmas supplying the atmosphere.”

As magma approaches the Earth’s surface, it either erupts or stalls in the crust, where it interacts with surrounding rocks, cools, and crystallizes into solid rock. These frozen magmas and the elements they contain can be literal milestones in the history of Earth. One important milestone is zircon. The scientists sought to determine the oxidation levels of the magmas that formed ancient zircons in order to quantify, for the first time, how oxidized the gases released early in Earth’s history were. “By determining the oxidation state of the magmas that created zircon, we could then determine the types of gases that would eventually make their way into the atmosphere,” said Dustin Trail, lead author of the study.

To do this, the researchers recreated the formation of zircons in the laboratory at different oxidation levels, literally creating lava in the lab. This procedure yielded an oxidation gauge that could then be compared with the natural zircons.

During this process they looked for concentrations of a rare-earth metal called cerium in the zircons. Cerium is an important oxidation gauge because it occurs in two oxidation states, one more oxidized than the other. The higher the concentration of the more oxidized type of cerium in a zircon, the more oxidized the atmosphere likely was after its formation.

The calibrations reveal an atmosphere with an oxidation state closer to present-day conditions.

Why the Sumatra earthquake was so severe

The magnitude-9.2 earthquake, and the tsunami it triggered, devastated coastal communities around the Indian Ocean, killing over 250,000 people.

Research into the earthquake was conducted during a scientific ocean drilling expedition to the region in 2016, as part of the International Ocean Discovery Program (IODP), led by scientists from the University of Southampton and Colorado School of Mines.

During the expedition on board the research vessel JOIDES Resolution, the researchers sampled, for the first time, sediments and rocks from the oceanic tectonic plate which feeds the Sumatra subduction zone. A subduction zone is an area where two of the Earth’s tectonic plates converge, one sliding beneath the other, generating the largest earthquakes on Earth, many with destructive tsunamis.

Findings of a study on sediment samples found far below the seabed are now detailed in a new paper led by Dr Andre Hüpers of the MARUM-Center for Marine Environmental Sciences at University of Bremen – published in the journal Science.

Expedition co-leader Professor Lisa McNeill, of the University of Southampton, says: “The 2004 Indian Ocean tsunami was triggered by an unusually strong earthquake with an extensive rupture area. We wanted to find out what caused such a large earthquake and tsunami and what this might mean for other regions with similar geological properties.”

The scientists concentrated their research on a process of dehydration of sedimentary minerals deep below the ground, which usually occurs within the subduction zone. It is believed this dehydration process, which is influenced by the temperature and composition of the sediments, normally controls the location and extent of slip between the plates, and therefore the severity of an earthquake.

In Sumatra, the team used the latest advances in ocean drilling to extract samples from 1.5 km below the seabed. They then took measurements of sediment composition and chemical, thermal, and physical properties and ran simulations to calculate how the sediments and rock would behave once they had travelled 250 km to the east towards the subduction zone, and been buried significantly deeper, reaching higher temperatures.

The researchers found that the sediments on the ocean floor, eroded from the Himalayan mountain range and Tibetan Plateau and transported thousands of kilometres by rivers on land and in the ocean, are thick enough to reach high temperatures and to drive the dehydration process to completion before the sediments reach the subduction zone. This creates unusually strong material, allowing earthquake slip at the subduction fault surface to shallower depths and over a larger fault area – causing the exceptionally strong earthquake seen in 2004.

Dr Andre Hüpers of the University of Bremen says: “Our findings explain the extent of the large rupture area, which was a feature of the 2004 earthquake, and suggest that other subduction zones with thick and hotter sediment and rocks, could also experience this phenomenon.

“This will be particularly important for subduction zones with limited or no historic subduction earthquakes, where the hazard potential is not well known. Subduction zone earthquakes typically have a return time of a few hundred to a thousand years. Therefore our knowledge of previous earthquakes in some subduction zones can be very limited.”

Similar subduction zones exist in the Caribbean (Lesser Antilles), off Iran and Pakistan (Makran), and off western USA and Canada (Cascadia). The team will continue research on the samples and data obtained from the Sumatra drilling expedition over the next few years, including laboratory experiments and further numerical simulations, and they will use their results to assess the potential future hazards both in Sumatra and at these comparable subduction zones.

Hubble Spies Cosmic ‘David and Goliath’ in Gravitational Dance

NGC 1512 resides in the southern constellation of Horologium and is roughly 39 million light-years away from Earth.

Also known as ESO 250-4, LEDA 14391 and IRAS 04022-4329, the galaxy spans 70,000 light years, nearly as much as our own Milky Way Galaxy.

It is classified as a barred spiral galaxy, named after the bar composed of stars, gas and dust slicing through its center.

The bar acts as a cosmic funnel, channeling the raw materials required for star formation from the outer ring into the heart of the galaxy.

This pipeline of gas and dust in NGC 1512 fuels intense star birth in the bright, blue, shimmering inner disc known as a circumnuclear starburst ring, which spans 2,400 light-years.

Both the bar and the starburst ring are thought to be at least in part the result of the cosmic scuffle between the two galaxies — a merger that has been going on for 400 million years.

NGC 1512 is also home to a second, more serene, star-forming region in its outer ring.

This ring is dotted with dozens of HII regions, where large swathes of hydrogen gas are subject to intense radiation from nearby, newly formed stars. This radiation causes the gas to glow and creates the bright knots of light seen throughout the ring.

Remarkably, NGC 1512 extends even further than we can see in this image — beyond the outer ring — displaying malformed, tendril-like spiral arms enveloping the elliptical galaxy NGC 1510 (also known as ESO 250-3, LEDA 14375 and IRAS F04018-4332).

These huge arms are thought to be warped by strong gravitational interactions with NGC 1510 and the accretion of material from it.

But these interactions are not just affecting NGC 1512; they have also taken their toll on the smaller of the pair.

The constant tidal tugging from its neighbor has swirled up the gas and dust in NGC 1510 and kick-started star formation that is even more intense than in NGC 1512.

This causes the galaxy to glow with the blue hue that is indicative of hot new stars.

NGC 1510 is not the only galaxy to have experienced the massive gravitational tidal forces of NGC 1512.

Observations made in 2015 showed that the outer regions of the spiral arms of NGC 1512 were indeed once part of a separate, older galaxy. This galaxy was ripped apart and absorbed by NGC 1512, just as it is doing now to NGC 1510.

Mars-to-Earth-Mass Planet May Lurk in Outer Solar System: Planet 10

In the paper, University of Arizona researchers Dr. Kathryn Volk and Professor Renu Malhotra present compelling evidence of a yet-to-be-discovered planetary body with a mass somewhere between that of Mars and Earth.

The mysterious mass has given away its presence — for now — only by controlling the orbital planes of a population of space rocks known as Kuiper Belt objects (KBOs).

While most KBOs — debris left over from the formation of the Solar System — orbit the Sun with orbital tilts that average out to what planetary scientists call the invariable plane of the Solar System, the most distant KBOs do not.

Their average plane, the authors discovered, is tilted away from the invariable plane by about 8 degrees.

In other words, something unknown is warping the average orbital plane of the outer Solar System.

“The most likely explanation for our results is that there is some unseen mass. According to our calculations, something as massive as Mars would be needed to cause the warp that we measured,” said Dr. Volk, lead author of the study.

The team analyzed the tilt angles of the orbital planes of more than 600 KBOs in order to determine the common direction about which these orbital planes all precess (precession refers to the slow change or ‘wobble’ in the orientation of a rotating object).

“KBOs operate in an analogous way to spinning tops,” Prof. Malhotra explained.

“Imagine you have lots and lots of fast-spinning tops, and you give each one a slight nudge. If you then take a snapshot of them, you will find that their spin axes will be at different orientations, but on average, they will be pointing to the local gravitational field of Earth.”

“We expect each of the KBOs’ orbital tilt angle to be at a different orientation, but on average, they will be pointing perpendicular to the plane determined by the sun and the big planets.”

“If one were to think of the average orbital plane of objects in the outer Solar System as a sheet, it should be quite flat past 50 AU,” Dr. Volk said.

“But going further out from 50 to 80 AU, we found that the average plane actually warps away from the invariable plane. There is a range of uncertainties for the measured warp, but there is not more than 1-2% chance that this warp is merely a statistical fluke of the limited observational sample of KBOs.”

“In other words, the effect is most likely a real signal rather than a statistical fluke.”
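
The underlying measurement can be sketched in a few lines: represent each orbit by the unit vector normal to its plane, average those vectors, and read off the tilt of the mean. The inclinations and node longitudes below are invented placeholders, not the surveyed KBO sample:

```python
import numpy as np

# (inclination, longitude of ascending node) pairs in degrees,
# hypothetical stand-ins for the >600 KBO orbits in the study.
incl_node_deg = [(8.0, 40.0), (12.0, 55.0), (5.0, 300.0), (15.0, 80.0)]

i, node = np.radians(np.array(incl_node_deg)).T
poles = np.column_stack((np.sin(i) * np.sin(node),
                         -np.sin(i) * np.cos(node),
                         np.cos(i)))            # orbit-normal unit vectors

mean_pole = poles.mean(axis=0)
mean_pole /= np.linalg.norm(mean_pole)

# Tilt of the average orbital plane away from the reference plane.
print(f"tilt: {np.degrees(np.arccos(mean_pole[2])):.1f} degrees")
```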

According to the calculations, an object with the mass of Mars orbiting roughly 60 AU from the Sun on an orbit tilted by about 8 degrees (to the average plane of the known planets) has sufficient gravitational influence to warp the orbital plane of the distant KBOs within about 10 AU to either side.

“The observed distant KBOs are concentrated in a ring about 30 AU wide and would feel the gravity of such a planetary mass object over time, so hypothesizing one planetary mass to cause the observed warp is not unreasonable across that distance,” Dr. Volk said.

This rules out the possibility that the postulated object in this case could be Planet Nine, predicted to be much more massive (about 10 Earth masses) and much farther out at 500 to 700 AU.

“That is too far away to influence these KBOs. It certainly has to be much closer than 100 AU to substantially affect the KBOs in that range,” Dr. Volk said.

Because a planet, by definition, has to have cleared its orbit of minor planets such as KBOs, the authors refer to the hypothetical mass as a planetary mass object.

The data also do not rule out the possibility that the warp could result from more than one planetary mass object.

“So why haven’t we found it yet? Most likely because we haven’t yet searched the entire sky for distant solar system objects,” the scientists said.

The most likely place a planetary mass object could be hiding would be in the galactic plane, an area so densely packed with stars that solar system surveys tend to avoid it.

“The chance that we have not found such an object of the right brightness and distance simply because of the limitations of the surveys is estimated to be about 30%,” Dr. Volk said.

A possible alternative to an unseen object is that the plane of the outer KBOs was ruffled by a star that buzzed the Solar System in recent history.

“A passing star would draw all the ‘spinning tops’ in one direction,” Prof. Malhotra said.

“Once the star is gone, all the KBOs will go back to precessing around their previous plane.”

“That would have required an extremely close passage at about 100 AU, and the warp would be erased within 10 million years, so we don’t consider this a likely scenario.”

Hubble Space Telescope Sees ‘Hidden Galaxy’

IC 342 is a spiral galaxy located in the constellation Camelopardalis, approximately 8.9 million light-years away.

The galaxy was discovered in 1895 by British astronomer William Frederick Denning.

Also known as UGC 2847, LEDA 13826 and Caldwell 5, it is one of the brightest galaxies in the IC 342/Maffei group of galaxies.

Although IC 342 is bright, it sits near the equator of the Milky Way’s galactic disc, where the sky is thick with glowing cosmic gas, bright stars, and dark, obscuring dust.

In order for astronomers to see its intricate spiral structure, they must gaze through a large amount of material contained within the Milky Way.

As a result, IC 342 is relatively difficult to spot and image, giving rise to its intriguing nickname: the ‘Hidden Galaxy.’

In the Catalogue of Named Galaxies, IC 342 is called Stellivelatus Camelopardalis (star-veiled galaxy).

The galaxy is very active, as indicated by the range of colors visible in this Hubble image, depicting the very central region of the galaxy.

A beautiful mixture of hot, blue star-forming regions, redder, cooler regions of gas, and dark lanes of opaque dust can be seen, all swirling together around a bright core.

In 2003, astronomers confirmed this core to be a specific type of central region known as an HII nucleus — a name that indicates the presence of ionized hydrogen — that is likely to be creating many hot new stars.

The color image of IC 342 was made from separate exposures taken in the visible and UV regions of the spectrum with Hubble’s Wide Field Camera 3 (WFC3).

Five filters were used to sample various wavelengths. The color results from assigning different hues to each monochromatic image associated with an individual filter.

Sun is Solar-Type Star After All

The Sun’s activity, including sun-spot activity, levels of radiation and ejection of material, varies on an 11-year cycle, driven by changes in its magnetic field.

Other nearby solar-type stars have their own cycles, but the Sun does not seem to match their behavior.

Understanding the Sun’s cycle is one of the biggest outstanding problems in solar physics.

In a series of simulations of stellar magnetic fields, University of Montreal researcher Antoine Strugarek and co-authors found that the magnetic cycle of the Sun depends on its rotation rate and luminosity.

“This relationship can be expressed in terms of the so-called Rossby number,” they said.

“What we showed is that the Sun’s magnetic cycle is inversely proportional to this number.”

The researchers then compared the results of their simulations with available observations of cyclic activity in a sample of nearby solar-type stars.

They found that the cycle periods of the Sun and other solar-type stars all follow the same relationship with the Rossby number.

“The magnetic field of a star draws its energy from the flows of matter which animate its interior,” the authors explained.

“Thanks to the simulations, we now know that the rotation of the star influences the efficiency of the transfer of energy between these turbulent flows and the magnetic field.”

“The same phenomenon also determines the cycle period, which has been shown to decrease with the Rossby number, a dimensionless number widely used in geophysical fluid dynamics that measures the influence of rotation (the Coriolis force).”
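
As a toy illustration of that scaling (the constant and turnover time below are placeholders, not fitted values from the simulations):

```python
# P_cyc proportional to 1 / Ro, with Ro = P_rot / tau_c: the rotation
# period divided by a convective turnover time.
def cycle_period(p_rot, tau_c, k=10.0):
    rossby = p_rot / tau_c
    return k / rossby

# Under this scaling, a star rotating twice as fast as another (half the
# rotation period, hence half the Rossby number) has twice the cycle period.
print(cycle_period(25.0, 30.0), cycle_period(12.5, 30.0))
```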

“The discovery of such a scaling law for the period of the star magnetic cycle from self-consistent turbulent 3D simulations is a world first.”

The results demonstrate that the Sun is indeed a solar-type star, and also advance scientists’ understanding of how stars generate their magnetic fields.

“These results provide a new theoretical interpretation of stellar magnetic cycles, and place the Sun as the cornerstone of our understanding of the dynamics of stars,” the scientists said.

“By characterizing the magnetism of solar-type stars, our simulations will in particular help prepare the science return of the upcoming European missions Solar Orbiter and PLATO.”

HD 3167d: New Super-Earth Discovered around Nearby Star

HD 3167 is a K0-type dwarf star, also designated as EPIC 220383386 and 2MASS J00345752+0422531.

The star has a radius and a mass each roughly 86% of the Sun’s, and is approximately 8 billion years old.

At a distance of just 149 light-years, HD 3167 is one of the closest and brightest stars hosting multiple transiting planets.

In September 2016, Vanderburg et al. announced they had spotted two small, short-period planets — HD 3167b with a period of 0.95 days and HD 3167c with a period of 29.8 days — in orbit around the star.

Assisted by several telescopes and instruments, Christiansen et al. confirmed the existence of HD 3167b and HD 3167c and discovered an additional planet, bringing the number of known planets in the system to three.

The newfound planet, named HD 3167d, is a super-Earth with a mass 6.9 times that of our home planet.

It whips around its parent star in just 8.5 days, on an orbit that lies between those of the two previously known planets.

The astronomers also precisely measured radii, masses, and densities of HD 3167b and HD 3167c.

With a mass of 5 Earth masses and a radius approximately 1.7 times that of Earth, HD 3167b is a hot super-Earth with a likely rocky composition.

“The measured mass and radius of HD 3167b indicate a bulk density of 5.6 g/cm3, consistent with a predominantly rocky composition but potentially having a thin envelope of hydrogen/helium or other low-density volatiles,” Dr. Christiansen and co-authors said.

HD 3167c is a warm sub-Neptune planet, with a mass 9.8 times and a radius 3 times that of Earth.

“The resulting bulk density of HD 3167c is 1.97 g/cm3. The mass and radius can be explained by a wide range of compositions, all of which include low-density volatiles such as water and hydrogen/helium,” the scientists said.
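
Both quoted densities follow directly from the stated masses and radii. A quick back-of-the-envelope check, assuming only Earth’s mean density of about 5.51 g/cm3 (a standard value, not from the article):

```python
EARTH_DENSITY = 5.51  # g/cm^3, Earth's mean density

def bulk_density(mass_earths, radius_earths):
    """Planet bulk density from mass and radius in Earth units."""
    return EARTH_DENSITY * mass_earths / radius_earths**3

print(round(bulk_density(5.0, 1.7), 2))   # HD 3167b: ~5.6 g/cm^3
print(round(bulk_density(9.8, 3.0), 2))   # HD 3167c: ~2.0 g/cm^3
```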

“HD 3167 promises to be a fruitful system for further study and a preview of the many exciting systems expected from the upcoming NASA TESS mission,” they concluded.

In a separate, independent study of the HD 3167 system, Gandolfi et al also reached the conclusion that HD 3167b is a rocky super-Earth and that HD 3167c is a low-density mini-Neptune.

KELT-11b: ‘Puffy’ Gas Giant Found 320 Light-Years Away

KELT-11b is an extreme version of a gas planet, like the Solar System’s Jupiter or Saturn, but it orbits very close to its host star, completing one orbit in less than five days.

“This planet is highly inflated, so that while it’s only a fifth as massive as Jupiter, it is nearly 40% larger, making it about as dense as styrofoam,” said Dr. Pepper, an astronomer and assistant professor of physics at Lehigh University.

“We were very surprised by the amazingly low density of this planet. It’s extremely big for its mass.”

The planet’s host star, KELT-11, is extremely bright, allowing precise measurement of the planet’s atmospheric properties and making it ‘an excellent testbed for measuring the atmospheres of other planets.’

Also known as HD 93396, the host star has started using up its nuclear fuel and is evolving into a red giant; the planet will be engulfed by the star and will not survive the next hundred million years.

KELT-11b was first spotted by the Kilodegree Extremely Little Telescope (KELT) survey, and is described in a study published in the Astronomical Journal (arXiv.org preprint).

The planet has the third-lowest density of any exoplanet yet discovered with a precisely measured mass and radius.

“KELT-11b is one of the most inflated planets known, with an exceptionally large atmospheric scale height (1,717 miles, or 2,763 km), and an associated size of the expected atmospheric transmission signal of 5.6%,” Dr. Pepper and co-authors said.

“These attributes make the KELT-11 system a valuable target for follow-up and atmospheric characterization, and it promises to become one of the benchmark systems for the study of inflated exoplanets.”
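
For context, the quoted scale height follows from the standard relation H = kT / (μ·m_H·g). A rough illustration with assumed inputs for a hot, low-gravity gas giant; none of these input values are quoted in the article:

```python
K_B = 1.381e-23   # Boltzmann constant, J/K
M_H = 1.673e-27   # hydrogen atom mass, kg

def scale_height_km(temp_k, mu, gravity_ms2):
    """Atmospheric scale height H = kT / (mu * m_H * g), in km."""
    return K_B * temp_k / (mu * M_H * gravity_ms2) / 1e3

# Assumed: T ~ 1700 K, a hydrogen/helium atmosphere (mu ~ 2.3),
# and weak surface gravity (~2.2 m/s^2).
print(f"H ~ {scale_height_km(1700.0, 2.3, 2.2):,.0f} km")   # ~2,800 km
```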

Supermassive Black Hole Found 35,000 Light-Years from Home

Though several other suspected runaway black holes have been seen elsewhere, none has so far been confirmed.

Now an international team of researchers has detected a supermassive black hole — with a mass of one billion times the Sun’s — being kicked out of its host galaxy.

“We estimate that it took the equivalent energy of 100 million supernovae exploding simultaneously to jettison the black hole,” said co-author Dr. Stefano Bianchi, from the Roma Tre University, Italy.

The images taken by the NASA/ESA Hubble Space Telescope provided the first clue that 3C 186, located 8 billion light-years away, was unusual.

“When I first saw this, I thought we were seeing something very peculiar,” said lead author Dr. Marco Chiaberge, from the Space Telescope Science Institute and Johns Hopkins University.

“When we combined observations from Hubble, Chandra X-ray Observatory, and the Sloan Digital Sky Survey, it all pointed towards the same scenario. The amount of data we collected, from X-rays to ultraviolet to near-infrared light, is definitely larger than for any of the other candidate rogue black holes.”

Hubble images of 3C 186 revealed a bright quasar, the energetic signature of an active black hole, located far from the galactic core.

“Black holes reside in the centers of galaxies, so it’s unusual to see a quasar not in the center,” Dr. Chiaberge said.

The astronomers calculated that the black hole has already traveled about 35,000 light-years from 3C 186’s center, which is more than the distance between the Sun and the center of the Milky Way.

And it continues its flight at a speed of 4.7 million mph (7.5 million km per hour). At this speed, the black hole could travel from Earth to the Moon in about three minutes.
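
That travel time is a quick unit conversion; the Earth-Moon distance below is the standard average value, not a figure from the article:

```python
SPEED_KM_PER_HOUR = 7.5e6    # reported speed of the ejected black hole
EARTH_MOON_KM = 384_400      # average Earth-Moon distance

minutes = EARTH_MOON_KM / SPEED_KM_PER_HOUR * 60
print(f"Earth to Moon in about {minutes:.1f} minutes")   # ~3.1 minutes
```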

Although other scenarios to explain the observations cannot be excluded, the most plausible source of the propulsive energy is that this supermassive black hole was given a kick by gravitational waves unleashed by the merger of two massive black holes at the centre of its host galaxy.

This theory is supported by arc-shaped tidal tails identified by the team, produced by a gravitational tug between two colliding galaxies.

According to the team’s theory, 1-2 billion years ago two galaxies — each with a central supermassive black hole — merged.

The black holes whirled around each other at the center of the newly-formed elliptical galaxy, creating gravitational waves that were flung out like water from a lawn sprinkler.

As the two black holes did not have the same mass and rotation rate, they emitted gravitational waves more strongly along one direction.

When the two black holes finally merged, the anisotropic emission of gravitational waves generated a kick that shot the resulting black hole out of the galactic center.

“If our theory is correct, the observations provide strong evidence that supermassive black holes can actually merge,” Dr. Bianchi said.

“There is already evidence of black hole collisions for stellar-mass black holes, but the process regulating supermassive black holes is more complex and not yet completely understood.”

The astronomers now want to secure further observation time with Hubble, in combination with the Atacama Large Millimeter/submillimeter Array and other facilities, to more accurately measure the speed of the black hole and its surrounding gas disc, which may yield further insights into the nature of this rare object.

Astronomers discover rare fossil relic of early Milky Way

Terzan 5, 19,000 light-years from Earth in the constellation of Sagittarius (the Archer) and in the direction of the galactic centre, has been classified as a globular cluster for the forty-odd years since its detection. Now, an Italian-led team of astronomers has discovered that Terzan 5 is like no other globular cluster known. The team scoured data from the Multi-conjugate Adaptive Optics Demonstrator, installed at the Very Large Telescope, as well as from a suite of other ground-based and space telescopes. They found compelling evidence that there are two distinct kinds of stars in Terzan 5 which not only differ in the elements they contain, but have an age gap of roughly 7 billion years.

The ages of the two populations indicate that the star formation process in Terzan 5 was not continuous, but was dominated by two distinct bursts of star formation. “This requires the Terzan 5 ancestor to have large amounts of gas for a second generation of stars and to be quite massive. At least 100 million times the mass of the Sun,” explains Davide Massari, co-author of the study, from INAF, Italy, and the University of Groningen, Netherlands.

Its unusual properties make Terzan 5 the ideal candidate for a living fossil from the early days of the Milky Way. Current theories on galaxy formation assume that vast clumps of gas and stars interacted to form the primordial bulge of the Milky Way, merging and dissolving in the process.

“We think that some remnants of these gaseous clumps could remain relatively undisrupted and keep existing embedded within the galaxy,” explains Francesco Ferraro from the University of Bologna, Italy, and lead author of the study. “Such galactic fossils allow astronomers to reconstruct an important piece of the history of our Milky Way.”

While the properties of Terzan 5 are uncommon for a globular cluster, they are very similar to the stellar population which can be found in the galactic bulge, the tightly packed central region of the Milky Way. These similarities could make Terzan 5 a fossilised relic of galaxy formation, representing one of the earliest building blocks of the Milky Way.

This assumption is strengthened by the original mass of Terzan 5 necessary to create two stellar populations: a mass similar to the huge clumps which are assumed to have formed the bulge during galaxy assembly around 12 billion years ago. Somehow Terzan 5 has managed to survive being disrupted for billions of years, and has been preserved as a remnant of the distant past of the Milky Way.

“Some characteristics of Terzan 5 resemble those detected in the giant clumps we see in star-forming galaxies at high-redshift, suggesting that similar assembling processes occurred in the local and in the distant Universe at the epoch of galaxy formation,” continues Ferraro.

Hence, this discovery paves the way for a better and more complete understanding of galaxy assembly. “Terzan 5 could represent an intriguing link between the local and the distant Universe, a surviving witness of the Galactic bulge assembly process,” explains Ferraro while commenting on the importance of the discovery. The research presents a possible route for astronomers to unravel the mysteries of galaxy formation, and offers an unrivaled view into the complicated history of the Milky Way.

Astronomers Create ‘Image’ of Dark Matter Bridge that Connects Galaxies

“For decades, researchers have been predicting the existence of dark-matter filaments between galaxies that act like a web-like superstructure connecting galaxies together,” said Prof. Mike Hudson, from the Department of Physics and Astronomy at the University of Waterloo.

“This image moves us beyond predictions to something we can see and measure.”

Prof. Hudson and his colleague, Seth Epps, used a technique called weak gravitational lensing, an effect that causes the images of distant galaxies to warp slightly under the influence of an unseen mass such as a planet, a black hole, or in this case, dark matter.

The effect was measured in images from a multi-year sky survey at the Canada-France-Hawaii Telescope.

The researchers combined lensing images from more than 23,000 pairs of galaxies located approximately 4.5 billion light-years away to create a composite image that shows the presence of dark matter between the two galaxies.
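
A toy version of the stacking idea, with invented numbers rather than the authors’ pipeline: the distortion around any single pair is buried in noise, but averaging N aligned maps suppresses the noise by roughly 1/sqrt(N), so a faint shared signal emerges:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, size = 23_000, 64

truth = np.zeros((size, size))
truth[size // 2, 16:48] = 0.01       # faint "bridge" between the pair

# One noisy map versus the average of n_pairs aligned noisy maps.
single = truth + rng.normal(0.0, 1.0, (size, size))
stacked = truth + rng.normal(0.0, 1.0 / np.sqrt(n_pairs), (size, size))

print("bridge pixel, single map :", round(single[size // 2, 32], 3))
print("bridge pixel, stacked map:", round(stacked[size // 2, 32], 3))
```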

The results show the dark matter filament bridge is strongest between systems less than 40 million light-years apart.

“By using this technique, we’re not only able to see that these dark matter filaments in the Universe exist, we’re able to see the extent to which these filaments connect galaxies together,” Epps said.

NASA science flights study effect of summer melt on Greenland ice sheet

Operation IceBridge, NASA’s airborne survey of polar ice, is flying in Greenland for the second time this year, to observe the impact of the summer melt season on the ice sheet. The IceBridge flights, which began on August 27 and will continue until September 16, are mostly repeats of lines that the team flew in early May, so that scientists can observe changes in ice elevation between the spring and late summer. “Earlier in IceBridge’s history, we only surveyed the elevation of these glaciers once a year,” said Joe MacGregor, IceBridge’s deputy project scientist and a glaciologist with NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “But these glaciers experience the climate year-round. Now we’re starting to complete the picture of what happens to them as the year goes on, especially after most of the summer melting has already occurred, so we can measure their cumulative response to that melt.”

The image above, taken during a high-priority flight that IceBridge carried out on Aug. 29, shows Helheim Glacier, with its characteristic wishbone-shaped channels, as seen from about 20,000 feet. Helheim is one of Greenland’s largest and fastest-melting glaciers. During the first week of the summer land ice campaign, IceBridge has also flown over glaciers along Greenland’s northwest, southeast and southwest coasts, as well as over lines that the Ice, Cloud, and land Elevation Satellite (ICESat) flew over Greenland during its 2003-2009 period of operations, to observe how ice elevation has evolved since then. Future flights will cover critical areas in central and southern Greenland, including the world’s fastest glacier, Jakobshavn Isbræ.

For this short, end-of-summer campaign, the IceBridge scientists are flying aboard an HU-25A Guardian aircraft from NASA’s Langley Research Center in Hampton, Virginia. The Guardian is a version of an early-generation Falcon 20 business jet, modified for service with the US Coast Guard and later acquired by NASA. The plane carries a laser instrument that measures changes in ice elevation, a high-resolution camera system to image the surface, and an instrument to infer the surface temperature. Due to the Guardian’s limited range, the flights will be shorter (about 3.5 hours) than the 8-hour missions flown during IceBridge’s spring Arctic campaign, but the team expects to fly twice a day whenever possible.

1,900-Year-Old Roman Gold Coin Found in Eastern Galilee

“Laurie demonstrated exemplary civic behavior by handing this important coin over to the Israel Antiquities Authority (IAA),” said Dr. Nir Distelfeld, an inspector with the IAA Unit for the Prevention of Antiquities Robbery.

“This is an extraordinarily remarkable and surprising discovery. I believe that soon, thanks to Laurie, the public will be able to enjoy this rare find.”

According to archaeologists at the IAA, the find is so rare that only one other such coin is known to exist.

“This coin, minted in Rome in 107 CE, is rare on a global level,” explained IAA numismatist Dr. Danny Syon.

“On the reverse we have the symbols of the Roman legions next to the name of the Roman emperor Trajan, and on the obverse — instead of an image of Trajan, as was usually the case — there is the portrait of the emperor ‘Augustus Deified’ (Divus Augustus).”

This coin is part of a series of coins minted by the emperor Trajan (reigned 98 – 117 CE) as a tribute to the emperors that preceded him.

“The coin may reflect the presence of the Roman army in the region some 2,000 years ago – possibly in the context of activity against Bar Kokhba supporters in the Galilee – but it is very difficult to determine that on the basis of a single coin,” added Dr. Donald T. Ariel, head curator of the IAA Coin Department.

“Historical sources describing the period note that some Roman soldiers were paid a high salary of three gold coins, the equivalent of 75 silver coins, each payday. Because of their high monetary value, soldiers were unable to purchase goods in the market with gold coins, as the merchants could not provide change for them.”

“Whilst the bronze and silver coins of Trajan are common in the country, his gold coins are extremely rare,” Dr. Ariel said.

First Humans Arrived in North America 116,000 Years Earlier than Thought: Evidence from Cerutti Mastodon Site

The Cerutti Mastodon site was discovered by San Diego Natural History Museum researchers in November 1992 during routine paleontological mitigation work.

This site preserves 131,000-year-old hammerstones, stone anvils, and fragmentary remains — bones, tusks and molars — of a mastodon (Mammut americanum) that show evidence of modification by early humans.

An analysis of these finds ‘substantially revises the timing of arrival of Homo into the Americas,’ according to a paper published this week in the journal Nature.

“This discovery is rewriting our understanding of when humans reached the New World,” said Dr. Judy Gradwohl, president and chief executive officer of the San Diego Natural History Museum.

Until recently, the oldest records of human activity in North America generally accepted by archaeologists were about 15,000 years old.

But the fossils from the Cerutti Mastodon site — named in recognition of San Diego Natural History Museum paleontologist Richard Cerutti, who discovered the site and led the excavation — were found embedded in fine-grained sediments that had been deposited much earlier, during a period long before humans were thought to have arrived on the continent.

“When we first discovered the site, there was strong physical evidence that placed humans alongside extinct Ice Age megafauna,” said lead co-author Dr. Tom Deméré, curator of paleontology at the San Diego Natural History Museum.

“Since the original discovery, dating technology has advanced to enable us to confirm with further certainty that early humans were here much earlier than commonly accepted.”

Since its initial discovery, the Cerutti Mastodon site has been the subject of research by top scientists to date the fossils accurately and evaluate microscopic damage on bones and rocks that authors now consider indicative of human activity.

In 2014, U.S. Geological Survey geologist Dr. James Paces used state-of-the-art radiometric dating methods to determine that the mastodon bones were 130,700 years old, with a conservative error of plus or minus 9,400 years.

“The distributions of natural uranium and its decay products both within and among these bone specimens show remarkably reliable behavior, allowing us to derive an age that is well within the wheelhouse of the dating system,” Dr. Paces said.
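
For a sense of how such dates arise, here is a deliberately simplified uranium-series age calculation; it assumes no initial thorium-230 and a hypothetical measured activity ratio, and real bone dating (as in the study) must additionally model uranium uptake over time:

```python
import math

TH230_HALF_LIFE_YR = 75_600            # approximate 230Th half-life
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR

def u_series_age(th230_u234_activity_ratio):
    """Age from 230Th/234U ingrowth, assuming no initial 230Th and
    234U/238U in secular equilibrium (a textbook simplification)."""
    return -math.log(1.0 - th230_u234_activity_ratio) / LAMBDA_230

# A hypothetical activity ratio of 0.70 gives ~131,000 years, near the
# reported age of the mastodon bones.
print(f"{u_series_age(0.70):,.0f} years")
```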

The finding poses many more questions than it answers.

“Who were the hominins at work at this site? We don’t know. No hominin fossil remains were found. Our own species, Homo sapiens, has been around for about 200,000 years and arrived in China sometime before 100,000 years ago,” the researchers said.

“Modern humans shared the planet with other hominin species that are now extinct (such as Neanderthals) until about 40,000 years ago. If a human-like species was living in North America 130,000 years ago, it could be that modern humans didn’t get here first.”

“How did these early hominins get here? We don’t know. Hominins could have crossed the Bering Land Bridge linking modern-day Siberia with Alaska prior to 130,000 years ago before it was submerged by rising sea levels,” they said.

“For some time prior to 130,000 years ago, the Earth was in a glacial period during which water was locked up on land in great ice sheets. As a consequence, sea levels dropped dramatically, exposing land that lies underwater today.”

“If hominins had not already crossed the land bridge prior to 130,000 years, they may have used some form of watercraft to cross the newly formed Bering Strait as glacial ice receded and sea levels rose.”

“We now know that hominins had invented some type of watercraft before 100,000 years ago in Southeast Asia and the Mediterranean Sea area. Hominins using watercraft could have followed the coast of Asia north and crossed the short distance to Alaska and then followed the west coast of North America south to present-day California.”

“Although we are not certain if the earliest hominins arrived in North America on foot or by watercraft, recognition of the antiquity of the Cerutti Mastodon site will stimulate research in much older deposits that may someday reveal clues to help solve this mystery.”

The authors also conducted experiments with the bones of large modern mammals, including elephants, to determine what it takes to break the bones with large hammerstones and to analyze the distinctive breakage patterns that result.

“It’s this sort of work that has established how fractures like this can be made,” said co-author Daniel Fisher, a professor in the Department of Earth and Environmental Sciences and in the Department of Ecology and Evolutionary Biology at the University of Michigan, and director of the University of Michigan Museum of Paleontology.

“And based on decades of experience seeing sites with evidence of human activity, and also a great deal of work on modern material trying to replicate the patterns of fractures that we see, I really know of no other way that the material of the Cerutti Mastodon site could have been produced than through human activity.”

“There’s no doubt in my mind this is an archaeological site,” added lead co-author Dr. Steve Holen, director of research at the Center for American Paleolithic Research.

“The bones and several teeth show clear signs of having been deliberately broken by humans with manual dexterity and experiential knowledge. This breakage pattern has also been observed at mammoth fossil sites in Kansas and Nebraska, where alternative explanations such as geological forces or gnawing by carnivores have been ruled out.”

The scientists also created 3D digital models of bone and stone specimens from the Cerutti Mastodon site.

“The models were immensely helpful in interpreting and illustrating these objects,” said co-author Dr. Adam Rountrey, collection manager at the University of Michigan Museum of Paleontology.

“We were able to put together virtual refits that allow exploration of how the multiple fragments from one hammerstone fit back together.”

“The 3D models helped us understand what we were looking at and to communicate the information much more effectively.”

New Evidence Pushes Back Aboriginal Occupation of Australia to 65,000 Years Ago

The discovery was made by a team of archaeologists and dating specialists led by University of Queensland researcher Dr. Chris Clarkson.

The team found new evidence at the Madjedbebe rockshelter in the World Heritage-listed Kakadu National Park, near Jabiru in northern Australia.

“This latest evidence pushes back the initial human occupation estimate by some 10,000 years or more, and supports a longer Aboriginal connection with the continent than previously thought,” said team member Dr. Lee Arnold, from the University of Adelaide.

“Intriguingly, the new occupation age implies at least 20,000 years of overlap between humans and the megafauna in the far north of Australia.”

“The evidence suggests that the causes of Australian megafauna extinction may be much more complex than is often assumed.”

“The new date makes a difference,” said team member Dr. Ben Marwick, from the University of Washington.

“Against the backdrop of theories that place humans in Australia anywhere between 47,000 and 60,000 years ago, the concept of earlier settlement calls into question the argument that humans caused the extinction of unique megafauna such as giant kangaroos, wombats and tortoises more than 45,000 years ago.”

“Previously it was thought that humans arrived and hunted them out or disturbed their habits, leading to extinction, but these dates confirm that people arrived so far before that they wouldn’t be the central cause of the death of megafauna.”

“It shifts the idea of humans charging into the landscape and killing off the megafauna. It moves toward a vision of humans moving in and coexisting, which is quite a different view of human evolution.”

The Madjedbebe rockshelter, also known as Malakunanja II, has been excavated four times since the 1970s.

More than 10,000 artifacts were revealed at the site, including the oldest ground-edge stone axe technology in the world and the oldest known seed-grinding tools in Australia.

“The site contains the oldest ground-edge stone axe technology in the world, the oldest known seed-grinding tools in Australia and evidence of finely made stone points which may have served as spear tips,” Dr. Clarkson said.

“Most striking of all, in a region known for its spectacular rock art, are the huge quantities of ground ochre and evidence of ochre processing found at the site, from the older layer continuing through to the present.”

“Aboriginal people lived at Madjedbebe at the same time as extinct species of giant animals were roaming around Australia, and the tiny species of primitive human, Homo floresiensis, was living on the island of Flores in eastern Indonesia,” the researchers said.

The dig also discovered an upper jaw fragment of a thylacine (also known as the Tasmanian tiger) coated in red pigment, giving insight to the central role ochre played in local customs at the time.

“Our team has rewritten Australian and, indeed, world history by proving that the colonization of Australia and the first major sea voyage in human history occurred at least 65,000 years ago,” said team member S. Anna Florin, a PhD student at the University of Queensland.

“This incredible discovery, and its many implications, is the work of many archaeologists, using small pieces of evidence such as stone tools and grains of sand to understand human behavior many millennia ago.”

“The new evidence sets a new minimum age for the arrival of humans in Australia, the dispersal of modern humans out of Africa, and the subsequent interactions of modern humans with Neanderthals and Denisovans,” the scientists said.

Oldest Evidence for Plant Processing in Pottery Found

The team, led by University of Bristol Professor Richard Evershed, studied unglazed pottery dating from more than 10,000 years ago, from two sites in the Libyan Sahara.

“We reveal the earliest direct evidence for plant processing in pottery globally, from the sites of Takarkori and Uan Afuda in the Libyan Sahara, dated to 8200–6400 BC,” the scientists said.

“Characteristic carbon number distributions and δ13C values for plant wax-derived n-alkanes and alkanoic acids indicate sustained and systematic processing of C3/C4 grasses and aquatic plants, gathered from the savannahs and lakes in the Early to Middle Holocene green Sahara.”
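To give a feel for how such carbon-isotope signatures separate plant groups, here is a minimal, hypothetical sketch. The δ13C cutoffs below are rough literature ranges for C3 versus C4 plants, not the thresholds used in this study, and the residue values are invented.

```python
# Illustrative only: sorting lipid residues into plant groups by delta-13C.
# The cutoffs are approximate literature ranges for C3 vs. C4 plants,
# not the values used by the Bristol team.
def classify_plant_source(delta13c_permil):
    """Guess a photosynthetic pathway from a delta-13C value (per mil)."""
    if delta13c_permil <= -20.0:   # C3 plants cluster roughly from -35 to -21
        return "C3 plant"
    if delta13c_permil >= -16.0:   # C4 grasses cluster roughly from -16 to -9
        return "C4 plant"
    return "mixed / indeterminate"

# Hypothetical residue measurements:
for value in (-29.5, -12.8, -18.0):
    print(value, "->", classify_plant_source(value))
```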

Ancient cooking would initially have involved fires or pits; the invention of ceramic cooking vessels led to an expansion of food preparation techniques. Cooking would have allowed the consumption of previously unpalatable or even toxic foodstuffs and would also have increased the availability of new energy sources.

Remarkably, until now, evidence of cooking plants in early prehistoric cooking vessels has been lacking.

Prof. Evershed and co-authors detected lipid residues of foodstuffs preserved within the fabric of unglazed cooking pots.

Over half of the vessels studied were found to have been used for processing plants based on the identification of diagnostic plant oil and wax compounds.

“The finding of extensive plant wax and oil residues in early prehistoric pottery provides us with an entirely different picture of the way early pottery was used in the Sahara compared to other regions in the ancient world,” Prof. Evershed said.

“Our new evidence fits beautifully with the theories proposing very different patterns of plant and animal domestication in Africa and Europe/Eurasia.”

Detailed analyses of the molecular and stable isotope compositions showed a broad range of plants were processed, including grains, the leafy parts of terrestrial plants, and most unusually, aquatic plants.

The interpretations of the chemical signatures obtained from the pottery are supported by abundant plant remains preserved in remarkable condition due to the arid desert environment at the sites.

The plant chemical signatures from the Saharan pottery show that the processing of plants was practiced for over 4,000 years, indicating the importance of plants to the ancient people of the prehistoric Sahara.

“Until now, the importance of plants in prehistoric diets has been under-recognized but this work clearly demonstrates the importance of plants as a reliable dietary resource,” said study lead author Dr. Julie Dunne, also from the University of Bristol, UK.

“These findings also emphasize the sophistication of these early hunter-gatherers in their utilization of a broad range of plant types, and the ability to boil them for long periods of time in newly invented ceramic vessels would have significantly increased the range of plants prehistoric people could eat.”

Neanderthals Capable of Incorporating Symbolic Objects into Their Culture, Discovery Suggests

The rock was collected more than a century ago from the Krapina Neanderthal site and was just recently analyzed by experts from the Croatian Natural History Museum, the Croatian Academy of Science and Arts and the University of Kansas.

“At the Croatian site of Krapina dated to about 130,000 years ago, among many items, a split limestone rock was excavated by Dragutin Gorjanović-Kramberger between 1899 and 1905,” the researchers said.

“Of more than 1,000 lithic items at Krapina, none resemble this specimen and we propose it was collected and not further processed by the Neanderthals because of its aesthetic attributes.”

“If we were walking and picked up this rock, we would have taken it home. It is an interesting rock,” added David Frayer, a professor emeritus of anthropology at the University of Kansas.

In 2015, Prof. Frayer and colleagues published an article about a set of eagle talons from the same Neanderthal site that included cut marks and were fashioned into a piece of jewelry.

“People have often defined Neanderthals as being devoid of any kind of aesthetic feelings, and yet we know that at this site they collected eagle talons and they collected this rock,” said Prof. Frayer, corresponding author of a paper on the discovery published in the November/December 2016 issue of the journal Comptes Rendus Palevol.

“At other sites, researchers have found they collected shells and used pigments on shells.”

The limestone rock from the Krapina site is 9.19 cm long, 6.61 cm wide, with a maximum thickness of 1.69 cm and minimum thickness of 3.1 mm.

“The specimen is a brownish, flat piece of micritic limestone (mudstone) bearing an array of dendritic forms. The brownish color comes from the surface patina, whereas a fresh break exposes the original grayish color of the rock,” Prof. Frayer and co-authors said.

“The split rock shows some irregular surfaces, but no cortex is present. Both faces are smooth and the edges are unmodified. We could find no striking platform or other areas of preparation on the rock’s edge.”

“From this, we assume the cobble was not broken apart by a Neanderthal, but was picked up in its present condition.”

“The fact that it wasn’t modified, to us, it meant that it was brought there for a purpose other than being used as a tool,” Prof. Frayer explained.

There was a small triangular flake that fits with the rock, but the break appeared to be fresh and likely happened well after the specimen was deposited into the sediments of the Krapina site. Perhaps it occurred during transport or storage after the excavation around 1900.

“The dendritic forms, ‘stem’ and veins are visually appealing and have an aesthetic quality, often appreciated by today’s rock hunters,” the scientists said.

“No one would ever suggest that Neanderthals knew the source and the meaning of the dendritic forms in rock, but there is no reason to think they would not recognize their distinctiveness and the visual appeal of them. Presumably, they considered the rock unusual and worthy of keeping.”

The team suspects a Neanderthal collected the rock from a site a few miles north of the Krapina site where there were known outcrops of biopelmicritic grey limestone. Either the Neanderthal found it there or the Krapinica stream transported it closer to the site.

“The discovery is likely minor compared with other discoveries, such as more modern humans 25,000 years ago making cave paintings in France. However, it adds to a body of evidence that Neanderthals were capable of assigning symbolic significance to objects and went to the effort of collecting them,” Prof. Frayer said.

The discovery could also provide more clues as to how modern humans developed these traits.

“It adds to the number of other recent studies about Neanderthals doing things that are thought to be unique to modern Homo sapiens. We contend they had a curiosity and symbolic-like capacities typical of modern humans,” Prof. Frayer said.

4,000-Year-Old ‘Multi-Dolmen’ Found in Israel

The newly-discovered megalithic stone structure is a unique, monumental, multi-chambered dolmen: a central chamber roofed by a gigantic engraved capstone and surrounded by a giant tumulus (stone heap) into which at least four additional sub-chambers were built.

This is the first reported complex ‘multi-dolmen’ in the Levant and one of the largest dolmens ever reported from the region, according to a team of archaeologists led by Tel Hai College Professor Gonen Sharon.

“The dolmen tumulus, built around a central chamber, is 20 m in diameter. The total weight of the basalt stones used is estimated at 400 tons,” Prof. Sharon and colleagues explained.

“The four sub-chambers built into the tumulus are each medium-sized (1 x 3 m) and elongated, and covered by one to three massive basalt capstones.”

“In the upper part of the tumulus is the central chamber. The chamber is rectangular, 3 m long by 2 m wide, and the ceiling is 1.7 m above the present-day surface prior to excavation.”

“Topping the central chamber of the dolmen is a single giant, basalt capstone. The stone, irregular in shape, measures over 4 m in length, 3.5 m in width and more than 1.2 m in thickness, with an estimated weight of over 50 tons.”
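As a rough plausibility check on that 50-ton figure, multiplying the quoted minimum dimensions by a typical basalt density of about 2,900 kg per cubic meter (an assumed handbook value, not one given by the researchers) lands in the same range:

```python
# Bounding-box mass estimate for the capstone. Dimensions are the paper's
# stated minimums; the basalt density is a typical handbook value, assumed here.
length_m, width_m, thickness_m = 4.0, 3.5, 1.2
basalt_density_kg_per_m3 = 2900

volume_m3 = length_m * width_m * thickness_m              # 16.8 cubic meters
mass_tonnes = volume_m3 * basalt_density_kg_per_m3 / 1000

print(f"estimated mass: {mass_tonnes:.0f} tonnes")        # roughly 49 tonnes
```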

The archaeologists also discovered rock art engravings on the ceiling of the central chamber.

“This is the first art ever documented in a dolmen in the Middle East,” said team member Dr. Uri Berger, an archaeologist with the Israel Antiquities Authority.

“The ceiling panel, located at the southeast quarter of the chamber ceiling, includes 14 clearly identified schematic, engraved elements,” the researchers said.

“The forms represent variations on a single motif, comprising a vertical line with a downturned arc attached to its upper part.”

“The length of the central line differs between elements as does the curvature of the arc. The average size of the elements is about 25 cm.”

“The forms were made by pecking into the face of the basalt rock. The inner surface of the engraved lines is relatively uniform and could have been made by a chisel or a hammer/axe of either metal (bronze) or stone, such as flint.”

Middle Stone Age Humans Used Innovative Heating Techniques to Make Tools

South African Middle Stone Age humans deliberately heated silcrete, a hard, fine-grained local rock, so that they could more easily obtain blades from the core material. The blades were then shaped into crescents and glued into arrowheads.

“This is the first time anywhere that bows and arrows were used. This would have had a major effect on hunting practices as both spears and bow and arrow could be used to hunt animals,” said study senior author Prof. Christopher Henshilwood, from the University of the Witwatersrand in Johannesburg, South Africa, and the University of Bergen in Norway.

The extensive heat treatment enabled early humans to produce tougher, harder tools — the first evidence of a transformative technology. However, the exact role of this important development in the Middle Stone Age technological repertoire was not previously clear.

Prof. Henshilwood and his colleagues addressed this issue using a new non-destructive approach to analyze the heating technique behind silcrete artifacts at Klipdrift Shelter, a recently discovered Middle Stone Age site on the southern Cape of South Africa. For comparison, they also examined unheated and heat-treated silcrete samples from 31 locations around the site.

The researchers noted intentional and extensive heat treatment of over 90% of the silcrete, highlighting the important role this played in silcrete blade production.

The heating step appeared to occur early in the blade production process, at the reduction stage where stone was flaked away to shape the silcrete core.

The hardening, toughening effect of the heating step would therefore have impacted all subsequent stages of silcrete tool production and use.

“Heating was applied, non-randomly, at an early stage of core exploitation and was sometimes preceded by an initial knapping stage,” said co-author Dr. Karen van Niekerk, from the University of Bergen.

“As a consequence, the whole operational chain, from core preparation to blade production and tool manufacturing, benefited from the advantages of the heating process.”

The scientists suggest that silcrete heat treatment at the Klipdrift Shelter may provide the first direct evidence of the intentional and extensive use of fire applied to a whole lithic chain of production.

Along with other fire-based activities, intentional heat treatment was a major asset for Middle Stone Age humans in southern Africa, and has no known contemporaneous equivalent elsewhere.

“The advantages of the heating process are multiple: by reducing the material’s fracture toughness and increasing its hardness, less force was needed to detach blades after heat treatment, resulting in better control and precision during percussion,” Prof. Henshilwood explained.

“This heating process marks the emergence of fire engineering as a response to a variety of needs that largely transcend hominin basic subsistence requirements, although it did not require highly specialized technical skills and was likely performed as part of on-site domestic activities,” he said.

The research was published this week in the journal PLoS ONE.

Swedish Researchers Find Submerged Mesolithic Settlement

“The submerged landscape at Haväng is unique, as the excellent preservation of both natural and cultural objects and the longevity of the site are rarely seen in other submerged Mesolithic sites,” Prof. Hammarlund and co-authors said.

Changes in the sea level have allowed the findings to be preserved deep below the sea surface.

“Organic-rich sediment ridges with abundant wood remains and archaeological artifacts extend 3 km from the modern coast to depths of at least 20 m below the present sea level,” the scientists said.

“This exceptionally well-preserved material gives evidence of a lagoonal environment surrounded by a pine-dominated forest, which was inhabited by Mesolithic humans during two low-stand phases of the Baltic Basin, from the Yoldia Sea stage to the Initial Littorina Sea stage (11,700-8,000 years ago).”

Prof. Hammarlund and his colleagues from Lund University and the National Historical Museums drilled into the seabed and radiocarbon dated the core, as well as examined pollen and diatoms.
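For readers curious how a radiocarbon measurement becomes an age, a minimal sketch follows. It uses the standard Libby mean life of 8,033 years; the sample fraction is hypothetical, and the result is an uncalibrated radiocarbon age, which labs then convert to calendar years.

```python
import math

LIBBY_MEAN_LIFE_YEARS = 8033  # standard constant for conventional 14C ages

def radiocarbon_age(fraction_modern):
    """Uncalibrated age from the measured fraction of modern carbon-14."""
    return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_modern)

# A hypothetical sample retaining 32% of its original carbon-14:
print(f"{radiocarbon_age(0.32):.0f} radiocarbon years BP")  # about 9,150
```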

They also produced a bathymetrical map that reveals depth variations.

“As geologists, we want to recreate this area and understand how it looked. Was it warm or cold? How did the environment change over time?” said Lund University researcher Anton Hansson, a team member and first author of a paper reporting the results in the journal Quaternary International.

The team also made several spectacular finds, including 9,000-year-old stationary fish traps and a pick axe made out of elk antlers.

“Bones and antlers of red deer with slaughter marks and a unique pick axe made of elk antler provide evidence of human exploitation of terrestrial resources,” the scientists said.

“Of great interest are the remains of various kinds of stationary fishing equipment made of hazel wood; weirs, wattles, posts and fences,” they added.

“In total, eight fishing constructions have been found at Haväng, and two of these constructions have been radiocarbon dated to 9,200-8,400 years ago.”

“These fishing constructions are the oldest known of their kind in northern Europe, and demonstrate exploitation of riverine fish at Haväng.”

“If you want to fully understand how humans dispersed from Africa, and their way of life, we also have to find all their settlements,” Hansson said.

“Quite a few of these are currently underwater, since the sea level is higher today than during the last glaciation. Humans have always preferred coastal sites.”

Turkeys Were Part of Native American Life Centuries before First Thanksgiving

Researchers knew that turkeys had been a part of Native American life long before the first Thanksgiving in 1621. Their feathers were used on arrows, in headdresses and clothing. The meat was used for food. Their bones were used for tools including scratchers used in ritual ceremonies.

There are even representations of turkeys in artifacts from the time. An intricately engraved marine shell pendant found at a site in central Tennessee shows two turkeys facing each other.

But the new research, reported in the Journal of Archaeological Science: Reports, indicates turkeys were more than just a casual part of life for Native Americans of that era.

As they examined turkey skeletons from archaeological sites in Tennessee, the authors came across a few curiosities that led them to believe Native Americans were actively managing these fowl.

For one, the groupings researchers worked on had more male turkeys than a typical flock.

“In a typical flock of turkeys, there are usually more females. But in the flock we examined, we found more remains of males. That would only happen if it were designed that way,” said lead author Dr. Tanya Peres, from the Department of Anthropology at Florida State University.

“It appears Native Americans were favoring males for their bones for tools.”

“And they certainly would have favored males for their feathers. They tend to be much brighter and more colorful than the females’. Female feathers tend to be a dull grey or brown to blend in with their surroundings, since the hens have to sit on the nest and protect the chicks.”
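The “designed that way” inference is, at heart, a sex-ratio test. Here is a hedged sketch of that logic, with entirely hypothetical counts and an assumed 40 percent expected share of males in a wild flock; neither number comes from the study.

```python
from scipy.stats import binomtest

# Hypothetical numbers, for illustration only: 38 of 50 turkey skeletons male,
# against an assumed wild expectation of 40% males.
observed_males, total_birds = 38, 50
expected_male_share = 0.40

result = binomtest(observed_males, total_birds, expected_male_share,
                   alternative="greater")
print(f"p-value = {result.pvalue:.2e}")  # a tiny p-value points to deliberate selection
```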

The other immediately noticeable trait that stood out to the team was that these ancient American gobblers were big boned — much larger than today’s average wild turkey. That could be the result of them being purposefully cared for or fed diets of corn.

“The skeletons of the archaeological turkeys we examined were quite robust in comparison to the skeletons of our modern comparatives,” said co-author Kelly Ledford, a graduate student at Florida State University.

“The domestication process typically results in an overall increase in the size of the animal so we knew this was a research avenue we needed to explore.”

8,000-Year-Old Female Figurine Found at Çatalhöyük

The ancient figurine measures 6.7 inches (17 cm) long and weighs 2.2 pounds (1 kg), and was carved from a marmoreal stone.

The statuette was unearthed earlier this year by an international team led by Stanford University archaeologist Professor Ian Hodder.

The remarkable object is “considered unique due to its intact form and fine craftsmanship,” according to a statement from the Turkish Ministry of Culture and Tourism.

The archaeologists said the figurine was probably used in rituals.

The site of Çatalhöyük where the figurine was found is one of the largest and best preserved Neolithic sites in the world.

It is located southeast of the modern Turkish city of Konya, about 90 miles from Mount Hasan.

The settlement was founded around 7500 BC and was inhabited for more than two millennia.

The site was discovered in the early 1960s by British archaeologist James Mellaart.

Excavations at the site produced a huge number of artifacts and ancient structures including a 10-foot-wide wall painting of the town and two peaks, sometimes referred to as the world’s oldest map.

Human DNA carries hints of unknown extinct ancestor

The human family tree may be even more tangled than scientists had thought. A new computer analysis has turned up evidence pointing to some long-lost human cousins. That evidence was found hiding in the DNA — the genetic instruction book — of some people alive today.

Ryan Bohlender led the new study. This statistical geneticist works at the University of Texas MD Anderson Cancer Center in Houston. He and his colleagues pored over DNA from people living in Melanesia. This part of the South Pacific includes Papua New Guinea and nearby islands. And here, the new study finds, people inherited genes that appear to come from an unknown extinct hominid. (Hominids are a group of species that includes humans and our ancient relatives.)

Bohlender reported his team’s new conclusions in Canada on October 20, at the annual meeting of the American Society of Human Genetics.

Earlier research had shown that the ancestors of Melanesians mated with two groups of extinct hominids. One group, Neandertals, left behind fossils in Europe and Asia. The other group had been distant cousins of the Neandertals. Known as Denisovans (Deh-NEES-oh-vuns), this group is known only from DNA found in a finger bone and a couple of teeth. Their fossils came from a cave in Siberia.

The new study found some mysterious DNA in Melanesians that was very old. But it didn’t seem to come from either Neandertals or Denisovans. The mystery DNA likely comes from a third hominid species. Scientists have not yet found fossil evidence for such a species, Bohlender notes.

Other scientists also have dug into the DNA of present-day people and found traces of unknown species. In 2012, another group of researchers suggested some Africans carry heirloom DNA from an unknown extinct hominid. Again, no bones with that particular DNA have yet turned up.

Accounting for missing DNA

After ancestors of humans began migrating out of Africa, they mixed with Neandertals in Europe and Asia. As a result, people whose ancestors came from outside of Africa still carry a small amount of Neandertal DNA. Bohlender and his colleagues calculate that Europeans and Chinese people carry a similar amount of this Neandertal ancestry — some 2.8 percent.

Europeans have no sign of Denisovan ancestry. People in China do, but the amount is very small, just 0.1 percent, according to calculations by Bohlender’s group. But 2.74 percent of the DNA in people in Papua New Guinea comes from Neandertals. And Bohlender estimates the Denisovan DNA in Melanesians at about 1.1 percent. That’s far less than the 3 to 6 percent Denisovan DNA that other researchers have reported in the Melanesians.

But Melanesians carry other bits of old DNA too. These bits may have come from a relative of Neandertals and Denisovans. If true, that would make a third species of extinct hominid that has mixed with human ancestors.

“Human history is a lot more complicated than we thought it was,” Bohlender now says.

Another team recently concluded much the same thing. Eske Willerslev, who led this group, is an evolutionary geneticist. He works at the Natural History Museum of Denmark in Copenhagen. Willerslev’s group examined DNA from 83 aboriginal Australians. They also probed the DNA of 25 people from native populations in Papua New Guinea. DNA similar to Denisovans showed up in the study volunteers. But that DNA didn’t match that of the Denisovans precisely. In fact, it may be from another extinct hominid. That’s what these researchers reported in the October 13 Nature. “Who this group is we don’t know,” Willerslev points out.

Fossils from other extinct human relatives have been found in the South Pacific. Scientists have not yet gotten DNA out of those bones. It’s possible that the not-quite-Denisovan DNA that Willerslev’s team found comes from one of those unknown hominids. If researchers can get DNA from the old bones, they’ll try to match it to what’s in people today.

A possible confounder

It’s hard to know for sure whether such a third group mated with the ancestors of the South Pacific islanders. One reason is that within any group, DNA will vary from person to person. Some groups have quite a bit of this genetic diversity.

Researchers don’t know much about Denisovans, says Mattias Jakobsson. He’s an evolutionary geneticist at Uppsala University in Sweden. But it’s possible that Denisovans formed distant communities. These might have been separated from each other for a long time. If true, those groups could have developed many genetic differences. If there were enough such changes to their DNA, this might have fooled scientists into thinking the groups were different species.

Still, Jakobsson says he wouldn’t be surprised if other groups of extinct hominids mixed with humans. Modern and ancient humans, he observes, “have met many times and had many children together.”

Scientists discover itch-busting cells

A fly tickling the hair on your arm can spark a maddening itch. Now, scientists have spotted nerve cells in mice that curb this light twiddling sensation. If humans have similar itch-busters, the results could lead to treatments for the millions of people who suffer from chronic, unstoppable itch.

For many of these people, there are currently no good treatments. “This is a major problem,” says Gil Yosipovitch. He directs the Temple University Itch Center in Philadelphia, Pa., and was not involved in the new study.

All touch sensations — including itch — start at the skin. In recent years, scientists have started to learn how nerve cells carry itchy signals from there to the spinal cord and on up to the brain. Often the original itch signal is triggered by chemicals, such as those that mosquitoes inject. For another sort of itch, all that’s needed is a light touch on the skin. That’s called a mechanical itch. The fact that this type of itch exists is no surprise, Yosipovitch says. Mechanical itch may help explain why clothes or even dry, scaly skin can be so itchy. It’s also why you might feel a mosquito crawling on your skin before it takes a bite.

The new study, published October 30 in Science, involved mice with one key defect. Scientists had altered their genes so that they lacked a certain type of nerve cell in their spinal cords. Without those cells, the mice “have the urge to scratch all the time,” says study coauthor Qiufu Ma. He’s a neuroscientist at Harvard Medical School in Boston, Mass. Even with nothing specifically causing them to itch, the mice scratched so often that they developed bald patches on their skin. A light touch from a thin wire, which causes a mechanical itch, led these mice to scratch themselves more than regular mice did. Yet the itchy mice responded normally to pain and to itch-causing chemicals.

That suggests some nerve cells detect only mechanical itch, Ma concludes. If a light touch taps into the itch accelerator, then these spinal-cord nerve cells act as the brakes, says Martyn Goulding, who also worked on the study. He’s a neuroscientist at the Salk Institute for Biological Studies in La Jolla, Calif. Removing these nerve cells lets the itch signal more easily get through to the brain, he says.

The discovery of itch-blocking nerve cells opens up new possibilities for understanding itch, Goulding says. Now, scientists can start to piece together the rest of the nerve pathway that detects mechanical itch on the skin and then carries that signal to the brain. These nerve cells produce a chemical signal called neuropeptide Y. Future experiments can test what role that chemical plays in how a mechanical itch makes itself known, he says.

It makes sense that human skin would develop the ability to detect an itchy tickle, Goulding says. An insect crawling on your skin could be harmless. But if it’s carrying germs and bites you, it might cause a nasty infection, he says. A quick scratch, prompted by an itch, might prevent that.

Genes: How few needed for life?

What are the fewest genes needed to sustain life? To test that, scientists started with a microbe having one of the smallest known genomes — or entire sets of genetic instructions. Then scientists figured out the magic minimum for this microbe, which was 473 genes. By whittling down the genes to this number, scientists learned a lot about biology. But there is still much to discover. Researchers still aren’t sure exactly what almost a third of its genes do.

The new microbe is a stripped-down version of Mycoplasma mycoides (MY-ko-PLAZ-ma My-KOY-dees). This bacterium normally has 901 genes. That’s really not very many. Quite a few other bacteria, such as E. coli, may have 4,000 to 5,000 genes. People have more than 22,000 genes, although we don’t need all of them to live and be healthy.

In 2010, researchers at the J. Craig Venter Institute in La Jolla, Calif., copied the entire genome of M. mycoides. Then they popped it into a cell of a different species, Mycoplasma capricolum. Some people called this the first synthetic, or artificial, organism.

More recently, the researchers started by stripping the M. mycoides genome down to its essentials. Then they transplanted that stripped-down genome into the M. capricolum shell. The result was a minimal bacterium that they now call syn3.0.

The researchers described their results March 25 in Science.

Learning from minimal life

Researchers hope syn3.0’s simple genome will teach them more about the basics of biology. Minimal life-forms such as this one could also be a starting point for building custom microbes in the future. These microbes might make certain drugs or chemicals that people need or prize.

J. Craig Venter is the geneticist who founded the nonprofit institute bearing his name. He worked with a team of researchers there led by Clyde Hutchison III and Daniel Gibson. At first, they wanted to design an organism with a core set of about 300 genes. The researchers thought these would be enough for a microbe to survive on its own. Computers predicted this would work. But when the researchers tried to bring their computer creations to life, “every one of our designs failed,” Venter said in a phone conference with reporters.

Why didn’t the microbes live? The researchers had left out some genes because they didn’t know those genes were important.

In fact, scientists don’t know what many genes do. Here, the scientists thought they already knew the essential ones for survival. They were wrong. Almost one-third of the genes in the minimal genome are secret ingredients that do something important, even though scientists don’t know what that something is. Without these genes, the bacteria died. When the researchers mixed those genes back into their recipe, the bacteria sprang to life.

Most of those 473 genes in the final recipe do one of four essential jobs. Reading DNA instructions and turning them into RNA and proteins is the job of 195 genes. Another 34 genes copy and repair DNA. There are 84 genes involved in building the cell membrane — the skin around the bacterium — and keeping it working. And 81 genes carry out metabolism, helping the organism use its fuel for growth and reproduction.

That leaves 149 genes in syn3.0 that do jobs researchers don’t completely understand. Scientists can predict what 70 of these genes likely do. But what the remaining 79 mystery ingredients do is entirely unknown.

“I think we’re showing how complex life is in even the simplest of organisms,” Venter says. The results show that researchers still don’t fully understand the basics of life. “These findings are very humbling,” Venter concludes.

Pamela Silver works at Harvard Medical School in Boston, Mass. As a synthetic biologist, she designs and creates new types of organisms from genes. Silver says that lack of knowledge is “frustrating after so many years of molecular biology.” But, she adds, the new stripped-down microbe may help science learn what those mystery genes do.

Other researchers have tried to make minimal genomes by stripping away one gene at a time. Not Venter’s group. His team built its new microbe from the ground up. They made pieces of DNA from scratch, then combined them into a new genome.

Drew Endy is a synthetic biologist at Stanford University. He is one of several scientists who like the made-from-scratch approach. “Only when you try to build something do you find out what’s truly required. Too often in biology we end up with only data, a computer model, or a just-so story. When you actually try to build something, you can’t hide from your ignorance,” Endy said in an e-mail. “What you build either works or it doesn’t.”

And, at first, this bare-bones genome didn’t work. The problem was that the researchers had compiled a list of genes that they thought they could just leave out. Such genes are known as nonessential. But not all of those genes proved nonessential after all. Some genes did the same job as another gene. Researchers could remove one such gene, but never both at the same time. It’s like a twin-engine jet, Gibson says. Knocking out one engine will keep the plane airborne, but disabling both engines will lead to a crash.
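The twin-engine analogy can be captured in a few lines of code. This is a toy model with made-up gene names, not the Venter team’s software: a genome stays “viable” as long as every redundant pair keeps at least one member.

```python
# Toy model of gene redundancy. Gene names are invented for illustration.
REDUNDANT_PAIRS = [{"geneA1", "geneA2"}]  # either member alone suffices
SOLO_ESSENTIALS = {"geneB"}               # must always be present

def is_viable(genome):
    if not SOLO_ESSENTIALS <= genome:
        return False
    # every redundant pair must keep at least one member
    return all(genome & pair for pair in REDUNDANT_PAIRS)

full = {"geneA1", "geneA2", "geneB"}
print(is_viable(full - {"geneA1"}))            # True: one "engine" still runs
print(is_viable(full - {"geneA1", "geneA2"}))  # False: both engines knocked out
```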

How low can you go?

The genome of syn3.0 is far smaller than that of any natural free-living bacterium. But it’s possible that there are life-forms with even smaller starting genomes. For example, other researchers think it’s possible that there could be some cell hosting just a single gene inside a membrane. Its sole job would likely be to copy RNA, says George Church. He’s a geneticist at Harvard University.

Researchers might also start with another organism instead of M. mycoides. Or they might grow bacteria under different conditions. This would probably lead to a microbe needing a different minimal set of genes, says Jay Keasling. He’s a synthetic biologist at the University of California, Berkeley. “The minimal genome is in the eye of the beholder,” he says. It also may depend on the environment, stress or other conditions in which a life-form will need to survive.

Gibson and Venter agree that they have created a minimal genome, but not necessarily the most minimal genome. Syn3.0 is streamlined, but still has a few frills. The team kept several “quasi-essential” genes that may not be strictly necessary. Still, these genes let the bacteria grow fast enough to be useful in the lab. It is possible that future minimal life-forms might host even tinier genomes.

Mice with a mutation linked to autism affect their littermates’ behavior

The company mice keep can change their behavior. In some ways, genetically normal littermates behave like mice that carry an autism-related mutation, despite not having the mutation themselves, scientists report.

The results, published July 31 in eNeuro, suggest that the social environment influences behavior in complex and important ways, says neuroscientist Alice Luo Clayton of the Simons Foundation Autism Research Initiative in New York City. The finding comes from looking past the mutated mice to their nonmutated littermates, which are usually not a subject of scrutiny. “People almost never look at it from that direction,” says Clayton, who wasn’t involved in the study.

Researchers initially planned to investigate the social behavior of mice that carried a mutation found in some people with autism. Studying nonmutated mice wasn’t part of the plan. “We stumbled into this,” says study coauthor Stéphane Baudouin, a neurobiologist at Cardiff University in Wales.

Baudouin and colleagues studied groups of mice that had been genetically modified to lack neuroligin-3, a gene that is mutated in some people with autism. Without the gene, the mice didn’t make Neuroligin-3, a protein that helps nerve cells communicate, in their brains. As expected, these mice showed no interest in sniffing other mice, along with other behavioral quirks. But Baudouin noticed that the behavior of the nonmutated control mice that lived with the neuroligin-3 mutants also seemed off. He suspected that the behavior of the mutated mice might be to blame.

Experiments confirmed this hunch. Usually, mice form strong social hierarchies, with the most aggressive and vocal males at the top. But in mixed groups of mutated and genetically normal male mice, there was no social hierarchy. “It’s flat,” Baudouin says.

Raised and housed together, the mutated and nonmutated mice all had less testosterone than nonmutated mice raised in genetically similar groups. The testosterone levels in both types of mice were comparable to those found in females — “one of the strongest and most surprising results,” Baudouin says.

The mice’s social curiosity was lacking, too. Usually, mice are interested in the smells of others, and will spend lots of time sniffing a cotton swab that has been swiped across the bedding of unfamiliar mice. But when given a choice of strange mouse scent or banana scent, the nonmutated littermates spent just as much time sniffing banana as did the mutant mice.

When Baudouin and colleagues added back the missing Neuroligin-3 protein to parts of the mutant mice’s brains, aspects of their behavior normalized. The mice became interested in the odor from another mouse’s bedding, for instance. These behaviors also shifted in the mice’s nonmutated littermates. That experiment suggests that the missing protein — and the resulting abnormal behavior of the mutants — was to blame for their littermates’ abnormal actions.

Still, it’s hard to tease apart the mice’s roles, says behavioral neuroscientist Mu Yang of Columbia University. “It is a shared environment, and there is no sure way to tell who is influencing whom, or whether both parties are being impacted.”

Female mice that completely lacked the neuroligin-3 gene also influenced the behaviors of littermates that carried one mutated version of the gene, other behavior tests revealed. More experiments are needed to determine whether the social environment affects male and female mice differently, and if so, whether those differences relate to autism, says Luo Clayton.

The turning of wolves into dogs may have occurred twice

Dogs were such great friends that humans appear to have domesticated them at least twice, a new study suggests.

Domestication (Doh-MES-ti-KAY-shun) is the gradual process by which humans can produce a tame and useful animal from a wild one. This happens over countless generations. It may take thousands of years, but eventually the tamed animals can become so different from their wild ancestors that they turn into a new species. In the case of wolves, their domestication produced dogs.

Earlier studies had indicated that the wolf-to-dog transformation happened just once. But scientists disagree about where it occurred. Some say dogs became human’s best friend in East Asia. Then, last year, a study of village dogs suggested it had happened in Central Asia. Some studies had even hinted that Europeans were the first to turn wolves into dogs.

In the new study, scientists analyzed the genes of bones from a 4,800-year-old Irish dog and 59 other ancient dogs. These tests suggest that canines and humans became pals in both Europe and East Asia. And it may have been as long as 14,000 years ago. Later, dogs from East Asia accompanied their human companions to Europe. The Asian dogs bred with and replaced the European dogs, the team concluded June 3 in Science.

Understanding the process to dog-dom may help people learn more about humans’ distant past. Dogs were probably the first domesticated animal. They may have paved the way for taming other animals and plants.

In the new study, researchers put together the complete set of genes, or genome, of an ancient dog. Genes are made of DNA and carry instructions for building a body and all the bits and pieces inside. So a genome is like an instruction book.

The ancient dog had been found in a tomb near Newgrange, Ireland. To get at the DNA carrying the dog’s genetic instruction book, researchers drilled into a bone from the dog’s inner ear. The bone, called the petrous, is part of the skull that forms the knob behind your ear.

That petrous is hard as a rock, says Laurent Frantz. He is an evolutionary geneticist at the University of Oxford in England and one of the scientists who took part in the new study. The hard petrous bone protects the DNA inside. So when scientists examined it after thousands of years, it still was fairly easy to read.

But that DNA didn’t tell the scientists much about what the midsize Irish dog it came from would have looked like. From its DNA, the scientists can tell that it probably did not resemble modern dog breeds, Frantz says. “He wasn’t black. He wasn’t spotted. He wasn’t white.” Instead, the Newgrange dog was probably a mongrel with fur similar to a wolf’s.

But the ancient mutt had something special in its genes: a stretch of mysterious DNA, points out Mietje Germonpré. She is a paleontologist at the Royal Belgian Institute of Natural Sciences in Brussels and was not part of the study. “This Irish dog has a component that can’t be found in recent dogs or recent wolves.” That mystery DNA, she says, could be left over from prehistoric dogs that lived in Europe. And that just might help researchers learn more about what the first dogs were like. Or it could be a trace of an extinct ancient wolf that may have given rise to dogs.

Digging deep into doggy DNA

The idea that dogs came from East Asia or Central Asia is mostly based on the DNA of modern dogs. Claims that dogs have European origins had been staked on the DNA of prehistoric pups. “This paper combines both types of data” to give a more complete picture of dog domestication, says Germonpré.

Frantz’s team gathered DNA data from the Newgrange dog and other ancient dogs. The scientists compared these to data from studies of modern dogs. These included the whole genomes of 80 separate dogs. The researchers also used a less-complete sampling of DNA from 605 additional dogs. They included a collection of 48 breeds and of village dogs of no particular breed.

Eastern and Western dogs are genetically different, the researchers learned. That might indicate that two separate branches of the canine family tree once existed, like distant cousins.

The Newgrange dog’s DNA is more like that of the Western dogs. Since the Irish dog is 4,800 years old, the Eastern and Western dogs must have gone out on separate limbs of the family tree before then. That probably happened between about 6,400 and 14,000 years ago. The new finding suggests that dogs may have been domesticated from local wolves in two separate locations during the Stone Age.

The ancient dog’s DNA also may help pinpoint when that domestication took place. Frantz and his colleagues used the Newgrange dog as a known point in time. Then they counted up the genetic changes that have happened to dogs since then. From there they could calculate how quickly dogs’ DNA changes, or mutates. This “mutation rate” is important for figuring out how long ago animals morphed into a new species. It also can tell researchers how fast animals can adapt to new situations.

Dogs’ genes mutate at a slower rate than researchers had calculated before, the study found.
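The calibration logic works roughly as sketched below. The substitution counts here are placeholders invented for illustration; only the 4,800-year anchor comes from the article.

```python
# Sketch of molecular-clock calibration using a dated genome as an anchor.
# Substitution figures are hypothetical; only the anchor age is from the article.
anchor_age_years = 4_800             # the Newgrange dog
subs_per_site_since_anchor = 4.8e-6  # hypothetical

rate = subs_per_site_since_anchor / anchor_age_years  # per site per year

dog_wolf_divergence = 4.0e-5   # hypothetical differences per site
split_age = dog_wolf_divergence / (2 * rate)  # /2: changes accrue on both lineages
print(f"estimated dog-wolf split: {split_age:,.0f} years ago")  # 20,000 here
```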

Determining when dogs first emerged

Frantz’s team used this slower mutation rate to calculate when dogs likely became different from wolves.

That split likely occurred between 20,000 and 60,000 years ago. So that could be the time period when humans began domesticating wolves. But Frantz and colleagues say that their estimate doesn’t truly nail down when domestication happened. Different types of wolves could have been hanging around for a long time. Some became grey wolves like the ones living today. Others went extinct. And still others evolved into dogs. The researchers need more data to tell exactly when all of those things occurred.

Although dogs may have started in two different places, our best furry friends have since mixed and mingled. The researchers came to that conclusion from looking at bones and at DNA from dogs’ mitochondria (My-toh-KON-dree-uh). Mitochondria are like power plants inside cells, and have their own DNA. Most of a cell’s DNA is stored in a compartment called a nucleus. Both a mom and dad pass that type of DNA on to their kids. But only moms also pass on their mitochondrial DNA.

Mitochondrial DNA comes in different “flavors” called haplogroups. Researchers can use those different types to figure out where a dog’s mother, grandmother, great-grandmother and so on came from. The researchers compared mitochondrial DNA from 59 ancient European dogs and 167 modern European dogs. Haplogroups in the ancient European dogs were different from those in the modern dogs, the researchers found.

Still, the authors of the latest study admit they can’t yet rule out that dogs were domesticated only once. Dogs could have then moved to different places early on. There, isolation, random chance and other factors might have caused them to drift apart genetically so that now their DNA looks like they started as different groups.

Ancient enzymes adapted to a cooler Earth to keep life’s chemical reactions going

Like lifelong Floridians dropped into a Wisconsin winter, enzymes accustomed to warmth don’t always fare well in colder climes. But ancient heat-loving enzymes forced to adapt to a cooling Earth managed to swap out parts to keep chemical reactions going, scientists report online December 22 in Science.

By reconstructing enzymes as they might have looked billions of years ago, the research “helps to explain the natural evolutionary history of life on this planet,” says Yousif Shamoo, a biochemist at Rice University in Houston who wasn’t part of the study. And the findings question the idea that enzymes must sacrifice their stability to become more active.

Enzymes are natural catalysts that jump-start essential chemical reactions inside living things. Most work only within a specific temperature range. Too cold, and they can’t get going. Too hot, and they lose their shape — and by extension, their function.

Life on Earth is believed to have started out in warm environments like hot springs or hydrothermal vents, so the first enzymes probably worked best in those toasty temperatures, says study coauthor Dorothee Kern, a biochemist at Brandeis University in Waltham, Mass. But gradually, Earth cooled. For life to continue, early enzymes had to shift their optimal temperature range.

Kern and her colleagues looked at the evolutionary history of an enzyme called adenylate kinase. Some version of this protein is found in every cell, and it’s essential for life to survive.

The researchers used a technique called ancestral sequence reconstruction to figure out what the enzyme’s genes might have looked like at different points in the last 3 billion years. The scientists edited E. coli’s genes to make the bacteria produce those probable ancient enzymes, and then looked at how the reincarnated molecules held up under different temperatures.

“These very old enzymes were way more lousy at low temperatures than anyone expected,” says Kern. But over time, natural selection gradually pushed the enzymes to work better at cooler temperatures, she found. The enzymes accumulated mutations that swapped some of their amino acid building blocks, ultimately lowering the enzymes’ energy demands. That let the enzymes keep moving essential reactions along at a fast-enough pace for life to survive.
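One way to see why lower “energy demands” matter most in the cold is the Arrhenius equation, which links reaction rate to activation energy and temperature. The activation energies below are invented for illustration, not values from the study.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_factor(activation_j_per_mol, temp_kelvin):
    """Relative rate from the Arrhenius equation (pre-factor omitted)."""
    return math.exp(-activation_j_per_mol / (R * temp_kelvin))

# Compare a 75 kJ/mol enzyme with a mutated 60 kJ/mol version (made-up numbers):
for temp_c in (10, 70):
    t = temp_c + 273.15
    speedup = arrhenius_factor(60_000, t) / arrhenius_factor(75_000, t)
    print(f"at {temp_c} C the lower barrier speeds the reaction ~{speedup:.0f}-fold")
```

The same cut in the energy barrier buys a far bigger speedup at 10°C than at 70°C, which is why a cooling Earth put such pressure on the ancestral enzymes.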

There wasn’t a corresponding disadvantage to also working well in heat, so the enzymes didn’t immediately lose their heat tolerance. Some of them became what Kern calls “superenzymes” — they worked impressively fast and could catalyze reactions at low temperatures, but they remained stable at high temperatures.

That finding goes against a widely held assumption that an increase in an enzyme’s activity — which would allow it to keep trucking at the same speed at lower temperatures — typically comes with a corresponding decrease in stability.

That assumption was a logical one: Like chilly fingers struggling to tie shoelaces, enzymes get stiffer and don’t work as well when the temperature drops. To up their activity, they’d need to increase their flexibility. That could make them less stable at higher temperatures — more likely to lose their shape and stop working. But now, it seems that some enzymes can have the best of both worlds.

The idea of a generalist enzyme that works well across a wide temperature range isn’t new — scientists have engineered such proteins in the lab, Shamoo says. But this work shows it might have happened in a real-world setting. “Just because I can do something in the laboratory, that I can build an enzyme that’s a true generalist, doesn’t mean that’s how it happened on this planet,” he says.

Cleaner water helps male fish again look and act like guys

Some types of water pollution can make male fish look and act like females. But a new study shows that better water treatment can prevent that. And that could allow fish populations to thrive.

Water treatment plants are supposed to clean the water from our toilets, showers and sinks before releasing it into rivers, lakes and oceans. They are also supposed to treat water from manufacturing plants. But these water-cleanup plants were never designed to remove all pollutants. Most were built before anyone realized that hormones and hormone-like chemicals could show up in the water. And such chemicals can prove a big problem for fish.

How? They can fake out the cells of a male’s body by sending signals telling those cells that this “he” fish is actually a “she.” These feminized guy fish may then have little interest in fertilizing a female’s eggs — or may do so poorly. The result: fewer baby fish. Or at least that’s the risk.

Male feminization of fish has been showing up in rivers throughout North America.

Mark Servos and his team have been monitoring it in Canada’s Grand River. Servos is an aquatic toxicologist at the University of Waterloo in Ontario. There, he studies the effects of water pollution on rainbow darter fish.

At least once a year from 2007 to 2012, his team caught and examined rainbow darters at a site downstream of a water treatment plant. And depending on the year, between 80 and 100 percent of the males had eggs in them. Egg-making is something that only female fish should do. Moreover, those eggs were in the males’ testes — reproductive organs that normally make sperm (cells used to fertilize a female’s eggs).

Many affected males didn’t even look right on the outside. “Male rainbow darter fish are really colorful,” Servos says. Or at least they should be. “This color is important for attracting mates.” Yet some local males were becoming drab. That could make it hard for them to find a mate.

Clearly, something was very wrong in these male fish. Until 2013, that is. Suddenly, the number of feminized male fish started to fall. This happened at the same time that the local treatment plant changed how it cleaned the water.

The team’s data now link these two observations in a paper published early online in Environmental Science and Technology.

The problem with feminized males

The bodies of animals — including humans — use hormones to tell their cells when to switch various activities on or off. Those hormones fit like keys into “locks” on the outside of a cell. Scientists call these locks receptors. When hormones connect with their locks, they affect how an animal will develop and act. But certain pollutants act like fake keys. These hormone mimics are known as endocrine (EN-doe-krin) disruptors. They can turn on or off some normal function of an animal’s cells — but at the wrong time. That can make an animal develop or act in a way that isn’t natural.

From 2007 to 2012, Servos’ team found high levels of endocrine disruptors in the water downstream of the water treatment plant. Many of these chemicals mimicked the action of estrogen, a female sex hormone. Some endocrine disruptors came from birth control pills. Their synthetic hormones left a woman’s body in urine. Flushed down the toilet, they ended up at a water-treatment plant. Another common endocrine disruptor is nonylphenol (NON-ul-FEE-nul). It’s a breakdown product of certain surfactants. (Surfactants are chemicals that let liquids mix that would not ordinarily do so.) The problem: Nonylphenol, too, can mimic estrogen.

Some male rainbow darters exposed to these pollutants produced eggs in their testes. This took a lot of energy. That reduced the energy available for them to make sperm. Affected fish may have made damaged sperm — or no sperm at all. Eggs laid in the water by females won’t mature and hatch unless males release sperm to fertilize those eggs. So egg-making by males could cause fish populations to shrink. Oh, and the eggs made by those males: They’re worthless. The bodies of males lack the tubes needed to release those eggs. So the eggs just collect and take up space in the males’ bodies.

Bacteria, bubbles and healthier fish

The good news: Changes to the wastewater-treatment plant in 2013 led to changes in those males.

The plant had always used bacteria to break down harmful chemicals in the water. But workers upgraded the system to give the bacteria more time to break down chemicals. The plant now also bubbled oxygen into the wastewater. This extra oxygen helped the hard-working bacteria grow faster.
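A hedged way to picture why those two upgrades help: if the bacteria degrade a contaminant at some first-order rate, the fraction surviving treatment falls exponentially with both the rate and the residence time. The rate constants and times below are invented, not measurements from the Grand River plant.

```python
import math

def fraction_remaining(rate_per_hour, residence_hours):
    """First-order decay: fraction of a contaminant surviving treatment."""
    return math.exp(-rate_per_hour * residence_hours)

# Hypothetical before/after: faster-growing bacteria (higher rate) plus a
# longer residence time leave far less of the estrogen-mimicking chemicals.
print(f"before upgrade: {fraction_remaining(0.10, 8):.0%} remains")
print(f"after upgrade:  {fraction_remaining(0.25, 16):.0%} remains")
```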

Servos and his team tested the water and fish for three years after these water-cleaning changes. As expected, levels of various pollutants dropped. These included the estrogen-mimicking chemicals.

“We don’t know exactly how estrogens are reduced,” Servos says of the water treatment plant. “The key seems to be giving bacteria more time to break down harmful compounds and to feed them oxygen to speed the process.”

And as levels of these chemicals in the water fell, so did the number of feminized males. Within three years, nearly all male fish appeared normal again, inside and out. The researchers suspect the males’ bodies had re-absorbed the useless eggs. The male darters also regained their rainbow colors.

The study shows that although fish may be exposed to endocrine disruptors early in life, those changes may not harm them forever. “That was part of the surprise — [that] adult fish could recover,” says Servos. He doesn’t know whether other species of fish would respond the same way. He suspects many would.

Chris Metcalfe is an environmental toxicologist at Canada’s Trent University in Peterborough, Ontario. He studies materials that can act as poisons in the environment. Metcalfe cautions that not all endocrine disruptors behave the same way. Just because one type can be removed from wastewater doesn’t mean all others will, too.

He also points out that not all urban areas have good water treatment. Some have none; they just spew polluted wastes directly into rivers. And outside of cities, many people rely on underground septic tanks to store water from toilets and showers. Septic tanks filter and capture many pollutants. If people don’t take good care of these systems, wastes can leak from these tanks into groundwater. From there, pollutants can enter downstream rivers, lakes or the ocean.

What the new work shows is that hormone mimics can hurt fish populations, but that good water-cleansing techniques can limit the risk that this happens.

Bacteria genes offer new strategy for sterilizing mosquitoes

A pair of bacterial genes may enable genetic engineering strategies for curbing populations of virus-transmitting mosquitoes.

Bacteria that make the insects effectively sterile have been used to reduce mosquito populations. Now, two research teams have identified genes in those bacteria that may be responsible for the sterility, the groups report online February 27 in Nature and Nature Microbiology.

“I think it’s a great advance,” says Scott O’Neill, a biologist with the Institute of Vector-Borne Disease at Monash University in Melbourne, Australia. People have been trying for years to understand how the bacteria manipulate insects, he says.

Wolbachia bacteria “sterilize” male mosquitoes through a mechanism called cytoplasmic incompatibility, which affects sperm and eggs. When an infected male breeds with an uninfected female, his modified sperm kill the eggs after fertilization. When he mates with a likewise infected female, however, her eggs remove the sperm modification and develop normally.
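Those crossing rules amount to a small truth table, sketched here in code for clarity. This is a summary of the mechanism as described above, not the researchers’ model.

```python
# Cytoplasmic incompatibility, reduced to its outcomes: eggs fail only when
# an infected male mates with an uninfected female.
def offspring_viable(male_infected, female_infected):
    if male_infected and not female_infected:
        return False  # modified sperm kill the fertilized eggs; no rescue
    return True       # infected females strip the sperm modification

for male in (True, False):
    for female in (True, False):
        print(f"infected male={male}, infected female={female} -> "
              f"viable={offspring_viable(male, female)}")
```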

Researchers from Vanderbilt University in Nashville pinpointed a pair of genes, called cifA and cifB, connected to the sterility mechanism of Wolbachia. The genes are located not in the DNA of the bacterium itself, but in a virus embedded in its chromosome.

When the researchers took two genes from the Wolbachia strain found in fruit flies and inserted the pair into uninfected male Drosophila melanogaster, the flies could no longer reproduce with healthy females, says Seth Bordenstein, a coauthor of the study published in Nature. But modified uninfected male flies could successfully reproduce with Wolbachia-infected females, perfectly mimicking how the sterility mechanism functions naturally.

The ability of infected females to “rescue” the modified sperm reminded researchers at the Yale School of Medicine of an antidote’s reaction to a toxin.

They theorized that the gene pair consisted of a toxin gene, cidB, and an antidote gene, cidA. The researchers inserted the toxin gene into yeast, activated it, and saw that the yeast was killed. But when both genes were present and active, the yeast survived, says Mark Hochstrasser, a coauthor of the study in Nature Microbiology.

Disease control

Scientists could insert the bacteria genes either into mosquitoes not infected with Wolbachia or into the bacteria inside infected insects to help control the spread of Zika and dengue.

Hochstrasser’s team also created transgenic flies, but used the strain of Wolbachia that infects common Culex pipiens mosquitoes.

Inserting the two genes into males could be used to control populations of Aedes aegypti mosquitoes, which can carry diseases such as Zika and dengue.

The sterility effect from Wolbachia doesn’t always kill 100 percent of the eggs, says Bordenstein. Adding additional pairs of the genes to the bacteria could make the sterilization more potent, creating a “super Wolbachia.”

You could also avoid infecting the mosquitoes altogether, says Bordenstein. By inserting the two genes into uninfected males and releasing them into populations of wild mosquitoes, you could “essentially crash the population,” he says.

Hochstrasser notes that the second method is safer in case Wolbachia have any long-term negative effects.

O’Neill, who directs a research program called Eliminate Dengue that releases Wolbachia-infected mosquitoes, cautions against mosquito population control through genetic engineering because of public concerns about the technology. “We think it’s better that we focus on a natural alternative,” he says.

World’s tallest corn towers nearly 14 meters

World’s tallest corn towers nearly 14 meters

Western New York is getting its own kind of rural skyscraper: giant corn stalks. A researcher there, in Allegany, now reports growing corn nearly 14 meters (45 feet) high. That makes the stalks about as tall as a four-story building. They appear to be the tallest corn plants ever recorded.

A corn stalk typically grows to about 2.5 meters (8 feet). One strain from Mexico is taller, sometimes 3.4 meters or more. But when the nights are short and the days are long, corn has more time to tap growth-fostering sunlight. Then it can grow even more, sometimes taller than 6 meters (20 feet). Raising it in a greenhouse can add another 3 meters. And tweaking a gene called Leafy1 can up its height yet another 3 meters. Put them together and such factors can cause this strain to ascend nearly 14 meters, notes Jason Karl. He is an agricultural scientist who helped turn some corn plants into such giants.
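
Those stacked factors invite a quick back-of-envelope check. The sketch below just adds up the article’s numbers (illustrative only; it ignores any synergy between the factors):

```python
# Rough bookkeeping of the stacked height factors quoted above.
short_night_height = 6.0   # meters: tall strain grown with short nights
greenhouse_bonus = 3.0     # meters: added by greenhouse growing
leafy1_bonus = 3.0         # meters: added by the Leafy1 mutation

naive_total = short_night_height + greenhouse_bonus + leafy1_bonus
print(f"naive estimate: about {naive_total:.0f} meters")  # ~12 m
```

The naive sum lands near 12 meters; interactions among the factors are what push the real plants toward 14.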

The Mexican name for corn is maize. That’s also the common term for this plant outside the United States. The unusually tall maize type is called Chiapas 234. Usually “people try to make maize shorter, not taller,” Karl notes. “So it is plainly funny even to consider adding Leafy1 to the tallest strain.”

Corn is the most widely grown food crop in the United States. Most scientists who study corn want to make it better for harvesting. So why would farmers prize shorter corn? Shorter stalks flower earlier in the season. That allows the ears of grain (containing the yummy kernels that we eat) to mature sooner.

But Karl isn’t interested in corn that blooms quickly or is easy to harvest (because climbing a 12- to 14-meter ladder to pick its ears of corn would hardly be easy). Instead, he wants to know which genes and other factors, such as light, affect the stalk’s growth.

The Chiapas 234 strain was discovered in the 1940s in Mexico. Researchers stored seed from it in a freezer for nearly 30 years. Then, in a 1970 experiment, they grew some of that seed in a greenhouse. To simulate summer nights, they gave the plants only short periods of darkness. The corn responded by growing more leafy segments, called internodes. Each internode is typically about 20 centimeters (8 inches) long. The corn that you might see on an American farm today has 15 to 20 internodes. The Chiapas 234 strain had 24. When grown with short nights, its stalks developed twice as many.

Karl read about the 1970s night-length study with Chiapas 234. He also knew about a mutation in the Leafy1 gene that could make maize taller. He decided to put them together. “The mutation makes common U.S. maize a good third taller. And I had seen synergy between mutations and the night-length reaction,” he says. And that, he recalls, was a “good omen for discovering new things via preposterously lofty maize.”

What the researchers did

For his experiment, Karl grew the Chiapas 234 in a greenhouse with artificially shortened nights. Materials in the greenhouse walls filtered out some types of light. This allowed more reddish — or longer wavelength — light to reach the plants. That red light increased the length of the internodes. This made the plant grow to nearly 11 meters (35 feet). Then, Karl bred the Leafy1 mutation into the stalks by controlling the pollen that landed on each plant. The result was a nearly 14-meter stalk with a whopping 90 internodes! That’s about five times as many as regular corn produces.

“The science done here makes lots of sense,” says Edward Buckler. He is a geneticist with the U.S. Department of Agriculture (USDA). He has a lab at Cornell University in Ithaca, N.Y. Buckler was not part of the new study but says Karl’s way of growing tall corn should make it grow nearly forever. “I have just never seen anyone try this in such a tall greenhouse,” he says.

Paul Scott also was not involved in the study. This USDA scientist studies the genetics of corn at Iowa State University in Ames. “Plant height is important because it is related to yield,” he says. “Bigger plants tend to produce more grain, but if they get too tall they tend to fall over.” He says the new work helps scientists better understand which genes and other factors affect corn growth.

The new giant corn stalks have trouble surpassing 12 meters (40 feet). That’s a result of the genetic mutation inserted into the corn, Karl says. He is now trying to tweak the corn’s genetics by inserting other mutations to see whether they correct the problem. If they do, Karl suspects he might be able to get even loftier corn.

Corn is incredibly diverse, Buckler notes. There are thousands of strains grown all over the world. This work can help scientists understand why plants may grow differently depending on their location (which would affect day length and light levels).

Ancient DNA shakes up the elephant family tree

Ancient DNA shakes up the elephant family tree

Fossil DNA may be rewriting the history of elephant evolution.

The first genetic analysis of DNA from fossils of straight-tusked elephants reveals that the extinct animals most closely resembled modern African forest elephants. This suggests that straight-tusked elephants were part of the African, not Asian, elephant lineage, scientists report online June 6 in eLife.

Straight-tusked elephants roamed Europe and Asia until about 30,000 years ago. Much like modern Asian elephants, they sported high foreheads and double-domed skulls. These features convinced scientists for decades that straight-tusked and Asian elephants were sister species, says Adrian Lister, a paleobiologist at the Natural History Museum in London who was not involved in the study.

For the new study, researchers extracted and decoded DNA from the bones of four straight-tusked elephants found in Germany. The fossils ranged from around 120,000 to 240,000 years old. The genetic material in most fossils more than 100,000 years old is too decayed to analyze. But the elephant fossils were unearthed in a lake basin and a quarry, where the bones would have been quickly covered with sediment that preserved them, says study author Michael Hofreiter of the University of Potsdam in Germany.

Hofreiter’s team compared the ancient animals’ DNA with the genomes of the three living elephant species — Asian, African savanna and African forest — and found that straight-tusked genetics were most similar to African forest elephants.

When the researchers told elephant experts what they’d found, “Everybody was like, ‘This can’t possibly be true!’” says study coauthor Beth Shapiro of the University of California, Santa Cruz. “Then it gradually became, ‘Oh yeah, I see.… The way we’ve been thinking about this is wrong.’”

If straight-tusked elephants were closely related to African forest elephants, then the African lineage wasn’t confined to Africa — where all elephant species originated — as paleontologists previously thought. It also raises questions about why straight-tusked elephants bore so little resemblance to today’s African elephants, which have low foreheads and single-domed skulls.

[Figure: Family tree. The revised tree shows straight-tusked elephants sharing a more recent common ancestor with African forest elephants than with Asian elephants.]

Accounting for this new finding may not be as simple as moving one branch on the elephant family tree, Lister says. It’s possible that straight-tusked elephants really were a sister species of Asian elephants, but picked up their genetic similarities to African forest elephants through interbreeding before the straight-tusked species left Africa.

It’s also possible that a common ancestor of Asian, African and straight-tusked elephants had particular genetic traits that were, for some reason, only retained by African and straight-tusked elephants, he says.

Lister and colleagues are now reexamining data on straight-tusked skeletons to reconcile the species’ skeletal features with the new information on their DNA. “I will feel most comfortable if we can understand these genetic relationships in terms of the [physical] differences between all these species,” he says. “Then we’ll have a complete story.”

Space-Time is Constantly Moving, Physicists Say

Space-Time is Constantly Moving, Physicists Say

The study, published in the journal Physical Review D, suggests that if we zoomed in, way in, on the Universe, we would realize it’s made up of constantly fluctuating space and time.

“Space-time is not as static as it appears, it’s constantly moving,” said lead author Qingdi Wang, a Ph.D. student at the University of British Columbia.

“This is a new idea in a field where there hasn’t been a lot of new ideas that try to address this issue,” said Bill Unruh, a physics and astronomy professor at the University of British Columbia.

In 1998, astronomers found that the Universe is expanding at an ever-increasing rate, implying that space is not empty and is instead filled with dark energy that pushes matter away.

The most natural candidate for dark energy is vacuum energy.

When physicists apply the theory of quantum mechanics to vacuum energy, it predicts that there would be an incredibly large density of vacuum energy, far more than the total energy of all the particles in the Universe.

If this is true, Einstein’s theory of general relativity suggests that the energy would have a strong gravitational effect and most physicists think this would cause the Universe to explode.

Fortunately, this doesn’t happen and the Universe expands very slowly. But it is a problem that must be resolved for fundamental physics to progress.

Unlike other physicists who have tried to modify the theories of quantum mechanics or general relativity to resolve the issue, Wang and co-authors suggest a different approach.

They take the large density of vacuum energy predicted by quantum mechanics seriously and find that there is important information about vacuum energy that was missing in previous calculations.

Their calculations provide a completely different physical picture of the Universe.

In this new picture, the space we live in is fluctuating wildly.

At each point, it oscillates between expansion and contraction.

As it swings back and forth, the two almost cancel each other but a very small net effect drives the Universe to expand slowly at an accelerating rate.

But if space and time are fluctuating, why can’t we feel it?

“This happens at very tiny scales, billions and billions of times smaller even than an electron,” Wang said.

“It’s similar to the waves we see on the ocean. They are not affected by the intense dance of the individual atoms that make up the water on which those waves ride,” Prof. Unruh said.

Scientists expect to calculate amount of fuel inside Earth by 2025

Scientists expect to calculate amount of fuel inside Earth by 2025

Earth requires fuel to drive plate tectonics, volcanoes and its magnetic field. Like a hybrid car, Earth taps two sources of energy to run its engine: primordial energy from assembling the planet and nuclear energy from the heat produced during natural radioactive decay. Scientists have developed numerous models to predict how much fuel remains inside Earth to drive its engines — and estimates vary widely — but the true amount remains unknown. In a new paper, a team of geologists and neutrino physicists boldly claims it will be able to determine by 2025 how much nuclear fuel and radioactive power remain in the Earth’s tank. The study, authored by scientists from the University of Maryland, Charles University in Prague and the Chinese Academy of Geological Sciences, was published on September 9, 2016, in the journal Nature Scientific Reports.

“I am one of those scientists who has created a compositional model of the Earth and predicted the amount of fuel inside Earth today,” said one of the study’s authors William McDonough, a professor of geology at the University of Maryland. “We’re in a field of guesses. At this point in my career, I don’t care if I’m right or wrong, I just want to know the answer.”

To calculate the amount of fuel inside Earth by 2025, the researchers will rely on detecting some of the tiniest subatomic particles known to science — geoneutrinos. These antineutrino particles are byproducts of nuclear reactions within stars (including our sun), supernovae, black holes and human-made nuclear reactors. They also result from radioactive decay processes deep within the Earth.

Detecting antineutrinos requires a huge detector the size of a small office building, housed about a mile underground to shield it from cosmic rays that could yield false positive results. Inside the detector, scientists spot an antineutrino when it crashes into a hydrogen atom. The collision produces two characteristic light flashes that unequivocally announce the event. The number of events scientists detect relates directly to the number of atoms of uranium and thorium inside the Earth. And the decay of these elements, along with potassium, fuels the vast majority of the heat in the Earth’s interior.

To date, detecting antineutrinos has been painfully slow, with scientists recording only about 16 events per year from the underground detectors KamLAND in Japan and Borexino in Italy. However, researchers predict that three new detectors expected to come online by 2022 (the SNO+ detector in Canada and the Jinping and JUNO detectors in China) will add 520 more events per year to the data stream.

“Once we collect three years of antineutrino data from all five detectors, we are confident that we will have developed an accurate fuel gauge for the Earth and be able to calculate the amount of remaining fuel inside Earth,” said McDonough.
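
McDonough’s three-year target is easy to sanity-check against the rates quoted above (a quick tally using only the article’s figures):

```python
# Quick tally of expected antineutrino detections, per the article's rates.
current_rate = 16   # events/year today (KamLAND + Borexino combined)
new_rate = 520      # extra events/year once SNO+, Jinping and JUNO run
years = 3           # the data-collection window McDonough describes

total = (current_rate + new_rate) * years
print(f"roughly {total} geoneutrino events over {years} years")  # ~1,600
```

Going from about 16 events a year to more than 500 is what turns a statistical trickle into a usable fuel gauge.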

The new Jinping detector, which will be buried under the slopes of the Himalayas, will be four times bigger than existing detectors. The underground JUNO detector near the coast of southern China will be 20 times bigger than existing detectors.

“Knowing exactly how much radioactive power there is in the Earth will tell us about Earth’s consumption rate in the past and its future fuel budget,” said McDonough. “By showing how fast the planet has cooled down since its birth, we can estimate how long this fuel will last.”

Electrostatic Lenses for Wigner Entangletronics

Electrostatic Lenses for Wigner Entangletronics

Electrostatic lenses are used for manipulating electron evolution and are therefore attractive for applications in novel quantum engineering disciplines, in particular in entangletronics (i.e., entangled electronics). A fundamental aspect of manipulating electron dynamics is the set of processes that maintain coherence; coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets. However, so-called scattering processes strive to counteract coherence and therefore have a strong impact on the entire process. A physically intuitive way of describing the coherence processes and the scattering-caused transitions to classical dynamics is entirely missing, impeding overall progress towards devising novel coherence-based nanodevices in the spirit of entangletronics.

To tackle this issue, Paul Ellinghaus, Josef Weinbub, Mihail Nedjalkov, and Siegfried Selberherr (TU Wien, Austria) have expressed the new quantifying theory of coherence (derived recently from the theory of entanglement) in the Wigner formalism and, in this setting, discuss a lens-splitting simulation conducted with the group’s simulator ViennaWD. The signed-particle model of Wigner evolution enables physically intuitive insights into the processes maintaining coherence. Both coherent processes and scattering-caused transitions to classical dynamics are unified by a scattering-aware particle model of the lens-controlled state evolution. In particular, the evolution of a minimum-uncertainty Wigner state controlled by an electrostatic lens is analyzed in Wigner-function terms. It is shown that cross-domain phase-space correlations maintain the coherence, while scattering impedes this exchange.

Overall, the work shows the importance of the Wigner signed-particle model in the novel field of entangletronics and paves the way for entirely new devices and structures in which coherence and entanglement serve as fundamental mechanisms of operation.

New small angle scattering methods boost molecular analysis

New small angle scattering methods boost molecular analysis

A dramatic leap forward in the ability of scientists to study the structural states of macromolecules such as proteins and nanoparticles in solution has been achieved by a pair of researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab). The researchers have developed a new set of metrics for analyzing data acquired via small angle scattering (SAS) experiments with X-rays (SAXS) or neutrons (SANS). Among other advantages, the new metrics can cut the time required to collect data by up to a factor of 20.

“SAS is the only technique that provides a complete snapshot of the thermodynamic state of macromolecules in a single image,” says Robert Rambo, a scientist with Berkeley Lab’s Physical Biosciences Division, who developed the new SAS metrics along with John Tainer of Berkeley Lab’s Life Sciences Division and the Scripps Research Institute.

“In the past, SAS analyses have focused on particles that were well-behaved in the sense that they assume discrete structural states,” Rambo says. “But in biology, many proteins and protein complexes are not well-behaved; they can be highly flexible, creating diffuse structural states. Our new set of metrics fully extends SAS to all particle types, well-behaved and not well-behaved.”

Rambo and Tainer describe their new SAS metrics in a paper titled “Accurate assessment of mass, models and resolution by small-angle scattering.” The paper has been published in the journal Nature.

Says co-author Tainer, “The SAS metrics reported in our Nature paper should have game-changing impacts on accurate high-throughput and objective analyses of the flexible molecular machines that control cell biology.”

In SAS imaging, beams of X-rays or neutrons sent through a sample produce tiny collisions between the X-rays or neutrons and nano- or subnano-sized particles within the sample. The way these collisions scatter is unique to each particle and can be measured to determine the particle’s shape and size. The analytic metrics developed by Rambo and Tainer are predicated on Rambo’s discovery of an SAS invariant, meaning a value that does not change no matter how or where the measurement is performed. This invariant has been dubbed the “volume-of-correlation,” and its value is derived from the scattered intensities of X-rays or neutrons that are specific to the structural states of particles, yet are independent of their concentrations and compositions.

“The volume-of-correlation can be used for following the shape changes of a protein or nanoparticle, or as a quality metric for seeing if the data collection was corrupted,” Rambo says. “This SAS invariant applies equally well to compact and flexible particles, and utilizes the entire dataset, which makes it more reliable than traditional SAS analytics, which utilize less than 10 percent of the data.”
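
For readers who want the mechanics: as we read the paper, the volume-of-correlation is the ratio of a profile’s zero-angle intensity, I(0), to its total scattered intensity, the integral of q·I(q). A minimal sketch on a synthetic Guinier-like curve (a toy profile, not real data):

```python
import numpy as np

# Toy SAXS profile: a Guinier-like curve for a particle with a
# 20-angstrom radius of gyration. Real data would come from a detector.
q = np.linspace(1e-3, 0.5, 2000)        # scattering vector, 1/angstrom
Rg, I0 = 20.0, 1.0
I = I0 * np.exp(-(q * Rg) ** 2 / 3.0)   # Guinier approximation

# Volume-of-correlation: Vc = I(0) / integral of q * I(q) dq
f = q * I
total_scattered = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))  # trapezoid rule
Vc = I0 / total_scattered
print(f"Vc = {Vc:.0f} square angstroms")
```

Because the integral runs over the whole curve, a corrupted stretch of data shifts Vc noticeably, which is what makes it usable as a quality metric.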

The volume-of-correlation was shown to also define a ratio that determines the molecular mass of a particle. Accurate determination of molecular mass has been a major difficulty in SAS analysis because previous methods required an accurate particle concentration, the assumption of a compact near-spherical shape, or measurements on an absolute scale.

“Such requirements hinder both accuracy and throughput of mass estimates by SAS,” Rambo says. “We’ve established a SAS-based statistic suitable for determining the molecular mass of proteins, nucleic acids or mixed complexes in solution without concentration or shape assumptions.”

The combination of the volume-of-correlation with other metrics developed by Rambo and Tainer can provide error-free recovery of SAS data with a signal-to-noise ratio below background levels. This holds profound implications for high-throughput SAS data collection strategies not only for current synchrotron-based X-ray sources, such as Berkeley Lab’s Advanced Light Source, but also for the next-generation light sources based on free-electron lasers that are now being designed.

“With our metrics, it should be possible to collect and analyze SAS data at the theoretical limit,” Rambo says. “This means we can reduce data collection times so that a 90-minute exposure time used by commercial instruments could be cut to nine minutes.”

Adds Tainer, “The discovery of the first x-ray scattering invariant coincided with the genesis of the Berkeley Lab some 75 years ago. This new discovery of the volume-of-correlation invariant unlocks doors for future analyses of flexible biological samples on the envisioned powerful next-generation light sources.”

How selenium compounds might become catalysts

How selenium compounds might become catalysts

Traditionally, metal complexes are used as activators and catalysts. They form full, i.e. covalent, bonds with the molecule whose reactions they are supposed to accelerate. However, the metals are often expensive or toxic.

Weaker bonds suffice

In recent years, it has become evident that a covalent bond is not absolutely necessary for activation or catalysis. Weaker bonds, such as hydrogen bonds, might be sufficient. Here, the bond forms between a positively polarised hydrogen atom and the negatively polarised centre of another molecule. In the same way as hydrogen, elements of group 17 in the periodic table, namely halogens such as chlorine, bromine and iodine, can form weak bonds — and thus serve as activators or catalysts.

Stefan Huber’s team transferred this principle to elements from group 16 of the periodic table, i.e. chalcogens. The researchers used compounds with a positively polarised selenium atom. It forms a weak bond to the substrate of the reaction, whose transformation was accelerated 20- to 30-fold as a result.

For comparison purposes, the chemists also tested compounds in which they’d replaced the selenium centre by another element. Molecules without selenium did not accelerate the reaction. “Consequently, the observed effect can be clearly attributed to selenium as active centre,” says Huber.

Better than sulphur

In earlier studies, only one comparable case of chalcogen catalysis had emerged; there, sulphur was used instead of selenium. “As selenium can be polarised more easily than sulphur, it has greater potential as a catalyst component in the long term,” explains Stefan Huber. “In combination with halogen bonds, chalcogen bonds have added two fascinating mechanisms to the chemists’ repertoire, for which there is no known equivalent in nature, for example in enzymes.”

In the next step, the team plans to demonstrate that selenium compounds can serve as true catalysts. At present, the researchers refer to them as activators, because relatively large amounts of the substance are required to trigger the reaction; the term catalyst applies only once the amount of the selenium compound needed is smaller than the amount of the starting materials required for the reaction.

Birds Of A Feather Flock Like A Magnetic System

Birds Of A Feather Flock Like A Magnetic System

An interdisciplinary team of physicists, biologists and biophysicists has developed a model that can imitate the mesmerizing patterns of starling flocks. By drawing an unlikely parallel between starling flocks and magnetic systems, the scientists were able to describe this vastly complicated biological system using only a few physical equations.

All sciences can be boiled down to the observation and prediction of patterns. Sometimes patterns that seem totally unrelated resemble each other in some way, like human eyes and nebulae, or hurricanes and galaxies. Sometimes these things have nothing in common except the associations we contrive, but sometimes they actually do. For example, fractal patterns, which can be found in tree roots, river branches and lightning strikes, all share a similar appearance and can also be represented by a common physical principle.

“The difficult thing is figuring out how to do a proper study to test the extent of these analogies,” said Frederic Bartumeus, an expert in movement ecology from the Centre for Advanced Studies of Blanes in Spain. “To see how accurately one can describe a biological system using concepts from statistical physics, for example.”

This is exactly what the scientists from the École Normale Supérieure in France and the Institute for Complex Systems in Italy have done. In their recent paper, published in Nature Physics in August, a team led by physicist Thierry Mora successfully modeled how starling flocks behave by modifying existing theories of magnetic systems.

A flock of tiny little magnets

Largely driven by the demand for better computing and data storage devices, for decades physicists have studied how to manipulate magnetic materials for practical uses. But physicists being physicists, it is not enough just to know how; they also need to know why.

They have learned that inside a ferromagnet — like a fridge magnet — there are billions and trillions of individual magnetic dipoles. Each dipole is itself essentially a tiny magnet, and they all have to line up for the big magnet to work. So the physicists developed and tested models and theories on what happens at microscopic scales inside magnets — how individual dipoles behave, how each of them interacts with their neighbors and how these interactions between these tiny magnets affect the big magnet.

Similar to a magnetic dipole, an individual starling adjusts itself depending on its neighbors. While there have been other parallel efforts that have investigated flocking starlings, Mora’s team has taken a different approach to the problem — by studying the flocks using theories originally intended for magnetic systems.

By carefully modifying theories of magnetic dipole interactions, they were able to develop a model that can accurately describe the behavior of flocking starlings.

Essentially the model simulates the interactions between neighboring individuals within the group. It not only predicts the movements of individual starlings, but more importantly the time it takes for an individual’s adjustment to affect the movement of the entire flock.
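
The article doesn’t reproduce the team’s equations, but the general recipe for this class of models is short: every agent repeatedly aligns with the average heading of its neighbors, plus noise. Below is a minimal Vicsek-style sketch of that recipe (our illustration of the model class, not Mora’s actual model):

```python
import numpy as np

# A minimal Vicsek-style alignment model, the generic class of "moving
# magnets" the article describes. Each bird steers toward the average
# heading of its neighbors, plus noise (the analogue of temperature).
rng = np.random.default_rng(0)
N, L, R, v, eta, steps = 200, 10.0, 1.0, 0.1, 0.3, 200

pos = rng.uniform(0, L, size=(N, 2))        # positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, size=N)  # headings

for _ in range(steps):
    # Pairwise displacements with periodic boundaries (minimum image)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < R ** 2   # includes self
    # Circular mean of neighbor headings, plus angular noise
    s = (neighbors * np.sin(theta)[None, :]).sum(axis=1)
    c = (neighbors * np.cos(theta)[None, :]).sum(axis=1)
    theta = np.arctan2(s, c) + eta * rng.uniform(-np.pi, np.pi, size=N)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Polarization is the flock's "magnetization": 1 = aligned, 0 = disordered
polarization = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"flock polarization: {polarization:.2f}")
```

The printed polarization plays the role of magnetization: near 1 when the flock moves as one, near 0 when headings are random. Raising the noise parameter eta drives the same order-to-disorder transition that heating drives in a magnet.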

The bigger picture

“People have also looked into collective motion at a cellular scale, for example in biological studies like tissue repairs,” said Mora. “The same class of models have been proposed to describe those motions, … basically any system where agents move by themselves.”

A better understanding of collective systems in general can have an impact on a broad range of subjects, from crowd control to bacteria growth and even the design of future self-propelled medical nanobots.

“Of course there will be some conditions for what kind of system this model can be applied to,” said Bartumeus, “but to have any kind of transferable knowledge among different biological systems, that’s already magic for a biologist.”

Bird flocks and magnetic systems have been studied before, but mostly by biologists and physicists working in different buildings. Now we know that the two share something in common, and from this, new connections may emerge.

“Sometimes it’s difficult to find physicists who are willing to study biology, or biologists who are interested in physics,” said Bartumeus. “There’s a pool of theories in physics that can be used to describe collective motion in biology, and this research has just opened another door.”

Physicists Produce World’s First Sample of Metallic Hydrogen

Physicists Produce World’s First Sample of Metallic Hydrogen

Originally theorized by Princeton physicists Eugene Wigner and Hillard Bell Huntington in 1935, metallic hydrogen is ‘the holy grail of high-pressure physics.’

Theoretical work suggests a wide array of interesting properties for this material, including high temperature superconductivity and superfluidity (if a liquid).

To create it, Harvard University physicists Dr. Ranga Dias and Professor Isaac Silvera squeezed a tiny hydrogen sample to 495 GPa (gigapascals) — greater than the pressure at the center of the Earth.

At those extreme pressures, solid molecular hydrogen (which consists of molecules sitting on the lattice sites of the solid) breaks down, and the tightly bound molecules dissociate, transforming into atomic hydrogen, which is a metal.

“We have studied solid molecular hydrogen under pressure at low temperatures,” the researchers said.

“At a pressure of 495 GPa hydrogen becomes metallic with reflectivity as high as 0.91.”

“We fit the reflectance using a Drude free electron model to determine the plasma frequency of 32.5 ± 2.1 eV at T = 5.5 K, with a corresponding electron carrier density of 7.7 ± 1.1 × 10²³ particles/cm³, consistent with theoretical estimates of the atomic density.”

“The properties are those of an atomic metal,” they noted.
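
The quoted Drude numbers are internally consistent, which is easy to verify: in the free-electron model, carrier density and plasma frequency are tied together by n = ε0·mₑ·ωₚ²/e². A quick check with standard constants:

```python
# Consistency check of the quoted Drude-model numbers, using the standard
# free-electron relation n = eps0 * m_e * omega_p**2 / e**2 (SI units).
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s

omega_p = 32.5 * e / hbar                  # 32.5 eV converted to rad/s
n = eps0 * m_e * omega_p ** 2 / e ** 2     # carriers per cubic meter
print(f"n = {n / 1e6:.2e} per cm^3")       # ~7.7e23, matching the paper
```

Plugging in the 32.5 eV plasma frequency returns about 7.7 × 10²³ carriers per cubic centimeter, matching the density the team reports.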

To create the material, the authors turned to one of the hardest materials on Earth — diamond.

But rather than natural diamond, they used two small pieces of carefully polished synthetic diamond, which were treated to make them even tougher and then mounted opposite each other in a device known as a diamond anvil cell.

“It was really exciting,” Professor Silvera said.

“Ranga was running the experiment, and we thought we might get there, but when he called me and said, ‘The sample is shining,’ I went running down there, and it was metallic hydrogen.”

“I immediately said we have to make the measurements to confirm it, so we rearranged the lab, and that’s what we did.”

“It’s a tremendous achievement, and even if it only exists in this diamond anvil cell at high pressure, it’s a very fundamental and transformative discovery.”

While the work offers a new window into understanding the general properties of hydrogen, it also offers tantalizing hints at potentially revolutionary new materials.

“One prediction that’s very important is metallic hydrogen is predicted to be meta-stable,” Professor Silvera explained.

“That means if you take the pressure off, it will stay metallic, similar to the way diamonds form from graphite under intense heat and pressure, but remains a diamond when that pressure and heat is removed.”

“Metallic hydrogen may have important impact on physics and perhaps will ultimately find wide technological application,” the researchers said.

“A looming challenge is to quench metallic hydrogen and if so study its temperature stability to see if there is a pathway for production in large quantities.”

Uranium-based compound improves manufacturing of nitrogen products

Uranium-based compound improves manufacturing of nitrogen products

Despite being widely used, ammonia is not that easy to make. The main method for producing ammonia on an industrial level today is the Haber-Bosch process, which uses an iron-based catalyst, temperatures around 450°C and pressures of 300 bar — almost 300 times the pressure at sea level.

The reason is that molecular nitrogen — as found in the air — does not react very easily with other elements. This makes nitrogen fixation a considerable challenge. Meanwhile, numerous microorganisms have adapted to perform nitrogen fixation under normal conditions and within the fragile confines of a cell. They do this by using enzymes whose biochemistry has inspired chemists for applications in industry.

The lab of Marinella Mazzanti at EPFL synthesized a complex containing two uranium(III) ions and three potassium centers, held together by a nitride group and a flexible metalloligand framework. This system can bind molecular nitrogen and split it in two under mild, ambient conditions. Adding hydrogen and/or protons, or carbon monoxide, to the resulting nitrogen complex then bonds the cleaved nitrogen to hydrogen and carbon.

The study proves that a molecular uranium complex can transform molecular nitrogen into value-added compounds without the need for the harsh conditions of the Haber-Bosch process. It also opens the door for the synthesis of nitrogen compounds beyond ammonia, and forms the basis for developing catalytic processes for the production of nitrogen-containing organic molecules from molecular nitrogen.

Scientists Capture ‘Spooky Action’ In Photosynthesis

Scientists Capture ‘Spooky Action’ In Photosynthesis

Photosynthesis and other vital biological reactions depend on the interplay between electrically polarized molecules. For the first time, scientists have imaged these interactions at the atomic level. The insights from these images could help lead to better solar power cells, researchers added.

Atoms in molecules often do not equally share their electrons. This can lead to electric dipoles, in which one side of a molecule is positively charged while the other side is negatively charged. Interactions between dipoles are critical to biology; for instance, the way large protein molecules fold often depends on how the electric charges of dipoles attract or repel each other.

One process where dipole coupling is key is photosynthesis. During photosynthesis, dipole coupling helps chromophores — molecules that can absorb and release light — transfer the energy that they capture from sunlight to other molecules that convert it to chemical energy.

Intriguingly, a consequence of dipole coupling is that chromophores may experience a strange phenomenon known as quantum entanglement. Quantum physics suggests that the world is a fuzzy, surreal place at its very smallest levels. Objects experiencing quantum entanglement are better thought of as a single collective than as standalone objects, even when separated in space. Quantum entanglement means that chromophore properties can strongly depend on the number, orientations and positions of their neighbors.

Understanding the effects that dipole coupling has on chromophores could help shed light on photosynthesis and light-harvesting applications such as solar energy. However, probing these interactions requires imaging chromophore activity with atomic levels of precision. Such a task is well beyond the capabilities of light-based microscopes, which are currently limited to a resolution slightly below 10 nanometers (billionths of a meter) at best, said Guillaume Schull, a physicist at the University of Strasbourg in France. In comparison, a hydrogen atom is roughly one-tenth of a nanometer in diameter.

Instead of relying on light to illuminate and image chromophores, scientists in China used electrons. Scanning tunneling microscopes bring extremely sharp, electrically conductive tips near the surfaces they scan. Quantum physics suggests that electrons and other particles do not have one fixed location until they interact with something else, and so have some chance of existing anywhere. As such, electrons from the microscope’s tip can “tunnel” to whatever the microscope is scanning. A fraction of these electrons lose energy during tunneling; that energy gets emitted as light, which the microscope uses to image targets with atomic-scale resolution.

The researchers experimented with chromophores made of a purple dye known as zinc phthalocyanine. These chromophores were each about 1.5 nanometers wide, and they shone red light when the microscope excited them.

The scientists used the microscope’s tip to push chromophores together. When the chromophores were roughly 3 nanometers apart, the spectra of light they gave off began shifting.
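
That 3-nanometer onset is roughly where classical dipole-dipole coupling, which falls off as 1/r³, becomes spectroscopically noticeable. A rough order-of-magnitude sketch (the 5-debye dipole below is an assumed, typical magnitude for a dye molecule, not a value from the study):

```python
import math

# Order-of-magnitude estimate of the dipole-dipole coupling energy at the
# ~3-nanometer separation where spectral shifts appeared. The 5-debye
# dipole is an assumed, typical magnitude, not a number from the study.
eps0 = 8.854e-12                  # vacuum permittivity, F/m
k = 1 / (4 * math.pi * eps0)
debye = 3.336e-30                 # C*m per debye

p = 5 * debye                     # assumed molecular transition dipole
r = 3e-9                          # separation, meters

E = k * p ** 2 / r ** 3           # characteristic coupling energy, joules
print(f"~{E / 1.602e-19 * 1e3:.2f} meV")   # fractions of a millielectronvolt
```

Couplings of a fraction of a millielectronvolt are tiny on chemical scales, but the steep 1/r³ falloff explains why nothing happens until the molecules are pushed close together.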

“I was quite surprised by the dramatic spectral change when two isolated molecules were simply pushed together,” said study co-senior author Zhenchao Dong, a physical chemist at the University of Science and Technology of China in Anhui.

The research team’s theoretical simulations suggest these changes were a direct visualization of dipole coupling between the chromophores.

“Dipole-dipole interactions play an important role in many biological and photophysical processes. To my knowledge, it is the first time that one has directly imaged them with sub-molecular resolution,” said Schull, who did not participate in this study. “This is really impressive.”

The scientists experimented with clusters of up to four chromophores. The patterns of light from these chromophores suggested they might have been entangled. Future research can explore dipole coupling in more complex arrangements — for example, in 3-D systems, Dong said. He and his colleagues detailed their findings in the March 31 issue of the journal Nature.

By analyzing how molecules interact and exchange energy, “the most important implication of these findings is the possibility to understand and therefore engineer molecular structures for efficient solar-energy conversion devices,” said Elisabet Romero, a biophysicist at VU University Amsterdam, who did not take part in this research. A better understanding of dipole coupling might also yield insights into the function of molecular structures such as catalysts, Romero added.

Scientists know that you pee in the pool

Scientists know that you pee in the pool

We know you would never do it. But some people pee in swimming pools and hot tubs. This isn’t just a gross habit. When chlorine reacts with urine, it creates chemicals that can irritate eyes and lungs. Now researchers can measure this disgusting behavior. They’ve found a simple way to estimate the volume of urine in a pool.

The technique could help people decide when to change some or all of the water in a pool or hot tub, the researchers say. But the new research isn’t really meant to create new rules for pool managers. It’s supposed to emphasize a message: Don’t pee in the pool!

By itself, urine in pools isn’t a problem. That’s because a healthy person’s pee is typically sterile, or germ-free, says Lindsay Blackstock. She’s an analytical chemist at the University of Alberta in Edmonton, Canada. But pool water also contains chlorine, a chemical that kills germs. Trouble can arise when that chlorine reacts with urine. It can trigger the production of dozens of new byproducts. Many of these new chemicals will cause no harm. But some, especially one called trichloramine (Try-KLOR-ah-meen), are known irritants.

Even if you’ve never heard of trichloramine, you’ve probably smelled it. That distinct “swimming pool smell” at most pools doesn’t come from the chlorine, notes Blackstock. It’s trichloramine. It can sting the eyes. The pungent chemical also can irritate the lungs.

As pee in a pool increases, the amounts of trichloramine will too. The more trichloramine there is, the more irritating it can be to swimmers. So Blackstock and her teammates wanted to see if they could estimate how much urine was present in pool water. There’s no simple way to test for urine directly. (Have you ever heard that pool water has a chemical in it that will change color if you pee? That’s only a myth.)

So the researchers needed a marker for the urine — some other substance that would signal the likely presence of pee. And that’s what caused them to focus on acesulfame (ASS-eh-sul-faym) potassium. It’s an artificial sweetener used in foods and drinks. It’s sold under the brand names Sunett and Sweet One. The chemical is also called Ace-K for short.

It makes a good marker for pee, says Blackstock. For one, it has no natural sources and is very stable. It doesn’t break down at normal temperatures, which is why many food manufacturers use Ace-K. Even after being stored in foods at room temperature for 10 years, it won’t have broken down. It also won’t break down in pools or be removed during water-cleanup treatments.

Moreover, Ace-K passes right through the human body without being digested. That makes it a great choice as a low-calorie sweetener (the body doesn’t get any energy from it). But it also made Ace-K a good choice for their study, says Blackstock. The substance doesn’t leave the body in sweat, breath or poop. Ace-K only leaves the body in urine. And when it comes out, it will be the same form of the chemical as had been ingested.

Foul findings

First, the researchers needed to know how much Ace-K is present in the average person’s urine. They collected urine samples from 20 people and mixed them together. Each milliliter of urine (about one-fifth of a teaspoon) contained about 2.36 micrograms of Ace-K.

Then, on 15 days in August 2016, the team collected water samples from two swimming pools in a city in Canada. One pool held about 420,000 liters (110,000 gallons). The other had about twice that volume. On the same days, the researchers also collected three samples from the city’s water supply.

Liter-sized samples of the city’s tap water contained between 12 and 20 nanograms of Ace-K. (Remember, Ace-K doesn’t decompose during water treatment.) If there were no pee in the pools, they should have had similar levels of Ace-K. The smaller pool, though, had 156 nanograms of Ace-K per liter of water. And the larger pool had even more, about 210 nanograms per liter. That adds up to about 30 liters (almost 8 gallons) of urine in the small pool. The larger pool held a whopping 75 liters (almost 20 gallons) of pee!
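
The estimate is straightforward arithmetic: the pool’s total Ace-K divided by the Ace-K concentration of urine. Reproducing it with the article’s numbers (the larger pool is taken as twice the smaller one’s volume; the small tap-water background is ignored, which changes the answers only slightly):

```python
# Reproducing the article's arithmetic: urine volume = total Ace-K in the
# pool / Ace-K concentration of urine.
ace_k_urine = 2.36e-3          # grams of Ace-K per liter of urine (2.36 ug/mL)

pools = {
    "smaller pool": (420_000, 156e-9),   # volume (L), measured Ace-K (g/L)
    "larger pool": (840_000, 210e-9),
}
for name, (volume, conc) in pools.items():
    total_ace_k = conc * volume          # grams of Ace-K in the pool
    urine = total_ace_k / ace_k_urine    # liters of urine
    print(f"{name}: ~{urine:.0f} liters of urine")   # ~28 and ~75
```

That reproduces, within rounding, the article’s roughly 30 and 75 liters.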

These pools probably aren’t unusual, says Blackstock. In 2014, the same researchers found Ace-K in unusually high concentrations in 21 public swimming pools, 8 hot tubs and even a private swimming pool. In other words, every pool and hot tub they tested had pee in it. Blackstock and her team shared their new findings online March 1 in Environmental Science & Technology Letters.

The team’s approach “is a pretty cool idea,” says Beate Escher. She’s a toxicologist at the Helmholtz Center for Environmental Research in Leipzig, Germany. Researchers have used Ace-K before to measure water pollution, she says, both on and just beneath Earth’s surface. And Ace-K holds some advantages over other substances, such as caffeine, that researchers have used as a marker of urine. Caffeine, for instance, can break down after it leaves the body. “Ace-K is much more stable,” Escher says.

Like Blackstock and her team, Escher suggests the best way to tackle urine in pools is prevention, not clean-up. So please, she urges, don’t pee in the pool: “Self-control is the best thing.”

‘Fossil’ groundwater is not immune to modern-day pollution

‘Fossil’ groundwater is not immune to modern-day pollution

Groundwater that has lingered in Earth’s depths for more than 12,000 years is surprisingly vulnerable to modern pollution from human activities. Once in place, that pollution could stick around for thousands of years, researchers report online April 25 in Nature Geoscience. Scientists previously assumed such deep waters were largely immune to contamination from the surface.

“We can’t just drill deep and expect to run away from contaminants on the land surface,” says Scott Jasechko, a study coauthor and water resources scientist at the University of Calgary in Canada.

Groundwater quenches the thirst of billions of people worldwide and accounts for roughly 40 percent of the water used in agriculture. Water percolating from the surface into underground aquifers can carry pollutants such as pesticides and salt along for the ride.

Jasechko and colleagues weren’t looking for contamination when they tested water from 6,455 water wells around the world. Their goal was to use carbon dating to identify how much of that deep water was “fossil” groundwater formed more than 12,000 years ago. Previous studies had looked at average water age, rather than the age of its individual components.

While there’s no C in H2O, carbon dating can still be used to date groundwater by examining the carbon dissolved in the water. Radioactive carbon-14 atoms decay as the water ages, with a half-life of about 5,730 years, so the less carbon-14 a sample retains, the older it is; water more than 12,000 years old holds well under half of its original amount. Comparing the relative abundance of carbon isotopes in the various wells, the researchers discovered that over half of the wells more than 250 meters deep yielded mostly groundwater at least 12,000 years old. How much older is unknown. Worldwide, the researchers estimate that fossil groundwater accounts for 42 to 85 percent of the water in the top kilometer of Earth’s crust.
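
The dating principle itself is a one-line formula: the surviving fraction of carbon-14 halves every 5,730 years or so. A minimal sketch (the study’s actual age models correct for groundwater geochemistry and are more involved):

```python
# Fraction of carbon-14 remaining after a given age, from the standard
# half-life. Illustrates the dating principle only.
HALF_LIFE = 5730.0  # years, carbon-14

def c14_fraction(age_years: float) -> float:
    return 0.5 ** (age_years / HALF_LIFE)

for age in (1_000, 12_000, 30_000):
    print(f"{age:>6} years old: {c14_fraction(age):.1%} of the C-14 remains")
```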

In a second measurement, the researchers looked for a common modern pollutant. They found that around half of wells containing mostly fossil groundwater had elevated traces of tritium, a radioactive hydrogen isotope spread during nuclear bomb tests that’s hazardous in very high concentrations. While the tritium levels weren’t dangerous, its presence suggests that at least some groundwater in the wells postdates the 1950s nuclear testing. That relatively young water may introduce other contaminants in addition to tritium, the researchers say.

How new groundwater enters deep wells is still unclear, Jasechko says. Old and young waters could mix within an aquifer or, alternatively, the construction and use of the well itself could churn the waters together.

No matter where the young water comes from, the new technique for identifying the percentage of fossil groundwater in a well could be an important tool for communities, says Audrey Sawyer, a hydrogeologist at Ohio State University in Columbus. The study raises awareness that even in wells with mostly older water “a fraction of that same water can be pretty young and susceptible to contamination,” she says.

Could Nano-sized Minerals Make Agriculture More Productive?

Could Nano-sized Minerals Make Agriculture More Productive?

Every day, engineered nanoparticles in common consumer products are washed down drains, belched out from exhaust pipes or otherwise expelled into the environment. These minuscule chunks of material, which are around 100 times smaller than an average bacterium, can find their way into the wastewater treatment system, and it’s difficult to remove them because they’re so small. Since about 50 percent of treated sewage sludge in the U.S. is used as fertilizer, a lot of these nanoparticles can end up in farmers’ fields.

“We think that engineered nanoparticles have a great chance of getting into agricultural lands,” said Xingmao (Samuel) Ma, an environmental engineering professor at Texas A&M University in College Station. Ma is one of a growing cadre of scientists who study how nanoparticles affect crops. The work is important because nanotechnology is booming. By 2020, the value of nanotech products on the global market is expected to reach $3 trillion.

In a study recently published in the journal Environmental Pollution, Ma and his colleagues report that a common industrial nanoparticle could in fact have a positive impact on crops — helping canola plants that are stressed by salty conditions to grow closer to normal size. But the results should be interpreted cautiously. Other studies have found the same nanoparticle — cerium oxide, made up of the elements cerium and oxygen — can have both positive and negative effects, depending on the species of plant, how much cerium oxide it is exposed to and for how long, and other specific growing conditions, such as whether the plant was in soil.

“The idea of studying the salt stress combined with nanoparticles, this is good,” said Jorge Gardea-Torresdey, an environmental chemist at the University of Texas at El Paso who has researched the environmental impact of nanotechnology for decades. Salt stress will likely be a bigger problem in the future, since a combination of climate change and increased demand for food for a growing population may force more farmers to irrigate with salty water, he said.

Cerium oxide nanoparticles of the type Ma and his colleagues studied are produced by the thousands of pounds each year in the U.S. and are used to make sunscreen, microelectronics, polishing agents and catalysts that speed up chemical reactions. They are also added to diesel to help the fuel burn more efficiently.

Ma and his team tested the effects of the cerium oxide nanoparticles on canola, a generally hardy plant whose seeds are pressed to produce canola oil. They made three potting mixtures of sand and clay containing zero, 200 or 1,000 parts per million of cerium oxide nanoparticles. While the last two concentrations are substantially higher than most predicted levels of cerium oxide nanoparticles in the environment, Ma noted that many of the predictions assume a uniform mixing of nanoparticles into the soil, which may not occur in real-life scenarios.

Half the plants they grew were also treated with salt solution. The researchers chose a salt concentration that fell within the average range for salty soil and brackish water reported in previous studies, they wrote.

Both groups of plants grew more slowly when exposed to salty water, but the plants grown with cerium oxide nanoparticles had bigger leaf mass, and higher levels of chlorophyll — the green pigment plants use to turn light into energy — than their salt-stressed, but nanoparticle-less peers. While the cerium oxide did not completely alleviate the stunted growth caused by the salt solution, it did help the plants flourish more than they normally would under such conditions. For the salt-stress-free plants, cerium oxide nanoparticles also increased plant mass, but the increase was primarily in the roots.

The results are encouraging, given the likely need to grow crops on marginal land in the future. Still, significant questions remain. It’s important to test whether the positive effect persists with different plants, different concentrations of salt and nanoparticles, and in different soil conditions, said Jason White, an analytical chemist at the Connecticut Agricultural Experiment Station, a state agency that conducts agricultural research. White has worked with Ma on separate projects, but was not involved in the salt study.

Further research should also explore if nanoparticles make plants less nutritious, what happens to multiple generations of plants grown in nanoparticle-laced soil, and what’s happening on the molecular level when nanoparticles affect plant growth, White said.

White himself is looking at how nanoparticles move through the food chain, such as from plants to insects. Ma and his team are currently investigating how cerium oxide nanoparticles might turn specific plant genes on or off, which would point scientists to a more precise explanation for why the particles have the effect that they do.

“We are talking about something new and still unexplored. Every day we discover new aspects,” said Lorenzo Rossi, a postdoctoral researcher in Ma’s lab and a co-author of the paper.

Some companies are already using nanoparticles in products to help plants grow, Ma said, so a cerium oxide additive for farmers’ fields is a definite possibility.

“I know for a fact that there are fertilizer companies in China who are adding cerium [nanoparticles] to their products,” White said.

This research, and the projects that follow up on it, could help scientists build an understanding of the potential effects, both helpful and harmful.

Hotter air may lead planes to carry fewer passengers

Hotter air may lead planes to carry fewer passengers

Air travel can be annoying. But research now suggests global warming could make it much worse. To get off the ground in really hot weather, planes may be forced to carry fewer passengers. That might mean a little more elbow room, which would be good. However, it also would make flying more expensive.

Average air temperatures around the world are rising. That global warming is happening because people are polluting the air with increasing amounts of greenhouse gases. These gases, such as carbon dioxide, are a byproduct of burning fossil fuels. Their rising levels help to hold in energy from the sun, causing ground-level temperatures to rise.

Those warmer temps can affect an airplane’s ability to fly. That’s because air molecules spread out more as the air warms. This generates less lift under a plane’s wings as it barrels down a runway. To compensate, a plane must be lighter to take off in hot weather than on cooler days.
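
The underlying physics is the ideal gas law: at fixed pressure, air density falls as temperature rises, and lift at a given speed is proportional to density. A rough sketch with assumed runway temperatures (not the study’s model):

```python
# Why hot runways cut lift: at fixed pressure, air density scales as 1/T
# (ideal gas law), and wing lift is proportional to density. The two
# temperatures below are assumed for illustration.
T_cool = 273.15 + 25    # kelvin, a mild day
T_hot = 273.15 + 48     # kelvin, a Phoenix-style heat wave

density_ratio = T_cool / T_hot       # rho ~ P / (R*T), pressure held fixed
lift_loss = 1 - density_ratio        # lift ~ rho * v^2, at the same speed
print(f"~{lift_loss:.1%} less lift at the same takeoff speed")  # ~7%
```

A several-percent lift deficit has to be bought back somewhere: a longer takeoff roll, a higher takeoff speed or, when the runway runs out of room, a lighter plane.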

It can even prove too dangerous for some planes to attempt a take-off. A record June heat wave in the American Southwest, for instance, caused flight cancellations in Phoenix, Ariz. One airline’s planes were cleared to operate only up to 47.8° Celsius (118° Fahrenheit). On June 20, Phoenix reached a blistering 48.3° Celsius (119° Fahrenheit)!

Radley Horton is a climate scientist at Columbia University in New York City. Two years ago, he and graduate student Ethan David Coffel projected the impact of warming at four U.S. airports. The trajectory of expected warming could triple the number of days when planes face weight restrictions, they calculated.

Horton and his colleagues have now expanded on those earlier projections. They probed the impact of rising temps on five types of commercial planes flying out of 19 of the world’s busiest airports. In the coming decades, as many as one to three out of every 10 flights that take off during the hottest time of day could face weight restrictions, they found. In some cases, a typical 160-seat plane would have to jettison 4 percent of its weight. That would be the equivalent of taking a dozen people off the plane, the researchers calculated.

In Temperate Forests, Edges Hold More Carbon than the Middles

In Temperate Forests, Edges Hold More Carbon than the Middles

Almost everyone loves a vast, dense forest — including scientists. But in the temperate zone encompassing North America, Europe and much of Asia, farms, roads and housing developments have removed much of the forest, creating a patchwork quilt where there was once an unbroken green blanket.

In a paper published today in the Proceedings of the National Academy of Sciences, and presented last week at the American Geophysical Union meeting in San Francisco, a team of scientists reports that such fragmented forests may actually fight harder against climate change. The authors found that New England oak forests grew nearly twice as fast around their edges as in their interiors.

“The growth and the size of trees near the edge of forest compared to what is just inside is pretty phenomenal,” said Nick Haddad, an ecologist at North Carolina State University in Raleigh who was not involved in the research. “Through that lens, there can be a green lining around the big cloud of forest fragmentation.”

But the study authors also found that this surprising benefit will likely decline as the climate warms, and emphasized that further fragmenting forests will not slow climate change.

About half of a tree’s mass is the element carbon, and growing trees soak up carbon from the atmosphere via photosynthesis. This carbon can return to the atmosphere, however, if forests are burned or cut and left to rot, or when trees die naturally. Still, scientists have estimated that forests globally take up about a billion tons of carbon a year more than is lost to fire and deforestation.

While that is not nearly enough to offset the roughly 10 billion tons of carbon that human activities emit annually, scientists have suggested that by protecting existing forests and allowing forests to regrow where they have been cut down, countries can slow the pace of climate change as they transition away from fossil fuels.

But estimates of forests’ carbon-absorbing potential are based on a relatively small number of studies, many done in large, intact forests such as the Amazon rainforest. In studies of forests that have been penetrated by farms or roads, scientists have found that biodiversity and other measures of forest health usually decline, because forest edges are vulnerable to damage from wind, heat, invasive species and other factors. Recently, scientists found that tropical forest edges also contain substantially less carbon than the interiors, suggesting that the total amount of carbon held in tropical forests — which provide a portion of the world’s overall carbon sink — might be smaller than previously estimated.

Curious if a similar effect occurs in temperate forests, Boston University ecologist Lucy Hutyra and postdoctoral researcher Andrew Reinmann launched a study in oak-dominated forests in Massachusetts. The scientists used trees’ annual growth rings to measure their growth rates in study plots that extended from the forest edge to 30 meters into the forest. They found that trees at the forest edge grew new wood 89 percent faster than trees in the interior. Though the study did not determine a cause for the growth enhancement, Hutyra and Reinmann suspect that added sun exposure at the forest edge plays a major role.

Unlike in the tropics, nearly all original temperate forests have been cut down, and many are now growing back, making temperate forests an outsized contributor to the total global carbon sink. They are also more fragmented: Hutyra and Reinmann found that nearly 18 percent of forests in southern New England — comprising the states of Massachusetts, Connecticut and Rhode Island — grow 20 meters or less from an edge, indicating a level of fragmentation higher than almost any other forest on Earth. If the edge effect from their study held for all temperate forests, the global temperate forest carbon sink estimate, which accounts for around 60 percent of the total net forest carbon sink, would increase substantially.
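To get a feel for the size of that correction, consider a crude back-of-envelope calculation — my own arithmetic, not an estimate from the study, and it ignores exactly how the growth boost tapers with distance from the edge:

```python
# Crude back-of-envelope (illustrative only, not the study's estimate):
# suppose 18% of forest area is "edge" and edge trees grow 89% faster.
edge_fraction = 0.18
edge_growth_boost = 0.89
areawide_gain = edge_fraction * edge_growth_boost
print(f"~{areawide_gain:.0%} more annual wood growth than an edge-free forest")
# prints: ~16% more annual wood growth than an edge-free forest
```

Even a correction of that rough size would be significant when applied to a sink that accounts for well over half of the global net forest carbon uptake.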

However, Haddad noted that because Hutyra’s and Reinmann’s study plots were mostly next to suburban yards, trees on these forests’ edges were likely protected from disturbances, and possibly even aided by fertilizer applications to nearby lawns and gardens. This may have given them an edge, so to speak, over trees growing next to farms or roads. The authors “were looking, in some ways, at the most idealized landscapes,” Haddad said. “I suspect that the patterns will not be as strong in those harsher and more dynamic environments.”

Haddad added that carbon uptake in other temperate forest types, such as pine forests in the southern U.S., might respond differently to fragmentation.

By comparing growth rates with past climate data, Hutyra and Reinmann also found that the growth boost on the forest edge lessened substantially during hot years, suggesting that temperate forest edges are more vulnerable than interiors to heat stress. So as the planet warms, fragmented forests may sop up less carbon.

That is one reason that despite her team’s results, Hutyra recommends against trying to fight climate change by punching more holes in the forest. The other is that carbon lost from the removed trees would outweigh carbon taken up at newly created edges.

“We’re not advocating for fragmenting our forest,” Hutyra said, noting that before Europeans arrived, southern New England’s forests absorbed 31 percent more carbon than they do today. “If we just had forest and pasture … we’d be better off.”

When it’s hot, plants become a surprisingly large source of air pollution

When it’s hot, plants become a surprisingly large source of air pollution

Planting trees is often touted as a strategy to make cities greener, cleaner and healthier. But during heat waves, city trees can actually boost air pollution levels. When temperatures rise, a new study finds, as much as 60 percent of ground-level ozone is created with the help of chemicals emitted by urban shrubbery.

While the findings seem counterintuitive, “everything has multiple effects,” says Robert Young, an urban planning expert at the University of Texas at Austin, who was not involved with the study. The results, he cautions, do not mean that programs focused on planting trees in cities should stop. Instead, more stringent measures are needed to control other sources of air pollution, such as vehicle emissions.

Benefits of city trees include helping reduce stormwater runoff, providing cooling shade and absorbing carbon dioxide while releasing oxygen. But research has also shown that trees and shrubs release chemicals that can interact with their surrounding environment, producing polluted air. One, isoprene, can react with human-made compounds, such as nitrogen oxides, to form ground-level ozone, a colorless gas that can be hazardous to human health. Monoterpenes and sesquiterpenes also react with nitrogen oxides, and when they do, lots of tiny particles, similar to soot, build up in the air. In cities, cars and trucks are major sources of these oxides.

In the new study, Galina Churkina of Humboldt University of Berlin and colleagues compared simulations of chemical concentrations emitted from plants in the Berlin-Brandenburg metropolitan area. The researchers focused on two summers: 2006, when there was a heat wave, and 2014, when temperatures were more typical.

At normal daily maximum summer temperatures, roughly 25° Celsius on average, plants’ chemical emissions contributed about 6 to 20 percent of ozone formation in the simulations. At peak temperatures during the heat wave, when readings soared to over 30°C, plant emissions spiked, boosting their share of ozone formation to as much as 60 percent. Churkina says she and her colleagues were not surprised to see the seemingly contrary relationship between plants and pollution. “Its magnitude was, however, quite amazing,” she says.

The results, she notes, suggest that campaigns to add trees to urban spaces can’t be done in isolation. Adding trees will improve quality of life only if such campaigns are combined with the radical reduction of pollution from motorized vehicles and the increased use of clean energy sources, she says.

Bee hotels are open for business

Bee hotels are open for business

A new kind of hotel is opening for business around the world. Its guests are wild bees. Built by people, these hotels are going up in rural farmlands, in suburban backyards — even on city rooftops. They don’t offer maid service, but they do give bees a place to nest.

Bees are a big deal in the world of plant reproduction. They move pollen from one plant to another, which helps make seeds that will grow into new plants. Some bees make honey. But that’s true only for a certain few species: the honeybees. They live communally in hives, often tended by people. Most bee species are wild and live on their own. (That’s why they’re called solitary bees.)

Honeybees aren’t native to the United States. Europeans brought them here roughly 400 years ago. Most wild American bees were here long before honeybees arrived, notes Sandra Rehan. She’s a biologist at the University of New Hampshire in Durham. Estimates show there are about 20,000 species of bees worldwide. About 4,000 call North America home. These insects are incredibly diverse, Rehan points out. “They have lived in these areas and ecosystems for thousands, if not millions, of years.”

Honeybees may be the better-known bugs, but wild bees are crucial pollinators too. “That’s an incredibly important service that they’re providing,” says Daphne Mayes. She’s a graduate student at the University of Kansas in Lawrence. Some bees stop by many types of flowers. Others visit just one type.

Unfortunately, wild bee species have lost much of their natural habitat — the places where they would normally choose to live. Many of those species nest in crevices and holes in the ground. Other nesting spots include gaps under rocks or fallen trees, holes in trees or even cracks in some buildings. Large farms may now plow over such nesting areas. Bee habitats also can be destroyed as people erect cities and suburbs.

Handmade habitats

Bee hotels address one problem caused by that loss of habitat. They give bees a place to nest.

Most guests at hotels for people don’t stay more than a few days. When they leave, they take their stuff with them. But when bees check out of their hotel rooms, they leave their kids behind, observes Scott MacIvor. He’s an ecologist in Canada at the University of Toronto Scarborough. “You can think of a female — a mom — buying a house with several rooms,” MacIvor says. She lays her eggs and leaves food for them to eat after they hatch. By the time her young are ready to check out, they have grown into adults.

“It’s actually quite easy to make a bee hotel,” notes Rebecca Ellis. She’s a conservation biologist at the Edmonton & Area Land Trust in Alberta, Canada. The group manages nine parcels of land that are set aside to protect plants and wildlife. Creating bee hotels is so easy that this group already has helped people set up more than 400 in and around Edmonton.

A hotel design can be as simple as paper straws or hollow bamboo stems stuck into a clean milk carton. “Another way is just to drill holes in a block of wood,” Ellis says. Ideally, the holes will reach at least several centimeters (an inch or so) deep into the wood. Female bees lay a line of eggs with food for their young. Then they seal off the hole.

Different species prefer holes of different diameters. Mayes, in Kansas, has been studying bees that nest in tallgrass prairies across the eastern part of her state. Her field stations have become a chain of bee hotels. Among the guests, she found, was a type of leafcutter bee. Called Megachile brevis (MEG-ah-cheel BREH-vis), they nest in holes 12 millimeters (a half-inch) in diameter. A resin bee, Heriades carinata (HAIR-ee-AH-dees KAIR-ih-NAH-tah), uses holes 6 millimeters (a quarter-inch) in diameter. The so-called mason bees will move into either size hole.

You could even add some thin, clear plastic tubes to your bee hotel, suggests Laura Fortel-Vitrolles. She’s an ecologist in France who recently got her doctoral degree from the University of Avignon. For one of her research projects, she studied guests checking into a large bee hotel. “We used [plastic] tubes,” she says. “And we were able to observe bees while they were building their nests!”

Site a bee hotel in your yard, on a balcony — even a rooftop. “Generally, you want it to get morning sun,” Ellis recommends. Being near native plants — species that evolved in the area, and therefore are suited for it (as the wild bees are) — also helps. Ellis has a bee hotel near some goldenrod and blue flax flowers at her home. “Even your vegetable garden is great, because the bees will help pollinate the vegetables, or your herbs,” she says. “The garden is helping the bees, and the bees are helping your garden.”

Managing a bee hotel is easy. The guests don’t need room service or fresh towels. Nor should they be dangerous to their human helpers. “Solitary bees are not aggressive,” notes Ellis. Leave them alone and they generally won’t bother you.

But keep in mind that any bee hotel is “going to require some management,” Mayes says. Pathogens are germs that can cause disease. And, she notes, “There may be opportunities for pathogens to move from one hole to another and spread.” When many bees visit one spot, they could spread germs just as people can in a crowd. To avoid that, just remove straws, stems or wood blocks from a container after all the bees have left. Then add new ones each year.

Surprise guests

Bee hotels can bring interesting visitors. Fortel-Vitrolles and researchers at the Paris-based French National Institute for Agricultural Research set out bee hotels at 16 sites near the city of Lyon. Over two years, 21 species visited their hotel chain. But 87 percent of the bees belonged to just two species. They were the red mason bee, Osmia bicornis (OZ-mee-uh By-KOR-nis), and the builder bee, O. cornuta (Kor-NU-tuh).

Those bees are what scientists call “gregarious” species. Explains Fortel-Vitrolles, “They do not interact much with each other, but they live next to each other.” Think of them, she says, as strangers willing to move into a “kind of an apartment block for bees!”

In this part of France, red mason and builder bees emerge from their nests earlier in the year than do many other species. “They nest everywhere they can,” Fortel-Vitrolles learned, leaving fewer nesting spots for the bees that come out later. Her group shared its findings last year in the Journal of Insect Conservation.

Another study showed that native bees probably won’t be the only insects at many bee hotels. MacIvor and Laurence Packer at York University, in Toronto, collected data from almost 600 bee hotels that had been set up around that Canadian city.

More than 27,000 bees and wasps checked into the hotels during a three-year period. About one-quarter were non-native bees. Native wasps made up more than a third. That’s not necessarily bad. “These are very docile wasps that exist all around us,” MacIvor explains. And the wasps are “very, very important” because they help control various pests. But their presence could surprise someone who didn’t expect them. His team published its findings two years ago in PLOS ONE.

Bee hotels also could have unintended consequences, MacIvor notes. For example, the large number of insects in one place could invite parasites or predators. And if holes aren’t deep enough, the ratio of male to female bees could change in an area. That’s because some bees lay eggs that will hatch into females if they’re deep in a nest, but males if they’re closer to the outside. If there aren’t enough females, that could be bad for pollination and for reproduction, MacIvor explains.

Not just habitat

These hotels don’t just give bees somewhere to nest. They also can help researchers learn about the behaviors of wild insects. In one study, MacIvor collected leaf bits left in vacated hotel rooms by three types of leafcutter bees. This told him what types of plants those bees preferred to use for nests. He published those findings in the March 2016 Royal Society Open Science.

In a separate project in Australia, a researcher named Wilson and her team placed 24 modified bee hotels in orchards and forests around Queensland. Each hole has a removable straw inside it. That lets Wilson or one of her colleagues sample material from one egg compartment in each hole. Then the straw goes back in the hotel, so the rest of the eggs can develop. The team recently began collecting data.

Bee hotels make it quicker and easier for researchers to get those data, Wilson says. “Instead of watching bees come and go from flowers for several weeks, we can take a small amount of pollen bread [the bee’s food] from each nest every few months and use genetics to figure out which plants those pollens came from.” And, she adds, bee hotels “mimic the nesting material that different species like to use.”

Rehan’s group has a large bee hotel at the University of New Hampshire’s Woodman Farm. For the projects they work on, her team usually doesn’t collect data from the bee hotel. Nonetheless, Rehan says, the bee hotel at the farm is good for the bees that use it. Giving the pollinators comfortable guest quarters also benefits the plants nearby. It even helps the farm’s human visitors learn about wild bees.

Making the public aware of these insects is important because they face plenty of problems. Loss of habitat is a big one. Solitary bees have fewer nesting places as people have been transforming the countryside. The bees also have fewer flowers and other food sources. “They just don’t have enough resources,” says Rehan.

Another problem: farm chemicals. Common pesticides that farmers use may be harming wild bees. One study last year reviewed 18 years of data. The English researchers found evidence linking pesticide use to a drop in the number of wild bees. The group’s study appeared in Nature Communications.

Scientists from three U.S. Geological Survey laboratories also have reported that pesticides pose a risk to native bees. They collected the pollinators from Colorado wheat fields and grasslands. Tests of these bees turned up residues — in some cases high levels — of 19 pesticides and breakdown chemicals. And even the insects roaming open grasslands had been exposed to these pest-killing chemicals. Michelle Hladik, Mark Vandever and Kelly Smalling shared their findings in the January 2016 Science of the Total Environment.

Bee hotels won’t solve all the problems facing wild bees. But they can help these insects and the plants that they typically pollinate. Those hotels also can be fun to craft. And watching as bees or other guests visit to lay eggs — and then emerge the next year as adults — can offer even more fun.

The hotels also help scientists and the public alike learn more about wild bees. Bees “do wonderful work,” Rehan says, yet even today “we just don’t know much about them.”

Contested National Monuments in Utah House Treasure Troves of Fossils

Contested National Monuments in Utah House Treasure Troves of Fossils

Seventy-five million years ago, a family of tyrannosaurs fled through a forest engulfed in flames. The predators — an adult more than 30 feet long, an adolescent two-thirds its size, and a baby no bigger than a Shetland pony — emerged from the inferno onto a muddy shoreline and plunged into the lake, desperate to escape the heat.

That’s what Alan Titus thinks happened here. He picks up a lump of charcoal, a burned remnant of the once-lush forest, and crumbles it between his fingers. Inches away lies an exposed tyrannosaur bone, smooth and brown.

The landscape today is a rugged plateau of pale dirt, sagebrush and scraggly junipers. It’s matched by Titus’ stained khaki vest and the dry smile creasing his stubbled cheeks. A paleontologist with the Bureau of Land Management, he oversees all research on dinosaurs and other fossils in Utah’s Grand Staircase-Escalante National Monument.

When President Bill Clinton proclaimed the now 1.9 million-acre monument in 1996, the decision infuriated many people in Utah — Titus included. At the time, Titus was teaching geology at Snow College, a small two-year college in the middle of the state.

“I was just as upset and shocked as anyone that this had happened, because I liked to take my students down here on field trips and collect fossils,” he says. The monument designation preserved many previous land uses, such as hunting and driving all-terrain vehicles, but it banned fossil collecting without a permit, says Titus.

But Titus’s perspective soon changed. The national monument designation led to new funding and resources for scientists to study fossils in Grand Staircase-Escalante, and what they found there was beyond all expectations. Rather than the familiar dinosaur species known from rocks of the same age in Canada and Wyoming, the Escalante rocks revealed a remarkable diversity of new species, upending scientists’ understanding of climates and habitats during the age of dinosaurs by revealing a brand new ecosystem.

“To a scientist, it doesn’t get any better than that, because you get into this business to make discoveries and contribute new knowledge,” says Titus.

Now, some paleontologists believe the same thing could happen in America’s newest national monument — at least, assuming it remains a national monument. In December, then-President Barack Obama proclaimed a 1.35 million-acre region that lies just east of Grand Staircase-Escalante as Bears Ears National Monument, reigniting outrage among some residents of Utah and their elected representatives. Now, Bears Ears lies at the center of a political debate over public lands and presidential power.

Bears Ears and Grand Staircase-Escalante are among 27 monuments currently under examination, following an executive order issued by President Donald Trump in April. The order directs Interior Secretary Ryan Zinke to review certain national monuments designated since 1996. Bears Ears is the only monument the order calls out by name, and on Monday, Zinke issued an interim report recommending that it be cut down in size.

Experts have questioned whether it would be legal for Trump to alter Bears Ears’ boundaries or its national monument status. If he tries, several groups have pledged to challenge him in court, according to reporting by the Salt Lake Tribune. On June 12, when Zinke announced his recommendation to shrink Bears Ears, he also announced the extension of the public comment period for the monument until July 10.

The rocks Titus is excavating in Grand Staircase-Escalante preserve the final chapter of the age of dinosaurs. Bears Ears, researchers believe, holds different stories from earlier times — how four-legged creatures first emerged from the sea, and how dinosaurs later rose to dominate the planet. The few paleontologists who have thus far explored in Bears Ears have made tantalizing finds, from plant-eating crocodiles to an amphibian whose skull is the size and shape of a toilet bowl lid.

But like Grand Staircase-Escalante 20 years ago, Bears Ears is in its paleontological infancy. The national monument designation, should it remain, could help researchers gain funding and support to uncover its secrets. If Bears Ears is cut in size, then funding, land use and access, and protections could change, although the exact impact on the preservation and excavation of fossils is unclear.

Rainbows and unicorns

Titus discovered the tyrannosaur site while exploring Grand Staircase-Escalante in 2014, after a rainstorm exposed a bit of buried bone.

“When I brushed it, it turned out to be the lacrimal, which is the big, scabby protrusion over the eye of an adult tyrannosaur,” says Titus. “I about wet my pants.”

Finding even one tyrannosaur is incredibly rare, since the Cretaceous period landscape held far fewer of the big, warm-blooded predators than it did of their plant-eating prey. But as Titus dug around the skull, he soon found the bones of at least two more individuals. This is only the third or fourth site in North America where multiple tyrannosaurs have been found together, and it provides evidence that this area’s species were social, says Titus.

There are many kinds of tyrannosaurs, and these bones belong to a smaller, more ancient relative of the famous Tyrannosaurus rex. The most likely species is Teratophoneus curriei, whose name means “monstrous murderer.” It is one of two tyrannosaur species discovered so far in Grand Staircase-Escalante. The remains could also represent a new species, but if they are Teratophoneus, the adult will be the first full-grown specimen ever found, says Titus.

Since the discovery of the site in 2014, Titus’s team has come back for 30 to 40 days each year to expand the perimeter and depth of the search, as they did in May 2017. They have found more than 1,000 bones so far, and they expect to have the whole site excavated by 2019, says Titus.

The excavation site used to be at the bottom of a lake, and Titus and his team have found thousands of fish scales amongst the tyrannosaur bones. They have also found countless lumps of charcoal, and a few pieces of fossilized mud with imprints of burned wood, the rectangular crack patterns familiar to anyone who has watched a log shrink in a campfire. For Titus, the site paints a vivid picture of a family of tyrannosaurs caught in a fire.

Now, the paleontologist and his half-dozen volunteers crouch in the desert sun, using picks and brushes to remove rock and dirt in careful layers. A stuffed toy rainbow rests in the dirt beside them, while a pink unicorn perches in a tree, overlooking the scene with huge plastic eyes. The incongruous mascots are here because of an exchange between Titus’s field assistant and his former lab manager, which Titus now recounts.

“He’s like, ‘Hey, I hear Alan found this new tyrannosaur site. So what’s it really like? ‘Cause Alan, you know — with him, everything’s always rainbows and unicorns,'” says Titus. The field assistant reportedly answered, “Well, I’m afraid this time, it really is.”

Trees can make summer ozone levels much worse

Trees can make summer ozone levels much worse

People often recommend planting trees to make cities greener, cleaner and healthier. But during heat waves, city trees can actually boost air pollution. Indeed, a new study finds, up to 60 percent of the smoggy ozone in a city’s air on hot days may trace to chemicals emitted by trees.

The findings might seem the opposite of what you would expect, notes Robert Young. He’s an expert in city planning at the University of Texas at Austin who was not involved in the new study. Indeed, he notes, “everything has multiple effects.” The new findings do not mean cities should discourage tree planting, he says. Instead, cities may need stricter controls on other sources of pollution, such as tailpipe emissions from cars and trucks.

City trees offer a host of benefits. These include helping soak up stormwater that might otherwise drain into rivers (carrying pollution with it). Trees also provide cooling shade. They even soak up carbon dioxide, a greenhouse gas. At the same time, these trees release oxygen into the air.

But oxygen is far from the only gas that trees and certain other green plants release into the air. One of these chemicals is a hydrocarbon known as isoprene (EYE-so-preen). It can react with combustion pollutants, such as nitrogen oxides. The result is the formation of ozone. A component of smog, ozone can irritate the lungs and aggravate airway diseases, such as asthma.

Cars and trucks are major sources of nitrogen oxides. And these oxides don’t interact only with isoprene. They also react with certain scented compounds that trees can spew. Among these are monoterpenes (MON-oh-tur-peens) and sesquiterpenes (SES-kwih-tur-peens). These terpene reactions can help create lots of other very tiny airborne pollutants.

Galina Churkina works in Germany at Humboldt University of Berlin. She and her team wanted to probe how much the chemicals released by trees could affect city air.

To do this, the researchers turned to a computer. They asked it to model the likely reactions between plant chemicals and nitrogen oxides in air throughout the Berlin metropolitan area. To do that, the researchers fed in local weather data for two summers. One was 2006, when there was a heat wave. The other was 2014, when temperatures were milder.

An average daily high there in summer tends to max out at roughly 25° Celsius (77° Fahrenheit). On such a day, chemicals emitted by area greenery would likely have contributed about 6 to 20 percent of the ozone in the city’s air. But at the peak of a heat wave, when temperatures soar to more than 30°C (86°F), tree-chemical emissions spike too. As a result, they can be responsible for up to 60 percent of the ozone in the air.

Churkina says her team was not surprised to see the seemingly contrary relationship between plants and pollution. She adds that “its magnitude was, however, quite amazing.”

Her team shared its new findings June 6 in Environmental Science & Technology.

The results, Churkina says, suggest that city tree-planting programs should not ignore the role this greenery may play in aggravating summer air pollution. Adding more trees will improve quality of life only if those cities also undertake plans to sharply cut vehicle pollution in summer and to boost their reliance on clean energy sources for electric power, she says.

Tiny air pollutants inflame airways and harm heart

Tiny air pollutants inflame airways and harm heart

If your nose runs year-round, air pollution could be part of the problem. A new study in mice shows how tiny airborne particles affect the nose and sinus areas. Those particles can also build up within fatty deposits in blood vessels, a second new study finds. Boosting these fatty build-ups could lead to more strokes and heart attacks. Together, these new data show that when inhaled, teeny particles can lead to big harm.

Particulates (Par-TIK-yu-lets) are a big category of small pollutants in the air. They include soot, smoke, dust, mists and other specks of material. Huge amounts come from burning coal, oil and wood. Particulates also spew from factories, farms and construction sites. The material is tiny — less than 10 micrometers (4 ten-thousandths of an inch) in diameter. Yet ongoing exposure to it raises the risk of lung disease, heart disease and other illnesses. Because of particulates, air pollution is the fourth leading cause of deaths worldwide, scientists reported last year.

Even when this pollution doesn’t kill, it can harm health, notes Murray Ramanathan. He’s a head and neck surgeon at Johns Hopkins University School of Medicine in Baltimore, Md. He and other scientists sometimes work with mice to learn more about what happens in human diseases of the nose, throat and sinuses. The animals are models that stand in for people.

In one of the new studies, Ramanathan’s team showed how particle pollution can cause or worsen chronic sinusitis (Sign-yu-SY-tis). Patients with this condition have stuffy and runny noses, facial pain behind the cheeks and other soreness. “Chronic” means that these symptoms last for 12 weeks or more.

More than 29 million people in the United States alone have the illness, reports the Centers for Disease Control and Prevention. This disease “has a huge impact on quality of life,” Ramanathan says. Patients have higher rates of depression than people who don’t have the disease. They miss more days of work. They’re also less productive and have lower well-being overall, he adds. Now his team has shown that pollution may be part of the problem.

Nosing around

For 16 weeks, the researchers exposed mice to particulates smaller than 2.5 micrometers. (That’s less than one ten-thousandth of an inch in diameter). The mice breathed this dirty air for six hours per day, five days each week. The time would be comparable to years of exposure in a person. The level of pollution was “probably about half of what we would see in India, or less,” Ramanathan says. India has some of the highest levels of air pollution in the world. So the mice experienced poor air quality, but not as bad as some people encounter in their daily lives. A control group of mice breathed only clean, filtered air.

At the end of the trial, the team rinsed the nose and sinus areas of each mouse with water. Then they examined the flushed-out water.

The rinse water from mice that had been breathing the polluted air showed that the inhaled particulates had triggered the immune systems in these rodents. For example, this water had an excess of macrophages (MAK-roh-fayj-es) — a type of white blood cell. These cells engulf and destroy foreign bodies, such as germs. Compared to the control group, the pollution-breathing mice had almost four times as many macrophages. Rinse water from these animals also had more proteins linked to inflammation. Inflammation is one way the body responds to injury. Its symptoms include swelling, heat and pain.

The researchers also looked at the nose and sinus tissues of these mice using a high-powered microscope. Tissues from the pollution-breathing mice showed signs of damage. It seemed like the particulates were “basically punching holes in the wall [of the sinuses],” Ramanathan says. That would make it easier for microbes and allergens to get through. (Allergens are things, such as pollen and pet dander, which can trigger allergies.)

The group’s work “adds to a growing list of ways that particles of air pollution have harmful effects around the body,” says Mark Miller. He’s a cardiovascular scientist at the University of Edinburgh in Scotland. He studies diseases of the heart and blood system.

Particulate levels in the United States are generally lower than those used in the study. Still, Ramanathan notes, exposure depends on location. A school playground near a busy highway, he points out, could have particulate levels two or three times higher than what is typical for a region.

The team published its new findings online February 28 in the American Journal of Respiratory Cell and Molecular Biology.

Chronic sinusitis isn’t life-threatening. But “it can be debilitating for those affected,” says Michael Brauer. He’s an epidemiologist (Ep-ih-dee-me-OL-oh-jizt) at the University of British Columbia in Vancouver, Canada. “We have quite strong evidence of effects of particulate matter on the lungs,” he says. “While not unexpected, this study provides evidence supporting effects on the sinuses.”

Into the blood

Inhaled particles don’t just cause breathing problems. They also increase risks for heart attacks, strokes and other diseases of the circulatory system. A new study shows how particles can move from the lungs into the heart and blood vessels to cause this harm.

Mark Miller’s team exposed people and mice to inhaled nanoparticles. Rather than have them breathe polluted outdoor air, the researchers exposed them to billionth-of-a-meter-size gold bits. These particles are “several thousand times smaller than the width of a human hair,” Miller explains. They are roughly the size of particles spewed in the exhaust of diesel engines.

Because gold does not react easily with other chemicals, it is “essentially harmless” to people, Miller explains. Yet special laboratory methods can easily detect where this faux pollution ends up within the body. Within a day of inhaling the nanoparticles, gold showed up in people’s blood and urine. And it was still there up to three months later! Gold also showed up in the mouse blood. But only the smallest nanoparticles made it into their urine.

Concludes Miller: “We showed that these tiny particles enter the blood and are carried around the body.” And the smaller the particles, the more likely they were to circulate in blood and end up in urine.

That’s not all. Some people in the study needed surgery (not for issues related to their taking part in the tests). These people had build-ups of certain fatty deposits, called plaque (PLAK), in the arteries that carry blood to the brain. When doctors removed some of that plaque, they found it contained gold bits. Gold also showed up in fatty plaques from mice that had similar build-ups. But the gold wasn’t present in tissue within healthy mouse arteries.

Fatty plaques are a sign of atherosclerosis (ATH-er-oh-skler-OH-sis). This disease contributes to heart attacks or strokes, Miller explains. Air pollution can make this plaque-based disease worse. Particles in air pollution carry harmful chemicals on their surface, he says — chemicals much more reactive than gold. If the tiny particles reach plaque, these pollutants might prompt the fatty deposits to break open. A heart attack or stroke could result. Miller’s team shared its findings April 26 in ACS Nano.

Many countries have laws meant to limit particulate pollution. It may seem that air in many places is getting cleaner. However, Miller notes, “levels of nanoparticles have been increasing with increasing amounts of [car and truck] traffic.” And if the particles are especially small, they may not even be visible.

City air often contains up to 10,000 particles per cubic centimeter (or per 0.06 cubic inch). But a team from Carnegie Mellon University in Pittsburgh showed that on days when the air appeared totally clear of pollution, up to 150,000 particles could pollute each cubic centimeter. The particulates were simply too small to affect visibility.

Adds Brauer: “We have more than enough evidence of the harmful effects of air pollution on human health.” And that “only increases the urgency to reduce air pollution exposure as a way to improve population health.”

Cataclysmic Drought Part of the History of the Dead Sea

Cataclysmic Drought Part of the History of the Dead Sea

Drilling below the floor of the Dead Sea, scientists have found evidence of cataclysmic droughts, far worse than anything ever recorded by humans — a time when the Dead Sea was much deader.

Evidence taken from a layer of salt records rainfall rates of about a fifth of modern levels as recently as 6,000 years ago, and points to another dry episode 120,000 years ago.

The scientists from six nations, who drilled for forty days and forty nights, reached 1,500 feet deep into the sea bed and into the beach.

About halfway down, drill samples showed layers of salt 300 feet thick, dating from the time between ice ages. Mud had washed into the sea when the climate was wet. Crystallized salt precipitated out when the climate was dry and the water receded.

The change in temperatures and weather patterns happened because of variations in the Earth’s orbit, according to Yael Kiro, associate research scientist at Columbia University’s Lamont-Doherty Earth Observatory in Palisades, New York. The paper was published in the journal Earth and Planetary Science Letters. Atmospheric temperatures during the super droughts were 4 degrees warmer than they are now. Then the ice age brought more rain and cooler temperatures to the area, and the lake that would become the Dead Sea filled with fresh water.

Measuring tiny air bubbles in the salt enabled the researchers to extrapolate rainfall and runoff and showed a decline in precipitation far worse than anything observed now.

For decades or even centuries the amount of rain that fell upon the sea was about 20 percent of the amount that falls currently. The researchers think a change in weather patterns that normally brought storms in from the Mediterranean was responsible for the severe drought.

Today, the Dead Sea is 42 miles long and 11 miles wide and shrinking. It borders on Israel, Jordan and the Palestinian territories.

At 1,300 feet below sea level, the beach is the lowest point on Earth not covered with water. The sea water itself reaches around a thousand feet deep — depending on when and where the measurement is taken — and is about 35 percent salt, almost 10 times saltier than the ocean.

The water is so dense that floating is nearly effortless. People read books while bobbing on the surface. Freshwater showers are in place so swimmers can wash off the salt. Occasionally a rainstorm will put a layer of fresh water on the surface, Kiro said, but it soon evaporates.

The source of most of the water is the Sea of Galilee, by way of the Jordan River. There is no outlet to an ocean.

The Jordan was once 10 times bigger than it is now, said Daniel Kurtzman, a hydrologist at the Volcani Center in Bet-Dagan, Israel. The Dead Sea needs 800 million cubic meters of water a year to stay stable. It is getting about an eighth of that — roughly 100 million cubic meters — now.

Yet the Dead Sea isn’t totally dead. Divers exploring the bottom of the sea have found underwater springs that spout fresh water, along with some forms of hardy bacterial life. Otherwise it is too salty.

The area is alive with history. The biblical David hid there from King Saul. Even in biblical times the Dead Sea was referred to in Hebrew as Yam ha-Melah, or Sea of Salt.

Herod the Great built a fortress and palace at Masada, a flat-topped hill nearby. In A.D. 73, during the Jewish wars against Roman occupation, a band of around 1,000 Jewish zealots, the Sicarii, were besieged by the Tenth Roman legion and eventually, according to the Jewish-Roman historian Flavius Josephus, committed mass suicide to avoid capture. Ruins of the fortress and the ramp the Roman army built to finally assault the fortress are now major tourist attractions. Some officers in the Israel Defense Forces take their military oaths on Masada.

A mile from the sea, on the Jordanian side, a shepherd boy found the first of the Dead Sea Scrolls in a cave in 1947.

Hotels line the shore. The water is alleged to have curative powers and cosmetic firms sell lotions with Dead Sea salt.

Josephus also recorded a major earthquake in the area. The sea sits on a fault called the Dead Sea Transform, which runs from Turkey to the Red Sea and separated Arabia from Africa about 17 million years ago, said Uri ten Brink, a geophysicist at the Woods Hole Science Center of the U.S. Geological Survey in Massachusetts.

“Both Africa and Arabia are moving north,” ten Brink said, “but Arabia is moving slightly faster.”

The valley created by the motion was intermittently connected to the Mediterranean Sea, but about 8 million years ago, that connection was cut off. Had the rift valley remained connected to the Mediterranean, the salinity and water level would have been normal, and the Dead Sea would have a different name.

The sea’s water level is currently dropping 4 feet a year. The whole valley, including the seabed, is also sinking, said ten Brink.

The cause of the decline in water level, besides drier weather and climate change, is the growing population in the area.

Between the people in the area drawing water for drinking and agriculture, and evaporation, more water is being taken from the system than is coming into it.

The solution, Kiro said, was to desalinate water from the Mediterranean Sea and pump it into the Sea of Galilee and then let it flow into the Jordan. Plans are underway in both Jordan and Israel to do that, but politics could intervene.

Chandrayaan-1 Orbiter Spots Water-Rich Volcanic Deposits on Lunar Surface

Chandrayaan-1 Orbiter Spots Water-Rich Volcanic Deposits on Lunar Surface

The volcanic beads don’t contain a lot of water — about 0.05% by weight — but the deposits are large, and the water could potentially be extracted.
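For a rough sense of scale — my own arithmetic, not a figure from the paper — 0.05 percent by weight works out to about half a kilogram of water locked in every metric ton of volcanic material:

```python
# Rough sense of scale for 0.05% water by weight (illustrative arithmetic
# only; the deposit mass here is arbitrary, not from the study).
deposit_mass_kg = 1000.0    # one metric ton of pyroclastic glass beads
water_fraction = 0.0005     # 0.05 percent by weight
water_kg = deposit_mass_kg * water_fraction
print(f"{water_kg:.1f} kg of water per metric ton")  # 0.5 kg, ~half a liter
```

Extracting useful amounts would therefore mean processing a great deal of material, which is why the sheer size of the deposits matters as much as their concentration.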

The research, published in the journal Nature Geoscience, was led by Brown University associate professor Ralph Milliken.

“Detecting the water content of lunar volcanic deposits using orbital instruments is no easy task,” said Dr. Milliken and his co-author, Dr. Shuai Li from the University of Hawaii.

“Researchers use orbital spectrometers to measure the light that bounces off a planetary surface.”

“By looking at which wavelengths of light are absorbed or reflected by the surface, they can get an idea of which minerals and other compounds are present.”

“The problem is that the lunar surface heats up over the course of a day, especially at the latitudes where these pyroclastic deposits are located. That means that in addition to the light reflected from the surface, the spectrometer also ends up measuring heat.”

“That thermally emitted radiation happens at the same wavelengths that we need to use to look for water. So in order to say with any confidence that water is present, we first need to account for and remove the thermally emitted component,” Dr. Milliken explained.

To do that, he and Dr. Li used laboratory-based measurements of samples returned from the Apollo missions, combined with a detailed temperature profile of the areas of interest on the Moon’s surface.
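In outline, the correction means modeling the surface’s own thermal glow — for example, with Planck’s law at the local surface temperature — and subtracting it from the measured signal before treating the remainder as reflected sunlight. Below is a minimal sketch of that idea under simplifying assumptions (a single surface temperature and emissivity); it is not the authors’ actual pipeline:

```python
import numpy as np

# Physical constants for Planck's law
H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance (W per m^2 per steradian per meter)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

def remove_thermal(measured, wavelength_m, surface_temp_k, emissivity):
    """Subtract a modeled thermal-emission term from a measured radiance
    so the remainder can be read as reflected sunlight."""
    return measured - emissivity * planck_radiance(wavelength_m, surface_temp_k)
```

The hard part in practice is knowing the surface temperature well enough, which is why the laboratory spectra of Apollo samples and the detailed temperature profiles were essential.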

Using the new thermal correction, they looked at data from Chandrayaan-1’s Moon Mineralogy Mapper.

They found evidence of water in nearly all of the large pyroclastic deposits that had been previously mapped across the Moon’s surface, including deposits near the Apollo 15 and 17 landing sites where the water-bearing glass bead samples were collected.

“The distribution of these water-rich deposits is the key thing,” Dr. Milliken said. “They’re spread across the surface, which tells us that the water found in the Apollo samples isn’t a one-off.”

“Lunar pyroclastics seem to be universally water-rich, which suggests the same may be true of the mantle.”

The idea that the interior of the Moon is water-rich raises interesting questions about the Moon’s formation.

Planetary researchers think the Moon formed from debris left behind after an object about the size of Mars slammed into the Earth very early in the history of the Solar System.

One of the reasons they had assumed the Moon’s interior should be dry is that it seems unlikely that any of the hydrogen needed to form water could have survived the heat of that impact.

“The growing evidence for water inside the Moon suggests that water did somehow survive, or that it was brought in shortly after the impact by asteroids or comets before the Moon had completely solidified,” Dr. Li said.

“The exact origin of water in the lunar interior is still a big question.”

Astronomers discover rare fossil relic of early Milky Way

Astronomers discover rare fossil relic of early Milky Way

Terzan 5, 19,000 light-years from Earth in the constellation of Sagittarius (the Archer) and in the direction of the galactic centre, has been classified as a globular cluster for the forty-odd years since its detection. Now, an Italian-led team of astronomers has discovered that Terzan 5 is like no other globular cluster known. The team scoured data from the Multi-conjugate Adaptive Optics Demonstrator, installed at the Very Large Telescope, as well as from a suite of other ground-based and space telescopes. They found compelling evidence that there are two distinct kinds of stars in Terzan 5, which not only differ in the elements they contain but have an age gap of roughly 7 billion years.

The ages of the two populations indicate that the star formation process in Terzan 5 was not continuous, but was dominated by two distinct bursts of star formation. “This requires the Terzan 5 ancestor to have large amounts of gas for a second generation of stars and to be quite massive — at least 100 million times the mass of the Sun,” explains Davide Massari, co-author of the study, from INAF, Italy, and the University of Groningen, Netherlands.

Its unusual properties make Terzan 5 the ideal candidate for a living fossil from the early days of the Milky Way. Current theories on galaxy formation assume that vast clumps of gas and stars interacted to form the primordial bulge of the Milky Way, merging and dissolving in the process.

“We think that some remnants of these gaseous clumps could remain relatively undisrupted and keep existing embedded within the galaxy,” explains Francesco Ferraro from the University of Bologna, Italy, and lead author of the study. “Such galactic fossils allow astronomers to reconstruct an important piece of the history of our Milky Way.”

While the properties of Terzan 5 are uncommon for a globular cluster, they are very similar to the stellar population which can be found in the galactic bulge, the tightly packed central region of the Milky Way. These similarities could make Terzan 5 a fossilised relic of galaxy formation, representing one of the earliest building blocks of the Milky Way.

This assumption is strengthened by the original mass of Terzan 5 necessary to create two stellar populations: a mass similar to the huge clumps which are assumed to have formed the bulge during galaxy assembly around 12 billion years ago. Somehow Terzan 5 has managed to survive being disrupted for billions of years, and has been preserved as a remnant of the distant past of the Milky Way.

“Some characteristics of Terzan 5 resemble those detected in the giant clumps we see in star-forming galaxies at high-redshift, suggesting that similar assembling processes occurred in the local and in the distant Universe at the epoch of galaxy formation,” continues Ferraro.

Hence, this discovery paves the way for a better and more complete understanding of galaxy assembly. “Terzan 5 could represent an intriguing link between the local and the distant Universe, a surviving witness of the Galactic bulge assembly process,” explains Ferraro while commenting on the importance of the discovery. The research presents a possible route for astronomers to unravel the mysteries of galaxy formation, and offers an unrivaled view into the complicated history of the Milky Way.

Small, distant worlds are either big Earths or little Neptunes

Small, distant worlds are either big Earths or little Neptunes

This conclusion emerges from data collected by the Kepler space telescope. It was charged with hunting for alien planets, meaning those outside our solar system. Now Kepler’s initial mission is over, and its data are in hand.

Scientists released Kepler’s final tally of so-called exoplanets June 19 at a news conference. The spacecraft has turned up 4,034 of these candidate planets. Among them are 49 rocky worlds, including 10 newly discovered ones. These sit in their stars’ Goldilocks zones. That means they fall within a region that’s not too hot and not too cold to support life as we know it. To date, 2,335 of the candidates have been confirmed as planets. That includes about 30 rocky worlds that are in potentially habitable zones.

Benjamin Fulton studies these alien worlds. He works at the University of Hawaii at Manoa and at the California Institute of Technology (Caltech) in Pasadena. He and his colleagues made careful measurements of the candidate planets’ stars. This turned up something unexpected. Few planets had a radius between 1.5 and 2 times that of Earth.

This split the planets into two types, based on size. Rocky ones, like Earth, had the smaller radii (under 1.5 times Earth’s). Gassy planets (the Neptune-like ones) tended to have radii from 2 to 3.5 times Earth’s.

“This is a major new division in the family tree of exoplanets,” Fulton reports. It is “somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life,” he says.

The Kepler space telescope launched in 2009. It had one ultimate goal: to identify the fraction of stars like the sun that host planets like Earth. To do this, it stared at a single patch of sky in the constellation Cygnus for four years. Kepler watched sunlike stars for telltale dips in brightness. Such dips point to when a planet passes in front of its star. Such a crossing is known as a transit; one might think of it as a mini or partial eclipse.

The Kepler team has not yet calculated what share of the sun-like stars in Kepler’s eye host planets in the Goldilocks zone. But astronomers are confident that they finally have enough data to do so, said Susan Thompson. She is an astronomer at the SETI Institute in Mountain View, Calif. She presented the new data during the Kepler/K2 Science Conference IV, held at NASA’s Ames Research Center in Moffett Field, Calif. (K2 refers to Kepler’s second mission. It began when the telescope’s stabilizing reaction wheels broke.)

Thompson and her colleagues ran the Kepler dataset through “Robovetter” software. It acted like a sieve to catch all of the potential planets that the dataset contained. Running fake planet data through the software pinpointed how likely it was to confuse other signals for a planet or to miss true planets.
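That vetting test boils down to two bookkeeping ratios: how many injected fake planets the pipeline recovers, and how many known-false signals slip through. The sketch below is my own illustration of that general injection-recovery idea, not the actual Robovetter code:

```python
# Sketch of injection-recovery bookkeeping used to characterize a vetting
# pipeline (illustrative only; not the Robovetter's actual code).

def completeness(n_injected: int, n_recovered: int) -> float:
    """Fraction of simulated transit signals the pipeline correctly flags."""
    return n_recovered / n_injected

def reliability(n_passed: int, n_false_alarms_passed: int) -> float:
    """Estimated fraction of passing candidates that are real, given how
    many deliberately false signals also made it through."""
    return 1.0 - n_false_alarms_passed / n_passed

# Made-up example numbers: 9 of 10 injected planets recovered, and 2 of
# 100 passing candidates traced to known false alarms.
print(completeness(10, 9))    # 0.9
print(reliability(100, 2))    # 0.98
```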

“This is the first time we have a population [of exoplanets] that’s really well-characterized,” Thompson says.

Astronomers’ knowledge of exoplanets is only as good as their knowledge of the planets’ host stars. So, in a separate study, Fulton and his colleagues turned to the Keck telescope in Hawaii. They used it to precisely measure the sizes of 1,300 planet-hosting stars that were in the Kepler telescope’s field of view. That let them compare the dips in light due to a planet crossing in front of its star to that star’s real size. Those star sizes helped pin down the sizes of the planets with four times more precision than ever before.
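The geometry that makes star sizes so important is simple: for a dark planet crossing a uniform stellar disk, the fractional dip in starlight is roughly the square of the planet-to-star radius ratio, so any error in the star’s radius feeds directly into the planet’s. Here is a minimal sketch of that relation — my own illustration, with constants and names that are not from the study:

```python
import math

SUN_RADIUS_KM = 695_700
EARTH_RADIUS_KM = 6_371

def planet_radius_from_transit(depth: float, star_radius_suns: float) -> float:
    """Planet radius in Earth radii from a transit's fractional depth.

    For a dark planet crossing a uniform stellar disk,
    depth ~= (R_planet / R_star) ** 2.
    """
    r_planet_km = math.sqrt(depth) * star_radius_suns * SUN_RADIUS_KM
    return r_planet_km / EARTH_RADIUS_KM

# An Earth-size planet dims a Sun-size star by about 0.0084 percent:
print(planet_radius_from_transit(8.4e-5, 1.0))  # ~1.0 Earth radius
```

Because the depth fixes only the ratio of radii, shrinking the uncertainty on a star’s size shrinks the uncertainty on its planet’s size by the same factor — which is why the Keck measurements sharpened the picture enough for the gap between 1.5 and 2 Earth radii to emerge.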

The split in planet types the team found could come from small differences in the planets’ sizes, compositions and distances from their stars. Young stars emit powerful winds of charged particles. These winds can blow a growing planet’s atmosphere away.

Bigger planets have more gravity, which helps them hold on to a thicker atmosphere. If a planet was too close to its star or too small to hold its atmosphere tightly — less than 75 percent larger than Earth — it would lose its atmosphere and end up in the smaller group. The planets that look more like Neptune today either had more gas to begin with. Or they grew up in a gentler environment, Fulton now concludes.

That split could have implications for the abundance of life in the Milky Way. That’s our galaxy. Consider the surfaces of mini-Neptunes, if they exist. They would suffer under the crushing pressure of a thick atmosphere.

“These would not be nice places to live,” Fulton said. “Our result sharpens up the dividing line between potentially habitable planets and those that are inhospitable.”

Better telescopes will sharpen the dividing line even further. Two such telescopes are slated to launch in 2018. The Transiting Exoplanet Survey Satellite will fill in the details of the exoplanet landscape with more observations of planets around bright stars. The James Webb Space Telescope will be able to check the atmospheres of those planets for signs of life.

“We can now really ask the question, ‘Is our planetary system unique in the galaxy?’” says Courtney Dressing. She is an exoplanet astronomer at Caltech. “My guess is the answer’s no. We’re not that special.”

How Earth got its moon

How Earth got its moon

The story of our moon’s origin does not add up. Most scientists think that the moon formed in the earliest days of our solar system. That would have been back around 4.5 billion years ago. At that time, some scientists suspect, a Mars-sized rocky object — what they call a protoplanet — smacked into the young Earth. This collision would have sent debris from both worlds hurtling into orbit. Some of the rubble eventually would have stuck together, creating our moon.

Or maybe not.

Astronomers refer to that protoplanet as Theia (THAY-ah), named for the Greek goddess of sight. No one knows if this big rock ever existed — because if it did, it would have died in that violent collision with Earth.

And here’s why some astronomers have come to doubt Theia was real: If it smashed into Earth and helped form the moon, then the moon should look like a hybrid of Earth and Theia. Yet studies of lunar rocks show that the chemical composition of Earth and its moon are exactly the same. So that planet-on-planet impact story appears to have some holes in it.

That has prompted some researchers to look for other moon-forming scenarios. One proposal: A string of impacts created mini moons largely from Earth material. Over time, they might have merged to form one big moon.

“Multiple impacts just make more sense,” says Raluca Rufu. She’s a planetary scientist at the Weizmann Institute of Science in Rehovot, Israel. “You don’t need this one special impactor to form the moon.”

But Theia shouldn’t be left on the cutting room floor — at least not yet. Earth and Theia could have been built largely from the same type of material, new research suggests. Then they would have had a similar chemical recipe. There is no sign of “other” material on the moon, this explanation argues, because nothing about Theia was different.

“I’m absolutely on the fence between these two opposing ideas,” says Edward Young. He studies cosmochemistry — the chemistry of the universe — at the University of California, Los Angeles. Determining which story is correct is going to take more research. But the answer could offer profound insights into the evolution of the early solar system, Young says.

Mother of the moon

Earth’s moon is an oddball. Most other moons in our solar system live way out among the gas giants, such as Saturn and Jupiter. The only other terrestrial planet with orbiting moons is Mars. Its moons, Phobos and Deimos, are small. The leading explanation for them is that they were likely once asteroids. At some point, they were captured by the Red Planet’s gravity. Earth’s moon is too big for that scenario. If the moon had come in from elsewhere, it probably would have crashed into Earth or escaped and fled into space.

An alternate explanation dates from the 1800s. It suggests that moon-forming material flew off of a fast-spinning young Earth. (Imagine children tossed from an out-of-control merry-go-round.) That idea fell out of favor, though, when scientists calculated the spin speeds required. They were impossibly fast.

In the mid-1970s, planetary scientists proposed the giant-impact hypothesis. (Later, in 2000, they named that mysterious planet-sized body Theia.) The notion of a big rocky collision made sense. After all, the early solar system was like a game of cosmic billiards. Giant space-rock smash-ups were common.

But a 2001 study of rocks collected during NASA’s Apollo missions to the moon cast doubt on the giant-impact hypothesis. Research showed that Earth and its moon were surprisingly alike. To figure out a rock’s origin, scientists measure the relative abundance of different forms of oxygen. Called isotopes (EYE-so-toaps), these are forms of an element with different masses. (The reason they differ: Although each isotope has the same number of protons in its nucleus, each has a different number of neutrons.)

Cassini Sees Methane Clouds in Titan’s Summer Skies

Cassini Sees Methane Clouds in Titan’s Summer Skies

Compared to earlier in Cassini’s mission, most of the surface in Titan’s northern high latitudes is now illuminated by the Sun.

Summer solstice in the Saturn system (the longest day of summer in the northern hemisphere and the shortest day of winter in the southern hemisphere) occurred on May 24, 2017.

This image was taken with Cassini’s narrow-angle camera on June 9, 2017, using a spectral filter that preferentially admits wavelengths of near-IR light centered at 938 nm.

Cassini obtained the view at a distance of about 315,000 miles (507,000 km) from Titan.

The spacecraft is currently in its ‘Grand Finale,’ the final phase of its long mission.

Over the course of 22 weeks from April 26 to September 15, 2017, Cassini is making a series of dramatic dives between Saturn and its icy rings.

The mission is returning new insights about the interior of the gas giant and the origins of the rings, along with images from closer to Saturn than ever before.

The mission will end with a final plunge into Saturn’s atmosphere on September 15.

Oceans on Saturn’s Moon May Be Habitable For Microbes

The group reports the results in a paper in the April 14 issue of the journal Science. They claim that the only plausible source for the particles detected in Enceladus’ plume is hydrothermal reactions between hot rocks and water at the bottom of the moon’s ocean. They detected the presence of molecular hydrogen and carbon dioxide, which together provide the ingredients necessary for methanogenesis — a biochemical reaction crucial for the survival of microbes that live in the deep-sea regions on Earth. However, according to a commentary by Jeffrey Seewald, a geochemist at Woods Hole Oceanographic Institution in Massachusetts, scientists still have a long way to go before fully understanding the possibility for life underneath the ice of Enceladus.

The Cassini spacecraft, which launched in 1997, reached Saturn’s orbit in 2004. It is now in the closing act of its mission, which since December 2016 has focused on exploring Saturn’s rings. The last flyby of the planet is scheduled for April 19. The project will end when the spacecraft falls into Saturn’s atmosphere, probably around September of this year.

Jupiter gets surprisingly complex new portrait

Scientists are repainting Jupiter’s portrait — scientifically, anyway. NASA’s Juno spacecraft swooped within 5,000 kilometers (3,100 miles) of Jupiter’s cloud tops last August 27. Scientists’ first close-up of the gas giant has unveiled several unexpected details about the planet’s gravity and powerful magnetic fields. The new data also give a fresh view of the planet’s auroras and ammonia-rich weather systems.

Researchers need to revamp their view of Jupiter, these findings suggest. They even challenge ideas about how solar systems form and evolve. The findings come from two papers published May 26 in Science.

“We went in with a preconceived notion of how Jupiter worked,” says Scott Bolton. “And I would say we have to eat some humble pie.” Bolton is a planetary scientist who leads the Juno mission. He works at the Southwest Research Institute in San Antonio, Texas.

Scientists thought that beneath its thick clouds, Jupiter would be uniform and boring. Not anymore. “Jupiter is much more complex deep down than anyone anticipated,” Bolton now observes.

One early surprise came from Jupiter’s gravity. Juno measured that gravity from its tug on the spacecraft. The values suggest that Jupiter doesn’t have a solid, compact core. Instead, the core is probably large and diffuse. It could even be as big as half the planet’s radius, Bolton and his colleagues conclude. “Nobody anticipated that,” Bolton notes.

Imke de Pater is a planetary scientist. She works at the University of California, Berkeley and was not involved in the new studies. The new gravity measurements should lead to a better understanding of the planet’s core, she says. But, she adds, doing so will require using some challenging math.

She was more surprised by Jupiter’s magnetic field, the strongest of any planet in our solar system. Juno’s data show that its strength varies: in some spots it is almost twice as strong as expected; in others, it is weaker. These data support the idea that the magnetic field originates from circulating electric currents, probably in one of the planet’s outer layers of hydrogen.

Responding to the ‘wind’

A second study looked at how Jupiter’s magnetic field interacts with a stream of charged particles flowing from the sun. Known as the solar wind, these particles affect Jupiter’s auroras, points out John Connerney. An astrophysicist, he led this study with colleagues at NASA’s Goddard Space Flight Center in Greenbelt, Md.

Auroras are brilliant shows of colored light that appear at or near a planet’s poles. (Earth’s auroras are known as the Northern and Southern Lights.) Juno captured Jupiter’s auroras in ultraviolet and infrared light. These images come from wavelengths beyond what the human eye can see. They showed particles falling into the planet’s atmosphere. That is similar to what happens on Earth. But they also showed beams of electrons shooting out from Jupiter’s atmosphere. Nothing like that occurs on Earth.

Bolton’s team described another oddity. Ammonia wells up from the depths of Jupiter’s atmosphere in a strange way. This upwelling resembles a feature on Earth called a Hadley cell. Warm air at our equator rises and creates trade winds, hurricanes and other forms of weather. Jupiter’s ammonia cycling looks similar to this. But Jupiter lacks a solid surface, the researchers note. So the upwelling likely works in a completely different way than on Earth. The scientists hope to figure out how the process works on Jupiter, which could help them better understand the atmospheres of other huge gas planets.

Jupiter is a standard of comparison for all gas giants — both within and beyond our solar system, Bolton explains. Most planetary systems have Jupiter-like planets. That means, he says, researchers can apply what they learn about Jupiter to giant planets elsewhere.

Planet Nine could spell doom for solar system

The solar system could be thrown into disaster when the sun dies if the mysterious ‘Planet Nine’ exists, according to research from the University of Warwick. Dr. Dimitri Veras of the Department of Physics has discovered that the presence of Planet Nine – the hypothetical planet which may exist in the outer solar system – could cause the elimination of at least one of the giant planets after the sun dies, hurling them out into interstellar space through a sort of ‘pinball’ effect.

When the sun starts to die in around seven billion years, it will blow away half of its own mass and inflate itself — swallowing the Earth — before fading into an ember known as a white dwarf. This mass ejection will push Jupiter, Saturn, Uranus and Neptune out to what was assumed a safe distance.
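
That "safe distance" expectation follows from a textbook result: when a star sheds mass slowly compared with a planet's orbital period, the product of stellar mass and orbital radius stays roughly constant, so orbits expand as the sun slims down. A minimal sketch with round numbers (not output from Dr. Veras's simulation code):

```python
# Adiabatic mass loss: if the sun sheds mass slowly, the product of
# stellar mass and orbital radius (a * M) is approximately conserved,
# so planetary orbits expand. Round numbers, for illustration only.

M_INITIAL = 1.0   # solar masses, today
M_FINAL = 0.5     # roughly half the mass is blown away

def expanded_orbit_au(a_initial_au: float) -> float:
    """New orbital radius after slow stellar mass loss (a * M ~ constant)."""
    return a_initial_au * M_INITIAL / M_FINAL

for planet, a in [("Jupiter", 5.2), ("Saturn", 9.6),
                  ("Uranus", 19.2), ("Neptune", 30.1)]:
    print(f"{planet}: {a:.1f} AU -> ~{expanded_orbit_au(a):.1f} AU")
# Each giant ends up roughly twice as far out -- the "safe distance"
# that a distant Planet Nine could upset.
```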

However, Dr. Veras has discovered that the existence of Planet Nine could rewrite this happy ending. He found that Planet Nine might not be pushed out in the same way, and in fact might instead be thrust inward into a death dance with the solar system’s four known giant planets — most notably Uranus and Neptune. The most likely result is ejection from the solar system, forever.

Using a unique code that can simulate the death of planetary systems, Dr. Veras has mapped numerous different positions where a ‘Planet Nine’ could change the fate of the solar system. The further away and the more massive the planet is, the higher the chance that the solar system will experience a violent future.

This discovery could shed light on planetary architectures in different solar systems. Almost half of existing white dwarfs contain rock, a potential signature of the debris generated from a similarly calamitous fate in other systems with distant “Planet Nines” of their own.

In effect, the future death of our sun could explain the evolution of other planetary systems.

Dr. Veras explains the danger that Planet Nine could create: “The existence of a distant massive planet could fundamentally change the fate of the solar system. Uranus and Neptune in particular may no longer be safe from the death throes of the Sun. The fate of the solar system would depend on the mass and orbital properties of Planet Nine, if it exists.”

“The future of the Sun may be foreshadowed by white dwarfs that are ‘polluted’ by rocky debris. Planet Nine could act as a catalyst for the pollution. The Sun’s future identity as a white dwarf that could be ‘polluted’ by rocky debris may reflect current observations of other white dwarfs throughout the Milky Way,” Dr. Veras adds.

The paper ‘The fates of solar system analogues with one additional distant planet’ will be published in the Monthly Notices of the Royal Astronomical Society.

Scientists Predict a New Star Will Appear in 2022

It’s the first time scientists have been able to predict such an explosion and it happened with the help of a little serendipity. Astronomer Larry Molnar, from Calvin College in Grand Rapids, Michigan, and his students became intrigued with a star known as KIC 9832227 after hearing a talk at an astronomy conference, according to a press release. KIC 9832227, which is about 1,800 light-years away from Earth, kept changing brightness, and close observations revealed that the star was actually two stars orbiting one another so closely their outermost layers touch.

The researchers calculated how long it took the stars to circle each other, and realized the orbit time was getting shorter. The data closely matched that of another star system, V1309 Scorpii, which exploded in 2008.

Molnar’s team thinks KIC 9832227 will follow the same path, producing a “red nova” stellar explosion that will briefly make the system 10,000 times brighter. The “guest star” will have a reddish tint and will appear in a wing of the constellation Cygnus, a swan-shaped collection of stars that graces the northern sky in summer and autumn.

High-Silica ‘Halos’ Found in Gale Crater Shed Light on Wet Ancient Mars

“The concentration of silica is very high at the centerlines of these halos,” said Dr. Jens Frydenvang, a scientist at Los Alamos National Laboratory and the University of Copenhagen.

“What we’re seeing is that silica appears to have migrated between very old sedimentary bedrock and into younger overlying rocks.”

“The goal of NASA’s Curiosity rover mission has been to find out if Mars was ever habitable, and it has been very successful in showing that Gale crater once held a lake with water that we would even have been able to drink, but we still don’t know how long this habitable environment endured,” he said.

“What this finding tells us is that, even when the lake eventually evaporated, substantial amounts of groundwater were present for much longer than we previously thought — thus further expanding the window for when life might have existed on Mars.”

Whether this groundwater could have sustained life remains to be seen. But the new study buttresses recent findings by another research team who found boron on Mars, which also indicates the potential for long-term habitable groundwater in the planet’s past.

The halos were first analyzed in 2015 with Curiosity’s science-instrument payload, including the laser-shooting ChemCam instrument.

The rover has traveled more than 10 miles (16 km) over more than 1,700 sols (Martian days), climbing from the bottom of Gale crater partway up Mount Sharp in the center of the crater.

The elevated silica in the halos was found over a range of 65 to 100 feet (20 to 30 m) in elevation, near a rock layer of ancient lake sediments that had a high silica content.

“This tells us that the silica found in halos in younger rocks close by was likely remobilized from the old sedimentary rocks by water flowing through the fractures,” Dr. Frydenvang said.

“Specifically, some of the rocks containing the halos were deposited by wind, likely as dunes. Such dunes would only exist after the lake had dried up.”

“The presence of halos in rocks formed long after the lake dried out indicates that groundwater was still flowing within the rocks more recently than previously known.”

Engineers invent the first bio-compatible, ion current battery

In our bodies, flowing ions (sodium, potassium and other electrolytes) are the electrical signals that power the brain and control the rhythm of the heart, the movement of muscles, and much more.

In traditional batteries, the electrical energy, or current, flows in the form of moving electrons. That current of electrons out of the battery is generated within the battery by moving positive ions from one electrode to the other. The new UMD battery does the opposite: it moves electrons around inside the device in order to deliver its energy as a flow of ions. It is the first ionic current-generating battery ever invented.

“My intention is for ionic systems to interface with human systems,” said Liangbing Hu, the head of the group that developed the battery. Hu is a professor of materials science at the University of Maryland, College Park. He is also a member of the University of Maryland Energy Research Center and a principal investigator of the Nanostructures for Electrical Energy Storage Energy Frontier Research Center, sponsored by the Department of Energy, which funded the study.

“So I came up with the reverse design of a battery,” Hu said. “In a typical battery, electrons flow through wires to interface electronics, and ions flow through the battery separator. In our reverse design, a traditional battery is electronically shorted (that means electrons are flowing through the metal wires). Then ions have to flow through the outside ionic cables. In this case, the ions in the ionic cable — here, grass fibers — can interface with living systems.”

The work of Hu and his colleagues was published in the July 24 issue of Nature Communications.

“Potential applications might include the development of the next generation of devices to micro-manipulate neuronal activities and interactions that can prevent and/or treat such medical problems as Alzheimer’s disease and depression,” said group member Jianhua Zhang, PhD, a staff scientist at the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), part of the National Institutes of Health in Bethesda, Md.

“The battery could be used to develop medical devices for the disabled, or for more efficient drug and gene delivery tools in both research and clinical settings, as a way to more precisely treat cancers and other medical diseases,” said Zhang, who performed biological experiments to test that the new battery successfully transmitted current to living cells.

“Looking far ahead on the scientific horizon, one hopes also that this invention may help to establish the possibility of direct machine and human communication,” he said.

Bio-compatible, bio-material batteries

Because living cells work on ionic current and existing batteries provide an electronic current, scientists have previously tried to figure out how to create biocompatibility between these two by patching an electronic current into an ionic current. The problem with this approach is that electronic current needs to reach a certain voltage to jump the gap between electronic systems and ionic systems. However, in living systems ionic currents flow at a very low voltage. Thus, with an electronic-to-ionic patch the induced current would be too high to run, say, a brain or a muscle. This problem could be eliminated by using ionic current batteries, which could be run at any voltage.

The new UMD battery also has another unusual feature — it uses grass to store its energy. To make the battery, the team soaked blades of Kentucky bluegrass in lithium salt solution. The channels that once moved nutrients up and down the grass blade were ideal conduits to hold the solution.

The demonstration battery the research team created looks like two glass tubes with a blade of grass inside, each connected by a thin metal wire at the top. The wire is where the electrons flow through to move from one end of the battery to the other as the stored energy slowly discharges. At the other end of each glass tube is a metal tip through which the ionic current flows.

The researchers proved that the ionic current is flowing by touching the ends of the battery to either end of a lithium-soaked cotton string, with a dot of blue-dyed copper ions in the middle. Caught up in the ionic current, the copper moved along the string toward the negatively charged pole, just as the researchers predicted.

“The microchannels in the grass can hold the salt solution, making them a stable ionic conductor,” said Chengwei Wang, first author of the paper and a graduate student in the Materials Science and Engineering department at the University of Maryland in College Park.

However, the team plans to diversify the types of ionic current batteries they can produce. “We are developing multiple ionic conductors with cellulose, hydrogels and polymers,” said Wang.

This is not the first time UMD scientists have tested natural materials in new uses. Hu and his team have previously studied cellulose and plant materials for electronic batteries, creating a battery and a supercapacitor out of wood and a battery from a leaf. They also have created transparent wood as a potentially more energy-efficient replacement for glass windows.

Creative Work

Ping Liu, an associate professor in nanoengineering at the University of California, San Diego, who was not involved with the study, said: “The work is very creative and its main value is in delivering ionic flow to bio systems without posing other dangers to them. Eventually, the impact of the work really resides in whether smaller and more biocompatible junction materials can be found that then interface with cells and organisms more directly and efficiently.”

Moon has a water-rich interior

Scientists had assumed for years that the interior of the Moon had been largely depleted of water and other volatile compounds. That began to change in 2008, when a research team including Brown University geologist Alberto Saal detected trace amounts of water in some of the volcanic glass beads brought back to Earth from the Apollo 15 and 17 missions to the Moon. In 2011, further study of tiny crystalline formations within those beads revealed that they actually contain amounts of water similar to those of some basalts on Earth. That suggests that parts of the Moon’s mantle, at least, contain as much water as Earth’s.

“The key question is whether those Apollo samples represent the bulk conditions of the lunar interior or instead represent unusual or perhaps anomalous water-rich regions within an otherwise ‘dry’ mantle,” said Ralph Milliken, lead author of the new research and an associate professor in Brown’s Department of Earth, Environmental and Planetary Sciences. “By looking at the orbital data, we can examine the large pyroclastic deposits on the Moon that were never sampled by the Apollo or Luna missions. The fact that nearly all of them exhibit signatures of water suggests that the Apollo samples are not anomalous, so it may be that the bulk interior of the Moon is wet.”

The research, which Milliken co-authored with Shuai Li, a postdoctoral researcher at the University of Hawaii and a recent Brown Ph.D. graduate, is published in Nature Geoscience.

Detecting the water content of lunar volcanic deposits using orbital instruments is no easy task. Scientists use orbital spectrometers to measure the light that bounces off a planetary surface. By looking at which wavelengths of light are absorbed or reflected by the surface, scientists can get an idea of which minerals and other compounds are present.

The problem is that the lunar surface heats up over the course of a day, especially at the latitudes where these pyroclastic deposits are located. That means that in addition to the light reflected from the surface, the spectrometer also ends up measuring heat.

“That thermally emitted radiation happens at the same wavelengths that we need to use to look for water,” Milliken said. “So in order to say with any confidence that water is present, we first need to account for and remove the thermally emitted component.”

To do that, Li and Milliken used laboratory-based measurements of samples returned from the Apollo missions, combined with a detailed temperature profile of the areas of interest on the Moon’s surface. Using the new thermal correction, the researchers looked at data from the Moon Mineralogy Mapper, an imaging spectrometer that flew aboard India’s Chandrayaan-1 lunar orbiter.
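
In outline, the correction amounts to modeling the measured radiance as reflected sunlight plus a Planck blackbody term and subtracting the latter. The sketch below illustrates that idea with made-up numbers; it is not the authors' actual pipeline, which used laboratory spectra and detailed, site-specific temperature profiles.

```python
import math

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 * sr * m)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

def thermally_corrected(measured: float, wavelength_m: float,
                        temp_k: float, emissivity: float = 0.95) -> float:
    """Subtract the modeled thermal emission from a measured radiance.
    The temperature and emissivity here stand in for the detailed
    temperature profiles the authors derived for each site."""
    return measured - emissivity * planck_radiance(wavelength_m, temp_k)

# Hypothetical measurement near the ~3-micrometer water absorption band
wavelength = 3.0e-6    # meters
surface_temp = 380.0   # kelvin, a hot lunar afternoon
thermal = 0.95 * planck_radiance(wavelength, surface_temp)
measured = 1.1 * thermal   # pretend reflected sunlight adds ~10% on top

print(f"thermal component:   {thermal:.3e}")
print(f"reflected component: {thermally_corrected(measured, wavelength, surface_temp):.3e}")
# Only the leftover reflected component carries the water signature.
```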

The researchers found evidence of water in nearly all of the large pyroclastic deposits that had been previously mapped across the Moon’s surface, including deposits near the Apollo 15 and 17 landing sites where the water-bearing glass bead samples were collected.

“The distribution of these water-rich deposits is the key thing,” Milliken said. “They’re spread across the surface, which tells us that the water found in the Apollo samples isn’t a one-off. Lunar pyroclastics seem to be universally water-rich, which suggests the same may be true of the mantle.”

The idea that the interior of the Moon is water-rich raises interesting questions about the Moon’s formation. Scientists think the Moon formed from debris left behind after an object about the size of Mars slammed into the Earth very early in solar system history. One of the reasons scientists had assumed the Moon’s interior should be dry is that it seems unlikely that any of the hydrogen needed to form water could have survived the heat of that impact.

“The growing evidence for water inside the Moon suggests that water did somehow survive, or that it was brought in shortly after the impact by asteroids or comets before the Moon had completely solidified,” Li said. “The exact origin of water in the lunar interior is still a big question.”

In addition to shedding light on the water story in the early solar system, the research could also have implications for future lunar exploration. The volcanic beads don’t contain a lot of water — about 0.05 percent by weight, the researchers say — but the deposits are large, and the water could potentially be extracted.

“Other studies have suggested the presence of water ice in shadowed regions at the lunar poles, but the pyroclastic deposits are at locations that may be easier to access,” Li said. “Anything that helps save future lunar explorers from having to bring lots of water from home is a big step forward, and our results suggest a new alternative.”

The research was funded by the NASA Lunar Advanced Science and Exploration Research Program (NNX12AO63G).

Water bears will survive the end of the world as we know it

These tough little buggers, also known as tardigrades, could keep calm and carry on until the sun boils Earth’s oceans away billions of years from now, according to a new study that examined water bears’ resistance to various astronomical disasters. This finding, published July 14 in Scientific Reports, suggests that complex life can be extremely difficult to destroy, which bodes well for anyone hoping Earthlings have cosmic company.

Most previous studies of apocalyptic astronomical events — like asteroid impacts, neighboring stars going supernova or insanely energetic explosions called gamma-ray bursts — focused on their threat to humankind. But researchers wanted to know what it would take to annihilate one of the world’s most resilient creatures, so they turned to tardigrades.

The tardigrade is basically the poster child for extremophiles. These hardy, microscopic critters are up for anything. Decades without food or water? No problem. Temperatures plummeting to –272° Celsius or skyrocketing to 150°? Bring it on. Even the crushing pressure of deep seas, the vacuum of outer space and exposure to extreme radiation don’t bother water bears.

Water bears are so sturdy that they probably won’t succumb to nuclear war, global warming or any astronomical events that wreak havoc on Earth’s atmosphere — all of which could doom humans, says Harvard University astrophysicist Avi Loeb. To exterminate tardigrades, something would have to boil the oceans away (no more water means no more water bears). So Loeb and colleagues calculated just how big an asteroid, how strong a supernova, or how powerful a gamma-ray burst would have to be to inject that much energy into Earth’s oceans.

“They actually ran the numbers on everyone’s favorite natural doomsday weapons,” marvels Seth Shostak, an astronomer at the SETI Institute in Mountain View, Calif.

Loeb’s team found that there are only 19 asteroids in the solar system massive enough to eradicate water bears, and none are on a collision course with Earth. A supernova — the explosion of a massive star after it burns through its fuel — would have to happen within 0.13 light-years of Earth, and the closest star big enough to go supernova is nearly 147 light-years away. And gamma-ray bursts — thought to result from especially powerful supernovas or stellar collisions — are so rare that the researchers calculated that, over a billion years, there’s only about a 1 in 3 billion chance of one killing off tardigrades.
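
The headline number is easy to sanity-check with a back-of-the-envelope calculation: estimate the energy needed to heat and vaporize the oceans, then ask how large a rocky impactor must be to deliver that much kinetic energy. A rough sketch with textbook constants, not the paper's exact figures:

```python
import math

# Rough estimate: energy needed to boil Earth's oceans, and the size of
# a rocky impactor that could deliver it. Textbook values, not the
# paper's exact figures.

OCEAN_MASS = 1.4e21      # kg
SPECIFIC_HEAT = 4186.0   # J/(kg*K), liquid water
LATENT_HEAT = 2.26e6     # J/kg, vaporization
DELTA_T = 85.0           # K, warming from ~288 K to boiling

energy = OCEAN_MASS * (SPECIFIC_HEAT * DELTA_T + LATENT_HEAT)
print(f"energy to boil the oceans: ~{energy:.1e} J")   # ~3.7e27 J

# Impactor kinetic energy: E = 0.5 * m * v^2
V_IMPACT = 2.0e4    # m/s, a typical impact speed
DENSITY = 3000.0    # kg/m^3, rocky body
mass = 2.0 * energy / V_IMPACT**2
radius_m = (3.0 * mass / (4.0 * math.pi * DENSITY)) ** (1.0 / 3.0)
print(f"required impactor: ~{mass:.1e} kg, radius ~{radius_m / 1000:.0f} km")
# Only the very largest asteroids are in this size class -- the same
# ballpark as the paper's count of just 19 sufficiently massive bodies.
```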

“Makes me wish I were an extremophile like a tardigrade,” says Edward Guinan, an astrophysicist at Villanova University in Pennsylvania who was not involved in the work.

But even tardigrades can’t cheat death forever. In the next seven billion years, the sun will swell into a red giant star, potentially engulfing Earth and surely sizzling away its water. But the fact that tardigrades are so resistant to other potential apocalypses in the interim implies that “life is tough, once it gets going,” Shostak says.

Bones make hormones that communicate with the brain and other organs

Long typecast as the strong silent type, bones are speaking up.

In addition to providing structural support, the skeleton is a versatile conversationalist. Bones make hormones that chat with other organs and tissues, including the brain, kidneys and pancreas, experiments in mice have shown.

“The bone, which was considered a dead organ, has really become a gland almost,” says Beate Lanske, a bone and mineral researcher at Harvard School of Dental Medicine. “There’s so much going on between bone and brain and all the other organs, it has become one of the most prominent tissues being studied at the moment.”

At least four bone hormones moonlight as couriers, recent studies show, and there could be more. Scientists have only just begun to decipher what this messaging means for health. But cataloging and investigating the hormones should offer a more nuanced understanding of how the body regulates sugar, energy and fat, among other things.

Of the hormones on the list of bones’ messengers — osteocalcin, sclerostin, fibroblast growth factor 23 and lipocalin 2 — the last is the latest to attract attention. Lipocalin 2, which bones unleash to stem bacterial infections, also works in the brain to control appetite, physiologist Stavroula Kousteni of Columbia University Medical Center and colleagues reported in the March 16 Nature.

Bone-brain connection

After mice eat, their bone-forming cells absorb nutrients and release a hormone called lipocalin 2 (LCN2) into the blood. LCN2 travels to the brain, where it gloms on to appetite-regulating nerve cells, which tell the brain to stop eating, a recent study suggests.

Researchers previously thought that fat cells were mostly responsible for making lipocalin 2, or LCN2. But in mice, bones produce up to 10 times as much of the hormone as fat cells do, Kousteni and colleagues showed. And after a meal, mice’s bones pumped out enough LCN2 to boost blood levels three times as high as premeal levels. “It’s a new role for bone as an endocrine organ,” Kousteni says.

Clifford Rosen, a bone endocrinologist at the Center for Molecular Medicine in Scarborough, Maine, is excited by this new bone-brain connection. “It makes sense physiologically that there are bidirectional interactions” between bone and other tissues, Rosen says. “You have to have things to regulate the fuel sources that are necessary for bone formation.”

Bones constantly reinvent themselves through energy-intensive remodeling. Cells known as osteoblasts make new bone; other cells, osteoclasts, destroy old bone. With such turnover, “the skeleton must have some fine-tuning mechanism that allows the whole body to be in sync with what’s happening at the skeletal level,” Rosen says. Osteoblasts and osteoclasts send hormones to do their bidding.

Scientists began homing in on bones’ molecular messengers a decade ago (SN: 8/11/07, p. 83). Geneticist Gerard Karsenty of Columbia University Medical Center found that osteocalcin — made by osteoblasts — helps regulate blood sugar. Osteocalcin circulates through the blood, collecting calcium and other minerals that bones need. When the hormone reaches the pancreas, it signals insulin-making cells to ramp up production, mouse experiments showed. Osteocalcin also signals fat cells to release a hormone that increases the body’s sensitivity to insulin, the body’s blood sugar moderator, Karsenty and colleagues reported in Cell in 2007. If it works the same way in people, Karsenty says, osteocalcin could be developed as a potential diabetes or obesity treatment.

Wi-Fi could protect you from getting lost in virtual reality

You’re at home playing a virtual reality (VR) game on the Oculus Rift, dodging zombies like a pro. But then you step too far back or look behind you, and suddenly you’re frozen in space, as the system’s infrared cameras can no longer see the lights on your goggles and it loses track of you. Instant brain food. Now, researchers have come up with a way to spare you such a frustrating end by using standard Wi-Fi technology to enhance VR’s tracking abilities. In addition to improving VR, the technology could also help track robots or drones and streamline motion capture for movies.

VR enables a user to move through a virtual 3D world projected through the video screens in the system’s headset. To track the user’s movement, the Rift uses one or more infrared cameras in a room, often on tripods. The headset has accelerometers to measure tilt, and it has infrared lights that the cameras use to track movement forward, back, or sideways. Another VR system, the HTC Vive, tracks movement by projecting infrared light from devices in the corners of the room that are detected by sensors on the headset. A related technology, called augmented reality (AR), maps virtual features onto the wearer’s view of the real world. So a user’s living room may be inhabited by virtual monsters. Microsoft’s HoloLens AR system uses several outward-facing cameras on the headset to track the user’s movement in relation to the environment.

Such systems have their limitations, however. In order for VR games to work without glitches, users often need to stay within a few square meters, and the infrared sightlines can’t be blocked by furniture or other people or by turning away. Microsoft’s AR system doesn’t work in all lighting conditions, it can be confused by blank walls or windows, and it can’t track your hands if they move out of view.

A team of researchers from Stanford University in Palo Alto, California, wanted a simpler, cheaper, more robust system. So they turned to the common radio technology Wi-Fi. Wi-Fi has been used to localize people and objects in space before, but only with an accuracy of tens of centimeters, says Manikanta Kotaru, a computer scientist at Stanford, and he and his colleagues thought they could do better.

Their solution, which they call WiCapture, requires two parts: a standard Wi-Fi chip, such as the one you might find in your phone, and at least two Wi-Fi “access points,” which are transmitters such as the ones found in home routers. Communication between the chip and a transmitter travels as high-frequency radio waves. In order to track a Wi-Fi signal source with millimeter-level accuracy, one needs to measure the time it takes a signal to travel from the chip to the transmitter with picosecond-level accuracy. However, the chip and transmitter have different clocks, and no two clocks in Wi-Fi devices are perfectly synchronized.

To get around this problem the researchers took advantage of the fact that signals reach the transmitter through many paths. Some radio waves travel directly to the receiver to create the main signal, whereas others bounce off walls to create echoes. Kotaru wrote an algorithm that looks at signals from two different paths, identified by triangulating among the transmitter’s multiple antennas. Those signals will be equally affected by clock asynchrony, so the algorithm can just compare their relative change as the chip moves and ignore the drift of the clocks’ timing. Still, this method measures distance to only one transmitter; using two or more transmitters in combination allows the algorithm to use triangulation to track motion in two dimensions. (The researchers will eventually expand WiCapture to track motion in three dimensions.)
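
The clock-cancellation trick at the heart of that algorithm can be shown in a toy model: an unknown clock offset shifts the measured phase of every path by the same amount, so the difference between two paths' phases is offset-free and tracks motion. The simulation below is our own simplification, not WiCapture's code, and a real system must also unwrap phases that exceed one full cycle.

```python
import math
import random

# Toy model of the clock-cancellation idea: the unknown clock offset
# between a Wi-Fi chip and a transmitter shifts the measured phase of
# EVERY propagation path equally, so the phase DIFFERENCE between two
# paths is offset-free. Our own simplification, not WiCapture itself.
# (A real system must also unwrap phases that exceed 2*pi.)

WAVELENGTH = 0.06   # meters, roughly a 5 GHz Wi-Fi carrier

def measured_phases(d_direct: float, d_reflected: float, clock_offset: float):
    """Phases of the direct and reflected paths, both corrupted by the
    same unknown, drifting clock offset."""
    p_direct = -2 * math.pi * d_direct / WAVELENGTH + clock_offset
    p_reflected = -2 * math.pi * d_reflected / WAVELENGTH + clock_offset
    return p_direct, p_reflected

# The chip moves 5 mm closer to the transmitter; the reflected path is
# held fixed here for simplicity. Clock offsets are random each snapshot.
before = measured_phases(3.000, 4.200, random.uniform(0, 2 * math.pi))
after = measured_phases(2.995, 4.200, random.uniform(0, 2 * math.pi))

# Individual phases are useless (random offset), but the between-path
# difference changes cleanly with motion:
diff_change = (after[0] - after[1]) - (before[0] - before[1])
delta_d = -diff_change * WAVELENGTH / (2 * math.pi)
print(f"change in direct-path length: {delta_d * 1000:+.1f} mm")  # ~ -5.0 mm
```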

To test the idea, scientists placed the Wi-Fi chip on a mechanical device that could move it with high accuracy in an office 5 meters by 6 meters with four Wi-Fi transmitters in the corners. As they moved the chip around in various patterns, WiCapture tracked its position to within a centimeter. Next, the researchers tried an office in which all the Wi-Fi transmitters were occluded by furniture or walls. As long as two were in the same room as the chip, WiCapture’s median error was still only 1.5 centimeters. Outside, the median error was again less than a centimeter, the team will report this month at the Conference on Computer Vision and Pattern Recognition in Honolulu.

“It was really nice to bridge work in the wireless community with work in the virtual reality community,” says Dina Katabi, a computer scientist at the Massachusetts Institute of Technology in Cambridge who was not involved in the experiment. Yuval Boger, a physicist and the CEO of Sensics, a VR hardware and software company in Columbia, Maryland, says “the need is real” for a robust high-resolution position tracker. He notes that 1 centimeter is not a high enough accuracy for head tracking, but would work for hand tracking. In a fighting game, “I’m not sure I’m going to do any small delicate movements with a sword.”

The authors acknowledge that WiCapture still has a slower reaction time and lower accuracy than infrared cameras, but they think they can improve both by combining it with an accelerometer to add another source of data and fill in the gaps. In any case, Kotaru says, the technology is basically ready to use.

Astronomers Find Giant Planet That’s Hotter Than Most Stars

Astronomers have discovered the hottest planet ever known, with a dayside temperature of more than 4,300 degrees Celsius. In fact, this planet, called KELT-9b, is hotter than most stars, according to a study published in the journal Nature.

“This is the hottest gas giant planet that has ever been discovered,” said Scott Gaudi, a professor at the Ohio State University in Columbus, who led the study.

KELT-9b is 2.8 times more massive than Jupiter, but only half as dense.

It is nowhere close to habitable, but Gaudi said there is a good reason to study worlds that are unlivable in the extreme.

“As has been highlighted by the recent discoveries from the MEarth collaboration, the planet around Proxima Centauri, and the astonishing system discovered around TRAPPIST-1, the astronomical community is clearly focused on finding Earthlike planets around small, cooler stars like our sun,” Gaudi said.

“They are easy targets and there’s a lot that can be learned about potentially habitable planets orbiting very low-mass stars in general. On the other hand, because KELT-9b’s host star is bigger and hotter than the Sun, it complements those efforts and provides a kind of touchstone for understanding how planetary systems form around hot, massive stars,” he explained.

Because the planet is tidally locked to its star – as the moon is to Earth – one side of the planet is always facing toward the star, and one side is in perpetual darkness.

Molecules such as water, carbon dioxide and methane cannot form on the dayside because it is bombarded by too much ultraviolet radiation.

The properties of the nightside are still mysterious – molecules may be able to form there, but probably only temporarily.

“It’s a planet by any of the typical definitions of mass, but its atmosphere is almost certainly unlike any other planet we’ve ever seen just because of the temperature of its dayside,” said Gaudi, who worked on this study while on sabbatical at NASA’s Jet Propulsion Laboratory in Pasadena, California.

Its star, called KELT-9, is even hotter – in fact, it is probably unravelling the planet through evaporation. The star is only 300 million years old, which is young in star time.

It is more than twice as large, and nearly twice as hot, as our sun.

Given that the planet’s atmosphere is constantly blasted with high levels of ultraviolet radiation, the planet may even be shedding a tail of evaporated planetary material like a comet.

“KELT-9 radiates so much ultraviolet radiation that it may completely evaporate the planet,” said Keivan Stassun, Professor at Vanderbilt University, Nashville, Tennessee.

The KELT-9b planet was found using the Kilodegree Extremely Little Telescope, or KELT.

MXene Could Help Make Batteries That Charge as Fast as Supercapacitors: Study

A new battery electrode design from a highly conductive, two-dimensional material called MXene could pave the way for fully charging your smartphone in just a few seconds, a new study says.

The design, described in the journal Nature Energy, could make energy storage devices like batteries, viewed as the plodding tanker truck of energy storage technology, just as fast as the speedy supercapacitors that are used to provide energy in a pinch – often as a battery back-up or to provide quick bursts of energy for things like camera flashes.

“This paper refutes the widely accepted dogma that chemical charge storage, used in batteries and pseudocapacitors, is always much slower than physical storage used in electrical double-layer capacitors, also known as supercapacitors,” said lead researcher Yury Gogotsi, Professor at Drexel University in Philadelphia, Pennsylvania, US.

“We demonstrate charging of thin MXene electrodes in tens of milliseconds. This is enabled by the very high electronic conductivity of MXene. This paves the way to development of ultrafast energy storage devices that can be charged and discharged within seconds, but store much more energy than conventional supercapacitors,” Gogotsi added.
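
Why conductivity sets the speed limit can be seen from a simple resistor-capacitor model of an electrode: the charging timescale is roughly tau = R x C, so slashing internal resistance slashes charge time in proportion. A toy illustration with invented numbers, not measurements from the study:

```python
# Toy resistor-capacitor model of electrode charging: the time constant
# is tau = R * C, and ~5 time constants gives a nearly full charge.
# All numbers are invented for illustration, not from the study.

CAPACITANCE = 0.1   # farads, a small thin-film electrode

def charge_time_ms(resistance_ohm: float, n_time_constants: float = 5.0) -> float:
    """Approximate time to near-full charge, in milliseconds."""
    return n_time_constants * resistance_ohm * CAPACITANCE * 1000.0

for label, r_ohm in [("conventional electrode", 1.0),
                     ("highly conductive, MXene-like electrode", 0.02)]:
    print(f"{label}: ~{charge_time_ms(r_ohm):.0f} ms to charge")
# Cutting internal resistance 50x cuts charge time 50x -- from ~500 ms
# into the tens-of-milliseconds regime the authors report.
```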

The key to faster charging energy storage devices is in the electrode design.

Electrodes are essential components of batteries, through which energy is stored during charging and from which it is dispensed to power our devices.

So the ideal design for these components would be one that allows them to be quickly charged and store more energy.

The overarching benefit of using MXene as the material for the electrode design is its conductivity.

“If we start using low-dimensional and electronically conducting materials as battery electrodes, we can make batteries working much, much faster than today,” Gogotsi said.

“Eventually, appreciation of this fact will lead us to car, laptop and cell-phone batteries capable of charging at much higher rates – seconds or minutes rather than hours,” Gogotsi added.

NASA developing first asteroid deflection mission

NASA is developing the first-ever mission that will deflect a near-Earth asteroid, and help test the systems that will allow mankind to protect the planet from potential cosmic body impacts in the future.

The Double Asteroid Redirection Test (DART) — which is being designed and would be built and managed by the Johns Hopkins Applied Physics Laboratory — is moving from concept development to the preliminary design phase, the US space agency said.

“DART would be NASA’s first mission to demonstrate what’s known as the kinetic impactor technique — striking the asteroid to shift its orbit — to defend against a potential future asteroid impact,” said Lindley Johnson, planetary defense officer at NASA Headquarters in Washington.

“This approval step advances the project towards a historic test with a nonthreatening small asteroid,” said Johnson.

“DART is a critical step in demonstrating we can protect our planet from a future asteroid impact,” said Andy Cheng, who serves as the DART investigation co-lead.

“Since we don’t know that much about their internal structure or composition, we need to perform this experiment on a real asteroid,” Cheng said.

Protecting our planet

“With DART, we can show how to protect Earth from an asteroid strike with a kinetic impactor by knocking the hazardous object into a different flight path that would not threaten the planet,” he said.

The target for DART is an asteroid that will have a distant approach to Earth in October 2022, and then again in 2024.

The asteroid is called Didymos — Greek for “twin” — because it is an asteroid binary system that consists of two bodies: Didymos A, about 780 metres in size, and a smaller asteroid orbiting it called Didymos B, about 160 metres in size.

DART would impact only the smaller of the two bodies, Didymos B.

The Didymos system has been closely studied since 2003.

The primary body is a rocky S-type object, with composition similar to that of many asteroids. The composition of its small companion, Didymos B, is unknown, but the size is typical of asteroids that could potentially create regional effects should they impact Earth.

After launch, DART would fly to Didymos and use an APL-developed onboard autonomous targeting system to aim itself at Didymos B.

Then the refrigerator-sized spacecraft would strike the smaller body at a speed about nine times faster than a bullet, about six kilometres per second.

Kinetic impact

Earth-based observatories would be able to see the impact and the resulting change in the orbit of Didymos B around Didymos A, allowing scientists to better determine the capabilities of kinetic impact as an asteroid mitigation strategy.

The kinetic impact technique works by changing the speed of a threatening asteroid by only a small fraction of its total velocity — and by doing it well before the predicted impact, so that this small nudge adds up over time to a big shift of the asteroid’s path away from Earth.
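
The size of that nudge follows from conservation of momentum. A rough estimate using public ballpark figures for DART and Didymos B (assumptions for illustration, not mission specifications):

```python
import math

# Momentum-transfer estimate for a kinetic impactor:
#     delta_v ~ beta * m_spacecraft * v_impact / M_asteroid
# where beta >= 1 accounts for extra momentum carried by impact ejecta.
# All values are rough public ballpark figures, not mission specs.

M_SPACECRAFT = 500.0   # kg, a refrigerator-sized impactor
V_IMPACT = 6000.0      # m/s, "about six kilometres per second"
BETA = 1.0             # conservative: no ejecta enhancement

# Didymos B: ~160 m across, with an assumed rubble-pile density
radius = 80.0          # m
density = 2100.0       # kg/m^3
m_asteroid = density * (4.0 / 3.0) * math.pi * radius**3

delta_v = BETA * M_SPACECRAFT * V_IMPACT / m_asteroid
print(f"asteroid mass: ~{m_asteroid:.1e} kg")
print(f"speed change:  ~{delta_v * 1000:.2f} mm/s")
# A sub-millimetre-per-second nudge, applied years before a predicted
# impact, shifts the arrival time enough to turn a hit into a miss.
```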

Controversy greets Trump pick to lead EPA chemical safety programs

A toxicologist named this week by President Donald Trump to oversee the U.S. Environmental Protection Agency’s (EPA’s) chemical safety programs is catalyzing controversy. Some scientists and industry groups are praising Michael Dourson of the University of Cincinnati in Ohio for his policy experience and technical expertise. But critics worry Dourson’s links to the chemical industry will color how he’ll implement a new law reforming EPA’s process for regulating potentially dangerous chemicals. Dourson’s religious beliefs are also attracting attention, in particular his past use of scientific findings to support claims made in the Bible.

Dourson, tapped on 17 July to be EPA’s assistant administrator for the Office of Chemical Safety and Pollution Prevention (OCSPP), would oversee EPA programs that regulate industrial chemicals and pesticides if he earns Senate confirmation. The nomination has become a political hot potato, as Dourson would lead OCSPP as it reworks its approach to implementing a bipartisan 2016 law that reformed the 1976 Toxic Substances Control Act (TSCA, the statute that governs EPA’s ability to regulate industrial chemicals).

Dourson’s experience, spanning 4 decades, includes multiple science- and risk-assessment–related posts in low- to middle-tier EPA offices throughout the 1980s and 1990s, as well as decades as a research toxicologist. That experience has proved to be a double-edged sword, though, as a chemical risk-assessment nonprofit that he led for 2 decades has come under scrutiny for its longtime reliance on chemical industry funding and its history of consulting for chemical companies.

“Unfortunately, this nomination fits the clear pattern of the Trump administration in appointing individuals to positions for which they have significant conflicts of interest,” Richard Denison, senior scientist at the Environmental Defense Fund (EDF), a New York City–headquartered group, said in an 18 July statement.

The nomination follows another controversial appointment, of Nancy Beck—a toxicologist formerly of the Washington, D.C.–based American Chemistry Council (ACC), the largest U.S. chemical industry lobbying group—as deputy assistant administrator of OCSPP. More generally, EPA Administrator Scott Pruitt has sought to institute changes at EPA that could lead to greater industry voice in agency decisions, including through changes to its science advisory panels.

ACC and other industry groups have welcomed the new approach, faulting what they called the Obama administration’s overly stringent and economically stifling regulations. The group has called on the Senate to swiftly confirm Dourson, who it says is a “highly respected, award winning scientist,” to ensure the success of TSCA reform.

But environmental, health, and consumer advocates have balked at what they view as an industry-friendly implementation of reforms so far. “If his track record is any indication, Dr. Dourson’s nomination threatens to move us further away from health-protective implementation of the new TSCA,” EDF’s Denison said.

Meanwhile, another aspect of Dourson’s background—his authorship of “science-Bible stories”—is attracting attention. Dourson authored a trio of books called Evidence of Faith, which assume Bible stories are literally true and discuss how modern scientific findings might relate to the stories. Dourson has characterized his works as “matching science and Biblical text,” according to BuzzFeed News.

The Reverend John Arthur Nunes, president of Lutheran Church–Missouri Synod–affiliated Concordia College in the New York City metropolitan area, cited Dourson’s “judicious integration of faith and the sciences” as a big asset. “Far too often the proposal of a relationship between science and religion is viewed with incompatibility at best or with inimicality at worst,” he said in a statement shared by EPA.

Detractors are highlighting a remark he made to the Center for Public Integrity and InsideClimate News in 2014, when he used a biblical analogy to defend his risk-assessment nonprofit group’s industry funding and consulting work for chemical companies: “Jesus hung out with prostitutes and tax collectors. He had dinner with them.”

This week, the Natural Resources Defense Council, a New York City–based environmental group, had a blunt reaction to that comment: “God help us.”

Future heat waves are going to make air travel a pain

Heat waves associated with rising global temperatures will dramatically affect air travel later this century, occasionally triggering flight delays and bumping passengers and cargo, a new study suggests. When air warms, its density drops—which, in turn, reduces the amount of lift air can generate as it rushes across aircraft wings. Less lift means an aircraft can carry less weight, but it also means an airplane—especially a weighty one—needs a longer runway in hot weather, a restriction that can lead to flight delays or cancellations like those caused by record-breaking heat in Phoenix last month. To assess how future heat waves might affect air travel, researchers used climate models to estimate hour-by-hour temperatures throughout the year at 19 particularly busy airports in the United States, Europe, the Middle East, China, and South Asia for the period between 2060 and 2080. At some airports—especially those with long runways in temperate regions and at low altitude where the air is relatively dense, like New York City’s John F. Kennedy, London’s Heathrow, and Paris’s Charles de Gaulle airports—impacts should be minimal, the researchers report today in Climatic Change. But at another New York City airport, La Guardia, shorter runways would trigger weight restrictions on fully laden Boeing 737-800 aircraft more than half the time on the hottest days. Similarly, at Dubai International Airport, a fully booked Boeing 777-300 could be weight-restricted during the hottest part of the day about 55% of the time. Overall, weight restrictions on hot days worldwide could reach 4% or more of an aircraft’s weight, the researchers say. But even a weight restriction of only 0.5% would result in the bumping of three passengers from an aircraft designed to carry 160 people, the team notes.
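
The underlying physics is compact: lift scales with air density (L = 0.5 * rho * v^2 * S * C_L), and density falls as temperature rises (rho = P / (R * T) for an ideal gas). A simplified sketch of the resulting weight penalty, with the aircraft figures invented for illustration:

```python
# First-order model of heat-induced weight restrictions:
# lift L = 0.5 * rho * v^2 * S * C_L, and rho = P / (R_air * T).
# At a fixed runway length (fixed liftoff speed), liftable weight
# scales with air density. Aircraft figures are invented.

R_AIR = 287.05        # J/(kg*K), specific gas constant of dry air
PRESSURE = 101325.0   # Pa, sea level

def air_density(temp_c: float) -> float:
    return PRESSURE / (R_AIR * (temp_c + 273.15))

def max_takeoff_weight(temp_c: float, baseline_kg: float,
                       baseline_temp_c: float = 15.0) -> float:
    return baseline_kg * air_density(temp_c) / air_density(baseline_temp_c)

BASELINE_MTOW = 79000.0   # kg, hypothetical max takeoff weight at 15 C
for t in (15, 35, 45):
    w = max_takeoff_weight(t, BASELINE_MTOW)
    print(f"{t:2d} C: density {air_density(t):.3f} kg/m^3, "
          f"max weight ~{w / 1000:.1f} t ({(w - BASELINE_MTOW) / 1000:+.1f} t)")
# A 45 C day cuts density ~9% versus 15 C -- several tonnes of payload,
# i.e., bumped passengers, bags or fuel on a short runway.
```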

There are millions of protein factories in every cell. Surprise, they’re not all the same

The plant that built your computer isn’t churning out cars and toys as well. But many researchers think cells’ crucial protein factories, organelles known as ribosomes, are interchangeable, each one able to make any of the body’s proteins. Now, a provocative study suggests that some ribosomes, like modern factories, specialize to manufacture only certain products. Such tailored ribosomes could provide a cell with another way to control which proteins it generates. They could also help explain the puzzling symptoms of certain diseases, which might arise when particular ribosomes are defective.

Biologists have long debated whether ribosomes specialize, and some remain unconvinced by the new work. But other researchers say they are sold on the finding, which relied on sophisticated analytical techniques. “This is really an important step in redefining how we think about this central player in molecular biology,” says Jonathan Dinman, a molecular biologist at the University of Maryland in College Park.

A mammalian cell may harbor as many as 10 million ribosomes, and it can devote up to 60% of its energy to constructing them from RNA and 80 different types of proteins. Although ribosomes are costly, they are essential for translating the genetic code, carried in messenger RNA (mRNA) molecules, into all the proteins the cell needs. “Life evolved around the ribosome,” Dinman says.

The standard view has been that a ribosome doesn’t play favorites with mRNAs—and therefore can synthesize every protein variety. But for decades, some researchers have reported hints of customized ribosomes. For example, molecular and developmental biologist Maria Barna of Stanford University in Palo Alto, California, and colleagues reported in 2011 that mice with too little of one ribosome protein have short tails, sprout extra ribs, and display other anatomical defects. That pattern of abnormalities suggested that the protein shortage had crippled ribosomes specialized for manufacturing proteins key to embryonic development.

Definitive evidence for such differences has been elusive, however. “It’s been a really hard field to make progress in,” says structural and systems biologist Jamie Cate of the University of California (UC), Berkeley. For one thing, he says, measuring the concentrations of proteins in naturally occurring ribosomes has been difficult.

In their latest study, published online last week in Molecular Cell, Barna and her team determined the abundances of various ribosome proteins with a method known as selected reaction monitoring, which depends on a type of mass spectrometry, a technique for sorting molecules by their weight. When the researchers analyzed 15 ribosomal proteins in mouse embryonic stem cells, they found that nine of the proteins were equally common in all ribosomes. However, four were absent from 30% to 40% of the organelles, suggesting that those ribosomes were distinctive. Among 76 ribosome proteins the scientists measured with another mass spectrometry-based method, seven varied enough to indicate ribosome specialization.
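
Conceptually, the test reduces to asking whether each ribosomal protein is present at the expected one copy per ribosome. A toy version of that comparison, with invented abundance values rather than the paper's data:

```python
# Toy stoichiometry test: every core ribosomal protein should appear at
# ~1 copy per ribosome. Proteins found well below that level mark
# candidate "specialized" subpopulations. Values are invented, not the
# paper's data, and the protein names are just placeholders.

abundances = {   # normalized copies per ribosome (1.0 = in every ribosome)
    "protein A": 1.01,
    "protein B": 0.98,
    "protein C": 1.02,
    "protein D": 0.63,
    "protein E": 0.68,
}

THRESHOLD = 0.9   # arbitrary cutoff for "missing from a sizable fraction"

for protein, copies in sorted(abundances.items(), key=lambda kv: kv[1]):
    missing_pct = max(0.0, 1.0 - copies) * 100
    label = "substoichiometric" if copies < THRESHOLD else "core"
    print(f"{protein}: {copies:.2f} copies/ribosome "
          f"(~{missing_pct:.0f}% of ribosomes lack it) -> {label}")
# Proteins D and E behave like the four proteins the team found absent
# from 30-40% of the organelles.
```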

Barna and colleagues then asked whether they could identify the proteins that the seemingly distinctive ribosomes made. A technique called ribosome profiling enabled them to pinpoint which mRNAs the organelles were reading—and thus determine their end products. The specialized ribosomes often concentrated on proteins that worked together to perform particular tasks. One type of ribosome built several proteins that control growth, for example. A second type churned out all the proteins that allow cells to use vitamin B12, an essential molecule for metabolism. That each ribosome focused on proteins crucial for a certain function took the team by surprise, Barna says. “I don’t think any of us would have expected this.”

Ribosome specialization could explain the symptoms of several rare diseases, known as ribosomopathies, in which the organelles are defective. In Diamond-Blackfan anemia, for instance, the bone marrow that generates new blood cells is faulty, but patients also often have birth defects such as a small head and misshapen or missing thumbs. These seemingly unconnected abnormalities might have a single cause, the researchers suggest, if the cells that spawn these different parts of the body during embryonic development carry the same specialized ribosomes.

Normal cells might be able to dial protein production up or down by adjusting the numbers of these specialized factories, providing “a new layer of control of gene expression,” Barna says. Why cells need another mechanism for controlling gene activity isn’t clear, says Cate, but it could help keep cells stable if their environment changes.

He and Dinman say the use of “state-of-the-art tools” makes the results from Barna’s team compelling. However, molecular biologist Harry Noller of UC Santa Cruz doubts that cells would evolve to reshuffle the array of proteins in the organelles. “The ribosome is very expensive to synthesize for the cell,” he says. If cells are going to tailor their ribosomes, “the cheaper way to do it” would entail modifying a universal ribosome structure rather than building custom ones.

These orbiting black holes may be locked in one of the universe’s tightest embraces

In the heart of a huge, warped galaxy about 750 million light-years from Earth, a dance is unfolding. And the dancers—two of the largest black holes on record—may be orbiting each other in the closest such pas de deux ever reported, according to a new study. The black holes are separated by just 24 light-years in Galaxy 0402+379, and together contain 15 billion times the mass of our sun. Using four sets of measurements taken by a widespread network of radio telescopes between 2003 and 2015, along with data gathered at optical wavelengths, astronomers discovered that the black holes appear to be circling each other on a 30,000-year cycle, they report today in The Astrophysical Journal. Besides identifying the closest orbiting black holes yet reported, the new study is notable for another reason, the astronomers write: The apparent speed at which these black holes are slowly moving away from one another, as measured from Earth, may be the smallest motion ever discerned. Their apparent separation of just 1 microarcsecond per year (an angle about one-billionth the size of the smallest object visible to the naked eye) is equivalent to the motion earthbound astronomers would measure for a snail creeping across the surface of a planet located 4 light-years from Earth.
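
The reported figures hang together, to order of magnitude, under Kepler's third law. The quick check below treats the 24-light-year figure as the true radius of a circular orbit around the combined mass, which is cruder than the orbit fit in the paper:

```python
import math

# Order-of-magnitude Kepler check: T = 2*pi*sqrt(a^3 / (G*M)).
# Assumes a circular orbit of radius 24 ly around the combined mass,
# which is cruder than the geometry in the actual orbit fit.

G = 6.674e-11          # m^3 / (kg * s^2)
M_SUN = 1.989e30       # kg
LIGHT_YEAR = 9.461e15  # m
YEAR = 3.156e7         # s

m_total = 15e9 * M_SUN   # 15 billion solar masses
a = 24 * LIGHT_YEAR      # reported separation

period_years = 2 * math.pi * math.sqrt(a**3 / (G * m_total)) / YEAR
print(f"orbital period: ~{period_years:,.0f} years")
# ~15,000 years -- the same order as the reported ~30,000-year cycle,
# with the gap attributable to the simplified circular geometry.
```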

Tiny fossil reveals what happened to birds after dinosaurs went extinct

The fossils of a tiny bird found on Native American land in New Mexico are giving scientists big new ideas about what happened after most dinosaurs went extinct. The 62-million-year-old mousebird suggests that, after the great dino die-off, birds rebounded and diversified rapidly, setting the stage for today’s dizzying variety of feathery forms.

“This find may well be the best example of how an unremarkable fossil of an unremarkable species can have enormously remarkable implications,” says Larry Witmer, a paleontologist at Ohio University in Athens who was not involved in the research.

The newly discovered fossils, described online today in the Proceedings of the National Academy of Sciences, are a scrappy collection of bits and pieces rather than a complete skeleton. But certain tell-tale characteristics—such as its fourth toe, which it could swivel forward or backward to help it climb or grasp—convinced the team that it was an ancient mousebird. Researchers unearthed the fossils in New Mexico on ancestral Navajo lands, in rocks dating to between 62.2 million and 62.5 million years old. They named the creature Tsidiiyazhi abini—Navajo for “little morning bird.” Its mousebird descendants—about the size of a sparrow and marked by their soft, grayish or brownish hairlike feathers—still dwell in trees in sub-Saharan Africa today.

But it’s the age of the fossil that is particularly interesting: The bird lived just a few million years after an asteroid struck Earth and brought the age of dinosaurs to an abrupt end 66 million years ago. Groups such as mammals and frogs are known to have rebounded rapidly after that event, diversifying into multiple new forms as they occupied newly available niches—a process evolutionary biologists call adaptive radiation. But there has been scant fossil evidence for what happened to birds—the only dinosaurs to survive the extinction—in its aftermath.

Paleontologists have suspected birds made a quick rebound. But bird fossils from the early Paleogene period immediately after the extinction—particularly those of small, tree-dwelling animals—are rare. So researchers have used genetic studies to suggest that “a few lineages survived extinction and had a really fast radiation right afterwards,” says Daniel Ksepka, a paleo-ornithologist at the Bruce Museum in Greenwich, Connecticut, and the lead author on the paper.

This new find clinches that notion with fossil evidence, and helps flesh out the fate of birds during this crucial time period. The team combined the new fossil evidence with previously collected genetic data from living birds to update the phylogenetic tree of bird evolution. Previous trees used these data to differentiate the birds into different groups, but weren’t able to determine when they had diverged. Now, with the new fossils so precisely dated, the team could determine when exactly different bird lineages split off from one another. As a result, Ksepka and colleagues estimate that the ancestors of some nine major land bird lineages—from mousebirds to owls to raptors like hawks and eagles—must have emerged in quick succession, all practically in the shadow of the extinction event.

“There’s just basically 3.5 million years for all of these groups to start splitting off,” Ksepka says. He adds that other recent finds suggest that water birds such as penguins did the same thing: Earlier this year, researchers reported finding a 61-million-year-old fossil of a 1.5-meter-tall penguin in what is today New Zealand.

T. abini “is a significant find” that pushes the fossil record of tree-dwelling birds substantially back in time, says paleontologist Gerald Mayr of the Senckenberg Research Institute in Frankfurt, Germany, who led the team that reported on the penguin fossils.

The new fossil has “tremendous value,” agrees paleobiologist Helen James, the curator of the division of birds at the Smithsonian Institution in Washington, D.C., who was also not involved in the study. “Firmly resolving the relationships of birds continues to be a headache, whether using genetic or morphological data, or both,” she says. “The paper fortifies the evidence for an early, explosive radiation of modern birds.”

The study also gives paleontologists new reason to scrutinize early Paleocene rocks, not to mention existing museum collections, for signs of other representatives of modern bird groups, Witmer says. “This little fossil mousebird signals that those groups must have been there—we just need to find them.”

Pesticides could hike risk of catching a parasitic worm

Pesticides are a double-edged sword: They make farming more productive, but they can harm wildlife and people if not used properly. Now, ecologists have identified a new threat from pesticides in the developing world. By killing off the natural predators of worm-infested snails, they can raise the risk of schistosomiasis, the second most common parasitic disease after malaria.

“It’s a ground-breaking article,” says Russell Stothard, a parasitologist at the Liverpool School of Tropical Medicine in the United Kingdom, who was not involved in the research.

Schistosomiasis is a debilitating disease caused by a parasitic flatworm. Some 258 million people are infected, mostly in Africa. The worm spends part of its life in freshwater snails, which release larvae that can penetrate the skin of someone swimming, bathing, or washing clothes. The centimeter-long worms spread through blood vessels, causing fever, diarrhea, anemia, and stunted growth. Immune responses can damage the kidneys and other organs. When infected people relieve themselves, the worms’ eggs can spread into streams or ponds via their urine and feces. There, they hatch and seek out new snails, beginning their life cycle again. Schistosomiasis can easily be treated with drugs, but where the parasites are endemic, people quickly become reinfected.

The leader of the new research, ecologist Jason Rohr of the University of South Florida in Tampa, had previously studied a similar parasitic flatworm in amphibians. His research showed that common agricultural chemicals, like fertilizer, can worsen the situation for frogs. When these chemicals enter streams and ponds, they increase the amount of algae, which is then eaten by the snails that serve as hosts for the flatworms. That boosts the snail population and leads to more parasite infections in frogs.

The similar life cycles of the amphibian flatworm and the one that causes schistosomiasis made Rohr and his colleagues wonder whether agricultural pollution might also affect disease transmission. They built a simplified pond ecosystem inside each of 60 open tanks. After filling each with 800 liters of pond water, they added two species of snails that spread the schistosomiasis parasite, algae for the snails to eat, and two kinds of predators—crayfish and water bugs. Finally, they spiked the tanks with three kinds of farm chemicals—fertilizer, herbicide, and insecticide—in various combinations. The concentrations were typical of streams and ponds near corn fields in the United States.

As expected, fertilizer increased the amount of algae in the tanks, which in turn swelled the number of snails. The herbicide also led to more food for the snails, because it predominantly killed microscopic algae that clouded the water. When these died, the water cleared, allowing more light to reach larger algae growing on the bottom of the pond—the snails’ food. An epidemiological model of schistosomiasis suggested that the increase in snail population from this typical amount of fertilizer would jack up the risk of transmission to humans by 28%.

The insecticide, chlorpyrifos, had an even bigger effect by killing the two predators of the snails. Water bugs stick their heads inside the shell, bite the mollusc, inject digestive enzymes, then slurp up the remains. The 20-centimeter-long crayfish rely on brute force, crushing the 2-centimeter-long snails. “They’re absolutely voracious,” Rohr says. With these predators gone, the snail population exploded. In such a scenario, disease risk to humans would rise 10-fold, the team reports in a preprint posted this week to bioRxiv. Although only one concentration of insecticide was added to the tanks, the model indicated that lower concentrations in ponds would still have substantial impacts on parasite transmission.
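
The study's actual epidemiological model isn't spelled out here, but the direction of both results can be illustrated with a toy, Macdonald-style relation in which transmission risk scales in proportion to snail density; every number below is an invented stand-in, not a parameter from the paper:

```python
# Toy sketch only: not the study's model. Classic Macdonald-type
# schistosomiasis models have transmission risk (R0) rising roughly in
# proportion to snail density; the densities here are invented stand-ins
# chosen to mirror the reported +28% and 10-fold risk figures.

def relative_risk(snail_density: float, baseline: float = 100.0) -> float:
    """Risk relative to baseline, assuming risk is proportional to snail density."""
    return snail_density / baseline

scenarios = {
    "baseline pond": 100.0,
    "fertilizer (more algae, more snails)": 128.0,
    "insecticide (snail predators removed)": 1000.0,
}

for name, density in scenarios.items():
    print(f"{name}: {relative_risk(density):.2f}x baseline risk")
```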

The findings identify what looks like a “strong risk factor” for schistosomiasis, says Joanne Webster, a parasitologist at Imperial College London who was not involved in the research.

Dams have also caused an increase in schistosomiasis in many countries, because snails live in the reservoirs and irrigation channels. In some places, dams have also caused a decline in the natural predators of snails, such as fish, crayfish, and prawns. The combination of new habitat from irrigation and runoff of pesticides may be a “perfect storm” for schistosomiasis where agriculture is intensifying in the developing world, Rohr says.

Rohr is now investigating the impact of insecticides on snail predators and disease transmission in northwest Senegal, as part of an experiment run by a research partnership called the Upstream Alliance, based in Pacific Grove, California. This project has reintroduced prawns near several villages to evaluate their efficacy in controlling freshwater snails. Rohr will study whether helping farmers switch to insecticides less toxic to prawns could lessen the burden of schistosomiasis, while maintaining food production. “In schistosomiasis-endemic regions, we need to think more carefully about the impact of agrochemicals,” he says.

The study highlights the complex links between agriculture and disease, says Charles Godfray, a biologist at the University of Oxford in the United Kingdom. By boosting agricultural productivity, pesticides and other chemicals can help raise people out of poverty and lessen malnutrition, which worsens diseases. “The really clear thing is the importance of precision agriculture, in which agrochemicals are used as efficiently as possible, with as little runoff as possible.”

A fungus is attacking Europe’s most beloved salamander. It could wreak havoc if it gets to North America

Until recently, the Bunderbos was the best place in the Netherlands to find fire salamanders. With tall broadleaf trees shading small streams, the small forest was home to thousands of the 20-centimeter-long creatures, glistening black with bright yellow spots. “It’s a very charismatic animal,” says Annemarieke Spitzen-van der Sluijs, a conservation biologist with Reptile, Amphibian & Fish Conservation Netherlands (RAVON), a nonprofit group based in Nijmegen. “It’s like a dolphin among amphibians, always smiling, with pretty eyes.”

But starting around 2008, the population in the Bunderbos began to plummet for no apparent reason. When Frank Pasmans and An Martel, veterinarians at Ghent University in Belgium, heard about the enigmatic deaths, they recalled extinctions caused by Batrachochytrium dendrobatidis (Bd), a highly lethal fungus that infects more than 700 species of amphibian. Yet tests for Bd at their lab were negative.

The declines became so alarming that RAVON removed 39 fire salamanders (Salamandra salamandra) from the forest, safeguarding them temporarily in an employee’s basement. When these animals began to die as well, Spitzen-van der Sluijs rushed them to Ghent, about 2 hours away, where Martel and Pasmans cultured a fungus from a salamander clinging to life. It was a new pathogen, related to Bd. They named it B. salamandrivorans (Bsal) for the ulcers that voraciously eat away at the animals’ skin.

So began a bittersweet odyssey for the couple, partners in life as well as work. Studies they have led since their initial discovery show that Bsal—probably introduced from Asia by the pet trade—has the potential to wipe out salamander populations across Europe. An even bigger fear is that the pathogen will reach North America, which holds the world’s greatest diversity of salamanders. (Tennessee alone has 57 species.)

The work has brought Martel and Pasmans funding and scientific recognition. “They’re doing amazing work, in the right place, at the right time,” says Vance Vredenburg, an ecologist at San Francisco State University in California. But the couple worries they have front-row seats to the extinction of rare species they love. “This would be a true loss,” Pasmans says. Nobody knows how to slow Bsal’s spread, although Martel, Pasmans, and many others are discussing measures such as trade restrictions, habitat protection, and even enlisting other organisms to fight the pathogen. “It will be a race,” says Dirk Schmeller, a conservation biologist with the Helmholtz Centre for Environmental Research in Leipzig, Germany. “And there may not be enough time.”

As a child, Pasmans, 42, played with newts in the ditches of a park near his home in the suburbs of Antwerp, Belgium, and he has been fascinated by amphibians and reptiles ever since. “I just loved watching their behavior, seeing the larvae grow and go through metamorphosis,” he says. At 17, he joined a group of herpetologists and enthusiasts, which focused on amphibian and reptile diseases. Martel, 41, loved mammals at first, growing up with cats, dogs, and guinea pigs, but she caught Pasmans’s enthusiasm for amphibians after they started dating in graduate school. They live in a farmhouse with two dogs, 15 sheep, and about 50 fire salamanders in the cellar.

Veterinary science was a natural career choice for both of them. Pasmans joined the faculty at Ghent in 2005, followed by Martel 2 years later. Because funding for research on amphibians and reptiles was scarce, they spent several years researching infectious diseases of poultry and pigs, while working on turtles and salamanders after hours.

Pasmans had long been aware of threats to amphibians, whose habitat—moist woodlands, ponds, and streams—makes them vulnerable to development and water pollution. By now, nearly one-third of the more than 7600 known amphibian species are endangered, a higher proportion than in any other major group of vertebrates. Lately, diseases have emerged as a major concern, among them ranaviruses, which have led to a few documented cases of mass mortality and local extirpations worldwide since the 1990s.

The worst infectious disease has been Bd, which in 1999 was implicated in amphibian declines in rain forests in Panama and Australia but is thought to have started spreading and harming populations at least 2 decades earlier. Where the fungus came from is unknown. It infects the skin of susceptible species, causing problems with respiration and fluid regulation and, eventually, triggering heart attacks. Bd drove many frog species in the Americas, as well as Australia’s northern gastric brooding frog, to extinction. It has been found across North America, where it threatens several species of frogs. Although Bd wiped out salamanders in Mexico, Guatemala, and Costa Rica, it does not seem to have caused significant problems for the highly diverse salamanders in the southeastern United States.

In 2008, Ghent University awarded Martel and Pasmans a grant to study Bd and skin pathology in amphibians. That same year, while on vacation in Costa Rica, they got their first look at the power of the fungus. Visiting the mountain cloud forests, which once echoed with the chirps of harlequin frogs, they were struck by the silence. “They’re completely deserted,” Pasmans says. “It was really dramatic.”

By then, European herpetologists were alarmed as well. In 2001, Bd had been linked to a severe decline of the midwife toad in Spain. Over the following decade, Bd was detected in northern Europe, too. Spitzen-van der Sluijs found it in amphibians all over the Netherlands and in Belgium. Yet it didn’t seem to cause die-offs. “The dogma was: When Bd enters, everything dies,” says Pasmans, who felt the impacts had been a bit hyped.

Then salamanders began dying in the Bunderbos. In most other forests, the outbreak might have gone unnoticed. But volunteers have systematically surveyed the 1.4-square-kilometer reserve and surrounding woods since 1997 to keep tabs on what was the biggest of three populations of fire salamanders in the Netherlands. When Pasmans and Martel first heard about the deaths in 2008, they were not particularly alarmed. Wild animals die all the time, after all, and northern Europe’s amphibians seemed resistant to Bd. But year after year, the news got worse, with estimated population declines of nearly 20% a year.
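
To see how fast a decline like that compounds (a minimal arithmetic sketch; the starting count of 1,000 is an assumed round number, not a survey figure):

```python
# Minimal sketch of how a roughly 20% annual decline compounds.
# The starting count of 1,000 is an assumed round number.
population = 1000.0
for year in range(1, 11):
    population *= 0.80  # lose ~20% each year, per the survey estimates
    print(f"year {year}: ~{population:.0f} salamanders")
# After a decade, only about 10% of the original population remains.
```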

In 2012, with die-offs reaching a crisis, Spitzen-van der Sluijs drove two dozen living salamanders to the lab at Ghent. From one visibly sick animal, Pasmans and Martel could tell that this was something new. Unlike frogs suffering from Bd, which thickens and hardens their skin, the salamander had ulcers all over its body. (Other symptoms, such as lethargy and loss of appetite, matched those of Bd.) Martel and Pasmans took skin samples and after 3 weeks managed to culture a fungus on substrates of agar and broth.

A genetic analysis revealed that, like Bd, it was a chytrid—a cousin to the frog scourge. “My face went white when I heard about it,” Schmeller recalls. Found around the world, this varied group of fungi typically feeds on pollen or the degraded remains of plants and insects in ponds, streams, or moist soil. In a unique—and for some amphibians deadly—adaptation, they release so-called zoospores that can swim a few centimeters by whipping a flagellum.

When Bd zoospores land on a susceptible amphibian, they grow a skinny tube that penetrates the outer skin. The end of the tube swells into a round body that sends another tube even deeper. The burrowing disrupts the amphibian’s ability to regulate its fluids. After a few days or weeks, structures called sporangia develop, which migrate to the skin surface, then burst to release massive new batches of zoospores. (The ulcers caused by Bsal suggest its behavior differs in some respects.)

To confirm that Bsal was a pathogen rather than a secondary infection, the duo took zoospores from a laboratory culture and dripped them onto the backs of healthy fire salamanders. The animals developed the symptoms seen in the first sick salamander, and all died a few days later, as they reported in 2013 in the Proceedings of the National Academy of Sciences. “This thing is as pathogenic as it gets,” says Joe Mendelson, a herpetologist at Zoo Atlanta.

But was Bsal as broadly destructive as Bd? To find out, Martel and Pasmans conducted experiments on 35 species of amphibians from around the world. All seven European salamander species they tested were highly susceptible. So were one species from Turkey and two from North America, including the widespread eastern spotted newt. Frogs resisted or tolerated infection.

The study also identified Bsal’s birthplace. Martel and Pasmans detected the fungus in samples of salamanders that other researchers had collected in Thailand, Vietnam, and Japan—including a museum specimen more than 150 years old—but not in salamanders from other parts of the world. During the infection experiments, they found that some Asian salamanders developed symptoms and then recovered, whereas others were completely resistant. The team concluded that Bsal and salamanders probably have coexisted in Asia for millions of years.

In the Netherlands, however, Bsal was on the warpath. By the time Martel and Pasmans identified the fungus, it had annihilated the population in the Bunderbos. They had missed the chance to study the outbreak firsthand. Then, in April 2014, they got a tip from a Dutch man vacationing in Belgium who had come across a dead fire salamander in a forest near Robertville. Aware of the Bunderbos catastrophe, he emailed the lab at Ghent.

Wasting no time, Martel and Pasmans began monitoring the population. “Every week we went and found more diseased animals,” Martel recalls. “It was really heartbreaking.” Within 6 months, Ph.D. student Gwij Stegen, who took over the fieldwork, had trouble spotting any fire salamanders at all. The study quantified the rate of decline and also showed that sexually mature fire salamanders are much more likely than juveniles to get infected (probably during fights with rivals or mating), which prevents them from reproducing and makes the population less likely to recover.

Back in the lab, there was more bad news. Martel and Pasmans tested other common European amphibians and found that two species can act as a reservoir for the fungus. Alpine newts (Ichthyosaura alpestris) and midwife toads (Alytes obstetricans) both recovered from mild infections during which they shed spores for weeks to months. By ensuring that the pathogen will continue to circulate, these wild reservoirs make it more likely that highly susceptible species such as the fire salamander will go extinct.

Martel and Pasmans discovered another reason why Bsal will probably persist. Zoospores usually survive a few days at most before they’re consumed by microscopic predators, but Bsal creates a second, much hardier type of spore that has a sturdy cell wall and can survive in pond water for more than 2 months. These spores float, which helps them avoid being eaten.

All of that means Bsal could rapidly devastate susceptible species of salamander, the team concluded in a Nature paper in April. Species with small populations are especially vulnerable. Ten European species, including five on Sardinia in Italy, each live in an area smaller than 5000 square kilometers, and a species named Calotriton arnoldi makes its home in less than 10 square kilometers in a Spanish nature park.
