Space-Time is Constantly Moving, Physicists Say

The study, published in the journal Physical Review D, suggests that if we zoomed in, way in, on the Universe, we would realize it’s made up of constantly fluctuating space and time.

“Space-time is not as static as it appears, it’s constantly moving,” said lead author Qingdi Wang, a Ph.D. student at the University of British Columbia.

“This is a new idea in a field where there hasn’t been a lot of new ideas that try to address this issue,” said Bill Unruh, a physics and astronomy professor at the University of British Columbia.

In 1998, astronomers found that the Universe is expanding at an ever-increasing rate, implying that space is not empty and is instead filled with dark energy that pushes matter away.

The most natural candidate for dark energy is vacuum energy.

When physicists apply the theory of quantum mechanics to vacuum energy, it predicts that there would be an incredibly large density of vacuum energy, far more than the total energy of all the particles in the Universe.

If this is true, Einstein’s theory of general relativity suggests that the energy would have a strong gravitational effect and most physicists think this would cause the Universe to explode.

Fortunately, this doesn’t happen and the Universe expands very slowly. But it is a problem that must be resolved for fundamental physics to progress.

Unlike other physicists who have tried to modify the theories of quantum mechanics or general relativity to resolve the issue, Wang and co-authors suggest a different approach.

They take the large density of vacuum energy predicted by quantum mechanics seriously and find that there is important information about vacuum energy that was missing in previous calculations.

Their calculations provide a completely different physical picture of the Universe.

In this new picture, the space we live in is fluctuating wildly.

At each point, it oscillates between expansion and contraction.

As it swings back and forth, the two almost cancel each other but a very small net effect drives the Universe to expand slowly at an accelerating rate.

But if space and time are fluctuating, why can’t we feel it?

“This happens at very tiny scales, billions and billions of times smaller even than an electron,” Wang said.

“It’s similar to the waves we see on the ocean. They are not affected by the intense dance of the individual atoms that make up the water on which those waves ride,” Prof. Unruh said.

Scientists expect to calculate amount of fuel inside Earth by 2025

Earth requires fuel to drive plate tectonics, volcanoes and its magnetic field. Like a hybrid car, Earth taps two sources of energy to run its engine: primordial energy from assembling the planet and nuclear energy from the heat produced during natural radioactive decay. Scientists have developed numerous models to predict how much fuel remains inside Earth to drive its engines — and estimates vary widely — but the true amount remains unknown. In a new paper, a team of geologists and neutrino physicists boldly claims it will be able to determine by 2025 how much nuclear fuel and radioactive power remain in the Earth’s tank. The study, authored by scientists from the University of Maryland, Charles University in Prague and the Chinese Academy of Geological Sciences, was published on September 9, 2016, in the journal Scientific Reports.

“I am one of those scientists who has created a compositional model of the Earth and predicted the amount of fuel inside Earth today,” said one of the study’s authors, William McDonough, a professor of geology at the University of Maryland. “We’re in a field of guesses. At this point in my career, I don’t care if I’m right or wrong, I just want to know the answer.”

To calculate the amount of fuel inside Earth by 2025, the researchers will rely on detecting some of the tiniest subatomic particles known to science — geoneutrinos. These antineutrino particles are byproducts of nuclear reactions within stars (including our sun), supernovae, black holes and human-made nuclear reactors. They also result from radioactive decay processes deep within the Earth.

Detecting antineutrinos requires a huge detector the size of a small office building, housed about a mile underground to shield it from cosmic rays that could yield false positive results. Inside the detector, scientists detect antineutrinos when they crash into a hydrogen atom. The collision produces two characteristic light flashes that unequivocally announce the event. The number of events scientists detect relates directly to the number of atoms of uranium and thorium inside the Earth. And the decay of these elements, along with potassium, fuels the vast majority of the heat in the Earth’s interior.

To date, detecting antineutrinos has been painfully slow, with scientists recording only about 16 events per year from the underground detectors KamLAND in Japan and Borexino in Italy. However, researchers predict that three new detectors expected to come online by 2022 — the SNO+ detector in Canada and the Jinping and JUNO detectors in China — will add 520 more events per year to the data stream.

“Once we collect three years of antineutrino data from all five detectors, we are confident that we will have developed an accurate fuel gauge for the Earth and be able to calculate the amount of remaining fuel inside Earth,” said McDonough.
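
The data-collection arithmetic behind that confidence is simple to sketch; the script below just combines the rates quoted in this article (16 events per year today, plus a projected 520 events per year from the new detectors) and is an illustration, not a calculation from the paper itself:

```python
# Back-of-envelope geoneutrino event count for the planned campaign,
# using the per-year rates quoted in the article (illustrative only).
current_rate = 16      # events/year from KamLAND + Borexino today
added_rate = 520       # events/year projected from SNO+, Jinping and JUNO
years = 3              # length of the planned data-collection campaign

total_events = (current_rate + added_rate) * years
print(f"expected events after {years} years: {total_events}")  # 1608
```

Going from roughly 16 events a year to more than 1,600 over three years is what turns a statistics-starved measurement into a usable "fuel gauge."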

The new Jinping detector, which will be buried under the slopes of the Himalayas, will be four times bigger than existing detectors. The underground JUNO detector near the coast of southern China will be 20 times bigger than existing detectors.

“Knowing exactly how much radioactive power there is in the Earth will tell us about Earth’s consumption rate in the past and its future fuel budget,” said McDonough. “By showing how fast the planet has cooled down since its birth, we can estimate how long this fuel will last.”

Electrostatic Lenses for Wigner Entangletronics

Electrostatic lenses are used for manipulating electron evolution and are therefore attractive for applications in novel quantum engineering disciplines, in particular in entangletronics (i.e. entangled electronics). A fundamental aspect involved in the manipulation of the electron dynamics is the set of processes maintaining coherence; coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets. However, so-called scattering processes strive to counteract coherence and therefore have a strong impact on the entire process. A physically intuitive way of describing the coherence processes and the scattering-caused transitions to classical dynamics has been entirely missing, impeding the overall progress towards devising novel coherence-based nanodevices in the spirit of entangletronics.

To tackle this issue, Paul Ellinghaus, Josef Weinbub, Mihail Nedjalkov, and Siegfried Selberherr (TU Wien, Austria) have expressed the new quantifying theory of coherence (derived recently from the theory of entanglement) in the Wigner formalism and, in this setting, discuss a lens-splitting simulation conducted with the group’s simulator ViennaWD. The signed particle model of Wigner evolution enables physically intuitive insights into the processes maintaining coherence. Both coherent processes and scattering-caused transitions to classical dynamics are unified by a scattering-aware particle model of the lens-controlled state evolution. In particular, the evolution of a minimum-uncertainty Wigner state, controlled by an electrostatic lens, is analyzed in Wigner function terms. It is shown that cross-domain phase space correlations maintain the coherence, while scattering impedes this exchange.

Overall, the work shows the importance of the Wigner signed-particle model in the novel field of entangletronics and paves the way for entirely novel devices and structures in which coherence and entanglement serve as fundamental mechanisms of operation.


New small angle scattering methods boost molecular analysis

A dramatic leap forward in the ability of scientists to study the structural states of macromolecules such as proteins and nanoparticles in solution has been achieved by a pair of researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab). The researchers have developed a new set of metrics for analyzing data acquired via small angle scattering (SAS) experiments with X-rays (SAXS) or neutrons (SANS). Among other advantages, this will reduce data-collection times by a factor of up to 20.

“SAS is the only technique that provides a complete snapshot of the thermodynamic state of macromolecules in a single image,” says Robert Rambo, a scientist with Berkeley Lab’s Physical Biosciences Division, who developed the new SAS metrics along with John Tainer of Berkeley Lab’s Life Sciences Division and the Scripps Research Institute.

“In the past, SAS analyses have focused on particles that were well-behaved in the sense that they assume discrete structural states,” Rambo says. “But in biology, many proteins and protein complexes are not well-behaved; they can be highly flexible, creating diffuse structural states. Our new set of metrics fully extends SAS to all particle types, well-behaved and not well-behaved.”

Rambo and Tainer describe their new SAS metrics in a paper titled “Accurate assessment of mass, models and resolution by small-angle scattering.” The paper has been published in the journal Nature.

Says co-author Tainer, “The SAS metrics reported in our Nature paper should have game-changing impacts on accurate high-throughput and objective analyses of the flexible molecular machines that control cell biology.”

In SAS imaging, beams of X-rays or neutrons sent through a sample produce tiny collisions between the X-rays or neutrons and nano- or subnano-sized particles within the sample. The way these collisions scatter is unique to each particle and can be measured to determine the particle’s shape and size. The analytic metrics developed by Rambo and Tainer are predicated on Rambo’s discovery of an SAS invariant, meaning its value does not change no matter how or where the measurement is performed. This invariant has been dubbed the “volume-of-correlation,” and its value is derived from the scattered intensities of X-rays or neutrons that are specific to the structural states of particles, yet independent of their concentrations and compositions.

“The volume-of-correlation can be used for following the shape changes of a protein or nanoparticle, or as a quality metric for seeing if the data collection was corrupted,” Rambo says. “This SAS invariant applies equally well to compact and flexible particles, and utilizes the entire dataset, which makes it more reliable than traditional SAS analytics, which utilize less than 10 percent of the data.”

The volume-of-correlation was shown to also define a ratio that determines the molecular mass of a particle. Accurate determination of molecular mass has been a major difficulty in SAS analysis because previous methods required an accurate particle concentration, the assumption of a compact near-spherical shape, or measurements on an absolute scale.

“Such requirements hinder both accuracy and throughput of mass estimates by SAS,” Rambo says. “We’ve established a SAS-based statistic suitable for determining the molecular mass of proteins, nucleic acids or mixed complexes in solution without concentration or shape assumptions.”

The combination of the volume-of-correlation with other metrics developed by Rambo and Tainer can provide error-free recovery of SAS data with a signal-to-noise ratio below background levels. This holds profound implications for high-throughput SAS data collection strategies not only for current synchrotron-based X-ray sources, such as Berkeley Lab’s Advanced Light Source, but also for the next-generation light sources based on free-electron lasers that are now being designed.

“With our metrics, it should be possible to collect and analyze SAS data at the theoretical limit,” Rambo says. “This means we can reduce data collection times so that a 90-minute exposure time used by commercial instruments could be cut to nine minutes.”

Adds Tainer, “The discovery of the first X-ray scattering invariant coincided with the genesis of the Berkeley Lab some 75 years ago. This new discovery of the volume-of-correlation invariant unlocks doors for future analyses of flexible biological samples on the envisioned powerful next-generation light sources.”

How selenium compounds might become catalysts

Traditionally, metal complexes are used as activators and catalysts. They form full, i.e. covalent, bonds with the molecule whose reactions they are supposed to accelerate. However, the metals are often expensive or toxic.

Weaker bonds suffice

In recent years, it has become evident that a covalent bond is not absolutely necessary for activation or catalysis. Weaker bonds, such as hydrogen bonds, can be sufficient. Here, the bond forms between a positively polarised hydrogen atom and the negatively polarised centre of another molecule. In the same way as hydrogen, elements of group 17 of the periodic table, namely halogens such as chlorine, bromine and iodine, can form weak bonds — and thus serve as activators or catalysts.

Stefan Huber’s team transferred this principle to elements from group 16 of the periodic table, i.e. chalcogens. The researchers used compounds with a positively polarised selenium atom. It forms a weak bond to the substrate of the reaction, whose transformation was accelerated 20- to 30-fold as a result.

For comparison purposes, the chemists also tested compounds in which they’d replaced the selenium centre with another element. Molecules without selenium did not accelerate the reaction. “Consequently, the observed effect can be clearly attributed to selenium as the active centre,” says Huber.

Better than sulphur

In earlier studies, only one comparable case of chalcogen catalysis had emerged; there, sulphur was used instead of selenium. “As selenium can be polarised more easily than sulphur, it has greater potential as a catalyst component in the long term,” explains Stefan Huber. “In combination with halogen bonds, chalcogen bonds have added two fascinating mechanisms to the chemists’ repertoire, for which there is no known equivalent in nature, for example in enzymes.”

In the next step, the team plans to demonstrate that selenium compounds can be utilised as adequate catalysts. At present, the researchers refer to them as activators, as relatively large amounts of the substance are required to trigger the reaction. This is because the term catalyst cannot be used until the amount of the necessary selenium compounds is smaller than the amount of the starting materials required for the reaction.

Birds Of A Feather Flock Like A Magnetic System

An interdisciplinary team of physicists, biologists and biophysicists has developed a model that can imitate the mesmerizing patterns of starling flocks. By drawing an unlikely parallel between starling flocks and magnetic systems, the scientists were able to describe this vastly complicated biological system using only a few physical equations.

All sciences can be boiled down to the observation and prediction of patterns. Sometimes patterns that seem totally unrelated resemble each other in some way, like human eyes and nebulae, or hurricanes and galaxies. Sometimes these things have nothing in common except the associations we contrive, but sometimes they actually do. For example, fractal patterns, which can be found in tree roots, river branches and lightning strikes, all share a similar appearance and can also be represented by a common physical principle.

“The difficult thing is figuring out how to do a proper study to test the extent of these analogies,” said Frederic Bartumeus, an expert in movement ecology from the Centre for Advanced Studies of Blanes in Spain. “To see how accurately one can describe a biological system using concepts from statistical physics, for example.”

This is exactly what the scientists from the École Normale Supérieure in France and the Institute for Complex Systems in Italy have done. In their recent paper published in Nature Physics in August, a team led by physicist Thierry Mora successfully modeled how starling flocks behave by modifying existing theories on magnetic systems.

A flock of tiny little magnets

Largely driven by the demand for better computing and data storage devices, for decades physicists have studied how to manipulate magnetic materials for practical uses. But physicists being physicists, it is not enough just to know how; they also needed to know why.

They have learned that inside a ferromagnet — like a fridge magnet — there are billions and trillions of individual magnetic dipoles. Each dipole is itself essentially a tiny magnet, and they all have to line up for the big magnet to work. So the physicists developed and tested models and theories on what happens at microscopic scales inside magnets — how individual dipoles behave, how each of them interacts with their neighbors and how these interactions between these tiny magnets affect the big magnet.

Similar to a magnetic dipole, an individual starling adjusts itself depending on its neighbors. While there have been other parallel efforts that have investigated flocking starlings, Mora’s team has taken a different approach to the problem — by studying the flocks using theories originally intended for magnetic systems.

By carefully modifying theories of magnetic dipole interactions, they were able to develop a model that can accurately describe the behavior of flocking starlings.

Essentially the model simulates the interactions between neighboring individuals within the group. It not only predicts the movements of individual starlings, but more importantly the time it takes for an individual’s adjustment to affect the movement of the entire flock.
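
The article does not reproduce the team’s equations, but the core idea it describes (each individual aligning with the average heading of its neighbors, much as magnetic dipoles line up with theirs) can be sketched with a minimal Vicsek-style flocking model. This is an illustrative stand-in, not the model from the Nature Physics paper, and all parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, R = 200, 10.0, 1.0            # birds, box size, interaction radius
speed, noise, steps = 0.3, 0.15, 200

pos = rng.uniform(0, L, (N, 2))      # random starting positions
theta = rng.uniform(-np.pi, np.pi, N)  # random starting headings

for _ in range(steps):
    # each bird adopts the mean heading of all neighbors within radius R
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)         # periodic boundaries (wrap-around box)
    near = (d ** 2).sum(-1) < R ** 2
    theta = np.arctan2((near * np.sin(theta)).sum(1),
                       (near * np.cos(theta)).sum(1))
    theta += noise * rng.uniform(-np.pi, np.pi, N)  # imperfect alignment
    pos = (pos + speed * np.column_stack((np.cos(theta),
                                          np.sin(theta)))) % L

# polar order parameter: ~0 for random headings, 1 for a perfectly aligned flock
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"flock order after {steps} steps: {order:.2f}")
```

With weak noise the order parameter climbs toward 1 as local alignments spread through the whole group; turn the noise up and the ordered, flock-like phase disappears. That order-disorder transition is exactly the kind of behavior borrowed from the physics of magnetic systems.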

The bigger picture

“People have also looked into collective motion at a cellular scale, for example in biological studies like tissue repairs,” said Mora. “The same class of models have been proposed to describe those motions, … basically any system where agents move by themselves.”

A better understanding of collective systems in general can have an impact on a broad range of subjects, from crowd control to bacteria growth and even the design of future self-propelled medical nanobots.

“Of course there will be some conditions for what kind of system this model can be applied to,” said Bartumeus, “but to have any kind of transferable knowledge among different biological systems, that’s already magic for a biologist.”

Bird flocks and magnetic systems have been studied before, but mostly by biologists and physicists working in different buildings. Now we know that the two share something in common, and from this, new connections may emerge.

“Sometimes it’s difficult to find physicists who are willing to study biology, or biologists who are interested in physics,” said Bartumeus. “There’s a pool of theories in physics that can be used to describe collective motion in biology, and this research has just opened another door.”

Physicists Produce World’s First Sample of Metallic Hydrogen

Originally theorized by Princeton physicists Eugene Wigner and Hillard Bell Huntington in 1935, metallic hydrogen is ‘the holy grail of high-pressure physics.’

Theoretical work suggests a wide array of interesting properties for this material, including high temperature superconductivity and superfluidity (if a liquid).

To create it, Harvard University physicists Dr. Ranga Dias and Professor Isaac Silvera squeezed a tiny hydrogen sample to 495 GPa (gigapascals) — greater than the pressure at the center of the Earth.

At those extreme pressures, solid molecular hydrogen — which consists of hydrogen molecules on the lattice sites of the solid — breaks down, and the tightly bound molecules dissociate and transform into atomic hydrogen, which is a metal.

“We have studied solid molecular hydrogen under pressure at low temperatures,” the researchers said.

“At a pressure of 495 GPa hydrogen becomes metallic with reflectivity as high as 0.91.”

“We fit the reflectance using a Drude free electron model to determine the plasma frequency of 32.5 ± 2.1 eV at T = 5.5 K, with a corresponding electron carrier density of 7.7 ± 1.1 × 10²³ particles/cm³, consistent with theoretical estimates of the atomic density.”

“The properties are those of an atomic metal,” they noted.
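
The two quoted numbers are mutually consistent: in a Drude free-electron metal, the carrier density n follows from the plasma frequency via n = ε₀·mₑ·ωₚ²/e². A quick check with standard physical constants (this script is just that consistency check, not part of the paper):

```python
# Recover the quoted carrier density from the quoted plasma frequency
# using the Drude free-electron relation n = eps0 * m_e * wp^2 / e^2.
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e = 9.1093837015e-31   # electron mass, kg

wp = 32.5 * e / hbar                # 32.5 eV converted to rad/s
n_m3 = eps0 * m_e * wp**2 / e**2    # carrier density in electrons/m^3
n_cm3 = n_m3 * 1e-6                 # convert to electrons/cm^3
print(f"carrier density: {n_cm3:.1e} electrons/cm^3")
```

The result comes out at about 7.7 × 10²³ electrons/cm³, matching the value the team reports.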

To create the material, the authors turned to one of the hardest materials on Earth — diamond.

But rather than natural diamond, they used two small pieces of carefully polished synthetic diamond which were then treated to make them even tougher and then mounted opposite each other in a device known as a diamond anvil cell.

“It was really exciting,” Professor Silvera said.

“Ranga was running the experiment, and we thought we might get there, but when he called me and said, ‘The sample is shining.’ I went running down there, and it was metallic hydrogen.”

“I immediately said we have to make the measurements to confirm it, so we rearranged the lab, and that’s what we did.”

“It’s a tremendous achievement, and even if it only exists in this diamond anvil cell at high pressure, it’s a very fundamental and transformative discovery.”

While the work offers a new window into understanding the general properties of hydrogen, it also offers tantalizing hints at potentially revolutionary new materials.

“One prediction that’s very important is metallic hydrogen is predicted to be meta-stable,” Professor Silvera explained.

“That means if you take the pressure off, it will stay metallic, similar to the way diamonds form from graphite under intense heat and pressure but remain diamonds when that pressure and heat are removed.”

“Metallic hydrogen may have important impact on physics and perhaps will ultimately find wide technological application,” the researchers said.

“A looming challenge is to quench metallic hydrogen and, if that succeeds, to study its temperature stability to see if there is a pathway for production in large quantities.”

Uranium-based compound improves manufacturing of nitrogen products

Despite being widely used, ammonia is not that easy to make. The main method for producing ammonia on an industrial level today is the Haber-Bosch process, which uses an iron-based catalyst, temperatures around 450°C and a pressure of 300 bar — almost 300 times the pressure at sea level.

The reason is that molecular nitrogen — as found in the air — does not react very easily with other elements. This makes nitrogen fixation a considerable challenge. Meanwhile, numerous microorganisms have adapted to perform nitrogen fixation under normal conditions and within the fragile confines of a cell. They do this by using enzymes whose biochemistry has inspired chemists for applications in industry.

The lab of Marinella Mazzanti at EPFL synthesized a complex containing two uranium(III) ions and three potassium centers, held together by a nitride group and a flexible metalloligand framework. This system can bind nitrogen and split it in two under mild, ambient conditions: adding hydrogen and/or protons, or carbon monoxide, to the resulting nitrogen complex cleaves the molecular nitrogen, which then bonds with hydrogen and carbon.

The study proves that a molecular uranium complex can transform molecular nitrogen into value-added compounds without the need for the harsh conditions of the Haber-Bosch process. It also opens the door for the synthesis of nitrogen compounds beyond ammonia, and forms the basis for developing catalytic processes for the production of nitrogen-containing organic molecules from molecular nitrogen.

Scientists Capture ‘Spooky Action’ In Photosynthesis

Photosynthesis and other vital biological reactions depend on the interplay between electrically polarized molecules. For the first time, scientists have imaged these interactions at the atomic level. The insights from these images could help lead to better solar power cells, researchers added.

Atoms in molecules often do not share their electrons equally. This can lead to electric dipoles, in which one side of a molecule is positively charged while the other side is negatively charged. Interactions between dipoles are critical to biology; for instance, the way large protein molecules fold often depends on how the electric charges of dipoles attract or repel each other.

One process where dipole coupling is key is photosynthesis. During photosynthesis, dipole coupling helps chromophores — molecules that can absorb and release light — transfer the energy that they capture from sunlight to other molecules that convert it to chemical energy.

Intriguingly, a consequence of dipole coupling is that chromophores may experience a strange phenomenon known as quantum entanglement. Quantum physics suggests that the world is a fuzzy, surreal place at its very smallest levels. Objects experiencing quantum entanglement are better thought of as a single collective than as standalone objects, even when separated in space. Quantum entanglement means that chromophore properties can strongly depend on the number, orientations and positions of their neighbors.

Understanding the effects that dipole coupling might have on chromophores might help shed light on photosynthesis and light-harvesting applications such as solar energy. However, probing these interactions requires imaging chromophore activity with atomic levels of precision. Such a task is well beyond the capabilities of light-based microscopes, which are currently limited to a resolution slightly below 10 nanometers (billionths of a meter) at best, said Guillaume Schull, a physicist at the University of Strasbourg in France. In comparison, a hydrogen atom is roughly one-tenth of a nanometer in diameter.

Instead of relying on light to illuminate and image chromophores, scientists in China used electrons. Scanning tunneling microscopes bring extremely sharp electrically conductive tips near surfaces they scan. Quantum physics suggests that electrons and other particles do not have one fixed location until they interact with something else, and so each has some chance of existing anywhere. As such, electrons from the microscope’s tip can “tunnel” to whatever the microscope is scanning. A fraction of these electrons lose energy during tunneling, which gets emitted as light that the microscope uses to image targets with atomic-scale resolution.

The researchers experimented with chromophores made of a purple dye known as zinc phthalocyanine. These chromophores were each about 1.5 nanometers wide, and they shone red light when the microscope excited them.

The scientists used the microscope’s tip to push chromophores together. When the chromophores were roughly 3 nanometers apart, the spectra of light they gave off began shifting.

“I was quite surprised by the dramatic spectral change when two isolated molecules were simply pushed together,” said study co-senior author Zhenchao Dong, a physical chemist at the University of Science and Technology of China in Anhui.

The research team’s theoretical simulations suggest these changes were a direct visualization of dipole coupling between the chromophores.

“Dipole-dipole interactions play an important role in many biological and photophysical processes. To my knowledge, it is the first time that one has directly imaged them with sub-molecular resolution,” said Schull, who did not participate in this study. “This is really impressive.”

The scientists experimented with clusters of up to four chromophores. The varieties of light from these chromophores suggested they might have been entangled. Future research can explore dipole coupling in more complex arrangements — for example, in 3-D systems, Dong said. He and his colleagues detailed their findings in the March 31 issue of the journal Nature.

By analyzing how molecules interact and exchange energy, “the most important implication of these findings is the possibility to understand and therefore engineer molecular structures for efficient solar-energy conversion devices,” said Elisabet Romero, a biophysicist at VU University Amsterdam, who did not take part in this research. A better understanding of dipole coupling might also yield insights into the function of molecular structures such as catalysts, Romero added.

Scientists know that you pee in the pool

We know you would never do it. But some people pee in swimming pools and hot tubs. This isn’t just a gross habit. When chlorine reacts with urine, it creates chemicals that can irritate eyes and lungs. Now researchers can measure this disgusting behavior. They’ve found a simple way to estimate the volume of urine in a pool.

The technique could help people decide when to change some or all of the water in a pool or hot tub, the researchers say. But the new research isn’t really meant to create new rules for pool managers. It’s supposed to emphasize a message: Don’t pee in the pool!

By itself, urine in pools isn’t a problem. That’s because a healthy person’s pee is typically sterile, or germ-free, says Lindsay Blackstock. She’s an analytical chemist at the University of Alberta in Edmonton, Canada. But pool water also contains chlorine, a chemical that kills germs. Trouble can arise when that chlorine reacts with urine. It can trigger the production of dozens of new byproducts. Many of these new chemicals will cause no harm. But some, especially one called trichloramine (Try-KLOR-ah-meen), are known irritants.

Even if you’ve never heard of trichloramine, you’ve probably smelled it. That distinct “swimming pool smell” at most pools doesn’t come from the chlorine, notes Blackstock. It’s trichloramine. It can sting the eyes. The pungent chemical also can irritate the lungs.

As pee in a pool increases, the amounts of trichloramine will too. The more trichloramine there is, the more irritating it can be to swimmers. So Blackstock and her teammates wanted to see if they could estimate how much urine was present in pool water. There’s no simple way to test for urine directly. (Have you ever heard that pool water has a chemical in it that will change color if you pee? That’s only a myth.)

So the researchers needed a marker for the urine — some other substance that would signal the likely presence of pee. And that’s what caused them to focus on acesulfame (ASS-eh-sul-faym) potassium. It’s an artificial sweetener used in foods and drinks. It’s sold under the brand names Sunett and Sweet One. The chemical is also called Ace-K for short.

It makes a good marker for pee, says Blackstock. For one, it has no natural sources and is very stable. It doesn’t break down at normal temperatures, which is why many food manufacturers use Ace-K. Even after being stored in foods at room temperature for 10 years, it won’t have broken down. It also won’t break down in pools or be removed during water-cleanup treatments.

Moreover, Ace-K passes right through the human body without being digested. That makes it a great choice as a low-calorie sweetener (the body doesn’t get any energy from it). But it also made Ace-K a good choice for their study, says Blackstock. The substance doesn’t leave the body in sweat, breath or poop. Ace-K only leaves the body in urine. And when it comes out, it will be the same form of the chemical as had been ingested.

Foul findings

First, the researchers needed to know how much Ace-K is present in the average person’s urine. They collected urine samples from 20 people and mixed them together. Each milliliter of urine (about one-fifth of a teaspoon) contained about 2.36 micrograms of Ace-K.

Then, on 15 days in August 2016, the team collected water samples from two swimming pools in a city in Canada. One pool held about 420,000 liters (110,000 gallons). The other had about twice that volume. On the same days, the researchers also collected three samples from the city’s water supply.

Liter-sized samples of the city’s tap water contained between 12 and 20 nanograms of Ace-K. (Remember, Ace-K doesn’t decompose during water treatment.) If there were no pee in the pools, they should have had similar levels of Ace-K. The smaller pool, though, had 156 nanograms of Ace-K per liter of water. And the larger pool had even more, about 210 nanograms per liter. That adds up to about 30 liters (almost 8 gallons) of urine in the small pool. The larger pool held a whopping 75 liters (almost 20 gallons) of pee!
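
Those totals can be reproduced with back-of-envelope arithmetic. The sketch below uses this article’s rounded figures and assumes a tap-water baseline of about 15 ng/L (the midpoint of the 12-20 ng/L range), so it lands a little below the reported 30 L and 75 L but in the same ballpark:

```python
# Rough reproduction of the pool-urine estimate from the article's numbers.
# The ~15 ng/L tap-water baseline is an assumption, not a value from the paper.
def urine_liters(pool_liters, pool_ng_per_L, tap_ng_per_L=15.0,
                 urine_ug_per_mL=2.36):
    """Urine volume (L) implied by the excess Ace-K in a pool."""
    excess_ug = (pool_ng_per_L - tap_ng_per_L) * pool_liters / 1000.0
    urine_mL = excess_ug / urine_ug_per_mL   # Ace-K mass -> urine volume
    return urine_mL / 1000.0                 # mL -> L

print(f"small pool: {urine_liters(420_000, 156):.0f} L of urine")
print(f"large pool: {urine_liters(840_000, 210):.0f} L of urine")
```

The logic is simply: excess Ace-K concentration times pool volume gives the total sweetener mass, and dividing by the Ace-K concentration of average urine converts that mass back into liters of pee.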

These pools probably aren’t unusual, says Blackstock. In 2014, the same researchers found Ace-K in unusually high concentrations in 21 public swimming pools, 8 hot tubs and even a private swimming pool. In other words, every pool and hot tub they tested had pee in it. Blackstock and her team shared their new findings online March 1 in Environmental Science & Technology Letters.

The team’s approach “is a pretty cool idea,” says Beate Escher. She’s a toxicologist at the Helmholtz Center for Environmental Research in Leipzig, Germany. Researchers have used Ace-K before to measure water pollution, she says, both on and just beneath Earth’s surface. And Ace-K holds some advantages over other substances, such as caffeine, that researchers have used as a marker of urine. Caffeine, for instance, can break down after it leaves the body. “Ace-K is much more stable,” Escher says.

Like Blackstock and her team, Escher suggests the best way to tackle urine in pools is prevention, not clean-up. So please, she urges, don’t pee in the pool: “Self-control is the best thing.”