Sunday, 4 December 2016

Strange Moments in Science: The Battle for the Planck Length

Many people use Wikipedia as a go-to source of information on a topic. People know that anyone can edit Wikipedia at any time, but generally trust that it is largely accurate and trustworthy because of the large number of vigilant volunteers and the policy of requiring sources for facts. A few years ago, I noticed that the article on the Planck length had some very detailed yet weird and incorrect information that was causing a lot of confusion among physics enthusiasts, and I decided to look into it.

This image is taken from Wikimedia Commons and is based on an xkcd comic.

I have written about the Planck length and its physical meaning on PhysicsForums. Besides being a "natural" unit, I stressed that the main significance of the Planck length is that it is approximately the order of magnitude at which quantum gravity becomes relevant, but that the idea that it makes up some sort of "space pixel" is a common misconception. There may be additional significance to it, but that wouldn't be part of established physics.
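For concreteness, the Planck length follows directly from the fundamental constants; here is a quick sketch using CODATA values for ħ, G, and c:

```python
import math

# Fundamental constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: the scale at which quantum gravity is expected to matter
planck_length = math.sqrt(hbar * G / c**3)
print(f"{planck_length:.3e} m")  # ~1.6e-35 meters
```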

The Wikipedia page on the Planck length claimed that a photon with a Planck-length wavelength would collapse into a black hole. This is physically problematic (what does it even mean for a photon to collapse into a black hole?) in part because it violates Lorentz symmetry: you could simply observe the photon from a different reference frame, in which it would have a longer wavelength. Would the black hole "uncollapse" if you moved away from it? Is it only a black hole in some privileged reference frame? There was a lengthy "proof" of this purported fact based on the Schwarzschild metric and the gravitational self-energy of a photon (which doesn't really make sense either, because photons don't have a rest frame...nor are they uniform spheres). If you're using general relativity to contradict special relativity, you have done something wrong.

I thought this was strange, so I wanted to look at the source, which was a paper with a Russian title that in English meant "Comprehending the Universe." I did a search for it, and all that came up was the same Wikipedia page. The author of the referenced paper suspiciously had the same name as the person who had made the majority of the recent edits to the Wikipedia page. The reference is published by Lambert Academic Publishing, which is widely known to be a scammy vanity press that will publish your manuscript and then sell you a paper copy. They have recently been emailing me asking if I want them to publish my thesis. I was suspicious and perturbed, so I made a comment on the Wikipedia talk page mentioning that the reference couldn't be found. The author then posted a link to a page where it could be purchased for $60.

So basically, the situation was that incorrect information was being passed off on Wikipedia as "proven" physical truth, based on an article published in Russian by a vanity press and added to Wikipedia by its author. This was Bad, and it was misleading people, and I wanted to fix it. It can be quite difficult to actually change things on Wikipedia, because of the large number of policies that have to be followed and the protectively zealous editors who will revert many of the changes they see. I figured that if I just started removing the relevant material, it would quickly be undone by this guy who wanted his stuff on Wikipedia. So, I went to a friend who I knew to be a pretty serious Wikipedian, who goes by the nickname Oreo Priest. I explained the situation to him, and he logged in to back me up. He pointed out on the talk page the various policies that were being violated (original research, reliable sources, conflict of interest, etc.) and gave me carte blanche to remove the offending material.

So, I went ahead and started removing it, and when the author protested, my friend and some other Wikipedians backed me up by citing official policies. The author's final argument was that his work was presented at the "5th International Conference on Gravitation and Astrophysics of Asian-Pacific Countries" in Moscow in 2001, and was therefore legit. Then, the final smackdown was laid:

The fact that you also presented your work at a conference doesn't change any of the above points, sorry. 
After that, the pollution stopped, although I haven't checked the guy's profile lately to see what he's up to. A large chunk of the article was removed, and hasn't been added back. However, there hasn't been a lot of improvement to the article since, so the "Theoretical Significance" section is a bit disjointed and sparse. It, and some of the other Planck unit articles, could use the attention of some quantum gravity researchers. Still, it's much better than the page on topological insulators. That page is terrible.

Tuesday, 29 November 2016

The Bremen Physics Catapult

In the town of Bremen in northwest Germany there is a facility where researchers can launch their experimental apparatus by catapult. It is part of a research center called ZARM, the Center for Applied Space Technology and Microgravity (the acronym makes more sense in German).

The tower in all its glory (from the ZARM website).

The main goal of this facility is to provide microgravity environments for physics experiments, on a scale more accessible than orbital space launch or parabolic aircraft. Besides answering the kinds of questions that keep us up at night, like how cats behave in zero gravity, microgravity is a useful method to control the effective gravitational field in an experiment. As Albert Einstein reminds us, true freefall is indistinguishable from inertial rest, and remaining static in a gravitational field is indistinguishable from uniform acceleration. If you want your experiment to take place in a truly inertial reference frame, dropping it down a tower is a simple way to achieve this.

The entire experiment is built into a capsule like this, and then launched.

Initially, it was simply a drop facility: an experiment built into a modular capsule could be raised by an elevator and dropped from the top of the tower, falling 120 meters and experiencing four seconds of freefall. In 2004 the facility was modified to include a catapult launch with an initial speed of 168 km/h, giving it twice the freefall time but making the facility at least eight times as cool.

A gif of a launch and landing, pillaged from this youtube video.
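Those figures check out with simple kinematics: the drop time follows from the fall distance, and launching upward so the capsule just reaches the top roughly doubles the time in freefall. A back-of-the-envelope sketch (the exact drop height here is my assumption):

```python
import math

g = 9.81        # m/s^2
height = 120.0  # m, approximate drop distance in the tower

# Time to fall from rest: h = (1/2) g t^2
t_drop = math.sqrt(2 * height / g)

# Catapult launch: the capsule takes the same time going up as coming
# down, doubling the freefall duration
v_launch = g * t_drop   # launch speed needed to just reach the top, m/s
t_catapult = 2 * t_drop

print(f"drop: {t_drop:.1f} s, catapult: {t_catapult:.1f} s, "
      f"launch speed: {v_launch * 3.6:.0f} km/h")
```

The launch speed that comes out is in the same ballpark as the quoted 168 km/h, which is reassuring.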

Based on a few videos of the facility, it appears that immediately after launch, a crash pad full of protective foam slides into place, protecting the experiment as it lands. Some of the videos are quite well done; check them out if you're interested.

Launch and landing.

I first became aware of this facility when I read a paper (or possibly a news article) about launching a Bose-Einstein condensate in the catapult. In quantum mechanics class, we learn that a system whose wavefunction is initially a Dirac delta will expand isotropically through time-evolution, becoming more spread out. Bose-Einstein condensates, functionally made of cold atoms in an electromagnetic trap, serve as a good experimental system for wavefunction dynamics because they are much larger than an individual particle while still behaving like one. However, none of those quantum mechanics homework problems included an anisotropic acceleration vector like the one we have on Earth's surface, and breaking the symmetry like that causes the condensate to expand differently than it would in true freefall. To measure the predicted isotropic expansion, the research group launched their experiment from the catapult. Because these Bose-Einstein condensates are so sensitive to acceleration, the group proposed that they could be used as accelerometers in spacecraft navigation systems. We will see if that actually happens.

The Bose-Einstein Projectile, from the Humboldt University group.

Not every experiment is as exotic; sometimes the facility is used to characterize apparatus that will eventually be launched on satellites. Some experiments test the expansion of explosions in microgravity rather than atomic condensates, both for science's sake and to gauge safety protocols aboard spacecraft. There are also some biological experiments, for example studying how fish orient themselves and turn in microgravity (I guess they couldn't get the approval to use cats), and others testing how plants perceive gravity (I feel like "how do plants know to grow up" can become a pretty complicated question). The projects are listed on the ZARM website, some in English and some in German.

My experiments typically take place in a little pocket of fluid on a piece of glass on top of a microscope on an optical table in a stationary lab. Compared to launching an experiment from a catapult, that seems rather mundane.

Thursday, 3 November 2016

DNA, topological ropelength, and minimal lattice knots

This post is about some diversions into knot theory, a branch of topology, that I took while contemplating my experiments. My job involves performing and analyzing experiments in which I stretch out DNA molecules with knots in them. We study DNA because it serves as a good model polymer, analogous to the much smaller polymer molecules that plastics and other modern materials are made of, and we study knots because we want to understand how entanglement affects polymer dynamics. There are also some genetic sequencing technologies for which we might want to add or remove knots, but right now I'm mainly just trying to figure out how polymer knots work. The theoretical side of the project (largely carried out by my colleagues Vivek and Liang) involves running simulations of polymer chains with different kinds of knots, or simulating the formation of knots, and thinking about that is where I start to dip into knot theory.

A DNA molecule with two knots in it, from one of my experiments. The molecule is about 50 microns when stretched.

Mathematically, knots are only truly defined in loops, as a knot in an open string can just be slid off the end. Knots are loosely categorized by the minimum number of times the loop crosses over itself in a diagram (see below). The simplest knot, the trefoil, must cross itself a minimum of three times when drawn. For a given crossing number, there can be multiple kinds of knots. The two simplest knots, with three and four crossings, have only one type each; then there are two for five crossings, three for six, seven for seven, and then it starts to explode: 165 for ten crossings, 9988 for thirteen, and over a million for sixteen. Beyond this, it is unknown how many kinds of knots there are (although there are constraints). Each type of knot has certain parameters that are unique to it, called invariants, and calculating these invariants is how knots are distinguished. When dealing with knots in real ropes and polymers and DNA, we have to remember that it doesn't count as a knot unless the ends are closed. If the ends are very far from the interesting knotty part, we can just pretend this is the case without too much issue; for simulations of tightly bunched knots, however, there are algorithms called "minimally interfering closure" that make knots in open strings topologically kosher.

Knots with up to seven crossings.
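The explosion in knot types can be tabulated; here is a small sketch using the standard counts of prime knots per crossing number (the values are quoted from knot tables, not computed):

```python
# Number of distinct prime knots for each minimal crossing number,
# from standard knot tables (3 through 13 crossings)
prime_knot_counts = {
    3: 1, 4: 1, 5: 2, 6: 3, 7: 7, 8: 21, 9: 49,
    10: 165, 11: 552, 12: 2176, 13: 9988,
}

for crossings, count in sorted(prime_knot_counts.items()):
    print(f"{crossings} crossings: {count} knot type(s)")
```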
The typical model of a polymer is a self-avoiding walk, which I've talked about in the past. When the walk is forced to have closed ends, it represents a topologically circular molecule, and can be thought of as a self-avoiding polygon (even though polygons are typically imagined in two dimensions). These self-avoiding polygons can take the form of different types of knots, and in fact it can be shown that as a self-avoiding walk becomes longer and longer, the probability that it contains a knot approaches 100%. A number of computational papers have shown that knotted ring polymers have a size (quantified by the radius of gyration, the standard deviation of the locations of the monomers in the chain) smaller than that of an unknotted chain of the same length. This makes intuitive sense: there are topological constraints which prevent the chain from adopting a widespread conformation.
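The radius of gyration mentioned above is straightforward to compute from a chain's coordinates; a minimal sketch:

```python
import math

def radius_of_gyration(coords):
    """Rg: root-mean-square distance of the monomers from the chain's centroid."""
    n = len(coords)
    cx = sum(x for x, y, z in coords) / n
    cy = sum(y for x, y, z in coords) / n
    cz = sum(z for x, y, z in coords) / n
    sq = sum((x - cx)**2 + (y - cy)**2 + (z - cz)**2 for x, y, z in coords)
    return math.sqrt(sq / n)

# Two monomers one unit apart: each sits 0.5 from the centroid
print(radius_of_gyration([(0, 0, 0), (1, 0, 0)]))  # 0.5
```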

The chains in these simulations tend to be quite "loose," and I was curious about the scaling of tight or  "ideal" knots, which are knots that have the minimum possible length-to-diameter ratio. Each type of knot has a parameter called the ropelength, which is the minimum length required to make a knot of unit diameter. The ropelength can be a difficult quantity to calculate, and one group has written a few papers just slightly hammering down the ropelength of the simplest knot, bringing it from 16.38 in 2001 to 16.37 in 2014.

These knots are "ideal" meaning they have the smallest possible length-diameter ratio.

The ropelength comes up in my research partly because of a theory from Grosberg and Rabin, who derived the free energy of a knot in a polymer. The free energy arises from the fact that the length of polymer in the knot is confined, reducing entropy, and bent, increasing energy. They argued that the energy of a given knot type is minimized at a certain length, and that such a knotted conformation constitutes a metastable local minimum in the free energy (compared to the unknotted global minimum). This implies that an open knotted polymer wouldn't spontaneously unknot itself, but would instead unknot through a slower diffusion process when the knot reaches one end. One of the terms in the Grosberg-Rabin formula is the excess knot contour, the length of polymer in the knot minus the ropelength, so knowing the ropelength is necessary to calculate the metastable knot energy. Another paper argued that metastable knots don't exist, and that is one of the things I'm trying to figure out with my knotted DNA experiments.

The most topologically accurate DNA knot experiment comes from Robert Bao, Heun Jin Lee, and Stephen Quake in 2003, where knots of known topology were tied in individual DNA molecules using optical tweezers. I get the impression that Bao developed the expertise to do this, wrote his paper, and made the sanity-promoting decision never to do it again. In their paper, they measured the amount of DNA in different kinds of knots and the diffusion of the knots along the molecule, and used this to ascertain how much intramolecular friction was occurring within the knot.  Grosberg and Rabin claimed to have postdicted Bao et al.'s findings with their model. In my experiments, unfortunately, the knots are much more complicated and I don't know what kind they are.

In my experiments, I'm interested in figuring out what kind of knots I'm looking at, and one way to do that is to estimate the amount of DNA in the knot and back-extrapolate to figure out the number of essential crossings. To aid in this endeavor, I looked up the known ropelengths of knots up to 11 crossings, which was neither super-easy nor super-difficult to find. I don't think anyone has analyzed this specific relationship before, but not too surprisingly, more crossings tend to mean a longer knot, although there is overlap between knots with adjacent or superadjacent crossing numbers. The relationship fits well with either a linear or square-root function (the best-fit power is 0.8...not sure that's meaningful, but I love power-law fits). Once you get past 11 crossings there are problems, because A. nobody has bothered calculating this for 12-crossing knots (of which there are 2176), and B. the mean isn't a useful parameter anymore because there are two populations, alternating and non-alternating knots, which have their own behaviours. Complex knots often have more in common with other knots of a certain class (e.g. twist or torus knots) than with knots of the same crossing number.

More complex knots require more rope to tie.
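The power-law fit itself is just a linear least-squares fit in log-log space; here is a sketch of the method (the data below are synthetic, since I'm not reproducing the ropelength table here):

```python
import math

def fit_power_law(xs, ys):
    """Fit y = A * x^p by least squares on (log x, log y); returns (A, p)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    p = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    A = math.exp(my - p * mx)
    return A, p

# Synthetic check: data generated from y = 3 * x^0.8 should recover p = 0.8
xs = [3, 4, 5, 6, 7, 8, 9, 10, 11]
ys = [3 * x**0.8 for x in xs]
A, p = fit_power_law(xs, ys)
print(f"A = {A:.3f}, p = {p:.3f}")  # A = 3.000, p = 0.800
```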
My idea was to use this trend to figure out or constrain the topology in my experiments. For example, if my knot has 2 microns of DNA in it, I can figure out (hypothetically) that it must have at most 30 crossings based on extrapolating the ropelength curve or Bao's data, or something like that. I still haven't solved the topology-constraining problem. However, looking into the ropelength scaling got me interested in other aspects of maximally tight knots, even though maximally tight knots don't really occur in my experiments.

In addition to the lengths of ideal knots, I was interested in their size in general, which for a polymer is typically quantified by the radius of gyration; a lot of polymer physics looks at the scaling of the radius of gyration with length. It's established that loose floppy knots grow more slowly than loose floppy rings, but what about tight knots? It turns out the radii of gyration of ideal knots haven't been calculated (at least not to my knowledge), so to examine them I'd have to generate the knots from tables of 100 Fourier coefficients and then discretize and measure them. Instead, I opted to look at a less ideal system: minimal knots on cubic lattices. These were tabulated by Andrew Rechnitzer for a paper on the topic, and exist for knots up to 10 crossings (for which there are 165). They are useful for simulations of knotted polymers because they allow the initialization of complex knots, and the lack of tabulation for more complex knots makes those difficult to systematically simulate.

A minimal 10-crossing knot on a cubic lattice.
These are the knots whose cubic-lattice representations have the fewest possible points, but they are still longer than ideal knots. How much longer? The length of each minimal lattice knot (which I call the cubelength) is roughly 40% greater than that of the equivalent ideal knot, decreasing slightly as a function of crossing number. This makes handwaving sense: you pick up an extra factor of the square root of two every time the lattice forces two steps where a diagonal would do. We can see all the different knots (from 3 to 10 crossings) in the graph below; notice there is overlap between the largest 7-crossing knots and the smallest 10-crossing knots. I had a hypothesis that these tight knotted lattice polygons would be more compact in their scaling (the radius of gyration would increase with a weaker power of length) than regular compact lattice walks, and went to examine it.
Every minimal lattice knot up to 10 crossings (except one of the 10-knots...I only ended up with 164 for some reason).

The size of a random walk grows with the square root, or 0.5 power, of its length. The size of a self-avoiding walk grows faster, with roughly the 0.6 power, for reasons I explain here. A compact globule (the form a polymer takes when it collapses), which grows by wrapping around itself, has a volume proportional to its length, so its radius is expected to grow with the 0.33 power of its length. What do we see when we look at lattice knots? The ensemble of compact lattice knots grows with the 0.33±0.02 power, entirely consistent with a compact globule.

Left: Average sizes of minimal cubic lattice knots as a function of crossing number. Right: the same as a function of their length. I intuitively guessed the function Rg=(L/6)$^{1/3}$ and it fits the data really well.
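These scaling exponents can be estimated numerically in exactly this way; here is a sketch that recovers the 0.5 exponent for plain (non-self-avoiding) random walks on a cubic lattice by fitting the mean radius of gyration against length:

```python
import math
import random

random.seed(1)

def walk_rg(n):
    """Radius of gyration of an n-step simple random walk on a cubic lattice."""
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    pos = (0, 0, 0)
    path = [pos]
    for _ in range(n):
        dx, dy, dz = random.choice(steps)
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        path.append(pos)
    m = len(path)
    c = [sum(p[i] for p in path) / m for i in range(3)]
    sq = sum((p[0]-c[0])**2 + (p[1]-c[1])**2 + (p[2]-c[2])**2 for p in path)
    return math.sqrt(sq / m)

lengths = [50, 100, 200, 400]
mean_rg = [sum(walk_rg(n) for _ in range(500)) / 500 for n in lengths]

# Scaling exponent: slope of log(Rg) vs log(L)
lx = [math.log(n) for n in lengths]
ly = [math.log(r) for r in mean_rg]
mx, my = sum(lx) / 4, sum(ly) / 4
slope = sum((a-mx)*(b-my) for a, b in zip(lx, ly)) / sum((a-mx)**2 for a in lx)
print(f"estimated exponent: {slope:.2f}")  # close to 0.5
```

The same fitting step applied to the lattice-knot ensemble is what gives the 0.33 exponent quoted above.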
Nothing too exciting there, but another question one could ask is whether minimal knots grow faster or slower than random self-attracted walks of comparable length, since polymers scale differently as they grow, before reaching the asymptotic limit. To figure this out, I generated self-avoiding random walks on a cubic lattice up to length 60, biasing the steps they take (with a Metropolis method) to have a strong preference for being next to non-adjacent occupied sites. Even with very strong self-attraction (well past the theta point, in polymer language), the scaling was 0.36 rather than 0.33, but on a graph there is no noticeable difference between the scaling of knots and tight lattice walks.

So, what's the moral of this story? A tangent from my research into DNA knots led me to look into some of the scaling properties of ideal and minimal cubic lattice knots. I didn't find anything too profound: more complex ideal knots are bigger, and minimal cubic knots grow like the compact globules they are. It was a fun little investigation that I don't think anyone had bothered doing before. I didn't find anything interesting enough to publish, so I thought I'd share it here.

Sunday, 30 October 2016

Media appearance: Discover Magazine

An article about my work on falling through the Earth was published in this month's edition of Discover Magazine. The article, based on some phone interviews I did with them, can be found here. Unfortunately you will need a subscription to read it, and because they were nice enough to write this article I'm not going to blatantly infringe their copyright and post a scanned version here. Enjoy, if you can.

Saturday, 22 October 2016

Visit to the Alcator C-Mod Fusion Reactor at MIT

I recently became aware that MIT has an experimental fusion reactor on campus, so I decided to check it out. After a few back-and-forth emails, I ended up joining a tour that was being given to the Tufts energy club (yes, Tufts) on Friday afternoon.

MIT's tokamak research began as an offshoot of its magnet research lab (where MRI was developed), that eventually became so large as to spawn a new center and envelop its host. Alcator is an elision of Alto Campo Toro (high field torus), and this is the third iteration of it. It was recently in the news because they had achieved a fusion reaction at a comparatively high pressure, immediately before it was shut off because the US government diverted all fusion research funds to ITER.

The tour began in the lecture room with an overview of the whys and whats of fusion. In discussing the when, the speaker showed the improvement towards the break-even point in fusion reactors over the past several decades, but said that because tokamaks have gotten so much bigger, the timescale of the experiments has slowed down considerably, and now they have to wait several decades for ITER to be ready before their next big learning experience. I talked more about the timeline to fusion here; it's a bit more pessimistic.

"Why use fusion to give plants energy to make sugar and then bury them for millions of years until they form fuel and burn the fuel to make electricity, when we can just use fusion to make electricity?"

Another amusing tidbit the speaker mentioned is that more money has been spent in recent years making movies about fusion (e.g. The Dark Knight Rises, Spiderman 2, etc) than on fusion research itself. He was also overjoyed when someone asked why tokamaks were shaped like donuts and not spheres and he got to cite the hairy ball theorem.

After the speech we went across the street to the facility. There was a big control room where a few people were watching a video and looking at data from the last experiment. There was also a neat demonstration where a plasma was established by running a high voltage across a mostly-evacuated tube, and then a magnetic field was activated that visibly pinched the plasma.

We then entered the room with the tokamak. Most of the room, however, was full of electrical equipment. One of the neatest things I learned was how the thing is actually powered. It requires very powerful, short-lived surges of electricity to power the magnetic coils, on the order of 200 megawatts. To achieve this, they draw power from the grid over time and store it in a massive rotating flywheel. The infrastructure for transforming the grid electricity is actually much bigger than the tokamak and its other supporting infrastructure. I was informed that the flywheel is bigger than the tokamak and its shielding, but I wasn't allowed to see it. It is apparently 75 tons and spins at up to 1800 RPM, storing 2 gigajoules of kinetic energy.

The flywheel at the Joint European Torus, bigger than Alcator's.
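Those flywheel numbers are roughly self-consistent: treating it as a uniform solid disk (my assumption, since I couldn't see it), the quoted mass, spin rate, and stored energy imply a plausible size:

```python
import math

mass = 75_000.0  # kg (75 tons)
rpm = 1800.0
energy = 2e9     # J (2 gigajoules)

omega = rpm * 2 * math.pi / 60       # angular speed, rad/s
inertia = 2 * energy / omega**2      # from E = (1/2) I omega^2

# Uniform solid disk model: I = (1/2) M R^2
radius = math.sqrt(2 * inertia / mass)
print(f"implied radius: {radius:.1f} m")  # a couple of meters
```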

On the wall outside the reactor room was the floor plate from a previous version. It was large.

The tokamak itself is big enough for a person to crouch in, and is surrounded by the wires that generate the magnetic field, and a lot of shielding. The whole thing is about the size of four elephants. It is painted light blue on the outside. On top there is a tank of liquid nitrogen that they use to cool down the copper wires after each heating pulse (which lasts about two seconds). People took photos and asked our tour guide some questions.

Here I am with the reactor (light blue tank thing behind me). You can see the tank of liquid nitrogen on top. The black pipe behind me is the power supply cable for the magnets. Not sure what the cheese grater thingy is.

Afterwards in the lobby there were a few artefacts we could play with, including some of the copper wires used for the field and the superconductors that are replacing them, and some tungsten shielding plates that were visibly damaged from years of hot plasma abuse.

Overall, I didn't learn much that I didn't already know about fusion (except the flywheel!) but I'd beat myself up if I knew I worked down the street from a fusion reactor and never went to see it. I appreciate the Plasma Science and Fusion Center guys for putting on the tour, and the Tufts club for (perhaps unknowingly?) letting me crash their event.  Also, one of the safety signs had a typo.

Monday, 10 October 2016

The Nested Logarithm Constants

Depending on your philosophical interpretation of mathematics, I have either discovered or invented a new number: the Nested Logarithm Constant.

log(1 + log(2 + log(3 + log(4 + ...)))) = 0.8203...
As is natural, the logarithms are base-e. I first calculated this number while reading about the nested radical constant, and I wanted to see if something similar existed for logarithms...and it did! While I cannot be certain that others have not examined this number previously, googling the first few digits only yields random lists of digits. I have entered its decimal expansion into the Online Encyclopedia of Integer Sequences, the closest thing that exists to a Wiki of mildly interesting numbers.
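The constant is easy to evaluate numerically by starting at a finite depth and unwinding the nesting from the inside out; a sketch:

```python
import math

def nested_log_constant(depth=30):
    """Evaluate log(1 + log(2 + log(3 + ... + log(depth)))) from the inside out."""
    value = math.log(depth)
    for k in range(depth - 1, 0, -1):
        value = math.log(k + value)
    return value

print(f"{nested_log_constant():.5f}")  # ~0.82036
```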

It converges quite rapidly with respect to the innermost integer in the nesting, being effectively exact after about 5 terms. The plot below shows the convergence with respect to that final integer, and the fractional deviation from the asymptotic value, with an exponential decay shown for comparison.

While it seems obvious empirically that this converges, it is not proven [by me, see update below]. As far as I can tell, a proof of the convergence of the nested radical constant is also lacking. I would suspect that the nested logarithm constant is both irrational and transcendental, and there is a short proof that the logs of integers are irrational, but I'm not aware of a proof that nested logarithms are irrational.

There are a few extensions one can examine. Changing the base of the logarithm changes the value of the final number, decreasing with increasing base towards zero, and towards infinity with decreasing base.

Another extension is to include an exponent on the integers, such that the number becomes:


When N is large, 2$^N$ will be much larger than log(3$^N$+...), so we can take the N out of the exponent and see that $\alpha_{N}$ approaches log(1+N·log(2)) ≈ log(N)+log(log(2)). The difference between $\alpha_{N}$ and its large-N approximation decreases inversely with N, with a prefactor empirically* very close to the square root of two. With this reciprocal correction, I can almost write down a closed-form expression for $\alpha_{N}$, except it starts to diverge close to N=1, and I don't know if that's a coincidence or not (where does the square root come from?). I would be interested in exploring this further to try to find a closed-form expression for the 0.82...constant.

The nested logarithm constants as a function of the exponent, and the deviation of two approximations.
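The large-N behaviour is simple to check numerically; a sketch comparing $\alpha_N$ to the approximation log(1+N·log(2)) (evaluating from the inside out as before; Python's unbounded integers keep k$^N$ exact at modest depth):

```python
import math

def nested_log_alpha(N, depth=25):
    """Evaluate log(1 + log(2^N + log(3^N + ... + log(depth^N)))) inside-out."""
    value = N * math.log(depth)  # innermost term: log(depth^N)
    for k in range(depth - 1, 1, -1):
        value = math.log(k**N + value)
    return math.log(1 + value)

for N in (5, 10, 20):
    approx = math.log(1 + N * math.log(2))
    print(f"N={N}: alpha={nested_log_alpha(N):.4f}, log(1+N*log2)={approx:.4f}")
```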
There is one more related number of interest. I can calculate log(1+2·log(1+3·log(1+4·log(...)))) which converges to roughly 1.663... I have not investigated this in as much detail, but it is empirically very close to the Somos Quadratic Recurrence Constant, which can be calculated as √(1·√(2·√(3·√(4···)))) and converges to 1.662. Unless I'm led to believe otherwise, I will assume that this is a coincidence.

This investigation started with me playing around with logarithms and seeing what came out, and led to a few findings that I think are kind of interesting. Maybe when I have the time and energy I'll investigate further and be able to determine whether the nested logarithm constant has an exact value. There is a journal called Experimental Mathematics, which is basically what this is, so perhaps I can send it there.

UPDATE: A commenter on reddit, SpeakKindly, has put forward a simple proof of convergence by establishing an upper bound using induction. I will repeat it here for the readers:

First, consider the related term log(a+log(a+1+log(a+2+...log(a+k)))) for some k and a>1. SpeakKindly proved this is less than a for all k. For the base case k=0, there are no nested terms and we just have log(a), which is less than a. Then, assuming the claim holds for k-1 nestings starting at a+1, the inner term log(a+1+log(a+2+...)) is less than a+1, so log(a+log(a+1+log(a+2+...))) < log(a+(a+1)) = log(2a+1), which is always less than a for a>1. The nested logarithm constant is log(1+(the above term for a=2)), which is less than log(1+2); thus the nested logarithm constant is less than log(3).

Another user, babeltoothe, used a different argument using the integral definition of the logarithm, where the upper limit of each integral was another integral.

*Best power-law fit is 1.004√2 for the prefactor and -0.998 for the exponent.

Saturday, 27 August 2016

What's the deal with fusion power?

"Fusion is the energy of the future...and always will be."
I recently attended a seminar at the MIT Plasma Science and Fusion Center, given by a director of the ITER project, about the prospects of a commercial fusion reactor. I decided to write a post about why fusion always seems to be 50 years away and what needs to happen to make it a reality. What it comes down to is that it requires a huge initial investment that nobody wants to pay for.

The SimCity 2000 transit advisor understands the challenges facing the fusion plants in his city.

The most-studied system for fusion is the tokamak (from the Russian for "toroidal chamber with magnetic coils"), which is a big donut-shaped chamber with a plasma inside and a strong applied magnetic field that keeps the plasma confined in a thin ring-shaped region in the middle of the torus. The plasma is heated by driving it with electric fields and inducing Joule heating. If the plasma gets hot enough, it can begin to fuse hydrogen into helium, which produces a lot of excess energy in the form of neutrons, gamma rays, and overall heat. The heat is used to boil water to run a turbine. While the sun typically fuses four protons into helium (the p-p chain), the most accessible reaction is deuterium-tritium fusion, which has a lower energy barrier. Tritium is extremely rare in nature, but it can be produced by surrounding a nuclear reactor or the tokamak itself with lithium, which absorbs neutrons from the reaction and transmutes into tritium.

The inside of the Joint European Torus tokamak, with a view of the plasma.
Fusion research began after the Second World War, not only towards the development of hydrogen bombs but also for power generation, with plasma systems getting bigger and hotter over the next few decades. Due to the Cold War, a lot of this research was classified.

The sad graph

In 1976, fusion researchers in the US wrote a report about future prospects for fusion power, suggesting that with appropriate funding fusion power would be realized within 15 to 30 years. They also predicted that with current levels of funding, they would not be able to achieve their goals in the foreseeable future. The actual funding reality since 1976 has been even bleaker than that. So it's not just scientists always saying that it's 20 years away and not making any progress, it's the lack of investment that prevented those 20 years from counting down. Their prediction was correct.

Prediction of fusion progress from the 1976 report based on funding, compared to the actual historic funding for fusion in the US. Even though this graph looks like it was drawn in Microsoft Paint, I have looked up the source material and this graph represents it accurately.
The reality is that the science and technology required for fusion power is ready to go, there are just a number of engineering and economic challenges that need to be overcome. Not engineering challenges like "how do you make a magnet that big" but more like "we need a gigantic building with complicated plumbing that won't expose its workers to radiation." We know how to build a giant magnet donut and fill it with plasma, it just needs to be made big enough to generate power, and big is expensive.

The Future

The next step in this direction is ITER (International Thermonuclear Experimental Reactor), a giant tokamak being built in France. Its main goal, from what I've read, is to generate ten times more power than it requires to operate (although this still won't make it viable as a power plant). It has gone massively over budget and is now expected to run a tab of 20 gigaeuros. I think, more so than any plasma science objectives, this will shed a lot of light on what is required to engineer a facility of this magnitude.

ITER, dream and reality.

At the fusion seminar I attended, a director of ITER was talking about his ideas for a commercially viable fusion reactor, which would have to be much bigger than ITER. One of the reasons tokamaks are so expensive is that they have to be really big. The fusion reactor would require about 500 megawatts just to maintain the fusion reaction: beyond that, power can be sold to the grid. However, if it's just selling an extra 50 megawatts after that 500, the electricity would have to be extremely expensive to cover the costs of the plant. It is estimated that a fusion plant would have to generate at least 2.5 gigawatts of electricity (slightly more than Hoover Dam) in order to sell the electricity at a cost comparable to current power sources. Thus, the minimum sensible infrastructure investment is that which is needed to make a plant that big. Not all of this goes into the generator itself, a lot of it goes into the building that houses it, as well as the plumbing necessary to extract tritium such that the reactor can keep making its own fuel.

The price tag he quoted was 30 billion dollars, plus maintenance costs and mortgage payments over the next sixty years totalling over 100 billion, but it would produce enough power that the electricity could be sold at grid prices. A debate arose in the seminar about the economics of choosing the ideal initial size: which costs scale super-linearly with size and which scale sub-linearly. It was pointed out that the first fission reactor was not sufficiently large to produce electricity that could be sold at a reasonable price, but the fact that it demonstrated the technology was viable led to investments in bigger nuclear plants. After the first viable fusion reactor is built, it won't be as difficult to build the next one.

The speaker claimed that what was required for this to actually happen was a rapid increase in fossil-fuel prices, driven by scarcity and increased energy usage in China and India. He cited $200/barrel as roughly the oil price at which a 30 billion dollar fusion plant would seem like a not-crazy investment. However, I've heard this story before; the rise in prices in the last decade made it viable to extract oil from fracking and from the Canadian tar sands, and didn't give us a renewable energy revolution. Someone in the audience mentioned that General Electric was now developing its coal power technology, even dirtier than oil, to meet the rising energy demand in China.

The National Ignition Facility

I've mainly discussed magnetic confinement fusion, but I'll also mention the National Ignition Facility (NIF), which was built for fusion research and then hastily re-purposed. The idea was to fire an extremely powerful laser at a small deuterium-tritium pellet so that it rapidly compressed and heated up until it was so hot and dense that fusion ignited. To this end, they built the world's most powerful laser array, basically an inverted Death Star, with all the lasers focused on a little target at the center. They gradually ramped up the power, occasionally publishing papers about the behaviour of shock waves and the radiation emitted from these tests, and just as they were on the verge of getting powerful enough for fusion, that aspect of the project was halted and the facility was turned into a materials characterization facility and a way to test whether the fuel in nuclear bombs still works without detonating the bombs themselves. I really wish they'd kept trying for fusion.

NIF from the outside and inside.
ITER and NIF aren't the only extant fusion projects; there is also a really cool-looking one in Germany called the Wendelstein 7-X, as well as the American Z machine and smaller facilities around the world. I hope the future of fusion is brighter than what was laid out at that seminar.

Sunday, 21 August 2016

The duplication of BioMed Central.

This post is about something strange that happened in 2014 and my attempts to deal with it.

Towards the end of my master's, I wasn't sure what to do next, so I picked up a part-time consulting job, basically looking into public health statistics, compiling the trends, and developing models to forecast them. I ended up doing a Ph.D. after my master's, but I kept doing this job on the side. The first project I worked on involved the epidemiology of tuberculosis in Quebec, which is a fairly serious problem up North. It led to two papers, one looking at the statistics and one forecasting them into the future. Working on these papers was a good learning experience, but I would say they are not as good as my physics papers, which I am more proud of.

The second paper was published in BioMed Central (BMC) Public Health. BMC is a massive journal network owned by a massive publishing company, Springer, and there are over 100 BMC journals, which I believe are all pay-to-publish open access. Pay-to-publish always seems kind of sketchy, but there was definitely a peer-review process.

One day, I was compulsively checking my citations on Google Scholar and noticed that there was a new citation to my first TB paper, bringing the total up to two. I clicked to see who cited me, and found that the two citations were 1. my second TB paper on BMC Public Health, and 2. also my second TB paper, on some Russian domain (don't bother going there, it's just an ad site now). My paper was randomly hosted on this Russian website, so I decided to look into it. I found that the entire BMC database, consisting of hundreds of journals and tens of thousands of papers, was completely duplicated on this Russian server, with no explanation. I don't understand why someone would do this, especially since the BMC papers are free to read anyway. In addition to duplicating my paper, they also had links to rough pre-publication versions of the paper, including MS Word track-changes notes, that were still hosted on the BMC server.

I decided to contact the editor-in-chief of BMC, who was nominally in the UK but actually in Cyprus. I mentioned the serious issue that their whole website was duplicated on a Russian website, and the less serious issue that my rough drafts were still being linked to. The editor responded to me saying:

Please note that authors of articles published in BMC Public Health are the copyright holders of their articles and have granted to any third party, in advance and in perpetuity, the right to use, reproduce or disseminate the article, in its entirety or in part, in any format or medium, provided that no substantive errors are introduced in the process, proper attribution of authorship and correct citation details are given, and that the bibliographic details are not changed. If the article is reproduced or disseminated in part, this must be clearly and unequivocally indicated. Please see the BioMed Central copyright and license agreement for further details.

Furthermore, as stated in our ‘About this journal’ page, the pre-publication history including all submitted versions, reviewers' reports and authors' responses are linked to from the published article. We are unable to remove the pre-publication history of the manuscript following its publication in the journal.
In other words, she completely ignored the fact that I was trying to tell her about HER ENTIRE JOURNAL NETWORK being pirated, and instead decided to focus on a minor editorial policy. It was honestly a bit disturbing how little she cared. I sent another email telling her that she had ignored the important part of my message, and asking whether it bothered her that her entire journal database was copied on another website.

Meanwhile, I looked for a better way to get in contact with somebody in charge, because the editor of the journal didn't seem to care. I managed to find the secure whistleblower ombuds page for Springer, which is basically for people who want to report scientific fraud and related misconduct. I left a message explaining the situation and my concern about how casually the editor seemed to be taking it. A few days later I got two messages: an email from the editor apologizing for the misunderstanding, and a secure message in the ombuds inbox from the legal team:

we have looked into this and the Legal Department will approach the contact that it given on the (fake) site. Please rest assured that Springer takes piracy very serious. In order to protect our authors´ rights and interests, Springer proactively screens websites for illegal download links of Springer eBooks and subsequently requires hosts of such download sites to remove and delete the files or links in question. This necessary action has become increasingly important with the growing number of eBooks within the Springer eBook collection.
Eventually, the fake version of BMC disappeared and my citation count went back down to one, and the issue appeared to be resolved. I am still left wondering, however, why someone would bother duplicating a massive network of free papers onto their own server, and the whole incident with the editor left me a little wary of BMC.

Sunday, 31 July 2016

DNA Waves: New paper in Physical Review E.

In June I submitted my latest paper to Physical Review E, and today it was published. I also uploaded a preprint to the arXiv, and that free version can be found here. It is (probably) the last paper from my PhD work, which I wrote on and off in my spare time over the last year. Here, I'll briefly summarize what the paper is about.

A DNA molecule snaking its way through an array of cavities (which are separated by about a micron). The paper is about how "waves" appear to propagate along the DNA molecule.

As I've mentioned in some other articles, my Ph.D. work was about the physics of DNA molecules trapped in cavities connected by a narrow slit. I was studying this both to better understand polymers in confined geometries, using DNA as a model system, and to possibly develop genetic sequencing technology. The first paper I published on this was about diffusion through these cavities, and I spent most of my Ph.D. working on measuring the entropy loss involved in confining DNA. Towards the end I started working on a paper looking at how long it takes DNA to fluctuate from one cavity to another, which we described in terms of the modes of a coupled harmonic oscillator system. That was published last summer.

From my two-pit fluctuations paper.

One day I did an experiment using much longer DNA than usual, and I noticed something cool: when you look at videos of the molecules, it looks almost like there are waves propagating back and forth along the molecule. I decided to investigate that, and that's what the paper was about.

Do you see the waves?
Looking at these movies you can sort of convince yourself that there are these transient waves, and a good way to look at how these things propagate over time is through a kymograph, which averages out one spatial dimension so you can see how the profile in the other dimension evolves over time. Doing this, you can see diagonal streaks of brightness, which are the propagating waves. You can see both positive waves, where excess DNA propagates between the pits, and negative waves, where paucity propagates, which are analogous to electron holes in a semiconductor. I also thought I could see evidence of waves reflecting off the end of the molecules, although that didn't make it into the final paper.

A wave of brightness propagating through a molecule.
The main way I analyzed the data was through correlation functions. A correlation function basically measures the probability that a deviation from the mean in one thing accompanies a deviation from the mean in another thing. In my two-pit fluctuation paper, I was looking at the cross-correlation between the intensity in one pit and the other, and since DNA just goes from one pit to the other, if the intensity in pit 1 deviates upward from the mean, it's very likely that pit 2 has deviated downward, so they are anti-correlated. At short time-scales this anti-correlation is strong, while at long time-scales it decays towards an uncorrelated noise floor.
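To make the two-pit picture concrete, here is a minimal sketch (not the actual analysis code from the paper; the signal model and noise level are invented for illustration) that builds two anti-correlated "pit intensity" signals from a single overdamped fluctuation and computes their cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau = 100_000, 0.01, 1.0      # samples, time step, relaxation time

# x = excess DNA in pit 1 (pit 2 holds the deficit), modelled as an
# overdamped (Ornstein-Uhlenbeck) fluctuation
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] * (1 - dt / tau) + np.sqrt(dt) * rng.normal()

i1 = 1.0 + x + 0.3 * rng.normal(size=n)   # measured intensities,
i2 = 1.0 - x + 0.3 * rng.normal(size=n)   # with independent camera noise

def cross_corr(lag):
    """Normalized cross-correlation of the two intensities at a time lag."""
    a, b = i1 - i1.mean(), i2 - i2.mean()
    return np.mean(a[: n - lag] * b[lag:]) / (a.std() * b.std())

print(cross_corr(0))      # strongly negative: the pits are anti-correlated
print(cross_corr(2000))   # near zero after many relaxation times
```

The measurement noise keeps the lag-0 correlation from being exactly -1, which is also what real camera data looks like.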

This is from an actual presentation I gave at group meeting.
With two pits this is simple, but with a larger number it gets complicated. With three pits you have one-two, two-three, one-three, plus the autocorrelation functions one-one, two-two, and three-three. In general for N pits you have N(N-1)/2+N unique correlation functions, which provide a lot of information. My biggest molecule was in 15 pits, which would give 120 correlation functions. We decided to focus on correlations between neighboring pits: 1-2, 2-3, 3-4 etc. This would allow us to look at the process of DNA leaving one pit and going to the next one, then from that one to the next one, etc.

Basically, what we observed was that at any given point in time, neighboring pits were anti-correlated (as expected, because if there's more DNA in one pit it's less likely to be in another), then that grew to a positive correlation at some later time (this is the propagation of the "wave": excess DNA in one pit at one time is more likely to be found in a neighboring pit a short time later), and then a long-time decay (everything averages out due to random thermal motion).

Cross-correlation functions between different pit intensities. At zero time (A), they are anti-correlated because DNA in one pit is less likely to be in another. At a short time later (B), they are positively correlated, because the excess DNA at A is likely to have reached B at this time. At long times (C), random fluctuations bring correlation down to zero. This is the meat of the analysis.

This pattern was repeatably observable for all these large-N systems, which gave us a lot of data on how these waves propagate through confined DNA. We could also see something similar looking at next-nearest neighbors, and even next-next-nearest neighbors. The hard part was understanding all this data and what it was telling us about the underlying physics that leads to these waves. Presumably, such an explanation would allow us to predict what these correlation functions look like.

When the DNA molecule is at equilibrium, there is a certain length in each cavity (the ideal length balances its own self-repulsion and the entropy loss from the slits), and a certain tension in the strands linking each cavity. If, due to a thermal fluctuation, one cavity has an excess of DNA, the whole system gains some energy that is harmonic with respect to the excess length of DNA, and this excess is diminished as DNA is transferred to adjacent cavities through propagating changes in tension in the linking strands.

Because of the harmonic energy cost and my previous work mapping the two-pit system onto harmonic oscillators, the way I initially thought of the waves was in terms of a chain of harmonic oscillators, where a disturbance in one propagates down the chain as a phonon. It is a bit tricky to map this phenomenon onto a polymer in solution, because the polymer is overdamped and effectively massless, so there is no momentum being conserved. I spent a while trying to figure out the theoretical correlation function for an overdamped harmonic chain in a thermal reservoir, and writing Monte Carlo simulations thereof, but that only got me so far.
One dimensional random hopping on an array. This model turned out to describe our system very well.
A simpler model turned out to work better: if you just imagine a bunch of Brownian random walkers on a one-dimensional lattice, each with some random probability of hopping in either direction at some rate, you can show that this system gives rise to collective motion which is exactly solvable and evolves in time in a way that looks like our observed correlation functions. In our system, we can treat the DNA as randomly fluctuating in either direction, but fluctuations are more likely in a direction that reduces excess DNA in a cavity, and that is essentially what the multi-hopping model is describing. A lot of the paper involves matching the predictions of the model to our observed correlation functions. It turned out to be different from what I had initially envisioned; I was originally thinking of it in terms of sound waves propagating through the molecule as tension perturbations.
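Here is a toy version of that picture, a sketch I wrote for this post rather than the model code from the paper: independent random walkers hopping on a ring. The nearest-neighbour correlations come out negative at equal times and positive a step later, qualitatively like the measured curves:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_walkers, n_steps = 10, 200, 20000

# independent walkers on a ring; each step: hop left or right with
# probability 1/4 each, otherwise stay put
pos = rng.integers(0, n_sites, n_walkers)
occ = np.empty((n_steps, n_sites))
for t in range(n_steps):
    occ[t] = np.bincount(pos, minlength=n_sites)
    pos = (pos + rng.choice([-1, 0, 0, 1], size=n_walkers)) % n_sites

d = occ - occ.mean(axis=0)  # deviation of each site's occupancy from its mean

def neighbour_corr(lag):
    """Average cross-correlation between a site and its right neighbour."""
    a, b = d[: n_steps - lag], np.roll(d, -1, axis=1)[lag:]
    return (a * b).mean() / d.var()

print(neighbour_corr(0))  # negative: conservation anti-correlates neighbours
print(neighbour_corr(1))  # positive: an excess has likely hopped next door
```

The equal-time anti-correlation here comes purely from conserving the total number of walkers, which mirrors the fixed total amount of DNA in the molecule.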

I like this paper because it started out as an investigation of a neat phenomenon I happened to observe, and led to something more systematic that we eventually understood in terms of some fairly fundamental statistical physics. I'm glad the reviewers liked it too!

Update: this post was modified after the paper was published.

Thursday, 28 July 2016

Do radioactive things glow?

In a lot of depictions of radioactive materials, there is a green glow. This is so common that the green glow itself has become associated with radioactivity. Does this green glow exist, and if so, where does it come from?

This animated plutonium rod should not be confused with the inanimate carbon rod, which does not glow.

Radioactive materials such as uranium and plutonium do not, by themselves, glow. Pure uranium looks like a boring grey metal, and plutonium is slightly shinier. The glow associated with radioactivity originates from materials containing radioactive isotopes, but is due to electronic rather than nuclear transitions, similar to how certain materials glow under a blacklight. These materials can glow without an external power supply, because the atomic transitions can be excited by the radioactive decay. This is known as radioluminescence.

Radioluminescent materials were more common in the early 20th century, before it was understood how incredibly bad for you extended radiation exposure is. Clock faces often had dials painted with radium so that they could be read in the dark. The luminescence was actually produced by zinc sulphide, which was activated to an excited state by the radium decay. Radioluminescent watches can still be purchased, but they use tritium, a safer isotope.

Another common product was uranium glass, which was marketed as vaseline glass, apparently because it was the same colour as petroleum jelly was at the time. Once again, it is not the uranium producing the light, but the transitions excited by its decay.

Uranium glass glowing green. The blacklight in the background is the main source of illumination.
Perhaps the most common radioactive green movie trope is plutonium, but plutonium is not used in radioluminescent compounds. It can appear to glow red, but that is due to a chemical reaction with oxygen (aka burning) rather than anything associated with radioactivity.

All of these heavy-element radioluminescent materials rely on alpha-decaying isotopes. Alpha particles tend to move slowly (compared to light) and are comparatively safe; alpha radiation can essentially be blocked by clothing, but if you inhale a piece of alpha-emitting dust you can be in serious trouble. The main scenario where radioactivity does produce a glow involves not alpha decay but beta decay*, typically in nuclear reactors. Nuclear reactors are often kept underwater, for cooling and for neutron shielding, and when emitted beta particles (electrons) exceed the speed of light in water, they emit Cerenkov radiation, which manifests itself as a blue glow (this is analogous to the sonic boom produced when something exceeds the speed of sound).
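As a rough worked example (my own, with the refractive index of water taken as 1.33), the Cerenkov condition v > c/n sets a minimum electron energy:

```python
import math

# Minimum electron kinetic energy for Cerenkov light in water:
# the electron must satisfy beta > 1/n.
n = 1.33      # refractive index of water (assumed value)
m_e = 0.511   # electron rest energy in MeV

beta_min = 1 / n
gamma_min = 1 / math.sqrt(1 - beta_min**2)
ke_min = (gamma_min - 1) * m_e
print(f"{ke_min:.2f} MeV")  # ~0.26 MeV
```

Many beta decays release electrons well above this threshold, which is why reactor pools glow; a heavy alpha particle at typical decay energies is nowhere near fast enough.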

Cerenkov radiation from a nuclear reactor.
So just to summarize, radioactive materials do not emit light just because they are radioactive, but essentially act as a power source to make other things glow. Except nuclear reactors, those actually glow.

*Electrons are 1/7000th the mass of alpha particles, making them much faster at the same energy.

Monday, 18 July 2016

"What's the application?": Genetic sequencing and polymer physics

Long time no post. I wrote an article on PhysicsForums trying to explain a few of the applications of my field of research, and how they relate to some of the physics problems. Check it out!

Wednesday, 1 June 2016

The Fall of the Twin Prime Gap

The twin prime conjecture states that there are an infinite number of primes separated by a gap of two, e.g. 17 and 19 or 29 and 31. In May 2013, a relatively unknown mathematician named Yitang Zhang proved that there are infinitely many pairs of primes separated by a gap smaller than 70,000,000. This number is much bigger than the postulated two, but much smaller than infinity, which is where the number theory community had been stuck. This finding ushered in a rush of work attempting to lower the bound, causing it to drop rapidly over the next several months, finally coming to rest at 246 less than a year later, where it has sat since. This post is about cataloguing and quantifying this drop.
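The definition is easy to play with; a few throwaway lines of Python (mine, not from any of the work discussed here) list the twin prime pairs below 100:

```python
# Enumerate twin prime pairs (p, p + 2) with p < 100.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

twins = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

The conjecture says this list never ends; Zhang's theorem says that some gap below 70,000,000 recurs forever.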

Timeline of the twin prime gap, on log-linear and log-log axes.
Most of this work was carried out by the Polymath Project, which is a group of mathematicians that collaborate on unsolved math problems. The results have been catalogued here, which is where I got the data for those graphs. The Polymath Project can be really interesting to read through, because you can see chunks of the problem getting solved in real time on an online discussion forum, like reading the world's most high-brow youtube comments. The retrospective on the project is a nice read even for non-mathematicians such as myself. Midway through this project, an independent mathematician (working in Montreal!) named James Maynard developed another method to shrink this gap, which was eventually worked into the project.

I will not say too much about the techniques that were used to prove these bounds; there are people far more qualified who have already done so, such as Terry Tao, who was part of the project. From what I gather, the method involves generating a large list of numbers and then "sieving" them until the solution converges*. Two assumptions can be made to make the bound easier to shrink: the Elliott-Halberstam conjecture and Deligne's theorem. While their validity in this situation is unproven, assuming the Elliott-Halberstam conjecture can reduce the bound to 6. I have tried to take only data that does not rely on these conjectures (the proven bound, if you will), and have ignored data marked as conditional in the table.

Ok, so let's look at the falling bound. There are clearly three phases: the "let's get the ball rolling" phase, the wild-mathematical-rollercoaster phase, and the "honeymoon is over" phase. The first two weeks were just people starting to realize what could be done (Zhang kind of caught the mathematical community by surprise). In fact, the first post-Zhang paper was called "A poor man's improvement on Zhang's result" and subsequently there was a blog post called "I just can’t resist: there are infinitely many pairs of primes at most 59470640 apart."  Then Terry Tao proposed the Polymath Project, and that's when it really started dropping. If you look through their website, the progress involves people posting tables of their iterated sieving data and discussing it; sometimes it works and sometimes it doesn't.

In my line of work, data is described by two types of functions: power laws and exponential curves, both of which look like straight lines if you have the right number of logarithmic axes (or the power is 1 or the x-variable covers a small range). Exponentials crop up more when something is varying with time (e.g. the length of the DNA molecule I'm looking at), while power laws come up when looking at how one quantity varies with another (e.g. how stretched a molecule is vs how confined it is). A big part of my thesis was demonstrating how we shouldn't just blindly look at power law fits when analyzing data, and that is exactly what I'm going to do now.

When I fit the steep part of the falling-bound curve to a power law, the exponent is -9.6 with respect to time. That's pretty crazy; it means that doubling the elapsed time cuts the bound by a factor of nearly 1000. If instead I fit it to an exponential decay, I find that the characteristic time for the gap to fall by a factor of e is about three days, although this doesn't fit the data as well as the power law. I have also ignored the slowdown that occurred around day 30 (although, within a month of Zhang's news, they had already dropped the bound by a factor of 300).
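The kind of fit I mean can be sketched like this; the data below is a synthetic stand-in for the real bound timeline, which lives in the Polymath table:

```python
import numpy as np

# Synthetic stand-in: pretend the bound fell as t^-9.6 from day 5 to day 40.
t = np.arange(5, 40, dtype=float)     # days since Zhang's announcement
gap = 7e7 * (t / 5.0) ** -9.6

# power law: a straight line in log-log space, slope = exponent
slope_pl = np.polyfit(np.log(t), np.log(gap), 1)[0]

# exponential: a straight line in semilog space, slope = -1/(decay time)
slope_exp = np.polyfit(t, np.log(gap), 1)[0]

print(round(slope_pl, 1))   # recovers -9.6 exactly for this noiseless fake data
print(-1 / slope_exp)       # the "decay time" an exponential fit would report
```

With real, noisy data the two fits disagree, and comparing their residuals is how you decide which functional form describes the drop better.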

The results sort of petered out in the thousands, and then they were re-invigorated by Maynard's discoveries, which is what got them down to 246. If I do the same type of analysis for the slow phase, there is an inverse-square decay of the gap with respect to time, or a characteristic exponential decay of about 72 days. Can this teach us anything about the nature of collaborative number theory? Probably not.

The gap data with unjustified exponential (left) and power law (right) fits to the fast and slow portions of the drop.

I was watching this gap fall as it was happening, and I was curious as to why it stopped at the seemingly arbitrary 246. Some googling turned up an excellent reddit comment explaining it, from a user named BoundingBadger. It was pretty neat seeing an unsolved problem get hammered down in real time by professional and sometimes famous mathematicians openly discussing their results, and I'm glad I caught that neat little bit of history. I was curious as to how the bound would end up behaving over time, and now I know!

*That is a really bad explanation but it's the limit of my understanding right now.

Tuesday, 24 May 2016

Perturbative Champions: Cohen and Hansen take it to next-to-next-to-next-to-next-to-next-to-next-to-next-to-next-to leading order.

In 1999, in a preprint on the arXiv, Thomas D. Cohen and James M. Hansen, physicists at the University of Maryland, claimed the following:
If one insists on an accuracy of ∼ 20%, one estimates contributions at their nominal order and Λ is taken to be 300 MeV, then one has to work to order (Q/Λ)$^7$ , this corresponds to next-to-next-to-next-to-next-to-next-to-next-to-next-to-next-to leading order.
This octet is, to my knowledge, the largest string of next-to's ever to appear in the scientific literature. As far as I can tell, the runner-up is a recent paper by Entem et al. in Physical Review C, with a paltry five next-to's, although it may be the champion of the peer-reviewed literature. Below this, four next-to's is fairly common; it even has its own notation, N4LO. I found a reference to N6LO in the literature, but these become hard to google.

What does this actually mean, and why does it sound so silly? A powerful tool in physics is perturbation theory, where we approximate the solution to a problem with a power series, and then compute first the simplest terms, then terms of increasing complexity, until we have a solution close enough to the exact one to be useful. There will be the zeroth-order solution that tells us the order of magnitude, the first-order solution with its basic dependencies on system parameters, then second and higher order solutions for non-linear effects. In many cases, certain powers will be zero, so the "leading order" term might be the first non-zero term beyond zeroth order, though not necessarily first order. For example, the leading order term of something that is perturbed about equilibrium is second-order (which is one of the reasons why treating things as harmonic oscillators is so useful). Feynman diagrams are essentially a way to express perturbation theory in graphical form, first drawing interactions with no loops, then one loop, then two loops, etc.
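As a one-line illustration of why leading order need not be first order, expand a potential about an equilibrium point $x_0$:

```latex
V(x) = V(x_0) + V'(x_0)\,(x - x_0) + \tfrac{1}{2} V''(x_0)\,(x - x_0)^2 + \cdots
```

Since $V'(x_0) = 0$ at equilibrium, the leading-order term in the displacement is the quadratic one, which is exactly the harmonic oscillator approximation mentioned above.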

Anyway, it may be that some powers of the perturbation series in Cohen and Hansen's paper vanished, so that "eighth order" wouldn't have been accurate, and they instead had to spell out all those next-to's. Each term makes the approximation more accurate, and for their desired accuracy of 20%, they would need to compute Feynman diagrams really far past leading order, but unfortunately, as they claim, "it is implausible that such calculations will ever prove tractable."

Another, similar situation involves describing sites on a lattice. For example, if you're sitting at a site on a square lattice (with lattice constant x), you have four "nearest neighbors" at distance x from you, four "next-nearest neighbors" across the diagonals at distance $\sqrt{2}$x, four "next-next-nearest neighbors" at distance 2x, etc. This paper uses the term "next next next next nearest neighbors," which it designates 4NN.
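A quick script (mine, just for illustration) enumerates these neighbour shells by grouping lattice sites by their squared distance from the origin:

```python
# Neighbour "shells" on a square lattice: count sites at each squared
# distance from the origin, within a small window.
from collections import defaultdict
import math

shells = defaultdict(int)
for i in range(-4, 5):
    for j in range(-4, 5):
        if (i, j) != (0, 0):
            shells[i * i + j * j] += 1

for d2 in sorted(shells)[:5]:
    print(f"d = {math.sqrt(d2):.3f} x: {shells[d2]} sites")
# shells of 4, 4, 4, 8, 4 sites at distances 1, sqrt(2), 2, sqrt(5), 2*sqrt(2)
```

Note that the fourth shell (4NN, at $\sqrt{5}$x) has eight sites rather than four, since there are two inequivalent ways to make a knight's-move displacement.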

There was perhaps a better way to visualize this.

These record nextings are fairly insignificant but perhaps mildly interesting.

Tuesday, 17 May 2016

Measuring Newton's Constant with a Space-Borne Gravity Train

I recently came across a paper that was published in Classical and Quantum Gravity, a respected journal, after initially appearing on the arXiv. It proposes a space mission consisting of a metal sphere with a cylindrical hole, floating through space while a smaller reflective object oscillates back and forth along the hole, pulled by the gravitational field of the sphere. The position of the smaller object can be monitored by another space probe, and the period of oscillation can be used to measure Newton's gravitational constant, big G. I like this idea; it draws upon some of my recent work, and I place it firmly in the "just crazy enough to work" category, my favourite kind of paper to read.

Diagram of the proposed experiment, from the arXiv version of the paper.

The proposal is motivated by some recent analysis (by some of the proposal's authors) of independent measurements of the gravitational constant, which showed that even though each experiment measures the constant with smaller and smaller uncertainty, the different measurements are not in precise agreement with each other, sometimes deviating by 40 times the standard error. The analysis makes the bolder claim that the differences between the measurements have the same periodicity as Earth's length-of-day variations, which are due to large-scale seismic effects. They conclude that there are systematic effects caused by the fact that all these experiments (which typically involve monitoring a rotating pendulum near large masses) take place on the Earth, and argue for a way to measure the constant away from the Earth. The National Science Foundation has put out a call for proposals for more accurately measuring big G; I recommend reading its three paragraphs if you're wondering why anyone would bother caring about this.

The various non-agreeing measurements of G over the years, from Anderson et al. Pay more attention to the red than the black. The black is the "bold proposal" I mentioned.

To make this measurement, the authors, led by independent researcher Michael Feldman, suggest sending a miniature gravity train into space. A gravity train, something I have written about in great detail, consists of a spherical mass (often taken to be the Earth, but not here) with a tunnel through it (often through its center), and a smaller object falling through the tunnel. The object builds up speed due to gravitational attraction to the sphere, passes the halfway point, and then decelerates, coming to rest on the other side. Inside the Earth, it would take 38 minutes to fall through this tunnel. Feldman and friends propose a small metal sphere, roughly 10 cm in diameter and 1.3 kg in mass, through which the small object would take about two hours to fall.

How could G be measured from such a device? For a uniform-density sphere, it can be shown that the period of a gravity train is $T=2\pi\sqrt{\frac{R^3}{MG}}$. If the mass and radius are known and the period is measured, G can be extracted. In the proposed setup, the position of the small object will be monitored by a laser aimed at the tunnel from another nearby space probe; from these periodic position measurements the period can be determined and sent back to Earth by an antenna on the probe. The rest of the paper derives the G-T relationship in more detail for the proposed design (which uses two layers of different materials).
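As a sanity check on the numbers quoted above, here is a minimal sketch of the idea: compute the period from the uniform-density formula, then invert it to recover G. The 10 cm diameter and 1.3 kg mass are from the proposal; the uniform-density assumption and the rounded CODATA-style value of G are mine.

```python
import math

G_REF = 6.674e-11  # m^3 kg^-1 s^-2, approximate reference value of big G

def gravity_train_period(R, M, G=G_REF):
    """Full oscillation period of a small object falling through a
    tunnel bored through a uniform-density sphere of radius R, mass M."""
    return 2 * math.pi * math.sqrt(R**3 / (M * G))

def extract_G(R, M, T):
    """Invert the period formula: G = 4*pi^2*R^3 / (M*T^2)."""
    return 4 * math.pi**2 * R**3 / (M * T**2)

# The proposed sphere: roughly 10 cm diameter (R = 0.05 m), M = 1.3 kg
T = gravity_train_period(0.05, 1.3)
print(T / 3600)              # full period in hours, roughly 2.1
print(extract_G(0.05, 1.3, T))  # round-trips back to G_REF
```

Measuring a roughly two-hour period precisely enough to pin down G is exactly why the paper spends so much effort on the laser timing of the small object's position.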

Zoomed out schematic of the probe and sphere, from the arXiv version of the paper.

The authors are concerned with the precision of such a device, and which systematic errors contribute to the overall uncertainty in G. These include the metallurgy of the sphere and hole (uncertainties in R, M, and the uniformity thereof), the initial placement of the small reflecting object in the hole (which must be extremely gentle), the ideal position of the sphere with respect to the host probe (close enough that the probe blocks the sun, but not so close that its own gravity affects the experiment), the radiation pressure of the probe's laser on the device, deformations of the sphere due to the tidal influence of the sun, possible charging of the tunnel's interior by cosmic rays, and more. They even calculate the change in the period if general relativity is taken into account, which is something I was curious about for my gravity tunnel research, but didn't have the tools to solve. The hypothetical uncertainty analysis was probably the most fun part of the paper to read.

They estimate that, given advances in metallurgy and aerospace deftness, they can get the precision of the G measurement down to an optimistic 63 parts per billion. The current record for Earth-based G measurements is 13,000 parts per billion. This would be a huge improvement if it actually worked, and would eliminate some of the systematic issues with measuring the strength of gravity in Earth's gravitational field.

The question, of course, is whether this thing will actually exist, and whether the budgetary will exists to make it so. The authors suggest that the experiment would not be the main mission payload of a space launch, but rather would piggyback on a larger, more important probe headed out of the solar system.

I was interested in this paper because I like crazy yet scientifically rigorous ideas, and it draws upon a system that is close to my scientific heart [disclaimer: the paper cites mine]. It was a pleasure to read about all the potential effects that could skew the time measurement, and how the authors planned to deal with them. I hope the available metallurgy, metrology, and money become sufficient to launch this thing into space.