Thursday, 24 December 2015

Microcannons firing nanobullets

Sometimes I read papers that enhance my understanding of how the universe works, and sometimes I read papers about fundamental research leading to promising new technologies. Occasionally though, I read a paper that is just inherently cool. The paper by Fernando Soto, Aida Martin, and friends in ACS Nano, titled "Acoustic Microcannons: Toward Advanced Microballistics" is such a paper.

The grand scheme of this research is developing a tool that can selectively shoot drugs into cells at a microscopic level. This is hard because everything happens really slowly at the microscopic scale in a liquid, in ways that meter-sized beings who live in air would not necessarily expect. For example, it is impossible for small organisms to move through a fluid using a repetitive motion that looks the same in reverse: the way we move our feet back and forth to walk would not work for a tiny aquatic human, because the forward motion in the first phase of movement would be nullified by backwards motion in the second phase. This is why bacteria use things like rotating flagella to move*. Digressions aside, if you tried to shoot a tiny bullet through a cell wall, it would halt really quickly and diffuse away. Soto, Martin, and collaborators wanted to beat this.

They developed a "microcannon," starting with a thin layer of polycarbonate plastic studded with small pores, which is a thing you can buy and don't have to make. They deposited graphene oxide onto the inside of the pores using electrochemistry, and then sputtered gold onto the inside of the graphene layer. The polycarbonate could be washed away with acid, leaving free-floating carbon and gold cannon barrels a few microns in size. While they were still in the plastic membrane, the cannon pores were filled with a gel (literally gelatin from the supermarket) loaded with micron-sized plastic beads to act as bullets, and the "gunpowder."

The microcannons, loaded with nanobullets before and after firing.

Regular readers of my blog will remember that bubbles are somewhat of an exception to the small+water=slow rule, and that when they collapse it can lead to very fast motion on very small scales. So, the authors of the paper used perfluorocarbon (same structure as a hydrocarbon but with fluorine instead of hydrogen) droplets as a propellant, which they turned into bubbles with an ultrasound-induced phase transition. The bubbles collapse, leading to a pressure wave which drives the nanobullets out of the barrel towards their target**.

Composition and operation of the microcannons.
The authors wanted to characterize how powerful these things were, so they did two relevant tests. First, they embedded the cannons in an agar gel and loaded them with fluorescent beads. They looked at where the beads were before and after firing the ultrasound trigger at the cannon, and found that the beads penetrated an average of 17 microns through the gel. I don't have much context to gauge whether this is a lot or a little.

The bullets were too fast to record with a microscope camera, so their second test involved recording the motion of the cannon after it fired the bullets. Naively one would expect to be able to calculate the bullet speed with conservation of momentum from knowing the cannon's speed, but momentum isn't conserved in a noisy viscous environment. They modeled the fluid dynamical forces acting on the system, measured that the terminal speed of the cannon was about 2 meters per second, and concluded that the initial speed of the bullets is 42 meters per second or 150 kilometers per hour. Pretty fast, especially for something so small in a draggy environment.

I don't know if this technology will succeed in the authors' goal of localized drug delivery to cells, but I think it's awesome that they made a functioning microscale cannon.
Oh the humanity.

*I recommend reading Life at Low Reynolds Number if this interests you.
**Or just in whatever direction it was pointing, I guess.

Wednesday, 2 December 2015

An old teacher hacking life itself: Christian Bök's Xenotext at MIT

This morning, I was googling for seminars at MIT that would provide lunch for me so I could avoid eating my own food, which would bring me closer to having to buy groceries. I was planning on going to a fairly boring looking talk at Harvard about cell mechanics for a pizza dinner. However, I found a link saying that Canadian poet Christian Bök (pronounced "book") was reading from his new book of poetry.

I met Christian Bök in 1998 when he was my sixth grade teacher at a small school called Fieldstone. At the time I thought he was a great teacher; he always had lots of really interesting knowledge to share and would answer any question in an enlightening way. I thought he was the smartest person I had ever met. My friends often accuse me of knowing everything, but they are wrong. I don't know everything. Christian Bök knows everything. This was also the year that I stopped hating school and everything associated with it, and started becoming excited about learning new things. It is also around the time I stopped being one massive walking behavioural problem and started being someone that teachers would want to teach. I think Dr. Bök had a big part in both these things.

Some years after he taught me, he released his then-magnum opus, Eunoia, a book of poetry where each chapter uses only one vowel. So, the A chapter has passages like "Catamarans as fast as narwhals dash past sandbars and make ballast at landfall," while the U chapter has the coarser "Ubu untucks Ruth's muumuu; thus Ruth must untruss Ubu's truss." Each chapter is a self-contained coherent story. Eunoia put him on the map of the poetry world.

So naturally I was excited that he was at MIT today. So excited that I decided to forgo free pizza at Harvard and go to his poetry reading*. I also emailed his host mentioning that I was his old student and asked if there was time to meet with him. There was, so I met with him in the early afternoon. Fieldstone was a very small school and I was the most eager student, so he had a decent memory of me. I imagine teachers are happy to find out that their students stayed in school, and boy did I ever stay in school. We talked for a bit about what I was working on, and its broad applications. I was quickly reminded of how sharp he was, as he quickly grasped the overview of my work and started asking tough and insightful questions. I mentioned that in 1998 I thought he was really smart. For the past fifteen years, through high school, university, and grad school, I have been surrounded by people who have been pre-selected for intelligence. I have met a lot of smart people. In this 20 minute conversation I got the impression that he was still the smartest person I had ever met. I mentioned that I had listened to the CD of Eunoia and that I liked the A chapter the best; he said he prefers I but most people like U.

After work I went to his poetry reading. His current project, Xenotext, is even grander and crazier than Eunoia. He plans to encode a poem in the genetic sequence of a bacterium, and when that gene is transcribed into a protein, that protein also reads as another poem in response. He wants to put this in the genome of an extremely robust extremophile, so that his poem will survive the death of humanity. I'll talk a bit more about this in a bit.

The poems in his book (not all of which will make it into the genome) generally have an apocalyptic theme, and draw a lot upon the Greek myth of Orpheus and Eurydice, about a poet who travels to the underworld to reclaim his lost wife. The first reading was a description of the destruction of the Earth through various man-made and natural cataclysms, which seemed to draw upon a lot of astronomical research (he mentioned Gliese 710 and Wolf-Rayet stars). It reminded me of the book Seveneves by Neal Stephenson, the best novel I've read this year**, which details the aftermath of the moon's destruction and its effect on the Earth. He also read a short love poem, which sounded normal but he said took him five months to write due to the constraints he imposed on the writing.

After his readings there was a Q&A session and somebody asked him about the constraints on his love poem. He listed them off one by one, and they started normal and quickly got more and more extreme. First of all, it was a sonnet of seventeen lines and each line needed to have twelve syllables. Each line also had its own internal rhyming. The dedication of the poem was written as an acrostic in both the first and last letters of each line, so the first letter of each line read "FOR THE MAIDEN IN" and the last letter of each line read "HER DARK PALE MEADOW." To make the last letters line up, each line needed to have 33 letters (in addition to twelve syllables), so that they'd all fit on a grid. Then, finally, all the letters of the poem are actually a re-arrangement of all the letters in another poem by John Keats. I am no poetry expert but I thought that was pretty impressive.

Then somebody asked about the relevant biochemistry of encoding the poem into a genome and getting it to read out another poem. He went on to explain it, and even when you don't include the biochemistry it sounds insane. He made the analogy of encoding a text using a two-way cipher, switching, let's say, A with N, B with O, C with P, etc., such that when you encode your sentence with this cipher, it makes another sentence. To do this, you have to choose an appropriate mapping between letters. If you start with A, you have 25 choices of what to pair it with, and after that you have 23 choices for the next unpaired letter (B, unless it was already paired with A), and so on, so the total number of possible ciphers is 25x23x21..., known as the double factorial of 25, which is 7,905,853,580,625. So, find one of these eight trillion possible ciphers that lets you cipher English words into other English words, which I imagine is a very small subset; Bök had the additional constraint of having it sound nice.
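The counting can be checked in a couple of lines (the function here is mine, just illustrating the arithmetic from the talk):

```python
from math import prod

def double_factorial(n):
    """Product n * (n - 2) * (n - 4) * ... down to 2 or 1."""
    return prod(range(n, 0, -2))

# Pairing the 26 letters into 13 two-way swaps: the first letter has
# 25 possible partners, the next unpaired letter has 23, and so on.
print(double_factorial(25))  # 7905853580625
```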

Ok so he eventually found a cipher that let him encode a poem into another poem, which seems like a huge accomplishment to me, but then he had to encode this into the genome of a bacterium! The way genes work is that there is a DNA sequence written in the four letters A, T, G, and C, and the DNA is copied into a complementary RNA strand. The RNA goes through a little cellular machine called a ribosome, which concatenates amino acids onto a protein chain. There are twenty amino acids and only four nucleobases, so a combination of three bases is required to tell the ribosome to add a given amino acid to the chain, which it does by following the genetic code (which is not the same as your genetic sequence). For example, if the DNA reads CAG then the amino acid glutamine gets added. Each gene codes for one protein (there is a combination that essentially acts as a period and stops translation), and one protein does one chemistry thing. He didn't go into the details, but in order for his project to make sense, he would need a mapping of three bases to one Latin letter (unless he restricts himself to a 16-letter alphabet), and then another mapping of amino acids back to Latin, either using a 20-letter alphabet or using pairs of adjacent acids to make 26 letters. He would have to choose his mappings between Latin, DNA, and proteins so that both read as poems. THEN he would have to engineer this gene, implant it in the genome of a bacterium, not have it die, and have it actually produce this protein. This could be verified through genetic sequencing on one end, and protein sequencing on the other.
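To make the double-mapping scheme concrete, here is a toy sketch. This is entirely my own illustration: the letter assignments are invented, and Bök's actual mappings are not given in this post, though the codon-to-amino-acid entries come from the real genetic code.

```python
# A few real DNA codon -> amino acid entries from the genetic code:
GENETIC_CODE = {"CAG": "Gln", "GCT": "Ala", "AAA": "Lys", "GAA": "Glu"}

# Hypothetical mappings, invented for illustration only:
letter_to_codon = {"a": "CAG", "n": "GCT", "y": "AAA", "s": "GAA"}
amino_to_letter = {"Gln": "t", "Ala": "h", "Lys": "e", "Glu": "y"}

def encode(word):
    """Write a word as a DNA string, one codon per letter."""
    return "".join(letter_to_codon[c] for c in word)

def mutual_poem(dna):
    """Translate the gene codon by codon and read the protein as new letters."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return "".join(amino_to_letter[GENETIC_CODE[c]] for c in codons)

gene = encode("any")
print(gene)               # CAGGCTAAA
print(mutual_poem(gene))  # the
```

The hard part, of course, is choosing the two mappings so that both readouts are actual poems.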

The central dogma of biology and poetry. Source.

It turns out he actually did succeed at doing this, implanting a poetic gene into E. coli which produces a poetic protein. They are called Orpheus and Eurydice respectively. The first poem, which he read at the reading, begins "any style of life / is prim..." and the second begins "the faery is rosy / of glow..." Apparently the Eurydice protein is also fluorescent (he used a supercomputer to see how the protein would fold), so you can tell when it's expressed. His next step is to implant this in the extremophile.

I have a deep respect for ideas that seem too crazy to work but are attempted anyway. This is the craziest such idea that I have heard of. After the reading I bought his new book, which he signed. It also has a lot of biochemistry-themed poetry in its explanation of the project, for instance a poetic ode to each nucleobase and poems where the words end and start with complementary base letters. I look forward to reading it.

It was a nice surprise re-acquainting myself with Dr. Bök today. He is a truly impressive human being, and had a big effect on who I am today.

*It's ok though because I found free pizza anyway.

**The worst is The Land of Painted Caves by Jean M. Auel. Do not read this book.

Saturday, 28 November 2015

Physicists should take pride in their work

All of my posts thus far have been about presenting information or telling a story. This one is more of an opinion and a rant.

Too often, speaking with other physicists, I have had conversations that go like this:

Me:  What do you work on?

Them: Condensed matter physics.

Me: Oh yeah? What aspect?

Them: Uhh...materials physics.

Me: Ok, what kind of materials?

Them: Uhhhh...solid state.

Me: What kind of solid state physics?

Them: Uhhhhhhhh systems with many atoms.

This will continue for as long as I have patience, and I will get no information about what the person actually does. If I care enough I can look up their supervisor's research webpage and find out that they actually fire x-rays at superconductors or something like that. I used condensed matter as an example but it's not limited to that. I've had the same conversation go "black holes"..."general relativity"..."collapsed stars."

Physicists, you should not do this. It is annoying, you're selling yourself short, and it's disrespectful to the person asking the question. If the person asking you the question has a degree (or several!) in physics or a related discipline, they'll be able to understand the elevator-description of what you do.

Sometimes I think people give these non-answers because they're embarrassed about what they work on. There is this false premise that there is "real physics" and what they're doing is not it. People working on semiconductors are embarrassed that they're not working on quantum gravity, and the people working on quantum gravity are embarrassed that they're not working on semiconductors. I once had a guy who did simulations of relativistic nuclear collisions, which is like the most physics you can get in three consecutive words, tell me he wasn't doing real physics (although he wouldn't tell me what he actually did!). It's all real physics. No matter what you are working on, you are pushing the boundaries of human knowledge, even if it seems extremely specialized or boringly incremental or removed from reality.

If somebody is asked a question, it is rude not to answer it. In a physics department, it's reasonable that you'll be discussing physics with other people who are well-read in physics. They can handle the truth. If the answer is not detailed enough, the person can ask for more information. If the answer is too detailed, the person can ask for clarification, and try to understand by asking more questions. This type of conversation generally involves two people with differing levels of knowledge on a topic finding a way to meet in the middle. When one side refuses to meet in the middle, either by asking to be spoon-fed or by withholding information, it's not fun.

Now, there is an art to knowing how technical a summary of your work to give. Generally it depends on whether you're talking to someone in the same research sub-field as you, the same science as you, other scientists in different disciplines, or non-scientists. The main thing I'm ranting about is physicists withholding information about their work from other physicists, but I imagine it happens in other fields too.

For my Ph.D. work, I'd generally tell people that I looked at DNA molecules squeezed into very small tubes, to measure how squishy the molecules are. If they asked, I'd tell them how it relates to genetic sequencing technology, and what the relevant physics governing the squishiness of DNA is. If they really wanted to know, I'd talk about screened electrostatic repulsion and conformational degeneracy and the like. But I wouldn't just mumble different permutations of "biophysics...biological physics...physical biology...physics of biological systems..."

So, when someone asks you what you work on, don't be vague. Be proud.

Sunday, 22 November 2015

What do we know about extra dimensions?

As far as we know, we live in a world with three spatial dimensions. There are some theories, though, that posit more. Here, I will discuss the experimental searches for extra dimensions, how they work, and what they have found.

Spoiler alert: no evidence of extra dimensions has been found.

But just because no evidence has been found, it doesn't mean we haven't learned anything. In physics, there is much to be learned from measuring zero, because it can tell you the largest value that something could have while still evading detection. I talk a bit more about measuring zero in my article on photon masses and lifetimes. 

The searches for extra dimensions do not involve trying to draw seven lines perpendicular to each other. All of them require making some sort of theoretical assumption involving extra dimensions, seeing what that theory implies, and looking for those implications. The strength of, or constraint against, those implications can give information about the extra dimensions that lead to them. However, whatever that theory is, it must be reconciled with the fact that we appear to live in a three-dimensional universe. The reconciliation is usually that the extra dimensions are really small.

Why extra dimensions?

String theory and its relatives require that spacetime have 10 or 11 or 26 dimensions in order for certain calculations not to give infinite results. A lot of work goes into figuring out how these can be "compactified" so that it still seems like we live in three dimensions. String theory, being a theory of quantum gravity, generally has its extra dimensions on the order of the Planck length. The theories I'll be talking about are theories of Large Extra Dimensions, "large" being relative to the extremely small Planck length. Some of the problems these attempt to solve are the hierarchy problem, that gravity is so much weaker than other forces, and the vacuum catastrophe, that the measured energy density of the universe is 100 orders of magnitude smaller than the prediction from quantum field theory.

Our three dimensional universe as the surface of a higher dimensional universe. Source.

The best-known example of such a model is the Randall-Sundrum model. Its first author, Lisa Randall, is now hypothesizing that galactic dark matter distributions may lead to mass-extinction events on Earth. That is not really relevant, but it's cool. This model posits that we live on a three-dimensional surface in a four-dimensional universe (actually it's one higher in both cases, because of time), and gravity can propagate through the bulk of the universe while other forces are constrained to the surface. These types of models require that the extra dimensions have some characteristic size, compared to our three spatial dimensions, which are infinite. How can a dimension have a size? Well, imagine if you lived on a really long narrow tube, so narrow that it seemed like you just lived on a one-dimensional line. The second dimension, which you don't notice, has a characteristic size: the circumference of the tube. In fact, I found a picture demonstrating that on google images.

Tests of Newtonian Gravity at Short Distances

That gravity follows an inverse square law is a consequence of the fact that we live in a universe with three spatial dimensions. That is good for us, because an inverse square force is one of the only kinds that can give stable closed orbits. The inverse-squareness of gravity is attested to by the elliptical orbits of planets around the sun, but it was first measured on a terrestrial scale by Cavendish in 1798, who observed the rotation of a torsional pendulum surrounded by massive spheres of lead, which acted as the gravitational source. I once tried this experiment with my undergraduate lab partner Bon, and it was awful.
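As an aside, a couple of lines show why this kind of experiment is so miserable. The masses and separation below are made-up but plausible lab numbers, not the actual parameters of any experiment:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Newtonian attraction between two point masses, in newtons."""
    return G * m1 * m2 / r ** 2

# Two 30 kg lead spheres with centers 20 cm apart (hypothetical numbers):
f = gravitational_force(30.0, 30.0, 0.2)
print(f"{f:.2e} N")  # about 1.5 micronewtons
```

Detecting micronewton-scale forces between macroscopic objects is what makes torsion balances necessary in the first place.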

Within the Randall-Sundrum model of extra dimensions, it is expected that gravity will behave differently over distances smaller than the size of the extra dimensions. This was posited in order to explain the discrepancy between the observed density of dark energy, and the much much much larger prediction based on electromagnetic vacuum energy. So, Kapner and friends from the Eot-Wash group* simply measured Newtonian gravity with a Cavendish-type experiment to shorter and shorter distances, shorter than anyone had measured before, down to 44 microns separation between the source and test masses. The observed dark energy density has a characteristic length-scale of 85 microns, and they managed to get below that.

"We minimized electromagnetic torques by coating the entire detector with gold and surrounding it by a gold-coated shield."
The experiment was designed such that if gravity were not following an inverse square law, there would be an extra torque on the pendulum, which they could then detect. As far as I know, this is the most precise lab-scale gravity experiment there is. They found no departure from the inverse square law down to that length, but were still able to learn some things. By fitting their results to a Yukawa-modified gravitational potential (Newtonian gravity plus an exponentially decaying extra part), they could put experimental bounds on the strength and characteristic length scale of the non-observed deviation. They found that a force of equal strength to gravity would have to be confined to length scales smaller than 56 microns, or else it would have been detected. So, we know that mites crawling on human hairs are not subject to extra-dimensional gravitational forces.
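The Yukawa parametrization is easy to play with numerically. This is my own illustration of the functional form, not the Eot-Wash analysis itself: differentiating the potential gives the size of the extra force relative to Newtonian gravity, which dies off quickly beyond the range lam.

```python
import math

# Yukawa-modified potential: V(r) = -(G M m / r) * (1 + alpha * exp(-r / lam)),
# where alpha is the strength of the new force relative to gravity and lam
# is its range. Differentiating gives the extra force relative to Newton's:
# F_extra / F_Newton = alpha * exp(-r / lam) * (1 + r / lam).

def fractional_deviation(r, alpha, lam):
    """Extra (Yukawa) force relative to pure Newtonian gravity at separation r."""
    return alpha * math.exp(-r / lam) * (1.0 + r / lam)

lam = 56e-6  # 56 microns: the quoted bound on the range for alpha = 1
for r_um in (44, 85, 200, 1000):
    r = r_um * 1e-6
    print(f"r = {r_um:4d} um: F_extra/F_Newton = {fractional_deviation(r, 1.0, lam):.2e}")
```

At separations comparable to the range, such a force would rival gravity itself, which is why the experiment would have seen it; a millimeter away it is utterly negligible.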

The Large Hadron Collider

The Large Hadron Collider (LHC) at CERN on the French-Swiss border was built to smash protons (and sometimes lead nuclei) together at almost the speed of light to see what comes out. The main thing they were looking for was the Higgs Boson, which was found in 2012. It also looks for other undiscovered particles, which I call splorks, and deviations from the predictions of the Standard Model of particle physics, which may be indicative of "new physics" going on in the background. One of these new physicses is Large Extra Dimensions. How are post-collision particle entrails related to extra dimensions?

A graviton penetrating the brane that is our universe. I'm not sure if this picture makes things more or less clear. Source.

In the paper I'm focusing on, they looked at the number of monojet events detected. A jet in particle physics is a system of quarks that keeps creating more pairs and triplets of quarks as it is pulled apart. Quarks are weird. A monojet is a single such jet; these are typically produced at the same time as Z bosons, and I gather they are less common than dijets. The ATLAS collaboration, one of the two bigger experiments at the LHC, under the alphabetic supremacy of Georges Aad, looked at how a model of large extra dimensions would lead to monojet production. In this model, we live on a three-dimensional (mem)brane in a higher-dimensional universe. Gravity can propagate through the bulk of the universe, while other interactions can only happen along the brane. In this scenario, quarks could be created in the collisions paired with gravitons, which are not detectable by ATLAS**. These single graviton-associated quarks would lead to monojet events, in greater number than predicted by the standard model. If this explanation seems incomplete, it is because I do not fully understand how this works.

So Georges Aad and his army of science friends looked at the monojet production data, scanned for excesses above the standard model, and tried to fit them to the extra-dimensional brane quark model. They found that for two extra dimensions in this model, the upper experimental bound on their size was 28 microns. It surprised me that this is the same order of magnitude as the Newtonian gravity measurement. For larger numbers of dimensions, the bound gets smaller.

There is a ridiculous amount of data produced at the LHC and a ridiculous number of ways to analyse it. For each of the many theories of extra dimensions out there, there may be multiple ways to probe it with LHC data, but I have just focused on one.

Pulsar Constraints

The Fermi Large Area Telescope is a space-borne gamma ray observatory. It gives a lot of data regarding gamma ray emission from various sources throughout the universe, including from pulsars. The collaboration wrote a paper trying to constrain a model that predicts how the gamma emission of a neutron star would be different if there were extra dimensions. In the model they consider, gravitons are massless only in the bulk universe, but gain mass in our brane. This allows them to be trapped in the gravitational potential of a neutron star, where they can decay into gamma rays, which could then be detected. For two extra dimensions, their analysis is sufficient to rule out large extra dimensions above 9 nanometers, and smaller for more dimensions. This is a much tighter bound than the LHC and gravity data.

Gravitational Radiation from Cosmic Strings

This one is premature, because cosmic strings have not been detected and may not exist, and we cannot yet detect gravitational radiation. Cosmic strings are not the same thing as string theory strings; they are more like boundaries between different regions in the early universe that got shrunk as the universe homogenized, until what was left was an un-get-ridable*** string. They are analogous to grain boundaries between different regions in a crystal.

O'Callaghan and Gregory computed the gravitational wave spectrum emitted by kinks in these strings. They did this assuming three spatial dimensions, and then assuming more. They found that these waves could exceed the detection threshold of future gravitational wave detectors, and that the signals would be weaker if there were more dimensions. This is a longshot with regard to detecting both cosmic strings and extra dimensions, even granting that gravitational waves can someday be detected.


Even though extra dimensions have not been detected, we can still get information about how big they can be based on what we have not detected, given our ability to detect things. However, the way the data is analysed depends on what model of extra dimensions is being considered.

*A combination of Lorand Eotvos and the University of Washington.
***topologically protected

The Sophomore's Spindle: All about the function x^x

$x^x$. x-to-the-power-of-x. For natural numbers, it grows as 1, 4, 27, 256, 3125. It is not the most important or useful function, but it has a few cool properties, and I'll discuss them here. It is mostly known for growing really fast, but when plotted as a complex function over negative numbers it has a cool shape. A special integral of this function is known as the "Sophomore's Dream," and in one of my ill-fated math attempts around 2010, I attempted to generalize that and find its anti-derivative. There isn't much centralized information about the $x^x$ function, so I hope to compile it here in this blog post.

The $x^x$ spindle, discussed below. Image source.

Other forms and names

It is generally hard to search for information about this function because the results include any time the letter x appears twice in succession. The name "self-exponential function" returns some results.

If repeated addition is multiplication, and repeated multiplication is exponentiation, repeated exponentiation is called tetration. The notation is a flipped version of exponentiation: $x^x$=$^{2}x$. So, our function here could be called second-order tetration. This also continues the property* of the number 2 that $^{2}2=2^{2}=2\times 2=2+2$.

The other common way to represent this function is as an exponential, rewriting it as $e^{\log x^x}=e^{x\log x}$. This makes it much easier to manipulate, because now only the exponent is variable rather than both the base and the exponent.


As can be seen in the numbers above, this function grows really fast: more than an order of magnitude per integer increase, which is just a way of saying it grows faster than the exponential function (because the base also increases). It also exceeds the factorial at every natural number greater than one.

This function grows really fast.

In the negative: the $x^x$ spindle. 

The function can be calculated easily for positive integers, and also for negative integers, over which the function rapidly decays. However, for negative non-integers, the function's output is not always real (a simple case, $(-0.5)^{-0.5}$, is purely imaginary). In fact, it is only real for negative x if x is a rational number whose denominator (in lowest terms) is odd. To figure out how to calculate this function for negative numbers, we'll go hyperbolic and then try using logarithms. The function $e^x$ can be written as $\cosh(x)+\sinh(x)$, the sum of the hyperbolic cosine and sine. That means we can write our function as:

$x^{x}=e^{x\log x}=\cosh(x\log x)+\sinh(x\log x)$
The logarithm is not unambiguously defined for negative numbers, but by exploiting Euler's identity and some logarithm rules, we can write $\log(-y)=\log(-1)+\log(y)$ and $\log(-1)=\log(e^{\pi i})=\pi i$. Therefore, $\log(-y)=\log(y)+\pi i$. This is cheating a bit, because you can multiply $\pi i$ in that exponential by any odd integer and still satisfy Euler; this is merely the first choice of infinitely many possibilities, which we'll stick with for now. Anyway, this means that if x is negative, then we can rewrite our function again:

$x^{x}=\cosh(x\log(-x)+\pi i x)+\sinh(x\log(-x)+\pi i x)$

Now, we use the sum formulae for sinh and cosh, which are $\cosh(a+b)=\cosh(a)\cosh(b)+\sinh(a)\sinh(b)$ and $\sinh(a+b)=\sinh(a)\cosh(b)+\cosh(a)\sinh(b)$. We also remember that $\cosh(ix)=\cos(x)$ and $\sinh(ix)=i\sin(x)$. If we do this expansion, simplify, and group by realness, we find:

$x^{x}=(-x)^{x}\left(\cos(\pi x)+i\sin(\pi x) \right)$

So, what happens when we plot this function in the negative domain? Its absolute value generally gets smaller, while its real and imaginary parts oscillate with a period of 2. It is purely real for integers, and purely imaginary for half-integers. Another way to plot this would be as a single curve with real and imaginary y-axes, in which case this function would trace out a spiral.

$x^x$ over negative numbers. 
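The formula can be sanity-checked numerically against the principal-branch definition $e^{x\log x}$; this quick script is my own check, not from any reference:

```python
import cmath

# Check x^x = (-x)^x (cos(pi x) + i sin(pi x)) for x < 0, using the
# principal branch of the complex logarithm, i.e. log(-1) = i*pi.

def xx_principal(x):
    """x^x computed as exp(x log x) with the principal complex log."""
    return cmath.exp(x * cmath.log(x))

def xx_formula(x):
    """The real/imaginary decomposition derived above."""
    return (-x) ** x * (cmath.cos(cmath.pi * x) + 1j * cmath.sin(cmath.pi * x))

for x in (-0.5, -1.0, -2.5, -3.0):
    print(f"x = {x}: {xx_principal(x):.4f} vs {xx_formula(x):.4f}")
```

Both columns agree; at the half-integers the result is purely imaginary, as claimed.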

However, this assumes our basic choice of the negative logarithm. We have a whole family of choices. Mark Meyerson realized something interesting: the functions for the various choices of logarithm follow the same envelope with different frequencies, such that all of them together trace out the shape of a vase (which he calls the $x^x$ spindle). As more and more values of the logarithm are added, the spindle gets filled out (see the first picture).


There is no simple function that is the inverse of $x^x$. However, there is a special function that was essentially designed to be its inverse. The Lambert W function is defined such that $x=W(x)e^{W(x)}$. The inverse of $x^x$ is:

$f^{-1}(x)=e^{W(\log x)}=\frac{\log x}{W(\log x)}$

There are two branches of the W function, and the inverse of $x^x$ swaps over to the other branch below x=1/e, so that each branch of the inverse passes the vertical line test.

I don't think this is very interesting; it basically says "the function that inverts the self-exponential function is defined as the function that inverts the self-exponential function." I guess you could call it the xth root of x, which in this case is not the same as $x^{1/x}$.
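For the curious, this is easy to play with in scipy, whose `lambertw` gives the principal branch $W_{0}$. Writing $y^{y}=s$ and taking logarithms gives $W(\log s)=\log y$, hence $y=e^{W(\log s)}$:

```python
import numpy as np
from scipy.special import lambertw

def inverse_xx(s):
    """Solve y**y = s for the upper branch (y >= 1/e), via the principal W_0."""
    return np.exp(lambertw(np.log(s)).real)

y = inverse_xx(4.0)   # 2**2 = 4, so this recovers 2
assert abs(y - 2.0) < 1e-9
assert abs(y ** y - 4.0) < 1e-9
```

The other branch of the inverse, below the minimum at $x=1/e$, would use scipy's `lambertw(..., k=-1)` branch instead.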


When students are first learning calculus, they learn that the derivative of a power function $x^n$ is simply $nx^{n-1}$. They also learn that the derivative of an exponential function $n^x$ is proportional to itself, with the constant of proportionality being the natural logarithm of the base: $n^{x}\log(n)$, with $n=e$ being a special case. It is not immediately obvious which rule to apply to $x^x$, although the second one is closer to being correct.

If we rewrite the function as $x^{x}=e^{x\log{x}}$, its derivative can be found with the chain rule. The first step in the differentiation just gives us back $e^{x\log{x}}=x^x$, and that gets multiplied by the derivative of $x\log{x}$, which from the product rule is $(1)\log{x} + (x)\frac{1}{x}=1+\log{x}$. Multiplying the derivative of the innie by the derivative of the outie, we find:

$\frac{d}{dx}x^{x}=x^{x}\left(1+\log{x}\right)$

By finding when this equals zero, we can find the minimum and turning point of the function. This occurs at $\log(x)=-1$, i.e. $x=1/e=0.367...$, and the minimum value is $e^{-1/e}=0.692...$ One notable thing about this derivative is that it grows faster than the function itself, contrary to the derivative of a power function: the rate of change of the function is even more divergent than the function.
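A quick numerical check of this derivative, and of the minimum, in Python:

```python
import math

def f(x):
    return x ** x

def df(x):
    """Derivative from the chain rule: x^x * (1 + log x)."""
    return x ** x * (1 + math.log(x))

# compare against a central finite difference at a few points
h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - df(x)) < 1e-4

# the minimum sits where 1 + log(x) = 0, i.e. at x = 1/e
xmin = math.exp(-1)
assert abs(df(xmin)) < 1e-10
assert abs(f(xmin) - 0.6922) < 1e-3
```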

Integrals: The Sophomore's Dream

One of the most interesting aspects of this function crops up when you try to integrate it. It actually comes from the reciprocal cousin of the function, $x^{-x}$, but the same phenomenon applies to the function itself. It is the identity:

$\int_{0}^{1}x^{-x}dx=\sum_{n=1}^{\infty}n^{-n}$

There is no immediately obvious reason why that should be true, but it is (both sides converge to roughly 1.29). The name Sophomore's Dream is an extension of the "freshman's dream," the incorrect identity $(a+b)^{n}=a^{n}+b^{n}$. It was first proven by Johann Bernoulli in 1697. There is a similar identity for regular $x^x$, which is not as neat:

$\int_{0}^{1}x^{x}dx=\sum_{n=1}^{\infty}(-1)^{n+1}n^{-n}$

This is proven by expanding the function as a series**:

$x^{-x}=e^{-x\log{x}}=\sum_{n=0}^{\infty}\frac{(-x\log{x})^{n}}{n!}$

To find the integral, each term is integrated individually. Wikipedia, the free encyclopedia, gives a decent proof of how to integrate these terms, both in modern notation (that involves gamma functions) as well as with Bernoulli's original method. It's important that the limits of integration are zero and one, because the log(1) term kills some of the extraneous nasty terms in the antiderivative. The main step in the termwise integration involves a change of variable that turns it into the integrand that leads to the factorial function.
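Both identities are easy to verify numerically. Here is a check with scipy (the sums converge so fast that twenty terms is plenty):

```python
from scipy.integrate import quad

# Sophomore's Dream: the integral of x^-x over (0, 1) equals the sum of n^-n
integral, _ = quad(lambda x: x ** -x, 0, 1)
series = sum(n ** -n for n in range(1, 20))
assert abs(integral - series) < 1e-7
assert abs(series - 1.2913) < 1e-4

# the less neat alternating version for x^x itself
integral2, _ = quad(lambda x: x ** x, 0, 1)
series2 = sum((-1) ** (n + 1) * n ** -n for n in range(1, 20))
assert abs(integral2 - series2) < 1e-7
```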

Around 2009 or 2010, I thought I was clever because I found a way to express the indefinite integral of the $x^x$ function, one that could be evaluated at values besides zero and one. Basically, it involved using something called the incomplete gamma function, which is related to the factorial, to express the integral of each series term in a general form. My solution was:

However, somewhat like the inverse, this is almost tautological and doesn't add much nuance. Still, I think I was the first person to figure this out. I tried writing a paper and submitting it to the American Mathematical Monthly (which is not at all the right journal for this) and got a rejection so harsh I still haven't read it, almost six years later. However, in 2014 some Spanish researchers wrote a similar paper about the self-exponential function, and they came to the same conclusion as I did regarding the incomplete gamma functions. So, I'm glad somebody got it out there.

Something else that's kind of interesting involving integrals and this function: the area under $x^{-x}$ over all positive numbers is like 1.99. I'm not sure if that's a coincidence or not.


Applications

Basically none.

The most common place the $x^x$ term pops up is in the Stirling approximation to the factorial, which is useful in statistical mechanics and combinatorics. It also gives a sense of the relative magnitude of self-exponentiation and factorials: one is literally exponentially smaller than the other.
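As a quick sanity check on both claims (that Stirling's formula is a good approximation, and that $n!$ sits an exponential factor below $n^n$), in Python:

```python
import math

def stirling(n):
    """Stirling's approximation: n! is roughly sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (10, 20, 50):
    # the approximation is already good to better than 1% at n = 10
    assert abs(stirling(n) / math.factorial(n) - 1) < 0.01
    # n!/n^n shrinks like e^-n times a slowly growing sqrt factor,
    # so self-exponentiation outruns the factorial exponentially
    ratio = math.factorial(n) / n ** n
    assert abs(ratio / (math.exp(-n) * math.sqrt(2 * math.pi * n)) - 1) < 0.01
```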

In graduate statistical mechanics, there was a question about a square box, an n x n grid, with a fluid in it. At each point x along the box, the fluid could be at some height between 0 and n. So, the total number of states this fluid could have was $n^n$.

If the fluid must be continuous and touch the bottom, it can take on $5^5$ different configurations.
If anyone knows any other applications of this function, let me know.

So just to summarize, the function $x^x$ grows really fast and has a few cool properties, but overall isn't the most useful of functions.

*The solution is 4.

**I had originally erroneously called this a Taylor series.

Tuesday, 10 November 2015

Bursting Bubbles Breach Blood-Brain Barrier: Blogger Bequeaths Belated Boasts

There was a story in the news today about the blood-brain barrier being bypassed* in order to deliver chemotherapy drugs directly to a brain tumour, using a combination of microbubbles and focused ultrasound. I worked on this project for over a year between finishing one university and starting another university (2008-2009), and it was good to see it finally in use. Even though they have achieved a medical feat, the phenomena behind it are quite physicsy.

Bubbles oscillating near red blood cells. From one of the news articles.
Focused ultrasound is what it sounds like: applying high intensity sound waves to a specific part of the body. Constructive interference allows the sound intensity to be maximized within the body rather than at the surface, and if the waves are strong enough then the tissue will start heating up as some of the acoustic energy is absorbed. It often fills the same medical niche as radiation therapy, except without the typical side effects of radiation exposure. My first university summer job involved working on the electronics for a focused ultrasound treatment for prostate cancer. Having one's prostate burned by sound waves from the inside may sound unpleasant, but it's not as unpleasant as having the whole thing removed.

High intensity focused ultrasound thermal therapy. All other images I could find involved detailed 3D renderings of the rectum. There's more to HIFU than just butts. Image source.

After graduating from university, I got a full time job in the Focused Ultrasound Lab at Sunnybrook Hospital in Toronto. There, they had developed what was essentially a helmet full of hundreds of ultrasound transducers, designed for constructively targeting sound waves inside the brain. In addition to designing this device, they had to solve such problems as "How much is the skull going to refract the waves?" and "How do we avoid burning bone before flesh?" This device has since been used to therapeutically zap through the skull, but that's not what the news is about today.

Inside the transducer dome helmet array. The head goes in the middle.

My specific project involved microbubbles: really small bubbles (duh) that are used as contrast agents during diagnostic ultrasound. They are injected into the bloodstream (they are far too small to cause embolisms, about the size of red blood cells), and when an ultrasound wave hits them, they contract and expand in phase with the applied pressure wave, re-emitting sound waves as they drive the surrounding fluid with their oscillation, which can then be detected. All the videos of this on youtube suck, so out of protest I won't post one. My supervisor, Kullervo Hynynen, wanted to move beyond bubble diagnostics and into bubble therapy. His plan, as we have seen, was to use bubbles to open the blood-brain barrier and deliver drugs to the brain.

Because most of the articles I write are about physics and math, I'll remind the readers that the blood-brain barrier is not a physical separation between the brain and the arterial network; rather, it is the impermeable network of proteins that forms around the walls of blood vessels inside the brain and prevents molecules from passing from the bloodstream into the brain. This is useful for preventing blood contagions from affecting the brain, but makes it hard to get drugs into it (cocaine is a notable exception).

Two diagrams of how this works, to hammer the point across. Bubbles are injected into the blood stream, focused ultrasound makes them oscillate and/or collapse, that collapse opens the vessel wall.
The general plan was to use the energy absorption of ultrasound by bubbles to raise the temperature in their vicinity, as well as to create shockwaves from their collapse. It was hoped that either the increased temperature would cause the proteins making up the barrier to relax their grip, or just to violently shear them away.  At the risk of repeating what I talked about in "My Journey into the Hyperbubble," My research involved developing a theory to describe bubble oscillation inside blood vessels, then apply that to a 3D model of the blood vessels in a rat brain, figuring out how much heat would be transferred to the brain based on bubbles oscillating in those blood vessels.

From my paper, a rendering of the heat distribution inside the 3D rat brain. This rotating gif is way cooler and you can see the hot-spots in the blood vessels from the bubbles, but I'll only include a link because it'll kill somebody's data plan: HERE
My solution involved solving a modified version of the Rayleigh-Plesset equation (which is itself derivable from the Navier-Stokes equation) to simulate the bubble oscillation dynamics, calculate the power radiated from those dynamics (through a thermal damping term), and use that power as an input for the Pennes bio-heat equation, which is like the regular heat equation except with a blood flow term that we chose to ignore anyway. The idea was that the results of my simulations would inform future neurologists how much ultrasound to use and at what frequency to get the best results and not fry the person's brain.

The results of one of my simulations from 2009, showing the temperature around the bubbles in the vessels increasing over time.
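To give a flavour of the bubble-dynamics half of this, here is a toy sketch of the standard Rayleigh-Plesset equation integrated with scipy. The parameters are illustrative water-like values I made up for this post, and this is the textbook equation, not the modified vessel-confined version from my paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative, made-up parameters (roughly water, a 1-micron bubble,
# a 1 MHz / 50 kPa drive); NOT the values or equation from the paper.
rho, mu, sigma = 998.0, 1.0e-3, 0.072   # density, viscosity, surface tension (SI)
p0, gamma = 101325.0, 1.4               # ambient pressure, polytropic exponent
R0, f_drive, pa = 1.0e-6, 1.0e6, 50e3   # rest radius, drive frequency, drive amplitude

def rayleigh_plesset(t, y):
    """rho*(R*R'' + 1.5*R'^2) = p_gas - p_inf - 2*sigma/R - 4*mu*R'/R."""
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * gamma)
    p_inf = p0 + pa * np.sin(2 * np.pi * f_drive * t)
    Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 3e-6), [R0, 0.0], max_step=2e-9)
print("radius range (m):", sol.y[0].min(), sol.y[0].max())
```

The acoustic power radiated by these oscillations is what would then feed a heating calculation like the Pennes bio-heat equation.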

After I started grad school, my particular project (the localized heating simulations, not the whole research program) didn't really go anywhere. Oh well. However, the Focused Ultrasound Lab kept working on developing this treatment. They apparently used it to treat Alzheimer's in mice.

Today, some news articles cropped up on Facebook about how this treatment has been successfully used to bypass the blood-brain barrier in humans, which is a pretty big milestone for any biomedical development process. There is not yet a peer-reviewed journal article on the topic, but from the news articles, they used this procedure to deliver a chemotherapy agent to a patient's brain tumour (without having to flood the entire body with it, one of the main issues with chemo).

I do not know whether my calculations factored into the patient's treatment. I would hazard to guess that they did not, because they were never verified experimentally and I don't have enough faith in my simulations to recommend going directly from simulation output to sonic brain zapping. However, it is a good feeling nonetheless to see something that I worked on in its early stages finally come to fruition. It is a good example of how a good old fashioned physics problem, an application of fluid dynamics, acoustics, and heat transfer, can start saving lives in less than a decade.

*this one is not intentional, this topic is just really alliterative.

Sunday, 8 November 2015

The host of Daily Planet said something nice to me.

In March, I was interviewed on the Canadian Discovery Channel show Daily Planet about my falling-through-the-Earth paper. The interview is here. Later that week, the host of the show, Dan Riskin, emailed me asking for help replicating my calculations, so I told him about Newton's shell theorem and how it applied in this scenario. Last week he was answering questions on reddit, and I reminded him of his email and asked him what scientists should do to improve the state of science journalism. He said:

Yes! That was a great day.
You did a paper about a person falling into a hole in the ground and then falling all the way to the centre of the earth (the hole hypothetically goes right through the middle). You figured out how long it would take to get to the centre.
I loved this question and spent half a day trying to find the answer before I gave up. I had a terrible computer model that kept throwing my person into space.
Then you taught me that so long as you're inside a sphere the gravitational attraction of the sphere itself, beyond a radius equal to your distance from the centre, cancels out. And I think you told me Newton calculated that. Do I have that right?
You did a great job improving science outreach by doing a ridiculous but fun question and then deriving the answer. I loved that and have told many people about it. You're my hero, a bit. Do more of that.
It is often said that the word hero is thrown around far too much these days. But here, we have an unbiased external source of that appellation.

Friday, 6 November 2015

A Trick for Mentally Approximating Square Roots

You won't believe this one simple trick for calculating square roots. Calculators hate me.

If you're the kind of person who needs to quickly calculate the square root of something, whether for finding your as-the-crow-flies distance to a destination through a city grid, or for determining whether the final score of your sports game was within statistical error, you might find this trick handy. It is not particularly advanced or arcane; it is just linear interpolation.

It requires you to know your perfect squares, to be able to do a quick subtraction in your head, and to do some even simpler addition. It is effective to the extent that you can do these things quickly.

Consider some number $S=Q^2$, and you want to find Q. Unless S is a perfect square, Q will be an irrational number, so any expression of it in terms of numbers will be an approximation. First find the integer N such that $N^{2}< S<(N+1)^{2}$. e.g. if S is 70, N is 8 because 70 is between $8^2$ and $9^2$. We approximate the irrational part of the square root, (Q-N), as a fraction found by linear interpolation. The denominator of the fraction will be the distance between the two perfect squares surrounding S, $(N+1)^{2}-N^{2}=2N+1$. The numerator is simply ($S-N^2$). Thus, to approximate the square root of a number, simply calculate:

$Q\approx N+\frac{S-N^{2}}{2N+1}$
Demonstrating how this works for the square root of 70. The actual square root of 70 is marked off with a line.

So for our example of 70, N=8, S-N$^2$=70-64=6, 2N+1=17, so $Q\approx 8+\frac{6}{17}\approx 8.35$. This is roughly 0.2% away from the actual answer, about 8.36.
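In code, the whole trick is one line of arithmetic (math.isqrt, available since Python 3.8, finds the lower perfect square):

```python
import math

def approx_sqrt(s):
    """Mental square root: linear interpolation between neighbouring perfect squares."""
    n = math.isqrt(s)                  # largest n with n*n <= s
    return n + (s - n * n) / (2 * n + 1)

assert abs(approx_sqrt(70) - (8 + 6 / 17)) < 1e-12
# roughly the 1% claim from the text (the worst offender, s = 12, is just over 1%)
for s in range(11, 1000):
    assert abs(approx_sqrt(s) - math.sqrt(s)) / math.sqrt(s) < 0.011
```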

Comparing the approximation to the actual square root function between 9 and 25. It's pretty close.

This operation basically assumes that square roots are distributed linearly between perfect squares. This is obviously not the case, but it gets more correct as the numbers get larger. By looking at the error, we can see that the approximation is within 1% for numbers larger than 10, and is worst when the fraction is $\frac{N}{2N+1}$. The worst-case for each perfect square interval decreases inversely with the number.

The error associated with this approximation.
So how fast can this be done? For numbers below 100 it can be done mentally in one or two seconds, with practice. For bigger numbers, it'll probably take a bit longer. Obviously this gives you a fraction and not a decimal expansion. You can roughly guess a decimal expansion from the fraction, but that coarsens it a bit. In the above example, 6/17 is close to 6/18 so I could guess the decimal expansion is about .33.

Googling techniques for mental square roots, the first two results are for finding the roots of large perfect squares, and then there is a Math Stack Exchange post about using a first-order Taylor expansion. My method is more accurate, because the Taylor expansion's error grows as you approach the next perfect square, and I think it is faster as well.

Tuesday, 3 November 2015

Two affiliated posts: biking and DNA origami

I went on a bike ride a few weeks ago with a bunch of people and we split up and tried to rendezvous and failed. Afterwards I used Strava to figure out why. Read more here:

I also wrote an article on PhysicsForums about a cool paper that I saw yesterday, about DNA origami. It's here:

Monday, 26 October 2015

World Records in Weightlifting and Powerlifting

A few weeks ago, I looked at world records in running and swimming, to see what information I could get out of them and what I could learn about human athletic performance. In this post, I'll look at world records in weightlifting and powerlifting, and see what there is to see. Before I write this, I have a general sense of what the data will look like (I know that bigger people tend to be stronger, and men tend to be stronger than women), but I don't yet know what nuances will emerge. The overarching question is: can we learn about an extremely complicated system, a human being, from simple data, like how much they can lift?

When I do this analysis, I make the assumption, as before, that the current world is close to the pinnacle of human achievement, and that slight improvements in a given record will not change the trends that much. This is not as true for powerlifting as it is for running.

The difference between weightlifting and powerlifting is that powerlifters lift more weight and weightlifting requires more power. Powerlifting involves the squat, bench press, and deadlift, and weightlifting involves the snatch and the clean and jerk, two ways of lifting a weight from the ground to overhead. Weightlifting used to involve the clean and press as well, but it was removed because people started leaning really far back to make the lift easier and it became impossible to judge.

From left to right: Squat, bench press, deadlift, clean and jerk, snatch.

Powerlifting data was taken from the website, which is generally well updated. It's important to note that I focus on "raw" records, meaning the lifters don't use supportive equipment, which can vastly increase the amount of weight lifted. I don't particularly care whether they were on drugs or not; I want to know the limits of human achievement. Weightlifting data was taken from Wikipedia, with the caveat that the records were annulled in the 1990s when the weight classes were redistributed, and not all the old ones have been beaten. In the case of the "superheavyweight" category, I use the weight of the record holder rather than of the weight class.

The naive trend we expect is based on the square-cube law: if you make a person bigger by some factor x, their weight will increase as $x^3$ but their strength only as $x^2$, so the increase in strength with respect to weight should be roughly the two-thirds power. In an article I wrote on scaling laws, I showed that men's deadlift records did follow this prediction until the athletes start getting fattier. This ignores many biomechanical effects, for example the benefits or disadvantages of having longer limbs when performing a lift.

Let's first look at the world record data for powerlifting.
Powerlifting world records.
First, we confirm what was expected: bigger athletes are stronger, and men are stronger than women. There is a trend evident in the squat and deadlift that is the same between sexes: the record increases with weight and reaches a plateau. This plateau begins at a transition weight where the athletes stop getting more muscular as they get heavier, and start getting fattier. This occurs at about 242 pounds for men and 165 pounds for women. It's interesting that this plateau is not really evident in the bench press data, which shows a roughly smooth increase (the men have an outlier at 242 lb). Why are heavy bench pressers immune to fattiness? Perhaps the reduction in bar-travel distance due to a fattier chest helps, or perhaps it's the existence of bench-only competitions that avoid the fatigue of squatting before benching. I don't know.

Champions in the second-heaviest and heaviest weightlifting categories.

I have often heard that women have comparatively stronger lower bodies than men. Is this supported by the data? If we average the female:male ratios across all coincident weights for the three lifts, they are roughly the same for the squat and the deadlift, about 0.71 ± 0.02, whereas the bench press is a lower 0.65 ± 0.01.

There are two fairly extreme outliers evident in the squat data: the lightest man, Andrzej Stanaszek, is about four feet tall and has considerably different biomechanics; the heaviest woman, April Mathis, is just much better than any other female raw powerlifter. Excluding Andrzej and April, there is a downward trend present in the squat ratio data: large men get better at squatting compared to large women; I believe that this is because the adiposity transition occurs at a lower weight for women.
Squat ratios.
I tried to measure a "legginess quotient" by dividing the squat records by the bench press records, but there isn't much of a trend. The average ratio is 1.27 for women and 1.26 for men.

When I try to compare trends between different lifts across weight classes, or between sexes, I come to the conclusion that the assumption upon which I base this analysis is violated: the "raw" powerlifting records do not serve as a proxy for the pinnacle of human performance, and there is significant person-to-person variability that skews the data. This is in part based on my choice to focus on "raw" records, which typically have fewer competitors. To smooth out the stats, I will look at the records for the raw "total," which is the sum of the three lifts. This at least will allow me to more easily compare apples to apples, because each data point is just one person. It would be good to compare all three lifts for each record total, but the data isn't in a neat little package.

Powerlifting Totals, in linear and logarithmic axes. The superimposed lines are the 2/3 power law. Also I screwed up the colours.
These trends look less scattered than each of the individual lift records. Doing my favourite thing and fitting a power law to the data, excluding the heaviest category, we get a scaling exponent of 0.58±0.05 for men and 0.7±0.1 for women. The naive expectation is 0.66. Another way to look at this is by dividing out by body weight, and looking at how many times their own weight they lifted. This generally decreases, for reasons I mentioned above. We also see how good April Mathis is: she rises significantly above this decreasing trend.

How many times their body weight they lifted.
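For anyone who wants to try this at home, a power-law fit is just a straight-line fit on log-log axes. The numbers below are synthetic stand-ins that I generated around a 2/3 power law, not the actual records:

```python
import numpy as np

# Synthetic example data: body weights (lb) and hypothetical totals (lb)
# following roughly a 2/3 power law with 2% noise -- NOT the real records.
bodyweight = np.array([123, 132, 148, 165, 181, 198, 220, 242])
rng = np.random.default_rng(0)
total = 55 * bodyweight ** (2 / 3) * rng.normal(1.0, 0.02, bodyweight.size)

# a power law y = A * x^k is a straight line in log-log space:
# log(y) = log(A) + k * log(x)
k, logA = np.polyfit(np.log(bodyweight), np.log(total), 1)
print(f"fitted exponent: {k:.2f}")   # recovers something close to the true 2/3
```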
When we look at weightlifting, things get a bit clearer. There are no longer fluctuations about the trend due to small numbers or sub-maximal performance. Only the largest athletes fall below the general trend due to fattiness. As a Canadian living in the US, I will now switch from pounds to kilograms as I switch from powerlifting to weightlifting.
Weightlifting record data. The men's snatch champion is a really big dude.
Something interesting is that for weightlifting, the women tend to get stronger-er with weight compared to men. For both the clean and jerk and the snatch, men's records increase with roughly the square-root of weight, whereas for women it's the .65 and .75 power respectively, which I did not expect and can't currently explain. There is a weak downward trend in the sex ratios, but nothing to get too analytical about. Looking at the ratio of snatch to clean and jerk, there is generally no trend with respect to weight, but the ratio is 0.829 ±0.007 for men and 0.801±0.007 for women, a small but statistically significant difference. I do not know the physiological reason behind this, but it strikes me as the opposite of the powerlifting sex ratio trend.

I would say that we did not learn anything too-too interesting from looking at this data. The records are handwavingly near the prediction of the square-cube law, and the adiposity transition is quite visible when that law fails. Probably the most interesting things I learned were that the bench press sex ratio is lower than the squat and deadlift ratios, and the snatch:jerk ratio is higher for men than for women, both by a small but significant amount. Generally, I think the sport of raw powerlifting has not developed enough to make firm conclusions about a lot of these trends.

Monday, 19 October 2015

Blog Article on PhysicsForums: Fun With Self-Avoiding Walks

This is the story all about how my random walk simulations got flipped, turned upside down.

On an unrelated note, I was doing some molecular dynamics simulations and one of the polymers ended up looking like some kind of undersea explorer.

Thursday, 15 October 2015

Mildly Interesting Math Finding: Areas between trivial zeros of the Riemann Zeta function.

One of the reasons I started this blog was to talk about things that I have discovered that are interesting enough to share but not interesting enough to publish. This is one of those things. It's something I found out about the Riemann Zeta function when I was bored and playing around with Maple. I apologize in advance because the LaTeX on this page looks awful.

The Riemann Zeta function, $\zeta(x)$, is the sum of every positive integer to the power of $-x$:

$\zeta(x)=\sum_{n=1}^{\infty}n^{-x}$

It is undefined at x=1, at x=2 it evaluates to $\pi^{2}/6$, and it generally decreases over the positive numbers. When evaluated for complex numbers, using the necessary analytic continuation of the function, it occasionally equals zero. All known complex Zeta zeros have a real part of 1/2, and if there are no exceptions to this trend, it has profound implications for the distribution of prime numbers. This is known as the Riemann hypothesis, and its solution is worth a million dollars. Recently, there was a viral video about how $\zeta(-1)=1+2+3+4...=-1/12$, which isn't quite right.

Less interesting than the non-trivial zeros are the trivial zeros: the negative even numbers. These make the Zeta function equal zero because the functional equation that extends it to negative numbers includes a factor of $\sin(\pi x/2)$, which vanishes at every even integer.

One day I was plotting the Riemann Zeta function over negative numbers for some reason. It looks like this:

The less popular strip of the Riemann Zeta function.
After some initial adjustment, the area subtended by the curve between each pair of zeros gets bigger and bigger. In fact, beyond where this graph is truncated on the left, it gets too big to reasonably show on a graph.

Out of curiosity, I calculated the integrals of the curve between each pair of adjacent zeros (we'll call them G), and looked at them. They grow extremely rapidly, blowing the exponential function out of the water and surpassing $x^x$ at around the 70th interval.

The numbers are too big for my regular graph-making program to handle.
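If you want to reproduce the integrals, mpmath (the arbitrary-precision library I'd use rather than my old Maple worksheet) will happily evaluate $\zeta$ on the negative reals:

```python
import mpmath as mp

mp.mp.dps = 30  # extra working precision; the early integrands are tiny

def G(i):
    """Magnitude of the integral of zeta between the trivial zeros -2i and -2(i+1)."""
    return abs(mp.quad(mp.zeta, [-2 * (i + 1), -2 * i]))

g = [G(i) for i in range(1, 11)]
ratios = [g[i] / g[i - 1] for i in range(1, len(g))]
print("G_1 =", mp.nstr(g[0], 3))
print("ratios:", [mp.nstr(r, 3) for r in ratios])
```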

I have a vague recollection of what I did with these numbers (I think it was 2010?), so I tried to re-create it when I thought of this exercise tonight. Let's look at the ratio of each interval to the one before it.

This is a pretty well-behaved function, and if you look at it on logarithmic axes you can squint a straight line into existence. This implies that the ratio is described by a power law, and if you do a naive power-law fit you get an exponent of like 1.97. That's almost quadratic! Performing a quadratic fit to this data gives the function:

$\frac{G_{i}}{G_{i-1}}=0.101\,i^{2}+0.298\,i-0.036$

There is some uncertainty on the intercept term, but the quadratic and linear coefficients are pretty tight. The R-squared coefficient is 1; it is basically a perfect fit.

So, this tells us something:

$G_{i}=\left(0.101\,i^{2}+0.298\,i-0.036\right)G_{i-1}$

Coupled with the initial value for i=1, $G_{1}=0.011$, we can find the magnitude of each interval iteratively:

$\int_{-2(i+1)}^{-2i}\zeta(x)\,dx=(-1)^{i+1}\,0.011\prod_{n=2}^{i}\left[0.101\,n^{2}+0.298\,n-0.036\right]$

Interestingly, this product has a closed-form solution in terms of gamma functions of the roots of the quadratic polynomial $an^{2}+bn+c$ describing the ratio:

$\frac{a^{i}\,\Gamma\left(\frac{2ia+b-\sqrt{b^{2}-4ac}}{2a}\right)\Gamma\left(\frac{2ia+b+\sqrt{b^{2}-4ac}}{2a}\right)}{a^{2}\,\Gamma\left(\frac{4a+b-\sqrt{b^{2}-4ac}}{2a}\right)\Gamma\left(\frac{4a+b+\sqrt{b^{2}-4ac}}{2a}\right)}$

I plugged in the fitting parameters for a, b, and c, but didn't have much success predicting the interval integrals; I could get within a factor of two for some of them. This does, however, explain the growth of the interval sizes: it's an exponential times the square of a factorial, so it should grow roughly like $x^{x}x!$.

Taking in all this information, I don't really know what this means: whether it's obvious or interesting, or if it can be derived, or what. It's hard to search for trivial zeros of the Riemann Zeta function because I just get stuff on non-trivial zeros. I don't know if this quadratic ratio behaviour is unique to the Riemann Zeta function, or holds for any sufficiently complex function. I'm treating these numbers as empirical phenomena, and it would be cool to know if there are first principles behind them. This was something I came across stumbling blindly through math land, so if anyone has any insight I'd be happy to read it.

Friday, 2 October 2015

An empirical look at the scaling of world-record running and swimming speeds.

Rather than physics, I'm going to talk about running, and then swimming. I have run a few races in the last year or so, but I am not as knowledgeable about sports physiology as I am about physics.

Out of curiosity, one day I looked up the records for various running races, calculated their average speeds, and plotted them versus distance. Before I attempt to analyse them, let's look at the data.

Record running speed vs. race distance. Linear-log and log-log scales. Red dots are 100 m and marathon.
A few comments about the data. It comes from Wikipedia. Distances range from the 40 yard NFL combine sprint to the 24-hour record. Different races have different standards for timing. Some have official records that are kept, and some are informal. Sometimes the fastest time for a certain distance is actually a subset of a longer race, e.g. running the fastest 30 km as part of a marathon. Somewhat implicit here is the assumption that the current world records are close to the pinnacle of human possibility. Because these are average speeds, they neglect information about speed variation within a race.

If we look at the data, we see four to five different regimes. At least, I do.

Let's look at each of these individually.

1. Acceleration Zone. 0-100 m. For the shortest races, below 100 m, the average speeds are slower because a large portion of the time is accelerating. The shorter the race, the less time is spent at top speed.

2. The Usain Bolt Golden Zone. 100-200 m This is sort of a transition zone between 1 and 3, where it's long enough to reach top speed and coast at it, but not so long that the runners start to slow down. Usain Bolt holds all of these records at roughly the same speed, his fastest average race being the 150 meters which he did at 37.6 km/h. For a long time, the 200 m record was faster than the 100 m, but Usain Bolt effectively tied them up.

3. Sprint Zone. 200-1000 m. They're trying to go as fast as they can, but it doesn't last forever. As the race gets longer, they get slower, with speed falling as roughly the -1/5 power of distance*. I have a feeling terms like "fast-twitch" and "anaerobic" would come into play here if the physiology were to be discussed.

4. Endurance Zone. 1.5-42 km. The races are long enough that conserving energy becomes more important than going all-out. Speed decreases with roughly the -1/13 (-0.08) power of distance. Marathon champions are slightly more than half as fast as Usain Bolt.

5. Pain Zone/Low Stats Zone. 50+ km. Few venture beyond the marathon. This is the ultimate test of human endurance, what separates us from the gazelles. Speed decreases with roughly the -1/5 power again. This may be a different physiological regime than the Endurance Zone, or it could be convolved with the fact that far fewer people run these races, so a true champion has not emerged (at the risk of stereotyping, I find it telling that no supermarathon record holder is Kenyan), and the races are long enough that people have to take bathroom breaks, lowering the average speed.

I mentioned low statistics. What I mean by that is that there are certain races where the record is clearly not the best humanly possible, but because comparatively few people run that distance, that best hasn't been achieved yet. This is responsible for the jumble between half- and full-marathon speeds, and you can see an explicit example in the Sprint Zone: 200, 400, 800 m and (less so) 1 km are commonly contested events, while nobody really runs 500 meters. It falls below the trend of human excellence.
500 meter glory is ripe for the taking. Also, this is a log-log plot, so a straight line means a power law.
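Since a straight line on a log-log plot means a power law, the scaling exponents quoted above can be read off with a linear fit in log space. A minimal sketch of that fit, using made-up speeds that follow an exact -1/5 power (the distances and prefactor are illustrative, not the actual records):

```python
import numpy as np

def fit_power_law(distances_m, speeds_kmh):
    """Fit speed = C * distance**k by linear regression in log-log space.

    Returns (C, k); k is the slope of the straight line on a log-log plot.
    """
    k, logC = np.polyfit(np.log(distances_m), np.log(speeds_kmh), 1)
    return np.exp(logC), k

# Illustrative Sprint Zone-like data obeying an exact -1/5 power:
d = np.array([200.0, 400.0, 800.0, 1000.0])
v = 100.0 * d ** (-0.2)

C, k = fit_power_law(d, v)
print(k)  # recovers the exponent, -0.2
```

Fitting logs rather than the raw speeds is what makes the exponent come out as a slope; the same function applied to the real record tables would give the zone exponents discussed above.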
Now let's look at women's records.

Women's records compared to men's, and the ratio in speeds.
Generally, the fastest women are about 12% slower than the fastest men, with some variation. The small-sample effects are more pronounced, especially around the marathon distance. One could try to squint out a trend in the ratio, but it is essentially constant: a power-law fit gives an exponent consistent with zero, 0.002±0.003 (even closer to zero if you take out ultramarathons). Both the 100 m and the marathon have the same ratio, about 9% faster for men. The best and worst ratios both come from ultramarathons (6% and 20%), indicating sample-size effects. My interpretation of this information is that whatever physiological differences separate sprinters from distance runners, they do not differ between men and women.

However, things get different when we look at swimming records. Pool events range from 50 to 1600 meters, and there are longer outdoor events, which are harder to compare because, I imagine, environmental conditions start to make a big difference. The data look similar to running: longer is slower. It gets interesting when we compare men and women.
The two longest races are outdoors and are suspect. The first six are pool races.

As you can see, the relative advantage men have over women decreases as the race gets longer. This is pretty interesting, because it's found in swimming but not in running. What is the physiological reason for this? I don't know, but if I had to hypothesize, I'd say that in longer races more energy is spent on maintaining buoyancy than in sprints; women are naturally more buoyant than men, so over long distances this starts to matter and tires men out comparatively more.

Now, let's get silly. My squint-analysis tells me that there is a sprint zone below 400 m. For men the scaling exponent is -0.13, while for women it's -0.11 (being less negative is consistent with the previous paragraph). For the endurance zone, it's -0.0526 for men and -0.0485 for women, with an error of about 0.003. These numbers are about two-thirds their running counterparts, and I can't explain why, but I'm sure it's interesting. The scaling exponent of the ratio is about -0.008 for all the data, or -0.014 for only the pool records. If that seems ridiculously small, look at the y-axis of that ratio graph. These numbers are really small, but they are consistently nonzero; the error is roughly 0.001. With this information, I draw a natural conclusion: women will overtake men in a billion kilometer race.
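The joke extrapolation can be made concrete: if the men-to-women speed ratio follows a power law R(d) = R0·(d/d0)^s with s < 0, the two curves cross where R = 1, i.e. at d = d0·R0^(-1/s). A minimal sketch, with R0 and s as assumed illustrative values (a ~10% male advantage at 1 km and the all-data exponent of about -0.008), not fits to the real records; the answer swings wildly with the assumed prefactor and exponent, which is rather the point:

```python
import math

def crossover_distance(R0, s, d0=1.0):
    """Distance (in units of d0) at which R(d) = R0 * (d/d0)**s decays to 1."""
    return d0 * R0 ** (-1.0 / s)

# Hypothetical inputs: men ~10% faster at d0 = 1 km, ratio exponent s = -0.008.
d_star = crossover_distance(R0=1.10, s=-0.008, d0=1.0)
print(f"crossover at roughly {d_star:.3g} km")  # an absurdly long race
```

Extrapolating a two-significant-figure exponent over many orders of magnitude in distance is exactly as rigorous as it sounds.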

Extrapolation is never wrong and always justified.

I am not the only person who has had these ideas: a lot of it was discussed in this paper, which I didn't consult until after writing everything above. They analyse sexual dimorphism in a lot of different sports, and find jumping (especially pole vault) is the most dimorphic...sexually. They also discuss these trends for running, swimming, and speed skating. They also did experiments with pigeons over hundreds of kilometers and found no evidence of sexual dimorphism.

In looking at all this information, I am basically doing what I would do with a set of experimental data where I don't really understand the underlying mechanisms (in this case, how people work). I look at plots, try to split the data into different regions, and look for trends within each region, and then try to figure out what underlying phenomena lead to those trends. Sometimes you can learn stuff this way.

If any physiologists or kinesiologists are reading this and have some ideas about the trends I've discussed, please share them.

*What this means is that in a race 32 times as long as another, the speed would be half. I would recommend reading the intro to my article on animal speeds for an overview of scaling analysis. If you want.
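The footnote's arithmetic checks out directly: a -1/5 exponent means multiplying the distance by 32 = 2^5 halves the speed, since 32^(-1/5) = 1/2.

```python
import math

# With speed ∝ distance**(-1/5), a race 32x as long is run at
# 32**(-1/5) = (2**5)**(-1/5) = 1/2 the average speed.
ratio = 32 ** (-1 / 5)
print(ratio)  # ≈ 0.5
```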