How Our Jugend forscht Project Turned Into a New Test of Einstein’s Theory

On February 25th we presented our project at the regional Jugend forscht competition. Our topic was theoretical physics, which is already unusual in a competition where many projects are experimental or technical. We worked on a question related to black holes and Einstein’s theory of gravity. The project is called The Kerr Trisector Closure.

Posted in Jugend Forscht, Projects

We are presenting KTC at Jugend forscht 2026

As mentioned in our previous updates, we are thrilled to announce that tomorrow marks the unveiling of our Kerr Trisector Closure. After months of dedicated development and meticulous writing, we are satisfied with the result and ready to embark on the next phase of our project.

The Kerr Trisector Closure

The Kerr Trisector Closure is about testing General Relativity in a deeper way than usual. Einstein’s theory says that gravity is not a force, but the bending of spacetime. A rotating black hole, according to General Relativity, is completely described by just two numbers: its mass (M) and its spin (a). These two parameters determine everything about the black hole: how objects orbit around it, how it vibrates after a merger, and how light bends near it. If the theory is correct, all observable effects around that black hole must come from the same spacetime geometry defined by those two numbers.

So far, scientists have tested General Relativity in many ways. They have studied the motion of objects spiraling into black holes (inspiral), the vibrations of black holes after collisions (ringdown), and the images of black hole shadows taken by telescopes like the Event Horizon Telescope. Each of these tests agrees with General Relativity on its own. However, they are usually analyzed separately. No one directly checks whether all three methods give the exact same mass and spin for the same black hole at the same time.

The Kerr Trisector Closure changes this. It compares three independent measurements of the same black hole: one from orbital motion, one from gravitational-wave ringdown, and one from imaging. Each method gives its own estimate of mass and spin, along with some measurement uncertainty. If General Relativity is fully correct, these three estimates should agree within those uncertainties. In other words, they should “close” to a single consistent spacetime description.

To test this, the method defines a statistical quantity called the closure statistic, $T^2$. This number measures how far apart the three measurements are, while properly accounting for their uncertainties. If $T^2$ is small, the differences between the measurements can be explained by normal experimental error, and the spacetime is considered consistent. If $T^2$ is too large, the differences are bigger than what uncertainty can explain, meaning at least one sector does not match the others. In that case, the assumption that a single Kerr spacetime describes everything would fail.

In simple terms, the Kerr Trisector Closure asks a powerful question: does one single spacetime geometry explain motion, vibration, and light around a black hole at the same time? If the answer is yes, General Relativity passes a very strict self-consistency test. If the answer is no, it would show exactly where the theory begins to break down.

Our final project files:

handout.pdf (summary handout)

project_jufo_26_de-3.pdf (simplified version for the jury)

The_Kerr_Trisector_Closure (actual paper)

Posted in Jugend Forscht, Projects

Updates on our Jugend Forscht project (Kerr Trisector Closure)

In the current phase of the Kerr Trisector Closure (KTC) project, our work has focused on formalizing the statistical closure test and implementing it in a way that is directly applicable to real data. Since the conceptual structure of KTC is already established, the emphasis has been on robustness, diagnostics, and interpretability of inconsistencies between sectors.
Each observational sector yields an independent estimate of the Kerr parameters $\Theta = (M,a)$. We denote these as $\hat\Theta_{\mathrm{insp}} = (\hat M_{\mathrm{insp}}, \hat a_{\mathrm{insp}})$ for the inspiral sector, $\hat\Theta_{\mathrm{ring}} = (\hat M_{\mathrm{ring}}, \hat a_{\mathrm{ring}})$ for the ringdown sector, and $\hat\Theta_{\mathrm{img}} = (\hat M_{\mathrm{img}}, \hat a_{\mathrm{img}})$ for the imaging sector. Each estimate is accompanied by a covariance matrix $\Sigma_k \in \mathbb{R}^{2\times2}$, with $k \in \{\mathrm{insp},\mathrm{ring},\mathrm{img}\}$.
Under the Gaussian (Laplace) approximation, each sectoral posterior is modeled as $p_k(\Theta) \approx \mathcal N(\hat\Theta_k,\Sigma_k)$. The three estimates are combined into a single stacked estimator $\hat\Theta = (\hat\Theta_{\mathrm{insp}},\hat\Theta_{\mathrm{ring}},\hat\Theta_{\mathrm{img}}) \in \mathbb{R}^6$ with a full covariance matrix $C$. In the simplest case of statistically independent sectors, $C$ reduces to the block-diagonal form $C = \mathrm{blockdiag}(\Sigma_{\mathrm{insp}},\Sigma_{\mathrm{ring}},\Sigma_{\mathrm{img}})$.
Assuming General Relativity is correct (spoiler alert: obviously it is), all three sectoral estimates should correspond to a single Kerr parameter pair $\Theta^\ast$. This is expressed through the linear model $\hat\Theta = A\Theta^\ast + \varepsilon$, where $A = \mathbf 1_3 \otimes I_2$ and $\varepsilon$ is a zero-mean noise vector with covariance $C$. The best-fit common Kerr parameters are obtained via generalized least squares, $\bar\Theta = (A^\top C^{-1}A)^{-1}A^\top C^{-1}\hat\Theta$.
The closure residual is defined as $r = P\hat\Theta$, where $P = I_6 - A(A^\top C^{-1}A)^{-1}A^\top C^{-1}$ projects onto the $C^{-1}$-orthogonal complement of the model space. By construction, $r$ measures only inconsistencies between sectors and is insensitive to the overall best-fit Kerr parameters.
The Kerr Trisector Closure statistic is then defined as $T^2 = r^\top C^{-1} r$. Under the null hypothesis of perfect Kerr consistency and within the Gaussian approximation, $T^2$ follows a chi-squared distribution with $\nu = (K-1)p = 4$ degrees of freedom, where $K=3$ is the number of sectors and $p=2$ the number of parameters. Values of $T^2$ significantly larger than expected indicate a breakdown of cross-sector consistency.
An important aspect of the implementation is the diagnostic power of the closure statistic. For independent sectors, the total inconsistency can be decomposed as $T^2 = \sum_k (\delta\Theta_k)^\top \Sigma_k^{-1}\delta\Theta_k$, where $\delta\Theta_k = \hat\Theta_k - \bar\Theta$. This allows us to identify which sector dominates a potential inconsistency, rather than merely detecting its existence.
At this stage, the framework is fully implemented and tested on synthetic data. The next step is to apply the closure test to realistic forecast scenarios, incorporating expected uncertainties from multi-band gravitational-wave observations and horizon-scale imaging. This will allow us to assess the sensitivity of the Kerr Trisector Closure to sub-percent deviations in mass and spin across observational regimes.
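As a rough sketch of how the closure test can be computed in practice, here is a minimal Python implementation of the stacked generalized-least-squares fit and the statistic $T^2$. The sector estimates and covariances below are made-up illustrative numbers, not real measurements, and the variable names are our own choices rather than part of any published code.

```python
import numpy as np

# Illustrative (made-up) sector estimates of Theta = (M, a) and covariances;
# masses in arbitrary units, spins dimensionless
sectors = ["insp", "ring", "img"]
theta_hat = {
    "insp": np.array([10.00, 0.70]),
    "ring": np.array([10.10, 0.68]),
    "img":  np.array([9.90, 0.72]),
}
sigma = {
    "insp": np.diag([0.05, 0.01]) ** 2,
    "ring": np.diag([0.08, 0.02]) ** 2,
    "img":  np.diag([0.10, 0.03]) ** 2,
}

K, p = len(sectors), 2

# Stacked estimator in R^6 and block-diagonal covariance C
theta = np.concatenate([theta_hat[k] for k in sectors])
C = np.zeros((K * p, K * p))
for i, k in enumerate(sectors):
    C[i * p:(i + 1) * p, i * p:(i + 1) * p] = sigma[k]
C_inv = np.linalg.inv(C)

# Design matrix A = 1_K (kron) I_p: every sector shares one Kerr pair
A = np.kron(np.ones((K, 1)), np.eye(p))

# Generalized least squares fit of the common parameters Theta-bar
theta_bar = np.linalg.solve(A.T @ C_inv @ A, A.T @ C_inv @ theta)

# Closure residual r = P theta-hat and the closure statistic T^2;
# under the null, T^2 ~ chi^2 with nu = (K - 1) * p = 4 degrees of freedom
r = theta - A @ theta_bar
T2 = float(r @ C_inv @ r)
nu = (K - 1) * p

print(f"best-fit (M, a) = ({theta_bar[0]:.3f}, {theta_bar[1]:.3f}), "
      f"T^2 = {T2:.3f} with {nu} dof")
```

For independent sectors, the per-sector contributions $(\hat\Theta_k - \bar\Theta)^\top \Sigma_k^{-1} (\hat\Theta_k - \bar\Theta)$ sum exactly to $T^2$, which is what makes the diagnostic decomposition described above essentially free to compute once the fit is done.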

Posted in Jugend Forscht, Projects

Lecture 6: Energy, Why Things Can Move, and an Intro to Mathematical Reasoning

The previous lecture was long and demanding, both conceptually and mathematically. I did not intend for it to run so long, but I thought it was worth taking the time to discuss mathematical reasoning and the ideas behind it. We examined gravity in detail, introduced several equations, and followed them step by step to understand why objects fall the way they do. That careful work was necessary, but it also revealed something important about the method we were using. Describing motion entirely in terms of forces and acceleration can become heavy very quickly, especially when multiple forces act or when motion changes continuously.

In Lecture 5, every conclusion required us to identify forces, combine them into a net force, and relate that net force to acceleration using $F = ma$. This approach is precise and reliable, but it is not always the most efficient way to think. Even for something as simple as a falling object, we had to consider gravitational force, possible additional forces, and how those forces affect acceleration at every moment. Physics therefore asks whether there is a different way to understand motion that captures the same behavior without tracking every detail step by step.

This question leads to the concept of energy. Energy allows us to describe what motion is possible and how motion can change without always focusing on forces directly. Instead of asking what forces act at each instant, energy-based reasoning asks a broader question: given the state of a system, what can it do? This shift in perspective does not replace force-based reasoning. It complements it. The goal of this lecture is to introduce energy as a quantitative idea that can be expressed mathematically and used consistently (at least, I hope so). The mathematics will look different from what we used before, but the role of equations remains the same. They are not shortcuts or rules to memorize. They are precise statements about how physical quantities are related. By the end of this lecture, you should see energy not as an abstract concept, but as a practical and powerful way to understand motion, building directly on everything that came before.

In everyday language, the word energy is used loosely. We talk about having energy, losing energy, or running out of energy, often without being clear about what we mean. Physics uses the word in a much more precise way. Energy is not a substance and it is not a force. It is a numerical quantity that describes the state of a system and its ability to produce change.

One way to think about energy is as a measure of what a system can do. A moving object can collide with another object and set it into motion. An object held at a height can fall and gain speed. In both cases, something about the state of the object allows motion to occur. Energy is the quantity that keeps track of this ability in a consistent and measurable way. What makes energy especially useful is that it depends only on the state of the system, not on how that state was reached. Two objects with the same mass and the same speed have the same energy, even if one was accelerated gently and the other was accelerated suddenly. This is very different from force, which depends on interactions taking place at a particular moment.

In physics, energy is always associated with a system, not with an isolated object acting alone. A system might be a single object, two interacting objects, or a collection of many objects. The energy assigned to the system depends on how its parts are moving and how they are arranged. This way of thinking will become important when we begin to see how energy can change form while remaining conserved.

The key idea to keep in mind is simple but powerful. Energy does not tell us exactly how motion happens at every instant. It tells us what motion is possible and what limits exist. This makes energy a complementary concept to force. Forces explain how motion changes moment by moment. Energy explains what changes are allowed in the first place.

We begin with the form of energy associated with motion itself. A moving object has the ability to cause change simply because it is moving. It can push other objects, deform them, or set them into motion. Physics captures this ability using a quantity called kinetic energy.

The expression for kinetic energy is not chosen arbitrarily. It follows from the way forces change motion, and it can be understood step by step using ideas we already know. To see where it comes from, consider an object of mass $m$ that starts from rest and is pushed by a constant force in a straight line. Because a force acts, the object accelerates, and its speed increases from zero to some final value $v$.

From earlier lectures, we know that force and acceleration are related by $F = ma$. This tells us how strongly the force changes the motion, but it does not yet tell us how much motion is produced after the object has moved some distance. To answer that, we must connect force to motion over a distance. When a force acts on an object and the object moves, the force transfers energy to it. The more force applied and the farther the object moves, the more energy is transferred.

For a constant force acting along the direction of motion, the energy transferred is proportional to the force and the distance moved. At the same time, the motion produced by that force is described by acceleration. Using basic kinematics, one can show that when an object accelerates uniformly from rest to speed $v$, the distance it travels depends on $v^2$. This is why speed appears squared in the energy expression.

Putting these ideas together leads to a quantity that increases when force acts over distance and that depends on both mass and the square of the speed. The simplest expression that fits these relationships is

$$E_k = \frac{1}{2}mv^2$$

The mass $m$ appears because heavier objects require more force to reach the same speed, and the $v^2$ appears because speed grows as the object continues to accelerate over distance. The factor $\frac{1}{2}$ ensures that this expression matches exactly the energy transferred by a force according to $F = ma$. Although this reasoning involves several steps, the result is conceptually simple: kinetic energy measures how much motion an object has gained due to forces acting on it.
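For readers who want to see the steps written out, the reasoning above can be condensed into a short derivation. With a constant force along the motion, the energy transferred is the force times the distance, $W = Fd$, and uniform acceleration from rest satisfies the kinematic relation $v^2 = 2ad$:

$$W = Fd = (ma)\,d = ma \cdot \frac{v^2}{2a} = \frac{1}{2}mv^2$$

The factor $\frac{1}{2}$ is not inserted by hand; it falls out of the kinematics, which is exactly the claim made above.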

It is completely normal if this derivation does not feel obvious on first reading. What matters is the overall logic. Forces acting over distance create motion, heavier objects resist this change more strongly, and faster motion carries disproportionately more energy. The equation collects all of these ideas into a single, compact statement that we will now begin to use rather than re-derive.

Motion is not the only way an object can possess energy. An object can also have energy because of its position, especially when gravity is involved. This form of energy is called gravitational potential energy. It does not describe motion directly, but rather the possibility of motion. An object held above the ground has the potential to fall, and therefore has the potential to gain speed.

To make this idea precise, consider lifting an object of mass $m$ upward at a steady speed. Because the speed is constant, the object is not accelerating, which means the net force on it is zero. Gravity pulls downward with a force $F_g = mg$, so you must apply an upward force of the same magnitude to lift the object. While lifting, you exert a force over a vertical distance $h$.

When a force acts over a distance, energy is transferred. In this case, the energy transferred to the object depends on how strong the gravitational force is and how high the object is lifted. Since the force due to gravity is $mg$ and the vertical distance is $h$, the energy associated with this change in position is proportional to both. Physics captures this with the expression

$$E_p = mgh$$

This quantity is called gravitational potential energy. The symbol $h$ represents height, but it is important to understand that height is always measured relative to some chosen reference level. Only changes in height matter. Raising an object increases its potential energy, and lowering it decreases that energy by the same amount.

This expression connects directly to what we learned about gravity in the previous lecture. The same quantity $g$ that appeared as the acceleration due to gravity now appears in the expression for potential energy. This is not a coincidence. It reflects the fact that gravitational potential energy is tied to the gravitational force and to how that force acts over distance.

If this equation feels easier to accept than the one for kinetic energy, that is common. It depends linearly on mass and height, which matches everyday experience. Doubling the mass doubles the effort needed to lift an object. Lifting an object twice as high requires twice the effort. The equation simply keeps track of this relationship in a precise way.

With expressions for both kinetic energy and gravitational potential energy in hand, we can now see how energy changes form during motion. Consider an object of mass $m$ held at a height $h$ above the ground and then released from rest. At the moment it is released, the object is not moving, so its kinetic energy is zero. All of its energy is stored as gravitational potential energy, given by

$$E_p = mgh$$

As the object falls, its height decreases. This means its gravitational potential energy decreases as well. At the same time, the object’s speed increases, which means its kinetic energy increases according to

$$E_k = \frac{1}{2}mv^2$$

The crucial observation is that these two changes are connected. The loss of potential energy is exactly matched by the gain in kinetic energy. Energy is not disappearing and it is not being created out of nothing. It is changing form. At any moment during the fall, part of the energy is stored in position and part in motion.

We can express this idea mathematically by writing the total energy as the sum of kinetic and potential energy:

$$E_{\text{total}} = E_k + E_p$$

At the top, when the object is released, this becomes $E_{\text{total}} = 0 + mgh$. Just before the object reaches the ground, the height is zero, so the potential energy is zero and the total energy is $E_{\text{total}} = \frac{1}{2}mv^2 + 0$. Since the total energy is the same in both cases, we can write

$$mgh = \frac{1}{2}mv^2$$

This equation does not tell us how the object moves at each moment in time. Instead, it compares two different states of the motion. From it, we can already see that a larger height leads to a larger speed at the bottom, and that the mass appears on both sides in the same way. Once again, the mathematics reflects a simple physical idea: gravity converts stored energy into motion in a predictable and orderly way.
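As a quick worked example, the equation can be solved for the final speed. The mass cancels, leaving

$$mgh = \frac{1}{2}mv^2 \quad\Rightarrow\quad v = \sqrt{2gh}$$

For a drop of $h = 5\,\text{m}$ with $g \approx 9.8\,\text{m/s}^2$, this gives $v = \sqrt{2 \cdot 9.8 \cdot 5} \approx 9.9\,\text{m/s}$, regardless of the object's mass.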

If this reasoning takes a moment to settle, that is completely normal. Energy arguments work by comparing situations rather than following motion step by step. With practice, this approach often feels simpler than force-based reasoning, because it allows us to understand the outcome of motion without tracking every detail along the way.

The relationship we just obtained is not a special trick for falling objects. It is an example of a general and very powerful principle called the conservation of energy. To understand it clearly, we must look carefully at what the equations are actually telling us, without rushing past the algebra.

From the previous discussion, we wrote the total mechanical energy of the object as the sum of its kinetic and gravitational potential energy:

$$E_{\text{total}} = E_k + E_p = \frac{1}{2}mv^2 + mgh$$

Now consider the object at two different moments during its motion. At the first moment, it has speed $v_1$ and height $h_1$. At a later moment, it has speed $v_2$ and height $h_2$. If gravity is the only force doing work on the object, then the total energy at these two moments must be the same. Mathematically, this is written as

$$\frac{1}{2}mv_1^2 + mgh_1 = \frac{1}{2}mv_2^2 + mgh_2$$

This equation is the mathematical statement of energy conservation for this situation. It does not describe how $v$ or $h$ change with time. Instead, it states a constraint: no matter how the motion unfolds, the combination of terms on the left must always equal the combination on the right.

Notice something important about this equation. The mass $m$ appears in every term. This means that, just as in our discussion of free fall, mass does not determine the qualitative outcome of the motion. Dividing the entire equation by $m$ gives

$$\frac{1}{2}v_1^2 + gh_1 = \frac{1}{2}v_2^2 + gh_2$$

This form makes the meaning especially clear. Changes in speed are directly linked to changes in height. If the height decreases, the speed must increase in just the right way to keep the total energy constant. No additional assumptions are required. The result follows directly from the equations.
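Solving this equation for the later speed makes the constraint completely explicit:

$$v_2 = \sqrt{v_1^2 + 2g(h_1 - h_2)}$$

A drop in height ($h_1 > h_2$) increases the speed, a gain in height decreases it, and the mass has vanished from the relationship entirely.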

This is what makes energy such a powerful concept. Instead of tracking forces and accelerations at every instant, we compare entire states of motion using algebraic relationships. The mathematics is rigorous, but the logic is simple: as long as no energy is added to the system and none is removed, the total energy remains the same. Understanding this idea marks a major step forward in learning how physics explains motion.

The conservation of energy allows us to reason about motion in a way that is both mathematically precise and conceptually economical. Once the total energy of a system is known at one moment, the equations restrict what the system can do at any other moment. This is a much stronger statement than it may appear at first. It means that not all imaginable motions are possible. Only those motions that respect the energy equation can actually occur.

To see this, consider again the expression

$$\frac{1}{2}mv^2 + mgh = \text{constant}$$

This equation tells us that speed and height cannot change independently. If an object moves to a lower height, the term $mgh$ decreases. In order for the sum to remain constant, the term $\frac{1}{2}mv^2$ must increase, which means the speed must increase. Likewise, if an object moves upward and gains height, its speed must decrease. These conclusions follow directly from algebra, without any reference to forces or acceleration.

This way of reasoning is especially useful because it works even when the motion itself is complicated. The path taken by the object, the time it takes to move, and the details of how the speed changes along the way are all irrelevant to the energy balance. As long as gravity is the only force doing work, the same equation applies. This is why energy methods are often preferred when the detailed motion is hard to track.

It is important to understand that energy conservation does not eliminate the need for forces. Forces are still present and still responsible for changing motion. What energy conservation does is impose a global constraint on what those changes can add up to. It tells us that, although forces can rearrange energy between different forms, they cannot create or destroy it within the system.

If this perspective feels different from earlier lectures, that is because it is. Up to now, we have followed motion step by step, asking what causes acceleration at each instant. Energy reasoning steps back and looks at the motion as a whole. Both approaches are mathematically sound, and both describe the same physical reality. Learning when and how to use each one is part of learning how physics actually works.

This lecture introduced several new equations and a different way of thinking about motion, and it is completely normal if everything does not feel clear on first reading. Energy arguments often feel unfamiliar at the beginning because they do not follow motion step by step. Instead, they compare whole situations using algebraic relationships. This takes time to get used to, and understanding usually comes gradually rather than all at once.

If the equations feel heavy, it can help to read them slowly and treat each one as a sentence. The expression $E_k = \frac{1}{2}mv^2$ says that faster motion carries more energy, and that speed matters more than mass. The expression $E_p = mgh$ says that height stores energy because gravity can turn that height into motion. The equation $\frac{1}{2}mv^2 + mgh = \text{constant}$ says that these two forms of energy trade with each other in a precise and orderly way. None of these statements are mysterious on their own. The mathematics simply holds them together.

It is also important to remember that understanding in physics is often recursive. You read an argument, move on, and then come back to it later with more experience. What once felt abstract starts to feel obvious. This is not because the equations changed, but because your intuition adjusted to them. That process is expected and healthy.

The central idea to carry forward is simple and robust. Energy provides a way to understand motion by comparing states rather than tracking every detail. The mathematics used here is not an extra layer added on top of the ideas. It is the language that makes the ideas precise and reliable. With patience and repetition, these equations stop being symbols on a page and start to feel like direct statements about how the world behaves.

Posted in Physics for the typical man, Teaching

Lecture 5: Gravity and Falling Motion

Among all the forces we encounter in everyday life, gravity is the most familiar and at the same time the most easily overlooked. It acts on every object around us, from falling stones to the air we breathe, and it does so constantly. We are so used to its presence that we rarely notice it directly. Instead, we notice its effects: objects fall when released, thrown objects curve downward, and we feel a persistent pull toward the ground beneath our feet.

Gravity does not depend on whether an object is moving or at rest. A book lying on a table, a ball held in your hand, and a stone dropped from a height are all affected by gravity in the same basic way. What changes from situation to situation is not the presence of gravity, but how other forces interact with it. When a book rests on a table, the table provides an upward force that balances gravity. When the book is released, that supporting force disappears, and gravity becomes the dominant influence on the motion.

This constant presence makes gravity an ideal force to study. Unlike pushes and pulls that appear only when something touches an object, gravity acts whether or not anything is in contact. Because it is always there, it provides a simple and reliable setting in which to apply the ideas developed so far. By studying gravity, we can see how force, mass, and acceleration work together in a clear and consistent way.

In this lecture, gravity will serve as our first complete example of how physics explains motion. We will describe gravity as a force, connect it to mass, and use the equation $F = ma$ to understand falling motion. Nothing fundamentally new will be added to the framework. Instead, familiar ideas will come together to explain something that everyone has observed, but few have examined carefully.

To understand gravity properly, we must first clear up a very common confusion between two ideas that are often treated as if they were the same: mass and weight. In everyday language, people say an object “weighs” a certain amount, when what they usually mean is how heavy it feels. Physics makes a careful distinction. Mass is a property of an object itself. It measures how much the object resists changes in its motion. Weight, on the other hand, is a force.

Weight is the force with which gravity pulls on an object. This means that weight depends not only on the object, but also on the strength of the gravitational field it is in. Near the surface of the Earth, this force is produced by the Earth’s gravity acting on the object’s mass. We describe this gravitational force using the symbol $F_g$, where the subscript reminds us that this force is due to gravity.

Experiments show that the gravitational force on an object is proportional to its mass. Doubling the mass doubles the gravitational force. Halving the mass halves the gravitational force. Physics expresses this relationship in a simple and precise way:

$$F_g = mg$$

This equation should be read carefully. The symbol $m$ represents the mass of the object, and $g$ represents the gravitational field strength near the Earth’s surface. The value of $g$ is approximately $9.8\,\text{m/s}^2$, but at this stage the exact number is not important. What matters is that, near the Earth, $g$ is the same for all objects.

This equation shows clearly why mass and weight are not the same thing. The mass $m$ is an intrinsic property of the object and does not change when the object is moved. The weight $F_g$ depends on $g$, which can change from place to place. On the Moon, for example, $g$ is smaller than on Earth, so the same object has the same mass but a smaller weight. Understanding this distinction is essential before we analyze falling motion using $F = ma$.
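A small numerical example makes the distinction concrete. Taking $g \approx 9.8\,\text{m/s}^2$ on Earth and $g \approx 1.6\,\text{m/s}^2$ on the Moon, a 2 kg object weighs

$$F_g^{\text{Earth}} = (2\,\text{kg})(9.8\,\text{m/s}^2) = 19.6\,\text{N}, \qquad F_g^{\text{Moon}} = (2\,\text{kg})(1.6\,\text{m/s}^2) = 3.2\,\text{N}$$

The mass is 2 kg in both places; only the weight changes.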

The equation $F_g = mg$ tells us how strongly gravity pulls on an object, but by itself it does not yet tell us how the object will move. To understand motion, we must connect this gravitational force to the change in motion it produces. This is where the equation $F = ma$ enters in a direct and meaningful way.

When an object is falling freely near the surface of the Earth, gravity is the dominant force acting on it. If we ignore air resistance and other small effects, the net force on the object is simply the gravitational force. In that case, we can write

$$F_{\text{net}} = F_g$$

Using the expressions we already know, this becomes

$$ma = mg$$

This step is not algebra for its own sake. It is a direct statement that the force causing the motion is gravity, and that the resulting acceleration must be the one produced by that force. Both sides of the equation describe the same physical situation: a mass responding to gravity.

At this point, something important happens. The mass $m$ appears on both sides of the equation. This allows us to simplify the relationship and focus on the acceleration itself. Dividing both sides by $m$ gives

$$a = g$$

What the mathematics has done here is clarify a physical idea that might otherwise seem mysterious. Gravity pulls more strongly on more massive objects, but those objects also resist changes in motion more strongly. These two effects balance perfectly, leaving an acceleration that is independent of mass. The equations do not hide this reasoning. They reveal it.

The result $a = g$ is simple, but it carries a great deal of meaning, and it is worth slowing down to understand why it is true rather than just accepting it. Starting from the idea that gravity exerts a force on an object, we wrote that force as $F_g = mg$. This already tells us two things at once: gravity pulls more strongly on objects with larger mass, and it does so in a very regular way near the Earth’s surface. There is no preference for shape, material, or state of motion. Mass alone determines how strong the gravitational pull is.

When this force acts on an object, the object responds according to $F = ma$. Writing both ideas together gives $ma = mg$. This equation does not represent a new law. It simply states that the force causing the acceleration is gravity. What makes it interesting is what happens next. The same mass that appears in the gravitational force also appears in the object’s resistance to acceleration. Because of this, the mass cancels out, leaving an acceleration that depends only on $g$.

This cancellation explains something that often feels counterintuitive. Heavier objects experience a larger gravitational force, but they are also harder to accelerate. Lighter objects experience a smaller gravitational force, but they are easier to accelerate. These two effects exactly balance. The mathematics captures this balance in a precise way, but the idea itself is simple. Gravity pulls harder on heavy objects, yet those objects resist motion changes more strongly, so the final result is the same acceleration for all.

It is important to emphasize what this conclusion depends on. The result $a = g$ applies when gravity is the only significant force acting. If other forces are present, such as air resistance or contact forces, the net force changes and so does the acceleration. For now, physics deliberately focuses on this simplified situation because it reveals the essential role of gravity without distractions.

By following the equations step by step, we have not added complexity. We have removed it. What might have seemed like a strange coincidence becomes a direct consequence of how force, mass, and acceleration are related. This is a clear example of how equations in physics do not obscure understanding, but sharpen it.

We are now in a position to define and understand free fall in a precise way. Free fall does not mean that an object is moving freely in space or that it is falling quickly. It means something very specific: the only force acting on the object is gravity. Whenever this condition is met, the motion of the object follows directly from the equations we have already developed.

To see this clearly, we start with the physical situation and translate it step by step into mathematics. If gravity is the only force acting, then the net force on the object is just the gravitational force. In symbols, this is written as

$$F_{\text{net}} = F_g$$

We already know how to express each side of this equation. The net force is related to acceleration by $F = ma$, and the gravitational force is given by $F_g = mg$. Substituting these expressions into the equation above gives

$$ma = mg$$

This equation describes a simple balance. The left-hand side tells us how the object responds to a force, while the right-hand side tells us how strongly gravity is pulling. Both sides refer to the same physical situation, so they must be equal.

At this point, the mass $m$ appears on both sides of the equation. Dividing both sides by $m$ removes it and leaves

$$a = g$$

This result is the defining feature of free fall near the Earth’s surface. The acceleration of the object is equal to $g$ and does not depend on the object’s mass. Every freely falling object accelerates in the same way, as long as gravity is the only force acting.

This may look like an ordinary short derivation, but the idea it expresses runs deep. Gravity pulls more strongly on more massive objects, but those objects also resist changes in motion more strongly. These two effects cancel exactly. The mathematics does not hide this fact. It makes it unavoidable. Free fall is therefore not a special case, but a direct and natural consequence of how force, mass, and acceleration are connected.
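The cancellation can be made concrete in a few lines of code. The sketch below (the masses and the value $g = 9.81\ \text{m/s}^2$ are illustrative assumptions, not values from the text) computes $a = F_g / m$ for several masses and shows that the result never changes:

```python
# Near-surface gravitational acceleration in m/s^2 (assumed standard value).
g = 9.81

def free_fall_acceleration(mass_kg: float) -> float:
    """Return a = F_g / m with F_g = m * g; the mass cancels."""
    force = mass_kg * g       # gravitational force F_g = m g
    return force / mass_kg    # acceleration a = F_g / m

# A feather-sized mass and a boulder-sized mass accelerate identically.
for m in (0.1, 1.0, 50.0, 1000.0):
    print(m, free_fall_acceleration(m))
```

Whatever mass is passed in, the function returns 9.81 (up to floating-point rounding): the factor of $m$ introduced by gravity is removed again by inertia.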

At first, the conclusion that all objects fall with the same acceleration can feel wrong. Everyday experience seems to contradict it. A stone dropped from a height reaches the ground much sooner than a leaf, and a crumpled piece of paper falls faster than a flat one. It is natural to conclude from this that heavier objects fall faster than lighter ones. Physics does not dismiss this observation, but it explains it more carefully.

The key point is that free fall, as defined earlier, is a situation in which gravity is the only force acting. In everyday situations, this condition is rarely met. Air exerts forces on moving objects, and these forces can be significant, especially for objects with large surface areas or small masses. Air resistance acts opposite to the direction of motion and increases as an object moves faster. When air resistance is present, the net force is no longer just the gravitational force, and the acceleration is no longer equal to $g$.

The idea might sound simple, but to understand why all objects fall with the same acceleration in free fall, it helps to look again at the equations in a simple, almost informal way. From earlier, we found that the gravitational force on an object is $F_g = mg$. This tells us that a heavier object does indeed experience a larger gravitational force. At first glance, this seems to support the idea that heavier objects should fall faster.

However, motion is not determined by force alone. The equation $a = \frac{F_{\text{net}}}{m}$ tells us that acceleration depends on how large the force is compared to the mass of the object. If we substitute the gravitational force into this expression, we get

$$a = \frac{mg}{m}$$

Now the key step becomes clear. The mass $m$ appears both in the numerator and in the denominator. Dividing by $m$ removes it, leaving

$$a = g$$

This short piece of algebra explains the situation completely. Heavier objects feel a stronger gravitational pull, but they also have more mass resisting changes in motion. These two effects increase together and cancel each other out. The result is an acceleration that is the same for all objects, regardless of their mass, as long as gravity is the only force acting.

This way of reasoning shows the value of even simple mathematics in physics. Without it, the result can seem mysterious or counterintuitive. With it, the conclusion follows naturally from a few clear steps. The equations do not contradict experience. They explain which parts of experience come from gravity itself and which parts come from additional forces like air resistance.

Once this balance is understood, the idea that all objects fall in the same way no longer feels surprising. It becomes an example of how careful reasoning and a small amount of mathematics can clarify what the world is actually doing, even when everyday intuition points in the wrong direction.

So far, we have treated free fall as an ideal situation in which gravity is the only force acting. This allowed us to arrive at the simple result $a = g$. To move closer to real situations, we now consider what happens when an additional force is present. The most important example is air resistance. Air resistance acts in the opposite direction of motion and reduces the net force on a falling object.

When air resistance is present, the net force is no longer just the gravitational force. Instead, it is the difference between gravity pulling downward and air resistance pushing upward. If we call the force due to air resistance $F_{\text{air}}$, then the net force can be written as

$$F_{\text{net}} = mg - F_{\text{air}}$$

Using $F = ma$, this gives

$$ma = mg - F_{\text{air}}$$

This equation already tells us something important without any calculation. As $F_{\text{air}}$ increases, the net force decreases, and therefore the acceleration decreases as well. The object still accelerates downward, but less strongly than it would in free fall.

Air resistance depends on factors like the object’s speed, shape, and size. As an object falls faster, $F_{\text{air}}$ increases. Eventually, it can become large enough that it balances gravity. When this happens, the net force becomes zero:

$$mg - F_{\text{air}} = 0$$

At this point, the equation $F = ma$ tells us that the acceleration must be zero. The object no longer speeds up, even though it is still moving. It continues falling at a constant speed. This constant speed is called the terminal velocity.

This simple sequence of equations shows how physics moves from an idealized situation to a more realistic one. By adding a single additional force, the behavior of the motion changes in a clear and understandable way. The mathematics remains simple, but it captures a real effect that everyone has observed, such as a skydiver falling at a steady speed or a raindrop drifting downward.
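The approach to terminal velocity can also be sketched numerically. The fragment below steps $ma = mg - F_{\text{air}}$ forward in small time increments; the mass, drag constant, and the quadratic drag law $F_{\text{air}} = k v^2$ are illustrative assumptions, not values from the text:

```python
# Illustrative parameters: a skydiver-like mass with quadratic drag.
m, g, k = 80.0, 9.81, 0.25     # kg, m/s^2, drag constant (all assumed)
dt = 0.01                      # time step in seconds

v = 0.0                        # start from rest
for _ in range(10_000):        # simulate 100 seconds of falling
    f_net = m * g - k * v * v  # F_net = mg - F_air, with F_air = k v^2
    v += (f_net / m) * dt      # a = F_net / m, then v grows by a * dt

# Terminal velocity: the speed at which mg - k v^2 = 0.
v_terminal = (m * g / k) ** 0.5
print(round(v, 2), round(v_terminal, 2))
```

After enough simulated time the two numbers agree: as the speed rises, the drag term grows until the net force has shrunk to essentially zero and the speed stops changing.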

At this point, it is worth pausing and looking at the mathematics as a whole, because this is often where you begin to feel uneasy. If some of the equations do not feel immediately clear, that is not a problem. Physics is not meant to be understood in a single pass. It is normal to read an argument, move on, and then return to it later with a clearer mind. The important thing is that the equations are not hiding meaning. They are expressing it.

Everything we have done can be traced back to a small set of ideas written in symbols. Gravity produces a force on an object, written as

$$F_g = mg$$

The motion of the object responds to the net force according to

$$F_{\text{net}} = ma$$

In the simplest case, where gravity is the only force, these two statements describe the same situation. Writing them together gives

$$ma = mg$$

From this, the conclusion follows naturally:

$$a = g$$

If additional forces are present, such as air resistance, they simply appear as extra terms in the net force. For example, writing

$$ma = mg – F_{\text{air}}$$

does not change the structure of the reasoning. It only changes the result by accounting for another influence on the motion. The logic remains the same: identify the forces, add them to find the net force, and relate that net force to acceleration.

If these steps feel dense or abstract, that is expected. Understanding often comes not from pushing forward, but from returning to the same ideas and letting them settle. Each time you revisit the equations, they tend to feel less like symbols and more like statements about how the world behaves. When that happens, the mathematics stops being something to fear and becomes something you can rely on.

The key idea to hold on to is simple. Forces do not determine where an object is or how fast it is moving. They determine how that motion changes. The equations are merely a precise way of saying this. Once that idea is clear, the rest follows with patience and repetition.

Posted in Physics for the typical man, Teaching

Lecture 4: Using $F = ma$ to Understand Motion

In the previous lecture, we arrived at a compact statement that connects force, mass, and acceleration. Written as $F = ma$, it summarizes a long chain of reasoning rather than replacing it. Forces are responsible for changes in motion, mass measures resistance to those changes, and acceleration describes how motion responds. In this lecture, the goal is not to introduce new laws, but to learn how to use this one thoughtfully. The equation is no longer just a sentence written in symbols. It becomes a tool for thinking about real situations.

It is important to be clear about what this equation does and does not describe. The equation does not tell us how fast an object is moving, and it does not explain why an object has a particular velocity. Instead, it focuses entirely on change. It answers questions of the form: if a certain force acts on an object, how will its motion begin to change? This shift in focus is essential. Physics is not primarily concerned with motion as a static fact, but with how motion evolves.

One common misunderstanding is to think that a force is needed to keep an object moving. The equation $F = ma$ shows that this is not the case. If the acceleration is zero, then the force is zero, regardless of whether the object is moving or at rest. Motion at constant velocity requires no force. Forces only appear when something about the motion is changing. This idea, which may still feel unfamiliar, will be reinforced repeatedly as we apply the equation to different situations.

From this point onward, equations will appear more frequently, but their role remains the same. They are not shortcuts or rules to memorize. They are precise ways of keeping track of physical ideas. Each time an equation is used, it should be possible to translate it back into plain language. If that translation is clear, then the mathematics is doing its job.

When using $F = ma$, one of the first things that must be taken seriously is direction. In everyday language, we often talk about forces as if only their strength matters. In physics, this is not enough. A force always acts in a particular direction, and that direction matters just as much as its size. A push forward and a push backward are not the same, even if they feel equally strong.

This becomes clear when we think about how forces affect motion. Acceleration is a change in velocity, and velocity itself includes direction. If a force acts forward, the resulting acceleration is forward. If the same force acts backward, the acceleration is backward. In this sense, force and acceleration are always aligned. The direction in which the force acts is the direction in which the motion begins to change.

At this stage, we do not need a formal mathematical language for direction. Words like forward, backward, upward, and downward are sufficient. What matters is the habit of always asking not only how strong a force is, but also which way it acts. Ignoring direction leads to incorrect conclusions, even when the numbers appear correct.

This focus on direction prepares us for situations where several forces act at once. Once direction is taken seriously, it becomes possible to understand how different forces can either work together or oppose one another. That idea will be essential when we begin to discuss the combined effect of multiple forces acting on a single object. In real situations, an object is almost never acted on by just a single force. A book resting on a table is pulled downward by gravity while the table pushes upward on it. A car moving along a road is pushed forward by its engine, slowed down by air resistance, and held against the ground by the road. Physics therefore does not focus on individual forces in isolation, but on their combined effect.

This combined effect is called the net force. The net force is not a new kind of force. It is simply the result of taking all the forces acting on an object and considering how they work together. If several forces act in the same direction, their effects add. If forces act in opposite directions, they compete with one another. In simple situations where all forces lie along a straight line, the net force can be written symbolically as

$$F_{\text{net}} = F_1 + F_2 + F_3 + \dots$$

This expression should be read carefully. The plus signs do not always mean that forces increase the total effect. A force acting in the opposite direction is counted as negative. What matters is not how many forces there are, but what their overall effect is once direction is taken into account.

The idea of net force explains many familiar situations. If two equal forces act on an object in opposite directions, their effects cancel and the net force is zero. In that case, the object does not accelerate, even though forces are present. This often surprises people, but it follows directly from $F = ma$. When $F_{\text{net}} = 0$, the acceleration must also be zero.
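This bookkeeping can be sketched in a few lines, with direction encoded as a sign: positive for one direction along the line, negative for the other. All force values below are made-up illustrations:

```python
def net_force(forces):
    """Net force along a line: the signed sum of all forces acting."""
    return sum(forces)

# Two equal, opposite forces cancel: zero net force, zero acceleration.
tug_of_war = net_force([+10.0, -10.0])

# Engine push forward, friction and air resistance backward.
driving = net_force([+500.0, -150.0, -50.0])

print(tug_of_war, driving)  # 0.0 300.0
```

The plus signs in $F_{\text{net}} = F_1 + F_2 + F_3 + \dots$ work exactly like this sum: an opposing force enters with a negative sign, and only the overall result determines the acceleration.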

Thinking in terms of net force shifts the focus away from individual causes and toward the overall result. Physics is less concerned with which forces exist and more concerned with what they do together. Once the net force is known, the resulting motion follows directly. This way of thinking will be used repeatedly, especially when we begin to analyze specific examples using equations.

The connection between net force and acceleration is the central point of $F = ma$. It is not the presence of forces by itself that matters, but whether their combined effect produces a nonzero result. An object can experience several forces at once and still move without changing its motion, provided those forces cancel each other. In such a case, the net force is zero, and the acceleration is zero as well.

This result is not a special exception. It is a direct consequence of the equation. If we write the law in the form $a = \frac{F_{\text{net}}}{m}$, we see immediately that when the net force is zero, the acceleration must be zero, regardless of the object’s mass or velocity. This is simply Newton’s First Law expressed within the broader framework of Newton’s Second Law.

It is important to notice what this does not mean. Zero acceleration does not imply zero motion. An object with zero net force can be at rest, but it can just as well be moving at constant velocity. The equation makes no distinction between these two cases, because in both situations the velocity is not changing.

This is where the idea of net force becomes especially powerful. Instead of asking whether an object is moving, physics asks whether its motion is changing. The answer to that question depends entirely on the net force. If the net force is not zero, the object accelerates. If the net force is zero, the object continues in its current state of motion.

Once this perspective is adopted, many everyday situations become clearer. Objects slow down not because motion naturally dies out, but because forces like friction and air resistance create a net force opposite the direction of motion. Removing or reducing those forces changes the net force, and therefore changes the motion. This way of thinking allows us to analyze motion in a systematic and reliable way.

To see more clearly how force and mass work together, it is useful to look again at the equation in a slightly different form. Starting from $F = ma$, we can write

$$a = \frac{F_{\text{net}}}{m}$$

This is not a new law, but the same idea expressed in a way that highlights meaning rather than calculation. Read in words, it says that acceleration depends on how much net force acts on an object and how much mass the object has. If the net force increases while the mass stays the same, the acceleration increases. If the mass increases while the net force stays the same, the acceleration decreases.

This explains many familiar experiences. It is easier to accelerate a light object than a heavy one using the same push. Pushing an empty shopping cart produces a noticeable acceleration, while pushing a fully loaded one produces a much smaller change in motion. The force from your hands may be similar in both cases, but the mass of the cart is very different.

The equation also explains why increasing force matters. If the mass is fixed and the applied force is doubled, the acceleration doubles as well. Nothing mysterious is happening. The equation is simply keeping track of proportional relationships that are already present in experience. It allows us to predict how motion will respond when forces or masses are changed.

At this stage, there is no need to insert numbers or solve problems. The purpose of the equation is to organize thinking. It tells us which quantities matter, how they are related, and how changes in one affect the others. As we continue, this way of reasoning will be applied to specific physical situations, where the usefulness of the equation becomes even more apparent.

We can now apply the equation in simple, concrete situations without turning physics into a calculation exercise. Consider a box on a smooth floor. Suppose you push the box with a constant force in a straight line. According to the relationship between force and acceleration, a constant net force produces a constant acceleration. Written symbolically, this is simply

$$a = \frac{F_{\text{net}}}{m}$$

This tells us that the box will not move at a steady speed. Instead, its speed will increase steadily as long as the force continues to act. The equation is not predicting numbers here. It is predicting behavior. Now imagine pushing the same box with twice the force while keeping everything else the same. The mass has not changed, but the net force has. From the equation, doubling $F_{\text{net}}$ means doubling $a$. The box still accelerates in the same direction, but it does so more rapidly. The equation makes this conclusion unavoidable.

Next, consider two boxes being pushed with the same force, but one box has twice the mass of the other. The equation now tells a different story. With the same $F_{\text{net}}$ and a larger $m$, the acceleration must be smaller. In fact, doubling the mass cuts the acceleration in half. Both boxes experience the same push, but the heavier one changes its motion more slowly.

These examples show how equations function in physics. They do not replace intuition, but sharpen it. By identifying the force and the mass, the equation guides us to the correct qualitative outcome every time. As situations become more complex, this same reasoning will remain valid, even when the details change.

It is important to be clear about what the equation $F = ma$ does and does not tell us. The equation does not explain why forces exist, and it does not describe every aspect of motion. Its role is specific. It connects force to acceleration, not to velocity or position. When the equation is used correctly, it answers questions about how motion changes, not about how motion began or where an object happens to be.
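The box examples above can be checked in a couple of lines. The specific force and mass values are illustrative assumptions:

```python
def acceleration(f_net: float, m: float) -> float:
    """a = F_net / m."""
    return f_net / m

a_base   = acceleration(20.0, 10.0)  # baseline push on a 10 kg box
a_double = acceleration(40.0, 10.0)  # same box, twice the force
a_heavy  = acceleration(20.0, 20.0)  # same push, twice the mass

print(a_base, a_double, a_heavy)  # 2.0 4.0 1.0
```

Doubling the force doubles the acceleration, and doubling the mass halves it, exactly as the proportional reading of $a = \frac{F_{\text{net}}}{m}$ predicts.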

This distinction prevents a common misunderstanding. If the net force on an object is zero, the equation tells us that the acceleration is zero. It does not tell us that the object must be at rest. An object with zero net force can be moving at a constant velocity or can be completely stationary. Both situations satisfy the equation equally well because in both cases the velocity is not changing.

Another important point is that the equation does not describe individual forces in isolation. It applies to the net force, the combined effect of all forces acting on the object. Focusing on a single force without considering the others can lead to incorrect conclusions. Only the total effect matters for determining acceleration.

Seen this way, $F = ma$ is not a rule to be applied mechanically, but a framework for thinking. It tells us which questions to ask and which quantities are relevant. When used with care, it prevents confusion by making clear what can change and what cannot. This clarity is what allows physics to move from description to reliable prediction.

So far, forces have been discussed in a general way, without focusing on any particular one. However, in everyday life there is one force that appears constantly and affects nearly everything: gravity. It acts on all objects with mass, regardless of their shape, material, or state of motion. Because it is always present, its effects are often taken for granted.

Gravity provides a particularly clear example of how the ideas developed so far come together. Near the surface of the Earth, gravity produces a nearly constant downward force on objects. According to $F = ma$, this force leads to a constant acceleration. This means that, when other forces are negligible, objects do not simply fall at a steady speed. Their speed increases in a very specific and predictable way.

In the next lecture, we will examine this force in detail and give it a precise mathematical description. We will see that the acceleration caused by gravity is the same for all objects, regardless of their mass, a result that seems counterintuitive at first but follows directly from the equation we have been using. Gravity will serve as a concrete and important example of how forces shape motion in the real world.

By focusing on a single, universal force, we will be able to apply the ideas of net force, mass, and acceleration in a focused setting. This will not introduce a new framework, but will deepen understanding of the one we already have. The equation will remain the same. What will change is how confidently we can use it.

Posted in Physics for the typical man, Teaching

Lecture 3: Forces and the Origin of Change in Motion

In the previous lecture, motion was described using position, velocity, and acceleration. Among these, acceleration stood out as something special. Acceleration appears whenever motion changes, whether an object speeds up, slows down, or changes direction. Physics treats this change in motion as a signal that something is happening to the object. The natural question then arises: what is responsible for this change?

In everyday experience, we usually answer this without much thought. Objects move or change their motion because something pushes or pulls them. A box starts moving when you push it, a ball changes direction when it hits a wall, and a car speeds up when its engine provides a driving push. Physics keeps this idea but makes it more precise by giving it a single name. Any push or pull that can change motion is called a force.

A force is not something an object has by itself. It only exists when objects interact. When your hand pushes a box, your hand exerts a force on the box, and at the same time the box exerts a force on your hand. Motion changes because of these interactions. Whenever acceleration is observed, physics looks for a force as its cause.

We can already express this relationship in a very simple mathematical way, not as a calculation, but as a statement of meaning. Acceleration tells us how velocity changes with time, and force tells us why it changes. In symbols, we will later write this connection in a compact form, but for now it is enough to understand the direction of the idea: acceleration does not appear without a force.

At this point, it is important to slow down and examine an assumption that most people make without noticing. We often believe that motion itself requires a force to continue. This belief comes from everyday situations where moving objects quickly slow down if they are not pushed. Physics shows that this intuition, although understandable, is not correct. Understanding why it is wrong leads to one of the most important ideas in all of physics.

Consider an object moving in a straight line at a constant speed. Everyday intuition suggests that something must be pushing it in order to keep it moving, because in ordinary experience objects tend to slow down and stop when they are no longer pushed. If you slide a book across a table and let go, it quickly comes to rest. It feels natural to conclude from this that motion itself requires a force to continue. Physics looks more carefully and notices that this conclusion comes from overlooking forces that are always present in daily life. The book does not stop because motion fades away, but because friction between the book and the table, along with resistance from the air, acts against the motion and causes the book’s velocity to decrease.

If we imagine removing these opposing forces, the behavior of motion changes in a fundamental way. An object that is already moving would continue moving at the same speed and in the same direction, with no force required to maintain that motion. Its velocity would remain constant and its acceleration would be zero. Objects, in other words, resist changes to their state of motion, and this resistance is called inertia. An object at rest tends to remain at rest, and an object in motion tends to remain in motion, unless an interaction causes its velocity to change.

This statement is known as Newton’s First Law of Motion. In careful language, it says that when the total force acting on an object is zero, the object’s velocity does not change. In symbolic terms, zero net force corresponds to zero acceleration. This idea overturns everyday intuition, but once it is understood, it becomes the foundation on which all further discussions of force and motion are built.

Having understood that motion does not require a force to continue, we can now focus on what forces actually do. Forces are responsible for changing motion, not for sustaining it. When an object’s velocity changes, whether in speed, direction, or both, there must be a force acting on it. This connection is so consistent that physics treats it as a rule rather than a coincidence. If there is no change in velocity, then there is no acceleration, and if there is no acceleration, the combined effect of all forces acting on the object must be zero.

This idea leads to a deeper question: how much does a force change motion? Clearly, not all forces have the same effect. A gentle push changes motion only slightly, while a strong push can cause a rapid change. Physics captures this relationship by linking force to acceleration in a quantitative way. At its simplest, the idea is that a larger force produces a larger acceleration, and no force produces no acceleration. Written symbolically, this relationship will later take the form of a simple equation, but for now it is enough to understand that force and acceleration are directly connected.

This relationship needs one more ingredient. Not everything responds to the same force in the same way. A light object is easier to push than a heavy one. Mass is the measure of this property of matter, and it is closely related to inertia. Mass tells us how hard it is to change an object’s motion. To give an object with a lot of mass the same acceleration as one with little mass, you need to apply more force. Putting these ideas together gives physics a very simple rule. How quickly an object’s motion changes depends on the forces acting on it and on its mass. For the same mass, more force means more acceleration; for the same force, more mass means less acceleration. In the next part, we will put this idea into a clear mathematical form and see how one equation can describe how motion responds to forces.

All of these ideas can now be gathered into a single, precise statement. Forces are responsible for changes in motion, and the way an object responds to a force depends on how strongly it resists that change. That resistance is measured by the object’s mass. When a force acts on an object, it produces an acceleration, and the size of that acceleration increases with the force and decreases with the mass. Physics expresses this relationship in a compact and exact form:

$$F = ma$$

This equation does not introduce a new concept. It simply states, in symbols, what has already been said in words. The symbol $F$ represents the total force acting on an object, $m$ represents its mass, and $a$ represents the acceleration that results. A larger force produces a larger acceleration, while a larger mass makes the same force less effective at changing motion. When the force is zero, the acceleration is zero, which is exactly the content of Newton’s First Law expressed within a broader rule.

At this point, the equation should be read as a summary, not as a tool for calculation. Its real importance lies in the way it organizes our understanding. It tells us that motion changes for a reason, that this reason can be identified as a force, and that the response of an object is governed by its mass.

Posted in Physics for the typical man, Teaching

Lecture 2: Motion and Change

In everyday language, motion seems like one of the simplest ideas there is. If something changes its place, we say it is moving. A car travels along a road, a bird flies through the air, the hands of a clock slowly turn, and the Earth moves around the Sun. None of this feels strange or technical. We notice motion constantly, and we usually understand it well enough for everyday life.

Physics begins by taking this familiar idea and examining it more carefully. Instead of assuming we know what motion is, physics asks a basic question: what do we actually mean when we say that something moves? This question may sound unnecessary, but many misunderstandings in physics come from answering it too quickly. When ideas are left vague, they work in simple situations but fail in more complicated ones.

The first important realization is that motion is not something an object has by itself. Motion only makes sense when we compare the object to something else. This comparison is called a reference. Without a reference, the statement “this object is moving” has no clear meaning.

Consider a passenger sitting inside a moving train. From the passenger’s point of view, the seat, the floor, and the walls of the train are not moving at all. Relative to the train, the passenger is at rest. At the same time, someone standing on the ground outside the train sees the passenger moving rapidly along the tracks. Relative to the ground, the passenger is in motion. There is no contradiction here. Both descriptions are correct because they use different references. Physics does not try to decide which one is “really” true. Instead, it teaches us to be precise about what we are comparing motion to. Once the reference is clearly stated, the description of motion becomes clear as well. This idea, simple as it sounds, is one of the foundations of physics. By always asking “relative to what?”, we avoid confusion and prepare ourselves to describe motion in a way that works in all situations, not just the familiar ones.

This already tells us something important. Motion depends on perspective. Physics does not ask which perspective is the “true” one. It asks how to describe motion consistently once a perspective is chosen. Now consider position. The position of an object is simply where it is, measured relative to some reference point. If you say a book is two meters from the wall, you have described its position. If, a moment later, the book is three meters from the wall, its position has changed. That change is motion. Speed is our first attempt to describe how fast this change happens. In simple terms, speed tells us how much distance is covered in a given amount of time. Written compactly, this idea becomes

$$v = \frac{d}{t}$$

This expression is not meant to be intimidating. It does not add new information. It simply compresses a familiar idea into a short and precise form. The symbol $v$ stands for speed, $d$ stands for distance, and $t$ stands for time. In words, it says that speed tells us how much distance is covered during a certain amount of time. If an object covers a large distance in a short time, we say it has a high speed. If it covers a small distance in the same time, its speed is lower. If it takes a long time to cover a given distance, the speed is low again. All of this is already part of everyday experience. The equation just keeps track of it in a clear and unambiguous way.
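As a quick worked example, with numbers chosen only for illustration: a car that covers $100$ meters in $5$ seconds has speed

$$v = \frac{d}{t} = \frac{100\,\text{m}}{5\,\text{s}} = 20\,\text{m/s}.$$

Halve the time and the speed doubles; halve the distance and it halves. The equation behaves exactly as everyday intuition says it should.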

At this point, it might seem that speed completely describes motion. But physics quickly discovers that something important is missing. Speed tells us how fast something moves, but it does not tell us in which direction it moves. Direction matters. A car moving east at a certain speed is not behaving in the same way as a car moving west at that same speed. Even though the numbers describing their speeds may be identical, the motions themselves are different. To account for this, physics introduces a slightly richer idea called velocity. Velocity includes both speed and direction. It answers not only the question “how fast?” but also the question “which way?” Two objects can therefore have the same speed but different velocities if they are moving in different directions.

This distinction may seem like a technical detail, but it becomes essential as soon as motion stops being perfectly steady. Physics is not mainly interested in objects that move forever at the same speed in a straight line. It is interested in what happens when motion changes. When a car speeds up, slows down, or turns a corner, its velocity is changing. Even if the speed stays the same, a change in direction still counts as a change in velocity, and physics treats this as something physically important.

This leads naturally to the idea of acceleration. Despite its intimidating reputation, acceleration simply describes how velocity changes over time. If velocity stays the same, there is no acceleration. If velocity changes in any way, there is acceleration, even when the speed remains constant but the direction changes. A common example is circular motion. An object moving in a circle at constant speed is accelerating because its direction is continuously changing, which is one of the first places where everyday intuition often fails.

Physics insists on careful language because careless language hides these distinctions. By clearly separating position, speed, velocity, and acceleration, physics gives us tools that work in every situation, not just the familiar ones. At this stage, no calculations are required. What matters is developing the habit of thinking in terms of change: where an object is, how that is changing, and whether that change itself is changing. Once this way of thinking feels natural, the mathematical side of motion becomes much less intimidating, because the equations will only keep track of ideas you already understand.

Posted in Physics for the typical man, Teaching | Tagged | Leave a comment

Lecture 1: What Is Physics, Really?

Some people think of physics as a set of problems, laws, and formulas to memorize and solve. That way of thinking misses the point. Physics is really just the attempt to figure out how nature works using ideas that are as simple as they can be. If you have never studied physics before, you are in a good place: you do not yet know which questions to ask, and physics begins precisely by teaching you that. It starts with simple questions: why do things move, why do they stop, why does light reach us so quickly, and why is the world so predictable? The first thing to know is that physics does not deal in individual stories. It does not care which stone you drop or who dropped it. Physics looks for patterns that happen over and over again, no matter where or when the experiment is done. When we find such a pattern, we call it a law.

A law of physics is not a command that nature obeys. It is a summary of what nature has been observed to do. If tomorrow an experiment clearly contradicted a law, the law would change. This is one reason physics is reliable. It is always willing to correct itself. People often ask whether physics explains why things exist. That is usually not its job. Physics asks a different kind of question: if the world is in a certain state now, what will happen next? The success of physics comes from how well it answers that question. Consider something very simple. You let go of an object, and it falls. This happens whether the object is a stone, a book, or a coin. Physics notices that the details do not matter very much. What matters is that the object has mass and that the Earth pulls on it. From this, physics can predict how the object will move after you let go.

This focus on prediction is the reason mathematics appears in physics. Mathematics is not there for decoration. It is a precise language for saying how quantities are related. A sentence can be misunderstood, but an equation like $v = \frac{d}{t}$ leaves no room for interpretation. It tells you exactly how speed, distance, and time are connected. At first, physics often ignores complications. Surfaces are treated as perfectly smooth, objects are treated as points, and air resistance is forgotten. This is not because physicists believe the world is simple. It is because understanding a complicated situation usually begins by understanding a simpler one. Once the simple case is clear, corrections can be added. Friction can be included. Shapes can matter. Air can push back. The remarkable thing is that even with all these complications, the basic ideas remain the same.

You already use physics constantly, even if you do not think about it. When you walk, you rely on forces between your feet and the ground. When you hear, you rely on vibrations traveling through air. When you see, you rely on light reaching your eyes after traveling across space. Learning physics does not change how the world works. It changes how clearly you can think about it. Things that once seemed mysterious start to look reasonable. Things that once felt obvious sometimes turn out to be wrong.

As we continue, the goal is not to memorize results, but to learn how to think carefully about nature. Physics rewards clear reasoning, honest questioning, and a willingness to be corrected.

Posted in Physics for the typical man, Teaching | Tagged | Leave a comment

A Dark Halo That Almost Became a Galaxy

One of the cleanest ideas in modern cosmology is also one of the easiest to overlook. According to the standard ΛCDM model, structure in the Universe forms hierarchically, with dark matter collapsing under gravity into bound halos over an enormous range of masses. Massive halos, capable of hosting large galaxies or clusters of galaxies, are comparatively rare. As one moves to lower and lower masses, however, the number of halos increases rapidly. In fact, the theoretical abundance of halos rises so steeply toward small masses that low-mass halos should dominate the cosmic population by sheer numbers.

Taken at face value, this has a striking and somewhat unsettling implication. If every dark matter halo were to host a visible galaxy, the night sky would look very different from what we observe. Instead of a relatively sparse distribution of galaxies, we would expect to see an overwhelming number of faint systems, tracing the enormous population of small halos predicted by theory. The absence of such a population in observational surveys is not a minor detail. It is a central clue that tells us something fundamental about how galaxy formation works, and, just as importantly, about how it fails.

The natural conclusion is that most dark matter halos do not become galaxies in any ordinary sense of the word. They either never manage to form stars, or they form so few that their stellar populations are effectively invisible with current observational techniques. In this way, the prediction that the Universe should contain far more halos than galaxies is not a problem to be fixed, but a feature to be understood. It points toward a vast, largely unseen population of dark matter structures, quietly shaping the cosmic web without ever lighting up.

This conclusion is not controversial, and it is not new. What makes it interesting is that it forces us to confront an uncomfortable mismatch between theory and observation. We see galaxies, not halos. If the theoretical prediction is right, then most halos must fail to produce anything that looks like a galaxy. They either form no stars at all, or form so few that their light is effectively undetectable. In that sense, the existence of dark halos is not a speculative idea; it is almost required by the success of the model.

The real difficulty is empirical. A halo without stars is, by design, hard to see. Dark matter emits no light, and a system that lacks stars does not glow in the usual optical or infrared bands. If such objects exist, we should not expect them to announce themselves clearly. Instead, they must be found indirectly, through whatever faint traces of ordinary matter they manage to retain. This is where the physics of reionization enters the story. When the Universe was reionized, the intergalactic medium was heated by ultraviolet radiation to temperatures of order ten thousand kelvin. Gas at these temperatures develops significant pressure support, and only sufficiently deep gravitational potential wells can confine it for long periods of time. The result is a kind of mass threshold for galaxy formation. Above this threshold, halos can hold onto gas, allow it to cool, and eventually form stars. Below it, gas is easily lost or remains too warm and diffuse to collapse.

It is useful to summarize this complicated physics with a single number: a critical halo mass $M_{\mathrm{crit}}$. At the present epoch, this scale is around $10^{9.7} M_\odot$. The precise value is not especially important for what follows. What matters is that galaxy formation becomes sharply inefficient near this mass. A small change in halo mass, or in the details of its assembly history, can mean the difference between forming a faint dwarf galaxy and forming no stars at all.

This sharp transition opens up an intriguing possibility. Consider halos that lie just below the critical mass. They are not massive enough to form stars today, but they may still be massive enough to retain some of their gas. In such systems, the gas does not collapse into a rotating disk or fragment into stars. Instead, it can settle into a relatively simple configuration, supported by pressure rather than rotation, and held in place by the dark matter potential. In this regime, the behavior of the gas is governed by equilibrium rather than by violent astrophysical processes. The temperature is regulated by a balance between photoheating from the ultraviolet background and radiative cooling. Gravity tries to compress the gas inward, while pressure pushes back. When these effects balance, the gas settles into a quasi-static state, roughly spherical, dynamically cold, and largely free of the complications associated with star formation and feedback.

Objects of this type have come to be known as reionization-limited H I clouds. The name is less important than the idea behind it. These systems are not arbitrary oddities, but a natural prediction of the ΛCDM framework combined with reionization physics. They are expected to be rare, because they occupy a narrow window in halo mass, but they are also expected to have distinctive observational signatures, particularly in neutral hydrogen.

From a theoretical point of view, such objects are unusually attractive. Ordinary galaxies are messy. Their gas and stars are shaped by cooling, star formation, feedback, and environmental effects, all of which complicate any attempt to infer the underlying dark matter distribution. A gas-rich but starless halo, by contrast, is closer to a clean physics problem. If one could identify such a system and measure its gas properties, one would have a direct window into the structure of a low-mass dark matter halo, largely uncontaminated by the usual astrophysical uncertainties. The question, then, is not whether such objects are allowed by theory. It is whether any of them can be found in the real Universe.

With this theoretical picture in mind, it is natural to ask whether any concrete example of such a system has actually been observed. A particularly intriguing candidate has emerged in the vicinity of the nearby spiral galaxy M94: a compact neutral hydrogen cloud commonly referred to as Cloud-9. The object was first identified in 21 cm emission and subsequently confirmed with higher-resolution radio observations. What immediately sets Cloud-9 apart is that it looks like a coherent, self-contained system in H I, yet it does not obviously resemble a conventional gas-rich dwarf galaxy.

One of the key observational facts is kinematic. Cloud-9 has a recession velocity of approximately $v \simeq 304\,\mathrm{km\,s^{-1}}$, essentially identical to that of M94. This makes a chance alignment with a foreground Milky Way high-velocity cloud unlikely, and strongly suggests that Cloud-9 lies at roughly the same distance as M94, about $D \simeq 4.4\,\mathrm{Mpc}$. Interpreted at this distance, Cloud-9 is compact, with an angular extent of order an arcminute, corresponding to a physical scale of roughly a kiloparsec.

The velocity structure of the neutral hydrogen is equally important. The observed 21 cm line profile is narrow, with a reported width of $W_{50} \approx 12\,\mathrm{km\,s^{-1}}$. Such a small line width immediately distinguishes Cloud-9 from most gas-rich dwarf galaxies, which typically show broader profiles due to rotation or turbulence. Here there is no clear evidence for a rotating disk. Instead, the kinematics are consistent with a dynamically cold, pressure-supported gas cloud. From the total integrated 21 cm flux, one can estimate the neutral hydrogen mass using the standard relation
$$
M_{\mathrm{HI}} = 2.36\times 10^5\, D^2 \int S(v)\,dv \, M_\odot,
$$
where $D$ is the distance in megaparsecs and $\int S(v)\,dv$ is the velocity-integrated flux in $\mathrm{Jy\,km\,s^{-1}}$. Substituting the observed values and adopting the distance of M94 yields
$$
M_{\mathrm{HI}} \sim 10^6\, M_\odot.
$$
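As a quick numerical sketch of this relation (the integrated flux below is an illustrative placeholder chosen to reproduce the quoted order of magnitude, not the published measurement):

```python
# M_HI = 2.36e5 * D^2 * (integrated flux), with D in Mpc and flux in Jy km/s
D_mpc = 4.4            # adopted distance of M94, in Mpc
flux_jy_kms = 0.22     # illustrative velocity-integrated 21 cm flux (assumed)

m_hi = 2.36e5 * D_mpc**2 * flux_jy_kms   # neutral hydrogen mass in solar masses
print(f"M_HI ~ {m_hi:.1e} Msun")         # -> M_HI ~ 1.0e+06 Msun
```

Note how strongly the result depends on the adopted distance: the $D^2$ factor means that placing the cloud in the Milky Way foreground instead would reduce the inferred mass by orders of magnitude.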
This places Cloud-9 squarely in the regime of low-mass, gas-rich systems, comparable in H I content to some of the faintest known dwarf galaxies. At this point, one might reasonably wonder whether Cloud-9 could simply be an extreme but otherwise ordinary dwarf galaxy. However, modeling the gas as a pressure-supported system reveals a further puzzle. If the observed neutral hydrogen were the only source of gravity, the cloud would not be able to confine itself. The internal motions implied by the line width would cause the gas to disperse on relatively short timescales. The fact that Cloud-9 appears compact and long-lived implies the presence of an additional gravitational component. Interpreting this missing mass as dark matter leads to a striking inference. Simple equilibrium models, in which the gas sits in hydrostatic balance within a dark matter potential, suggest a total halo mass of order
$$
M_{\mathrm{halo}} \sim 5\times 10^9\, M_\odot.
$$
This value is not arbitrary. It lies remarkably close to the critical mass scale associated with reionization and the suppression of galaxy formation. In other words, Cloud-9 appears to sit precisely where theory predicts the transition between halos that form stars and halos that do not. If this interpretation is correct, then Cloud-9 is not merely an odd cloud of gas. It is a potential example of a dark matter halo whose mass is large enough to retain neutral hydrogen, yet small enough to have largely failed at forming stars. This makes it an unusually direct and concrete realization of the ideas discussed earlier, and immediately raises the most important question of all: if Cloud-9 really inhabits such a halo, where are its stars?

At this point the discussion becomes a detection problem in the literal statistical sense. Let us fix, as a working hypothesis, that Cloud-9 sits at the distance of M94, and consider a putative stellar component with total stellar mass $M_\star$. The observational data consist of a set of detected point sources in a small region on the sky centered near the H I maximum, together with their magnitudes in two bands (so that each source corresponds to a point in a color–magnitude diagram). The question is: for a given $M_\star$, what is the probability that a dataset of this depth would produce no statistically significant stellar overdensity at the Cloud-9 position?

To turn this into something quantitative, one needs three ingredients.
First, a model for the underlying stellar population. Concretely, one chooses an isochrone family and an initial mass function, and thereby obtains a distribution of intrinsic stellar luminosities in the observed filters, conditional on an assumed age and metallicity. In the most conservative case for detection, one takes an old, metal-poor population (for instance, age $\sim 10\,\mathrm{Gyr}$ and ${\rm [Fe/H]}\sim -2$), because younger populations would produce brighter, more easily detected stars for the same $M_\star$.
Second, one needs a model of the observational selection function. This consists of a completeness function $c(m)$ and an error model for the measured magnitudes, both of which can be calibrated by artificial-star injection and recovery tests. One may think of the selection function as defining, for any intrinsic magnitude $m$, a detection probability $c(m)\in[0,1]$ and a conditional distribution for the observed magnitude given that the star is detected.
Third, one needs a background model: even if Cloud-9 has no stars, the chosen sky region will contain some number of contaminating sources (foreground stars, unresolved background galaxies, and substructure within galaxies) that pass the point-source and quality cuts. The key point is that this background is measurable from control regions, so it can be treated as an empirically determined nuisance distribution rather than an arbitrary theoretical prior.

Once these ingredients are in place, the inference can be phrased in a way that is reasonably close to a standard hypothesis test. Fix a spatial aperture $A$ (for example, a circle of radius $r$ centered at the H I peak), and define a test statistic $N$ to be the number of detected sources within $A$ that survive the photometric quality cuts. For the purposes of obtaining a conservative upper limit, it is often enough to work with $N$ rather than with the full two-dimensional CMD distribution, because a genuine stellar population would typically increase $N$ as well as concentrate sources along the expected RGB locus.
Let $N_{\rm obs}$ be the observed value of this statistic in the Cloud-9 aperture. Let $B$ be the random variable representing the background counts in such an aperture, estimated by placing many apertures of the same size in control regions. Finally, let $S(M_\star)$ be the random variable representing the number of detected stars contributed by a stellar population of total mass $M_\star$ after applying completeness and photometric uncertainties. Then, under the hypothesis that Cloud-9 hosts a stellar population of mass $M_\star$, the total detected count is
$$
N(M_\star) \;=\; B \;+\; S(M_\star),
$$
where $B$ and $S(M_\star)$ are (to a good approximation) independent. The object of interest is then the tail probability
$$
p(M_\star) \;=\; \mathbb{P}\!\left( N(M_\star) \le N_{\rm obs} \right),
$$
namely the probability that one would observe a count no larger than $N_{\rm obs}$ if the true stellar mass were $M_\star$. If $p(M_\star)$ is very small, then $M_\star$ is inconsistent with the data at high confidence. This is the mathematically cleanest way to state the problem: we are trying to find the largest $M_\star$ for which the observation remains plausible once one accounts for both observational incompleteness and background contamination.
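A minimal Monte Carlo sketch of this tail probability, under stand-in assumptions (a small invented sample of control-aperture background counts, and a Poisson star count thinned by an assumed average completeness); the real analysis uses the measured background distribution and full population synthesis:

```python
import random

random.seed(0)

def poisson(lam):
    # Knuth's method; adequate for the small means used here
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_background():
    # B: background count, resampled from illustrative control-aperture counts
    control_counts = [2, 3, 4, 3, 5, 1, 6, 4, 3, 2, 7, 3]  # assumed, not measured
    return random.choice(control_counts)

def sample_detected_stars(mean_stars, completeness=0.8):
    # S(M*): Poisson star count, each star detected with assumed probability c(m)
    return sum(1 for _ in range(poisson(mean_stars)) if random.random() < completeness)

def p_value(mean_stars, n_obs, trials=20_000):
    # p(M*) = P(B + S(M*) <= N_obs), estimated by simulation
    hits = sum(sample_background() + sample_detected_stars(mean_stars) <= n_obs
               for _ in range(trials))
    return hits / trials

print(p_value(mean_stars=6.0, n_obs=3))  # small: such a population is disfavored
```

The structure mirrors the argument in the text: the stellar hypothesis enters only through the distribution of $S(M_\star)$, while the background enters as an empirically resampled nuisance term.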

There is a subtlety here that is easy to miss if one thinks only in terms of integrated light. For small stellar masses, the number of luminous tracer stars (such as RGB stars above a given magnitude limit) is a small integer, and therefore highly stochastic. Two stellar systems with the same $M_\star$ can produce noticeably different numbers of detectable bright stars simply because of Poisson and IMF sampling fluctuations. In the notation above, this is precisely the statement that $S(M_\star)$ is not well approximated by its mean. One really must treat $S(M_\star)$ as a full distribution, typically obtained by Monte Carlo sampling of the stellar population followed by application of the selection function. With this probabilistic framing, the “where are the stars?” question becomes sharply posed: determine the range of $M_\star$ for which $p(M_\star)$ remains non-negligible. If even $M_\star \sim 10^4 M_\odot$ yields $p(M_\star)\ll 1$ after properly accounting for background and incompleteness, then Cloud-9 cannot plausibly hide a Leo T–like stellar component. Conversely, if the data only rule out $M_\star \gtrsim 10^5 M_\odot$, then the object could still be an ultra-faint dwarf in disguise. The rest of the analysis is, essentially, an implementation of this inequality with realistic inputs.

Let us now connect the abstract tail probability
$$
p(M_\star)=\mathbb{P}\!\left(B+S(M_\star)\le N_{\rm obs}\right)
$$
to what is actually measured in the Cloud-9 field. Fix an aperture $A$ consisting of a circle of radius $r = 8.4''$, chosen because it corresponds to the effective radius of a Leo T analog placed at the distance of M94. Within this aperture, one can count the number of detected sources that survive a strict set of photometric quality cuts. Denote by $N_{\rm obs}$ the observed count in the Cloud-9 aperture. The raw observed number is $3$, but the H I centroid has a positional uncertainty comparable to the aperture size, so one should not treat the aperture center as exact. If one shifts the aperture center over the allowed centroid uncertainty and repeats the count, one obtains an empirical distribution of $N_{\rm obs}$ values with mean approximately $3.5$ and a dispersion of about $1$.

Next, one needs a background model. Define $B$ to be the random variable describing the number of contaminating sources per aperture. Rather than postulating a parametric form for $B$, one can estimate it empirically by placing a large number of apertures of identical size on a control region of the same dataset, processed with the same photometric pipeline and quality cuts. Operationally, this yields a background count distribution with mean approximately $3.7$ and dispersion about $2$ per aperture. The point is not the exact numbers, but the fact that the background level is measured directly and is comparable to the on-target count.

The relevant quantity is therefore not $N_{\rm obs}$ by itself, but the excess count
$$
\Delta \equiv N_{\rm obs}-B.
$$
At the Cloud-9 location, using the shifted-aperture procedure for $N_{\rm obs}$ and the control-aperture procedure for $B$, the inferred excess is
$$
\Delta \approx -0.2 \pm 2.2,
$$
which is consistent with $\Delta=0$ and, in particular, provides no evidence for a positive overdensity of point sources at the Cloud-9 position. Interpreted probabilistically, this means that any allowed stellar population must be one whose detectable contribution $S(M_\star)$ is typically of order a few stars or less, and even that only in the high tail of its stochastic distribution.
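The quoted excess follows from combining the two dispersions in quadrature, assuming the on-target and background estimates are independent:

```python
import math

n_obs_mean, n_obs_sigma = 3.5, 1.0   # shifted-aperture count (mean, dispersion)
bkg_mean, bkg_sigma = 3.7, 2.0       # control-aperture background (mean, dispersion)

delta = n_obs_mean - bkg_mean
delta_sigma = math.sqrt(n_obs_sigma**2 + bkg_sigma**2)
print(f"Delta = {delta:+.1f} +/- {delta_sigma:.1f}")  # -> Delta = -0.2 +/- 2.2
```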

Now we incorporate the forward model for the stellar population. For each candidate stellar mass $M_\star$, one generates many Monte Carlo realizations of a stellar population (e.g., an old, metal-poor population), converts to the observed bands, and then applies the empirically measured selection function (completeness and photometric scatter) to obtain the induced distribution of the detected-star count $S(M_\star)$. Crucially, because we are in the low-mass regime where the number of bright tracer stars is a small integer, the distribution of $S(M_\star)$ must be treated directly; it is not well described by its mean alone.

With these pieces in hand, the test becomes sharp. Consider the hypothesis $H(M_\star)$ that Cloud-9 hosts a stellar population of mass $M_\star$ within the aperture. Under $H(M_\star)$, the observed count is modeled as $N=B+S(M_\star)$, and one asks whether the realized $N$ is unusually small compared to what $H(M_\star)$ predicts. For a concrete example that is astrophysically meaningful, take $M_\star=10^4\,M_\odot$. Under this hypothesis, the Monte Carlo population synthesis plus selection function yields the following strong statement: in $99.5\%$ of realizations, at least one detectable star is recovered in the aperture. Equivalently,
$$
\mathbb{P}\!\left(S(10^4 M_\odot)\ge 1\right)=0.995,
\quad\text{so}\quad
\mathbb{P}\!\left(S(10^4 M_\odot)=0\right)=0.005.
$$
If one were in a zero-background world, the conclusion would already be immediate: a non-detection at the Cloud-9 position would exclude $M_\star=10^4 M_\odot$ at $99.5\%$ confidence.

But we are not in a zero-background world, and that is exactly why the excess variable $\Delta$ matters. The background count is not only nonzero, it is comparable to the observed count. The correct question is therefore: can the background fluctuations plausibly mask the additional stars predicted by $M_\star=10^4 M_\odot$? The excess estimate $\Delta=-0.2\pm 2.2$ implies that, even allowing for statistical uncertainty, the maximal plausible positive overdensity in the aperture is only a small integer. If one takes a deliberately conservative upper excursion consistent with this uncertainty, one obtains an upper bound of roughly $\Delta_{\max}\simeq 2$ stars attributable to a real counterpart. Thus, a stringent (and conservative) way to state consistency is
$$
S(M_\star)\le 2,
$$
because any model that would typically produce three or more detectable stars above background would tend to generate a positive excess inconsistent with what is observed.

Under $M_\star=10^4 M_\odot$, the Monte Carlo distribution for $S(M_\star)$ places most probability mass above this conservative threshold. Concretely, only about $8.7\%$ of realizations yield $S(10^4M_\odot)\le 2$, so
$$
\mathbb{P}\!\left(S(10^4M_\odot)\le 2\right)\approx 0.087.
$$
This means that even after giving the model the benefit of (i) centroid uncertainty, (ii) empirically measured background fluctuations, and (iii) a conservative tolerance for a small positive excess, a $10^4M_\odot$ stellar population remains strongly disfavored. One can summarize this as an exclusion at approximately the $1-0.087\approx 91.3\%$ level under the most conservative excess allowance, and at the $99.5\%$ level at the nominal center where the effective excess is consistent with zero and even negative.
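As a back-of-the-envelope consistency check (my own, not part of the analysis): if $S(10^4 M_\odot)$ were exactly Poisson, the quoted $\mathbb{P}(S=0)=0.005$ would imply a mean of $\lambda = -\ln 0.005 \approx 5.3$ detectable stars, and the corresponding $\mathbb{P}(S\le 2)$ comes out near $10\%$, in the same ballpark as the quoted $8.7\%$; the mild disagreement simply reflects that IMF sampling makes the true Monte Carlo distribution non-Poisson:

```python
import math

p_zero = 0.005                    # quoted P(S = 0) for M* = 1e4 Msun
lam = -math.log(p_zero)           # implied Poisson mean, ~5.3
p_le_2 = math.exp(-lam) * (1 + lam + lam**2 / 2)   # Poisson P(S <= 2)
print(f"lambda ~ {lam:.2f}, Poisson P(S<=2) ~ {p_le_2:.3f}")
```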

At this stage it is also important to explain why these choices are conservative rather than aggressive. The assumed stellar population is taken to be old and metal-poor, which minimizes the number of bright, easily detected stars for a given $M_\star$. Any younger or intermediate-age component would increase detectability and therefore strengthen the exclusion. Similarly, the use of strict quality cuts and an empirically calibrated completeness function protects against over-claiming detections, but also means that some genuine stars (if present) would be lost by the pipeline, again making the inferred upper limit conservative. Finally, the background is not modeled by a convenient distribution chosen to yield a strong result; it is measured directly from the same dataset, meaning the comparison is intrinsically like-for-like.

Putting the argument in its cleanest form, the data constrain the stellar mass to be so low that the expected number of detectable RGB stars is at most of order unity. In practice, the forward modeling indicates that the largest stellar mass compatible with producing on average no more than one detectable RGB star, after incompleteness and photometric scatter, is approximately
$$
M_\star \lesssim 10^{3.5}\,M_\odot.
$$
This is far below the stellar masses of canonical gas-rich dwarfs with similar neutral hydrogen masses, and it is precisely the sort of bound one would want if the goal is to distinguish “a faint dwarf galaxy” from “a gas-bearing halo that largely failed to form stars.”

In short, once one phrases the problem as a hypothesis test for $M_\star$ using an empirically calibrated selection function and background distribution, the result is not merely “we did not see a galaxy.” The result is a quantitative inequality: any stellar counterpart must be so small that, in a dataset capable of resolving red giant branch stars at M94’s distance, the induced detectable-star count is forced into the small-integer regime. That is the sense in which Cloud-9 behaves, statistically, like a starless system.

Posted in Expository | Tagged , | Leave a comment