When a Correct π Formula Makes a Bad Algorithm
Leibniz’s series for π feels like a cheat code—until you try to turn “nth digit” into a real program with real error bounds and finite precision. This is a walkthrough of how 22/7, slow convergence, and floating-point limits reshape what “calculate π” actually means in code.
Calculating π programmatically (and why 22/7 betrayed me)
I wanted to calculate π to the nth place.
My first instinct was: “Easy. Just use 22/7.” That value is famously close to π, and for quick mental math it’s fine. But when you’re writing code and tests expect actual digits, “close enough” stops being a concept and turns into red ink.
So began my small adventure into computing π.
Along the way I ran into a classic: a Taylor series identity that looks almost too neat to be real:
π/4 = 1 − 1/3 + 1/5 − 1/7 + …
It’s one of those formulas that makes you feel like you’ve found a cheat code. You can literally generate π by adding and subtracting fractions. No geometry, no circles, just a simple alternating sum.
And then I implemented it, the tests failed, and I learned why “simple” is not the same thing as “practical.”
The seductive simplicity of the Leibniz series
That series has a name (the Leibniz formula), but even without the name it’s easy to understand:
- It uses odd denominators: 1, 3, 5, 7, …
- It alternates signs: +, −, +, −, …
- Multiply the result by 4 and you get an approximation of π.
So I wrote something like this in Clojure:
(defn pi-to-nth [number]
  ;; lazy infinite seq of odd denominators: 1, 3, 5, 7, …
  (let [odd-numbers (filter odd? (iterate inc 1))]
    (* 4.0
       ;; pair +1/-1 signs with the first `number` denominators, divide, and sum
       (apply + (map / (cycle [1 -1]) (take number odd-numbers))))))
This felt great. It’s compact. It’s “functional.” It streams odd numbers lazily, zips them against alternating signs, divides, sums, multiplies by 4.
It also doesn’t do what my function name claims.
“Nth place” is not “number of terms”
My parameter was number, which I mentally mapped to “digits” or “places.” But I used it as “take this many odd numbers,” i.e., “this many terms in the series.”
Those are wildly different units.
If you want π to, say, 10 decimal places, you don’t want 10 terms of this series. You want however many terms produce an error smaller than 0.5 * 10^-10 (if you plan to round properly). That’s already a mismatch in thinking: digits are about error bounds, not iteration counts.
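A quick REPL check makes the unit mismatch concrete (outputs rounded here):

(pi-to-nth 10)   ;; => ~3.0418, nowhere near ten correct places
(pi-to-nth 1000) ;; => ~3.1406, a thousand terms buys roughly two correct decimals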
But there’s an even bigger issue:
This particular series converges painfully slowly
Even if you fix the “digits vs terms” confusion, you hit the wall of convergence.
The terms decrease like 1/(2k+1). That’s… not fast. And while the alternating nature helps (it doesn’t blow up), it still crawls. If you add a handful of terms, you don’t get “a handful of correct digits.” You get a handful of progress toward the first few digits.
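The alternating-series bound pins this down: after n terms, the error in the π/4 sum is at most the first omitted term, 1/(2n+1), so the error in π is at most 4/(2n+1). Getting d decimal places therefore costs on the order of 2 × 10^d terms. A back-of-the-envelope sketch (the helper name is mine, not part of the original code):

;; rough term count for d decimal places of π, from the 4/(2n+1) error bound
(defn leibniz-terms-needed [d]
  (long (Math/ceil (* 2 (Math/pow 10 d)))))

(leibniz-terms-needed 10) ;; => 20000000000, twenty billion terms for ten places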
That’s why it’s a great teaching series and a terrible production series.
This is also where the emotional whiplash comes from: the formula is correct, the code is correct, your expectations are what’s wrong.
Floating-point adds another layer of pain
Even if you did use a faster-converging method, you still need to be clear about what “calculate π to the nth place” means in a programming language.
Most of the time, if you’re using normal floating-point numbers (double), you’re already capped at around 15–16 decimal digits of precision. Beyond that, you’re not “calculating more digits”—you’re just rearranging rounding errors.
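You can watch the ceiling directly by asking a double for more digits than it has:

(format "%.20f" Math/PI)
;; => "3.14159265358979311600"
;; everything past the 15th decimal place is representation noise, not π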
My code forces floating-point by multiplying by 4.0. That’s fine for approximations, but it also means:
- you won’t get exact rational arithmetic
- you can’t reliably “print more digits” than the type can represent
- you’ll start seeing weird last-digit artifacts that aren’t “wrong math,” just representation limits
So if the goal is truly “π to n decimal places,” at some point you have to switch to big-number arithmetic: big decimals, rationals, or a bignum library that can control precision.
If the goal is “a decent approximation of π as a double,” then you don’t want “to the nth place” at all—you want “within a tolerance,” like 1e-12, and you stop iterating when the improvement becomes negligible.
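Clojure makes the first half of that switch surprisingly cheap: replace 4.0 with 4 and the whole pipeline stays in exact rational arithmetic, with with-precision controlling how the final Ratio gets rendered as decimal digits. A sketch under that assumption (the function name is hypothetical):

;; same series, but exact Ratios throughout; nothing is rounded until the end
(defn pi-leibniz-exact [terms]
  (* 4 (apply + (map / (cycle [1 -1])
                     (take terms (filter odd? (iterate inc 1)))))))

(with-precision 20
  (bigdec (pi-leibniz-exact 1000)))
;; => ~3.1405926538…M, exact summation, yet still only two correct decimals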
The more honest version of the task
This whole detour forced me to rewrite the problem statement in my head into something that actually maps to computation:
- If I want digits, I need arbitrary precision plus a stopping rule based on error.
- If I want an approximation, I need a numerically stable method plus a tolerance.
- If I want to learn, I can use anything—as long as I don’t confuse “it runs” with “it scales.”
And that’s why the tests failed. The tests weren’t judging my cleverness. They were judging whether my method could plausibly meet the precision implied by “nth place.”
A quick reality check on naming and expectations
Looking back, pi-to-nth was the trap. The name promises a direct mapping:
input: n
output: π rounded to n digits
But the implementation does:
input: number of terms
output: approximation that improves slowly and unpredictably with respect to digits
A better name would be something like:
- pi-approx-leibniz (honest)
- pi-leibniz-terms (even more honest)
- or something that takes epsilon and iterates until the series term is smaller than epsilon (honest and usable)
Even then, for Leibniz specifically, “usable” is relative.
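To make the epsilon-shaped version concrete, here is a minimal sketch (the name is hypothetical; the stopping rule leans on the alternating-series bound, where the first omitted term caps the remaining error):

(defn pi-leibniz-eps [eps]
  (loop [k 0, sign 1.0, sum 0.0]
    (let [term (/ 1.0 (inc (* 2 k)))]
      (if (< term eps)
        (* 4.0 sum) ;; error in π is below 4·eps
        (recur (inc k) (- sign) (+ sum (* sign term)))))))

(pi-leibniz-eps 1e-6) ;; ~500,000 iterations for about five correct decimals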
Why arctan starts looking attractive
After the Leibniz series, it’s natural to go searching for something that converges faster. That’s where inverse trig functions show up—especially arctangent.
There’s a well-known identity:
π = 4 * atan(1)
So if you can compute atan(x) accurately, you can compute π.
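In the laziest possible sense that is already true, as a sanity check rather than an algorithm:

(* 4 (Math/atan 1.0)) ;; => 3.141592653589793, but here the library is doing all the work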
And unlike the Leibniz series, there are arctan-based approaches that converge much faster when you pick an x that makes the series behave nicely: the smaller the magnitude of the argument, the faster arctan’s power series converges. This is why you’ll see people combine a few arctan terms, with Machin’s identity π/4 = 4·atan(1/5) − atan(1/239) as the classic example, rather than computing atan(1) directly with a naive series.
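To see why, here is a minimal sketch of a series-based atan plugged into Machin’s identity; this is me looking ahead, not something my original attempt included:

;; atan via its power series: x - x^3/3 + x^5/5 - ...
;; the smaller |x| is, the faster this converges (terms shrink by ~x^2 each step)
(defn atan-series [x terms]
  (reduce + (map (fn [k]
                   (let [n (inc (* 2 k))]
                     (/ (* (if (even? k) 1.0 -1.0) (Math/pow x n)) n)))
                 (range terms))))

;; Machin (1706): π = 16·atan(1/5) − 4·atan(1/239)
(- (* 16 (atan-series (/ 1.0 5) 12))
   (* 4 (atan-series (/ 1.0 239) 12)))
;; => ~3.141592653589793, full double precision from about a dozen terms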
That direction immediately feels more “engineered”: instead of relying on an alternating harmonic-ish series, you’re looking for a method where each additional term buys you more correctness.
I’m deliberately not pretending I’ve solved that part yet, because “exploring atan” is exactly where I am: at the point where you realize computing π is less about finding a formula and more about finding a formula that behaves well under finite precision and reasonable runtime.
What I actually liked about this failure
The best part of this whole thing is that it exposed two misconceptions I didn’t know I was carrying:
- A correct infinite series is not automatically a good algorithm. Some series exist mainly to prove a point, not to be executed until your laptop melts.
- Digits are about error control, not iteration counts. “Do n iterations” is a programming instinct. “Stop when the error is small enough” is the numerical-analysis instinct. Calculating π drags you from the first world into the second.
Also, writing that Clojure snippet was genuinely fun. Lazy sequences, cycle for alternating signs, mapping division across two streams—there’s something satisfying about expressing math as transformations rather than loops. Even if the math you expressed is a slow-moving truck.
Conclusion
22/7 is a great approximation until you ask it to be π. The Leibniz series is a great formula until you ask it to deliver digits on demand. Trying to compute π is a surprisingly efficient way to learn the difference between “a mathematically correct expression” and “a practical numerical method.” Right now, arctan-based approaches feel like the next step because they hint at something the Leibniz series doesn’t: convergence that respects your time.