spraci.info


Items tagged with: math

● NEWS ● #TruthOut ☞ Racial Disparities in #Science and #Math #Education Are Hurting Black Communities
Racial Disparities in Science and Math Education Are Hurting Black Communities
 

A warped perspective on math history


Yesterday I posted on
@TopologyFact

The uniform limit of continuous functions is continuous.

John Baez replied that this theorem was proved by his "advisor's advisor's
advisor's advisor's advisor's advisor." I assume he was referring to Christoph
Gudermann.

The impressive thing is not that Gudermann was able to prove this simple
theorem. The impressive thing is that he saw the need for the concept of
uniform convergence. My impression from reading the Wikipedia
article
on uniform
convergence is that Gudermann alluded to uniform convergence in passing and
didn't explicitly define it or formally prove the theorem above. He had the
idea and applied it but didn't see the need to make a fuss about it.... Show more...
 

Decomposing functions of many variables to functions of one variable


Suppose you have a computer that can evaluate and compose continuous functions
of one real variable and can do addition. What kinds of functions could you
compute with it? You could compute functions of one variable by definition,
but could you bootstrap it to compute functions of two variables?

Here's an example that shows this computer might be more useful than it seems
at first glance. We said it can do addition, but can it multiply? Indeed it
can [1].

We can decompose the function

Image/photo

into

Image/photo

where

Image/photo

So multiplica... Show more...
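One standard decomposition (a sketch; not necessarily the exact one in the images above) builds multiplication from addition and the one-variable functions t ↦ t², t ↦ −t, and t ↦ t/4, since xy = ((x + y)² − (x − y)²)/4:

```python
# One-variable functions the hypothetical machine is allowed to evaluate
square  = lambda t: t * t
negate  = lambda t: -t
quarter = lambda t: t / 4

def multiply(x, y):
    # xy = ((x + y)^2 - (x - y)^2) / 4, using only addition and unary functions
    return quarter(square(x + y) + negate(square(x + negate(y))))
```

Note that even subtraction is expressed as addition plus the unary negation function.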
 

Eliminating polynomial terms


The first step in solving a cubic equation is to apply a change of variables
to reduce an equation of the form

x³ + bx² + cx + d = 0

to one of the form

y³ + py + q = 0.

This process can be carried further through Tschirnhausen transformations, a
generalization of an idea going back to Ehrenfried Walther von Tschirnhaus in
1683.
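The substitution that removes the quadratic term is x = y − b/3, giving p = c − b²/3 and q = 2b³/27 − bc/3 + d. A quick numerical check (a sketch, with names of my own choosing):

```python
def depressed_coeffs(b, c, d):
    # substituting x = y - b/3 into x^3 + b x^2 + c x + d yields y^3 + p y + q
    p = c - b * b / 3
    q = 2 * b**3 / 27 - b * c / 3 + d
    return p, q

b, c, d = 6.0, -3.0, 2.0
p, q = depressed_coeffs(b, c, d)

def cubic(x):   return x**3 + b * x**2 + c * x + d
def reduced(y): return y**3 + p * y + q
```

Evaluating `cubic(y - b/3)` and `reduced(y)` at any y should agree, confirming the x² term has been eliminated.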

For a polynomial of degree n > 4, a Tschirnhausen transformation is a
rational change of variables

y = g ( x ) / h ( x )

turning the equation

xⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + … + a₀ = 0

into

yⁿ + bₙ₋₄yⁿ⁻⁴ + bₙ₋₅y... Show more...
 

Leapfrog integrator


The so-called "leapfrog" integrator is a numerical method for solving
differential equations of the form

Image/photo

where x is a function of t. Typically x is position and t is time.

This form of equation is common for differential equations coming from
mechanical systems. The form is more general than it may seem at first. It
does not allow terms involving first-order derivatives, but these terms can
often be eliminated via a change of variables. See this
post

for a way to eliminate first order terms from a linear ODE.

The leapfrog integrator is also known as the Störmer-Verlet method, or the
Newton-Störmer-Verlet method, or the Newton-Störmer-Verlet-leapfrog meth... Show more...
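A minimal sketch of the method, written in the velocity Verlet form of the leapfrog update (function names are my own):

```python
def leapfrog(f, x, v, h, steps):
    """Integrate x'' = f(x) with the leapfrog (velocity Verlet) scheme."""
    a = f(x)
    for _ in range(steps):
        x = x + v * h + 0.5 * a * h * h   # position update
        a_new = f(x)
        v = v + 0.5 * (a + a_new) * h     # velocity update uses both accelerations
        a = a_new
    return x, v

# harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0; exact solution cos(t)
x, v = leapfrog(lambda x: -x, 1.0, 0.0, 0.01, 628)
```

A notable property of the method is that it is symplectic: the energy x² + v² of the oscillator stays bounded near 1 rather than drifting.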
 

A Few Historical Frauds

Einstein, Bell & Edison, Coca-Cola and the Wright Brothers -- by Larry Romanoff


Image/photo

There are only two nations in the world whose existence seems to be founded primarily on #historical #myths. In the US, false historical #mythology permeates every nook and cranny of the #American #psyche, the result of more than 100 years of astonishing and unconscionable programming and #... Show more...
 

Counterexample to Dirichlet principle


Let Ω be an open set in some Euclidean space and v a real-valued function on
Ω.

Dirichlet principle


Dirichlet's integral for v , also called the Dirichlet energy of v , is

Image/photo

Among functions with specified values on the boundary of Ω, Dirichlet's
principle says that minimizing Dirichlet's integral is equivalent to solving
Laplace's equation.

In a little more detail, let g be a continuous function on the boundary ∂Ω
of the region Ω. A function u has minimum Dirichlet energy, subject to the
requirement that u = g on ∂Ω, if and only if u solves Laplace 's
equation

... Show more...
 

Morse code golf


You can read the title of this post as ((Morse code) golf) or as (Morse (code
golf)).

Morse code is a sort of approximate Huffman coding of letters: letters are
assigned symbols so that more common letters can be transmitted more quickly.
You can read about how well Morse code achieves this design objective
[here](https://www.johndcook.com/blog/2017/02/08/how-efficient-is-morse-
code/).

But digits in Morse code are kinda strange. I imagine they were an
afterthought, tacked on after encodings had been assigned to each of the
letters, and so had to avoid encodings that were already in use. Here are the
assignments:
|-------+-------|
| Digit | Code  |
|-------+-------|
|   1   | .---- |
|   2   | ..--- |
|   3   | ...-- |
|   4   | ....- |
|   5   | ..... |
|   6   |
... Show more...
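The digit encodings follow a regular pattern: digits 1 through 5 are that many dots padded with dashes to length five, digits 6 through 9 are d − 5 dashes padded with dots, and 0 is five dashes. A sketch:

```python
def morse_digit(d):
    # 1-5: d dots then (5 - d) dashes; 6-9: (d - 5) dashes then (10 - d) dots
    if d == 0:
        return "-" * 5
    if d <= 5:
        return "." * d + "-" * (5 - d)
    return "-" * (d - 5) + "." * (10 - d)
```

Every digit is exactly five symbols long, which is why none of the digit codes can collide with the shorter letter codes.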
 

Squircle corner radius


I've written several times about the "squircle," a sort of compromise between
a square and a circle. It looks something like a square with rounded corners,
but it's not. Instead of having flat sides (zero curvature) and circular
corners (constant positive curvature), the curvature varies
continuously
.

A natural question is just what kind of circle approximates the corners. This
post answers that question, finding the radius of curvature of the osculating
circle
.

Image/photo

The squircle has a parameter p which determines how close the curve is to a
circle or a square.

... Show more...
 

Review: Calculator Kit is Just a Few Hacks From Greatness





#featured #microcontrollers #reviews #calculator #hd44780 #kit #math #repurpose #rpncalculator #hackaday
posted by pod_feeder_v2
Review: Calculator Kit is Just a Few Hacks From Greatness
 
#NewHere looking for a platform where #intellectual #freespeech is welcomed. I will be happy to discuss #science, #math, #biotech, #astronomy, whatever up to #philosophy.
Politically leaning towards #libertarian , while trying to get out of my bubble
 

Implementing the Exponential Function


#softwarehacks #exponential #math #taylorseries #hackaday
posted by pod_feeder_v2
Implementing the Exponential Function
 

Approximating rapidly divergent integrals


A while back I ran across a paper [1] giving a trick for evaluating integrals
of the form

Image/photo

where M is large and f is an increasing function. For large M , the
integral is asymptotically

Image/photo

That is, the ratio of A ( M ) to I ( M ) goes to 1 as M goes to
infinity.

This looks like a strange variation on [Laplace's
approximation](https://www.johndcook.com/blog/2017/12/19/laplace-approx-
logistic/). And although Laplace's method is often useful in practice, no
applications of the approximation above come to mind. Any ideas... Show more...
 

Best approximation of a catenary by a parabola


A parabola and a catenary can look very similar but are not the same. The
graph of

y = x ²

is a parabola and the graph of

y = cosh(x) = (eˣ + e⁻ˣ)/2

is a catenary. You've probably seen parabolas in a math class; you've seen a
catenary if you've seen the St. Louis arch.

Depending on the range and scale, parabolas and catenaries can be too similar
to distinguish visually, though over a wide enough range the exponential
growth of the catenary becomes apparent.

For example, for x between -1 and 1, it's possible to scale a parabola to
match a catenary so well that the graphs practically overlap. The blue curve
is a catenary and the orange curve is a parabola.

Image/photo... Show more...
 

Convex function of diagonals and eigenvalues


Sam Walters
posted an
elegant theorem on his Twitter account this morning. The theorem follows the
pattern of an equality for linear functions generalizing to an inequality for
convex functions. We'll give a little background, state the theorem, and show
an example application.

Let A be a real symmetric n × n matrix, or more generally a complex n
× n Hermitian matrix, with entries a ij. Note that the diagonal elements
a ii are real numbers even if some of the other entries are complex. (A
Hermitian matrix equals its conjugate transpose, which means the elements on
the diagonal equal their own conjugate.)

A general theorem says that A has n eigenvalues. Denote these eigenvalues
λ 1, λ... Show more...
 

Bit flipping to primes


Someone asked an interesting question on
[MathOverflow](https://mathoverflow.net/questions/363083/hamming-distance-to-
primes): given an odd number, can you always flip a bit in its binary
representation to make it prime?

It turns out the answer is no, but apparently it is very often the case an odd
number is just a bit flip away from being prime. I find that surprising.

Someone pointed out that 2131099 is not a bit flip away from a prime, and that
this may be the smallest example. The counterexample 2131099 is itself prime,
so you could ask whether an odd number is either a prime or a bit flip away
from a prime. Is this always the case? If not, is it often the case?

The MathOverflow question was stated in terms of Hamming distance, counting
the number of bits in which two bit sequences differ. It asked whether odd... Show more...
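A brute-force check of the counterexample is straightforward (a sketch, restricting flips to the bits of the number's own binary representation; `is_prime` and `bit_flips` are names of my own choosing):

```python
def is_prime(m):
    # trial division; fine for numbers of this size
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def bit_flips(m):
    # all numbers at Hamming distance 1 from m, within m's own bit length
    return [m ^ (1 << i) for i in range(m.bit_length())]

counterexample = 2131099
```

Checking each of its 22 bit flips confirms that none is prime, while 2131099 itself is.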
 
Image/photo

OF LANGUAGE, GEMATRIA, and SYNTHESIS


Mathematics is a language. A language, that for as much as any individual may hope to do so, demands to "say what you mean" divorced of emotional driven poetic abstracts.
Phonetic and otherwise written languages of "word", counter to the language of math, concern themselves first and foremost with abstract poetics. The definition of every word in and of itself to be found in nothing but more words and never anything concrete.

Gematria, which ascribes numerical values to correspondence with the written word and subsequently also phonetics is both a language of (more precise) "meaning" and abstract poetics. Of synthesis, the idea of any meaningful gematria existing to unite these seeming polar opposites is in itself impressive. The fact that numerous gematria language systems stand in existence, notably in any sort of meaningful "large scale" use and modularity, may be consid... Show more...
 

The shape of beams and bulkheads


After finding the NASA publication I mentioned in my previous
post
, I
poked around a while longer in the NASA Technical Reports
Server
and found a few curiosities. One was
that at one time NASA was interested in shapes that similar to the
[superellipses](https://www.johndcook.com/blog/2014/06/07/swedish-
superellipse/) and [squircles](https://www.johndcook.com/blog/2018/02/13
/squircle-curvature/) I've written about before.

A report [1] that I stumbled on was concerned with shapes with boundary
described by

... Show more...
 

NASA’s favorite ODE solver


NASA's Orbital Flight
Handbook
, published in 1963,
is a treasure trove of technical information, including a section comparing
the strengths and weaknesses of several numerical methods for solving
differential equations.

The winner was a predictor-corrector scheme known as Gauss-Jackson, a method I
have not heard of outside of orbital mechanics, but one apparently
particularly well suited to its niche.
The Gauss-Jackson second-sum method is strongly recommended for use in
either Encke or Cowell [approaches to orbit modeling]. For comparable
accuracy, it will allow step-sizes larger by factors of four or more than any
of the fourth order methods. … As compared with unsummed methods of comparable
accuracy, the Gauss-Jackson method has the very important advantage that
roundoff error growth is inhibited. … The
... Show more...
 

Change of basis and Stirling numbers


Polynomials form a vector space—the sum of two polynomials is a polynomial
etc.—and the most natural basis for this vector space is powers of x :

1, x , x ², x ³, …

But the power basis is not the only possible basis, and often not the most
useful basis in application.

Falling powers


In some applications the falling powers of x are a more useful basis. For
positive integers n , the n th falling power of x is defined to be

Image/photo

Falling powers come up in
combinatorics, in the [calculus
of finite differences](https://www.j... Show more...
 

ODE solver landscape


Many methods for numerically solving ordinary differential equations are
either Runge-Kutta methods or linear multistep methods. These methods can
either be explicit or implicit.

The table below shows the four combinations of these categories and gives some
examples of each.

Image/photo

Runge-Kutta methods advance the solution of a differential equation one
step at a time. That is, these methods approximate the solution at the next
time step using only the solution at the current time step and the
differential equation itself.

Linear multistep methods approximate the solution at the next time step
using the computed solutions at the latest several time steps.

Explicit methods express the solution at the next time step as an explicit
function of othe... Show more...
 

New math for going to the moon


Image/photo

Before I went to college, I'd heard that it took new math and science for
Apollo to get to the moon. Then in college I picked up the idea that Apollo
required a lot of engineering, but not really any new math or science. Now
I've come [full circle](https://www.johndcook.com/blog/2011/01/25/coming-full-
circle/) and have some appreciation for the math research that was required
for the Apollo landings.

Celestial mechanics had been studied long before the Space Age, but that
doesn't mean the subject was complete. According to One Giant
Leap
,
In the weeks after Sputnik, one Langley [Research Center] scientist went
looking for books on orbital mechanics
... Show more...
 

Where does the seven come from?


Here's a plot of exp(6it)/2 + exp(20it)/3:

Image/photo

Notice that the plot has 7-fold symmetry. You might expect 6-fold symmetry
from looking at the equation. Where did the 7 come from?

I produced the plot using the code from this
post
, changing the
line defining the function to plot to
def f(t):
    return exp(6j*t)/2 + exp(20j*t)/3

You can find the solution in Eliot's comment in this Twitter
thread
.

Related links


* Daily exponential sum
* Mystery curve

Image/photo

http://feedproxy.google.com/~r/TheEndeavour/~3/pbTiXx1430Q/
#johndcook #Math
Where does the seven come from?

John D. Cook: Where does the seven come from?

 

Gibbs phenomenon


I realized recently that I've written about generalized Gibbs phenomenon, but
I haven't written about its original context of Fourier series. This post will
rectify that.

The image below comes from a previous post illustrating Gibbs phenomenon for a
Chebyshev approximation to a step function.

Image/photo

Although Gibbs phenomenon comes up in many different kinds of approximation, it
was first observed in Fourier series, and not by Gibbs [1]. This post will
concentrate on Fourier series, and will give an example to correct some wrong
conclusions one might draw about Gibbs phenomenon from the most commonly given
examples.

The uniform limit of continuous functions is continuous, and so the Fourier
series of a function cannot converge uniformly where the function is
discontinuous. But what does the Fourier s... Show more...
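For the classic example, the partial Fourier sums of a square wave overshoot the jump by about 9%, approaching (2/π) Si(π) ≈ 1.1789 near the discontinuity no matter how many terms are taken. A numerical sketch:

```python
import numpy as np

# partial Fourier sum of the square wave: (4/pi) * sum over odd k <= N of sin(kx)/k
N = 999
k = np.arange(1, N + 1, 2)
x = np.linspace(1e-4, 0.02, 4000)   # zoom in just to the right of the jump at 0
S = (4 / np.pi) * (np.sin(np.outer(x, k)) / k).sum(axis=1)
overshoot = S.max()   # approaches (2/pi) * Si(pi) ≈ 1.1789 as N grows
```

Increasing N narrows the overshoot region but does not shrink its height; that is the essence of Gibbs phenomenon.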
 

Novel and extended floating point


My first consulting project, right after I graduated college, was developing
floating point algorithms for a microprocessor. It was fun work, coming up
with ways to save a clock cycle or two, save a register, get an extra bit of
precision. But nobody does that kind of work anymore. Or do they?

There is still demand for novel floating point work. Or maybe I should say
there is once again demand for such work.

Companies are interested in low-precision arithmetic. They may want to save
memory, and are willing to trade precision for memory. With deep neural
networks, for example, quantity is more important than quality. That is, there
are many weights to learn but the individual weights do not need to be very
precise.

And while some clients want low-precision, others want extra precision. I'm
usually skeptical when someone tells me they need extended precision because
typically they just need a better al... Show more...
 

Square roots of Gaussian integers


In a [previous post](https://www.johndcook.com/blog/2020/06/09/complex-square-
root/) I showed how to compute the square root of a complex number. I gave as
an example that computed the square root of 5 + 12i to be 3 + 2i.

(Of course complex numbers have two square roots, but for convenience I'll
speak of the output of the algorithm as the square root. The other root is
just its negative.)

I chose z = x + iy in my example so that x² + y² would be a
perfect square because this simplified the exposition. That is, I designed the
example so that the first step would yield an integer. But I didn't expect
that the next two steps in the algorithm would also yield integers. Does that
always happen or did I get lucky?

It does not always happen.... Show more...
 

How to compute the square root of a complex number


Suppose you're given a complex number

z = x + i y

and you want to find a complex number

w = u + iv

such that w ² = z. If all goes well, you can compute w as follows:

ℓ = √(x² + y²)
u = √((ℓ + x)/2)
v = sign(y) √((ℓ − x)/2).

For example, if z = 5 + 12i, then ℓ = 13, u = 3, and v = 2. A quick
calculation confirms

(3 + 2 i )² = 5 + 12 i.

(That example worked out very nicely. More on why in the [next
post](https://www.johndcook.com/blog/2020/06/09/square-root-gaussian-
integer/).)

Numerical issue

... Show more...
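The three formulas above translate directly into code (a sketch; `csqrt` is a name of my own choosing):

```python
import math

def csqrt(z):
    # l = |z|, u = sqrt((l + x)/2), v = sign(y) * sqrt((l - x)/2)
    x, y = z.real, z.imag
    l = math.hypot(x, y)
    u = math.sqrt((l + x) / 2)
    v = math.copysign(math.sqrt((l - x) / 2), y)
    return complex(u, v)

w = csqrt(complex(5, 12))   # the worked example: should give 3 + 2i
```

`copysign` handles the sign(y) factor, and `hypot` computes √(x² + y²) without intermediate overflow.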
 
#biology #animals #languageUnderstanding #math
In a Fascinating Twist, Animals That Do Math Also Understand More Language Than We Think
 

Negative space graph


Here is a plot of the first 30 Chebyshev polynomials. Notice the interesting
patterns in the white space.

Image/photo

Forman Acton famously described Chebyshev polynomials as “cosine curves with a
somewhat disturbed horizontal scale.”
frequencies 1 to 30 gives you pretty much a solid square. Something about the
way Chebyshev polynomials disturb the horizontal scale creates the interesting
pattern in negative space.

More on Chebyshev polynomials


* Product of Chebyshev polynomials
* Chebyshev approximation
* Yogi Berra meets Chebyshev

Image/photo

http://feedproxy.google.com/~r/TheEndeavour/~3/YjSJZ3GH_48/
#johndcook #Math #Specialfunctions
Negative space graph

John D. Cook: Negative space in graph of Chebyshev polynomials

 

Relatively prime determinants


Suppose you fill two n × n matrices with random integers. What is the
probability that the determinants of the two matrices are relatively prime? By
"random integers" we mean that the integers are chosen from a finite interval,
and we take the limit as the size of the interval grows to encompass all
integers.

Let Δ( n ) be the probability that two random integer matrices of size n
have relatively prime determinants. The function Δ( n ) is a strictly
decreasing function of n.

The value of Δ(1) is known exactly. It is the probability that two random
integers are relatively prime, which is well known to be 6/π². I've probably
blogged about this before.

The limit of Δ( n ) as n goes to infinity is known as the Hafner-Sarnak-
McCurley constant [1], which has been computed to be 0.3532363719…
... Show more...
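The Δ(1) = 6/π² case is easy to check by simulation (a sketch, sampling from a large finite range as a stand-in for the limit):

```python
import math
import random

random.seed(1)
trials = 20000
hits = sum(math.gcd(random.randrange(1, 10**6), random.randrange(1, 10**6)) == 1
           for _ in range(trials))
prob = hits / trials   # should land near 6/pi^2 ≈ 0.6079
```

With 20,000 trials the sampling error is a few tenths of a percent, comfortably close to 6/π².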
 

Sinc approximation


If a function is smooth and has thin tails, it can be well approximated by
sinc functions. These approximations are frequently used in applications, such
as signal processing and numerical integration. This post will illustrate sinc
approximation with the function exp(- x ²).

The sinc approximation for a function f ( x ) is given by

Image/photo

where sinc(x) = sin(πx)/(πx).

Do you get more accuracy from sampling more densely or by sampling over a
wider range? You need to do both. As the number of sample points n
increases, you want h to decrease something like 1/√n and the range to
increase something like √n.

According to [1], the best trade-off between smaller h and larger n... Show more...
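A sketch of the approximation for f(x) = exp(−x²), using numpy's `sinc`, which is already normalized as sin(πx)/(πx) (the function name `sinc_approx` and the particular h and n are my own choices for illustration):

```python
import numpy as np

def sinc_approx(f, h, n, x):
    # sum over k = -n..n of f(kh) * sinc((x - kh)/h)
    k = np.arange(-n, n + 1)
    return np.sum(f(k * h) * np.sinc((x - k * h) / h))

f = lambda t: np.exp(-t ** 2)
approx = sinc_approx(f, 0.25, 40, 0.3)   # h = 0.25, samples out to ±10
```

Because the Gaussian is smooth and decays quickly, even this modest grid reproduces f(0.3) to many digits.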
 

Accurately computing a 2×2 determinant


The most obvious way to compute the determinant of a 2×2 matrix can be
numerically inaccurate. The biggest problem with computing ad - bc is that
if ad and bc are approximately equal, the subtraction could lose a lot of
precision. William Kahan developed an algorithm for addressing this problem.

Fused multiply-add (FMA)


Kahan's algorithm depends on a fused multiply-add function. This function
computes xy + z using one rounding operation at the end, where the direct
approach would use two.

In more detail, the fused multiply-add behaves as if it takes the floating
point arguments x , y , and z and lifts them to the Platonic realm of
real numbers, calculates xy + z exactly, and then brings the result back
to the world of floating point numbers. The true value of xy... Show more...
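Kahan's algorithm computes w = bc once, uses one FMA to capture the rounding error in w, and another to compute ad − w; the two pieces then combine with very little error. A sketch in Python, emulating `fma` exactly with rational arithmetic since a native `math.fma` only appears in recent Python versions:

```python
from fractions import Fraction

def fma(x, y, z):
    # emulate fused multiply-add: compute x*y + z exactly, then round once
    return float(Fraction(x) * Fraction(y) + Fraction(z))

def kahan_det(a, b, c, d):
    # accurate determinant of [[a, b], [c, d]]
    w = b * c
    e = fma(-b, c, w)    # the rounding error committed in computing w
    f = fma(a, d, -w)    # a*d - w with a single rounding
    return f + e         # f + e ≈ a*d - b*c to high accuracy

# determinant of [[1e8+1, 1e8+2], [1e8-2, 1e8-1]] is exactly 3,
# but the naive a*d - b*c returns 4 in double precision
det = kahan_det(1e8 + 1, 1e8 + 2, 1e8 - 2, 1e8 - 1)
```

The naive formula loses the answer entirely here because ad and bc agree to 16 digits; Kahan's version recovers it exactly.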
 

Ratio of area to perimeter


Given a curve of a fixed length, how do you maximize the area inside? This is
known as the isoperimetric problem.

The answer is to use a circle. The solution was known long before it was
possible to prove; proving that the circle is optimal is surprisingly
difficult. I won't give a proof here, but I'll give an illustration.

Consider regular polygons inscribed in a circle. What happens to the ratio
of area to perimeter as the number of sides increases? You might suspect that
the ratio increases with the number of sides, because the polygon is becoming
more like a circle. This turns out to be correct, and it's not that hard to be
precise about what the ratio is as a function of the number of sides.

For a regular polygon inscribed in a circle of radius r ,

Image/photo

and

... Show more...
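Using the standard formulas for a regular n-gon inscribed in a circle of radius r, the ratio of area to perimeter works out to (r/2) cos(π/n), which increases toward the circle's value r/2. A sketch:

```python
import math

def area_perimeter_ratio(n, r=1.0):
    # regular n-gon inscribed in a circle of radius r
    area = 0.5 * n * r * r * math.sin(2 * math.pi / n)
    perimeter = 2 * n * r * math.sin(math.pi / n)
    return area / perimeter   # simplifies to (r/2) * cos(pi/n)

ratios = [area_perimeter_ratio(n) for n in range(3, 1000)]
```

The sequence is strictly increasing and approaches 0.5, the ratio πr²/2πr for the unit circle.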
 

Curse of dimensionality and integration


The curse of dimensionality refers to problems whose difficulty increases
exponentially with dimension. For example, suppose you want to estimate the
integral of a function of one variable by evaluating it at 10 points. If you
take the analogous approach to integrating a function of two variables, you
need a grid of 100 points. For a function of three variables, you need a cube
of 1000 points, and so forth.

You cannot estimate [high-dimensional
integrals](https://www.johndcook.com/blog/2015/07/19/high-dimensional-
integration/) this way. For a function of 100 variables, using a lattice with
just two points in each direction would require 2¹⁰⁰ points.

There are much more efficient ways to approximate integrals than simply adding
up values at grid points, assuming your integrand is smooth. But when applying
any of... Show more...
 
Village school in Tibet flourishes under reforms

https://global.chinadaily.com.cn/a/202005/28/WS5ecf121ca310a8b24115906b.html

Kelsang Dekyi has relied on knowledge to change her destiny, so it's no surprise she believes education has the power to change people's lives.

#China #Chinese #Tibet #education #reform #Mandarin #math #school
 

Fundamental Theorem of Arithmetic


It's hard to understand anything from just one example. One of the reasons for
studying other planets is that it helps us understand Earth. It can even be
helpful to have more examples when the examples are purely speculative, such
as xenobiology, or even known to be false, i.e.
[counterfactuals](https://www.johndcook.com/blog/bayesian-networks-causal-
inference/), though here be dragons.

The fundamental theorem of arithmetic seems trivial until you see examples of
similar contexts where it isn't true. The theorem says that integers have a
unique factorization into primes, up to the order of the factors. For example,
12 = 2² × 3. You could re-order the right hand side as 3 × 2², but you can't
change the list of prime factors and their exponents.

I was unimpressed when I first heard of fundamenta... Show more...
 

The Fundamental Theorem of Algebra


This post will take a familiar theorem in a few less familiar directions.

The Fundamental Theorem of Algebra (FTA) says that an n th degree polynomial
over the complex numbers has n roots. The theorem is commonly presented in
high school algebra, but it's not proved in high school and it's not proved
using algebra!

You're most likely to see a proof of the Fundamental Theorem of Algebra in a
course in complex analysis. That is because the FTA depends on analytical
properties of the complex numbers, not just their algebraic properties. It is
an existence theorem that depends on the topological completeness of the
complex numbers, and so it cannot be proved from the algebraic properties of
the complex numbers alone.

(The dividing lines between areas of math, such as between algebra and
analysis, are not always objective or even useful. And for some odd reason,
some... Show more...
 

Chebyshev approximation


In the previous
post
I
mentioned that Remez algorithm computes the best polynomial
approximation to a given function f as measured by the maximum norm.
That is, for a given n, it finds the polynomial p of order n that
minimizes the absolute error

|| f − p ||∞.

The Mathematica function MiniMaxApproximation minimizes the relative
error by minimizing

|| (f − p) / f ||∞.

As was pointed out in the comments to the previous post, Chebyshev
approximation produces a nearly optimal approximation, coming close to
minimizing the absolute error. The Chebyshev approximation can be
computed more easily and the results are easier to understand.

To form a Chebyshev approximation, we expand a function i... Show more...
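In numpy this is a one-liner via `numpy.polynomial.chebyshev.chebinterpolate`, which interpolates at the Chebyshev points (a sketch; the degree 5 is chosen to match the Mathematica example in the previous post):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# degree-5 Chebyshev interpolant of exp on [-1, 1]
coeffs = C.chebinterpolate(np.exp, 5)

x = np.linspace(-1, 1, 2001)
max_err = np.max(np.abs(np.exp(x) - C.chebval(x, coeffs)))
```

The maximum error is on the order of 10⁻⁴, close to the best possible for degree 5, illustrating that the Chebyshev approximation is nearly optimal.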
 

Remez algorithm conspicuously missing


The best polynomial approximation, in the sense of minimizing the
maximum error, can be found by the Remez algorithm. I expected
Mathematica to have a function implementing this algorithm, but
apparently it does not have one.

It has a function named MiniMaxApproximation which sounds like the Remez
algorithm, and it’s close, but it’s not it.

To use this function you first have to load the FunctionApproximations package.
<< FunctionApproximations`
Then we can use it, for example, to find a polynomial approximation to
eˣ on the interval [-1, 1].
MiniMaxApproximation[Exp[x], {x, {-1, 1}, 5, 0}]
This returns the polynomial
1.00003 + 0.999837 x + 0.499342 x² + 0.167274 x³ + 0.0436463 x⁴ + 0.00804051 x⁵

And if we plot the error, the difference between eˣ and this
polynomial, we see tha... Show more...
 

MDS codes


A maximum distance separable code, or MDS code, is a way of encoding
data so that the distance between code words is as large as possible for
a given data capacity. This post will explain what that means and give
examples of MDS codes.

Notation


A linear block code takes a sequence of k symbols and encodes it as a
sequence of n symbols. These symbols come from an alphabet of size
q. For binary codes, q = 2. But for non-trivial MDS codes, q >
2. More on that below.

The purpose of these codes is to increase the ability to detect and
correct transmission errors while not adding more overhead than
necessary. Clearly n must be bigger than k, but the overhead n-k
has to pay for itself in terms of the error detection and correction
capability it provides.

The ability of a code to detect and correct errors is measured by d,
the minimum dis... Show more...
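The canonical examples of MDS codes are Reed-Solomon codes, whose minimum distance meets the Singleton bound d = n − k + 1 with equality. A tiny sketch over GF(5) (the parameters are my own choice for illustration):

```python
from itertools import product

# [n=4, k=2] Reed-Solomon code over GF(5): evaluate a + b*x at the points 0, 1, 2, 3
q, n, k = 5, 4, 2
codewords = [tuple((a + b * x) % q for x in range(n))
             for a, b in product(range(q), repeat=k)]

# for a linear code, minimum distance = minimum weight of a nonzero codeword
d = min(sum(s != 0 for s in w) for w in codewords if any(w))
```

A degree-1 polynomial has at most one root, so a nonzero codeword has at most one zero coordinate, giving d = 3 = n − k + 1: the code is MDS.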
 

Leftist Media vs Mathematics xD


#NotTheOnion #MSM #Media #FakeNews #imbeciles #stupidity #MSDNC #NewYorkTimes #NYT #LeftistMedia #Leftism #rofl #humor #fun #lol #lmao #math #maths #mathematics
 
#math #jokes
Someone asks a programmer if all odd numbers are prime.
The programmer answers:
"Wait a minute, I think I have an algorithm from Knuth on finding prime numbers... just a little bit longer, I've found the last bug... no, that's not it... ya know, I think there may be a compiler bug here - oh, did you want IEEE-998.0334 rounding or not? - was that in the spec? - hold on, I've almost got it - I was up all night working on this program, ya know... now if management would just get me that new workstation that just came out, I'd be done by now... etc., etc. ..."
 

Maximum gap between binomial coefficients


I recently stumbled on a formula for the largest gap between consecutive
items in a row of Pascal’s triangle.

For n ≥ 2,

Image/photo

where

Image/photo

For example, consider the 6th row of Pascal’s triangle, the coefficients
of (x + y)⁶.

1, 6, 15, 20, 15, 6, 1

The largest gap is 9, the gap between 6 and 15 on either side. In our
formula n = 6 and so

τ = (8 – √8)/2 = 2.5858

and so the floor of τ is 2. The equation above says the maximum gap
should be between the binomial co... Show more...
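The formula can be checked against a brute-force scan of each row (a sketch; τ is the quantity defined by the second image above, τ = (n + 2 − √(n + 2))/2):

```python
import math

def max_gap_formula(n):
    # gap between C(n, t) and C(n, t-1), with t the floor of tau
    t = math.floor((n + 2 - math.sqrt(n + 2)) / 2)
    return math.comb(n, t) - math.comb(n, t - 1)

def max_gap_brute(n):
    row = [math.comb(n, k) for k in range(n + 1)]
    return max(abs(row[i + 1] - row[i]) for i in range(n))
```

For n = 6 both give 9, matching the worked example, and the two agree across a range of n.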
 

Sum of squared digits


Take a positive integer x, square each of its digits, and sum. Now do
the same to the result, over and over. What happens?

To find out, let’s write a little Python code that sums the squares of
the digits.
def G(x):
    return sum(int(d)**2 for d in str(x))

This function turns a number into a string, and iterates over the
characters in the string, turning each one back into an integer and
squaring it.

Now let’s plot the trajectories of the iterations of G.
import matplotlib.pyplot as plt

def iter(x, n):
    for _ in range(n):
        x = G(x)
    return x

for x in range(1, 40):
    y = [iter(x, n) for n in range(1, 12)]
    plt.plot(y)

This produces the following plot.

Image/photo

Fo... Show more...
 

Hands-On: Smarty Cat is Junior’s First Slide Rule





#hackadaycolumns #retrocomputing #math #sliderule #sliderules #smartycat #hackaday
posted by pod_feeder_v2
Hands-On: Smarty Cat is Junior’s First Slide Rule
 

Computing the area of a thin triangle


Heron’s formula computes the area of a triangle given the length of each
side.

Image/photo

where

Image/photo

If you have a very thin triangle, one where two of the sides
approximately equal s and the third side is much shorter, a direct
implementation of Heron’s formula may not be accurate. The cardinal rule of
numerical programming is to avoid subtracting nearly equal numbers, and
that’s exactly what Heron’s formula does if s is approximately equal
to two of the sides, say a and b.

William Kahan’s formula is algebraically equivalent... Show more...
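Kahan's version sorts the sides so that a ≥ b ≥ c and groups the factors so that no catastrophic cancellation occurs; the parentheses below are essential and must not be "simplified" away. A sketch:

```python
import math

def kahan_area(a, b, c):
    # sort so a >= b >= c; keep the parentheses exactly as written
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt(
        (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))
```

For a very thin isosceles triangle with sides 1, 1, and base 10⁻⁸, the area is 5 × 10⁻⁹ to within rounding, where a naive Heron implementation can lose most of its digits.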
 

A tale of two iterations


I recently stumbled on a paper [1] that looks at a cubic equation that
comes out of a problem in orbital mechanics:

σx³ = (1 + x)²

Much of the paper is about the derivation of the equation, but here I’d
like to focus on a small part of the paper where the author looks at two
ways to go about solving this equation by looking for a fixed point.

If you wanted to isolate x on the left side, you could divide by σ and
get

x = ((x + 1)² / σ)^(1/3).

If you work in the opposite direction, you could start by taking the
square root of both sides and get

x = √(σx³) − 1.

Both suggest starting with some guess at x and iterating. There is a
unique solution for any σ > 4 and so for our example we’ll fix σ = 5.

We define two functions to iterate, one for... Show more...
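A quick numerical experiment with σ = 5 (a sketch): the first form is a contraction near the root and converges, while the second expands and its iterates run away.

```python
sigma = 5.0

def g1(x):
    # x = ((x + 1)^2 / sigma)^(1/3) -- contraction near the root, converges
    return ((x + 1) ** 2 / sigma) ** (1 / 3)

def g2(x):
    # x = sqrt(sigma * x^3) - 1 -- derivative exceeds 1 at the root, diverges
    return (sigma * x ** 3) ** 0.5 - 1

x = 1.0
for _ in range(60):
    x = g1(x)          # settles on the unique root of sigma x^3 = (1 + x)^2

y = 1.0
for _ in range(6):
    y = g2(y)          # iterates grow rapidly away from the root
```

The difference is the size of the derivative at the fixed point: |g1′| < 1 there, while |g2′| > 1, so only the first iteration converges.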
 

Perfect codes


A couple days ago I wrote about Hamming
codes
and
said that they are so-called perfect codes, i.e. codes for which
Hamming’s upper bound on the number of code words with given separation
is exact.

Not only are Hamming codes perfect codes, they’re practically the only
non-trivial perfect codes. Specifically, Tietavainen and van Lint proved
in 1973 that there are three kinds of perfect binary codes:
  • Hamming codes
  • One Golay code
  • Trivial codes
I wrote about Golay
codes
a few
months ago. There are two binary Golay codes, one with 23 bit words and
one with 24 bit words. The former is “perfect.” But odd-length words are
awkward to use, and in pract... Show more...
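"Perfect" means the Hamming bound holds with equality: balls of radius t around the codewords exactly fill the space. That is easy to verify for the codes named above (a sketch; `perfect` is a name of my own choosing):

```python
from math import comb

def perfect(n, k, t):
    # Hamming bound with equality: sum of C(n, i) for i <= t equals 2^(n-k)
    return sum(comb(n, i) for i in range(t + 1)) == 2 ** (n - k)

checks = [perfect(7, 4, 1),     # Hamming(7,4)
          perfect(15, 11, 1),   # Hamming(15,11)
          perfect(23, 12, 3)]   # 23-bit binary Golay code
```

For the Golay code, 1 + 23 + 253 + 1771 = 2048 = 2¹¹, which is why the 23-bit version is the perfect one.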
 
● NEWS ● #bartoszmilewski ☞ #Math is your insurance policy
 