Wednesday, April 17, 2013

Splines and Bezier Curves, Part 2

Let me back up.  Formally, we are given:
  •  n+1 control points: P0, . . . Pn
  •  a parameter t
  •  an interval I: 0 ≤ t ≤ 1
  •  a recursion:
    Linear Interpolation
  • 1) Insert Ak, and let it go from Pk to Pk+1 as t goes from 0 to 1.  There are n such points:
    2) In the same way, insert n−1 points Bk:
    i) At the ith step there are n−i+1 points:

    n) At the nth step there is one point, N:

    The recursion is complete.
The locus of N is the Bézier curve of order n defined by the n+1 control points Pk.
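This recursion is what is usually called de Casteljau's algorithm.  A minimal sketch in Python (function names are mine; points are coordinate tuples):

```python
def lerp(p, q, t):
    """The interpolated point that goes from p to q as t goes from 0 to 1."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def bezier_point(points, t):
    """Run the recursion: n+1 control points -> n points -> ... -> one point N."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[k], pts[k + 1], t) for k in range(len(pts) - 1)]
    return pts[0]  # the single point N

# Four control points define a 3rd order curve.
P = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
```

At t = 0 the recursion collapses to P0, and at t = 1 to Pn, as it must.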

Ideally, I would like to write N as a polynomial in t, with coefficients Pk.  Examine B0.  There are at least two ways to write it:

In the first section I expanded (2) and proved that, with   :


is the kth term in the expansion of  ,  and N can be written as the product**:

The row vector on the left is the same for every curve of the same order.  This is the reason I used both t and τ = (1−t).  Suppose I have a 3rd order curve on the screen.  For each value of t, calculate the left vector.  The point at t = t1 is then
Now suppose I have k 3rd order curves.  For each value of t, there are k points to draw – one on each curve – but the left vector is calculated once.  The k points at t = t1 are given by
Suppose now that I choose m values of t to approximate the curves, connecting adjacent points with straight lines.  Precompute the left vector at all m values and store it in memory.  The full multiplication (all m×k points) is

This appears ideal.  In particular,
a) The control points Pi,j are passed directly to the 4×n matrix as coefficients. Other than forcing an update of the Pi,j in memory, moving one -- or all -- of the control points dynamically incurs no additional computational overhead.
b) The number of multiplications is fixed; the recursion has been passed to t, and reduced to a constant table.  We may also let the left matrix scale according to the available resources, pulling values from a larger table stored on disk.
c) All multiplications can be done in parallel, including the distinct (x, y, z) components of each point.
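A sketch of this scheme in plain Python (a stand-in for the matrix hardware; the function names are mine).  The left vector is tabulated once per value of t and reused across all k curves:

```python
def left_vector(t):
    """Row vector for a 3rd order curve: [tau^3, 3 tau^2 t, 3 tau t^2, t^3], tau = 1 - t."""
    tau = 1.0 - t
    return (tau**3, 3*tau*tau*t, 3*tau*t*t, t**3)

def dot(row, values):
    """Scalar product of the row vector with one coordinate of the control points."""
    return sum(r * v for r, v in zip(row, values))

def polyline(curves, m):
    """Approximate each of k cubic curves by m straight segments (m+1 points each).
    The left vectors are precomputed once and shared by every curve."""
    table = [left_vector(i / m) for i in range(m + 1)]
    return [[tuple(dot(row, coord) for coord in zip(*curve)) for row in table]
            for curve in curves]

curves = [[(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]]
points = polyline(curves, 2)
```

Moving a control point only changes the `curves` entries; the table in t is untouched, which is point (a) above.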

I choose this representation for curves of arbitrary order given by the classical method of letting a fixed rod slide along intersecting lines.
For some additional properties of  these curves, I used a different approach.  So let me again go back, and consider some alternative representations.

If, instead of expanding (1), I expand    I obtain the increasingly worrisome recursion

...which I wished to avoid.  The kth term contains a full kth order polynomial in the first k+1 points. The summation is fearsome:

However, for curves of lower order, this approach is useful as long as we simplify it along the way.  Go back. It goes like this.

Begin again with the n+1 points.  To illustrate, I will use n = 3, with 4 points defining a 3rd order curve.  I have
Write a table of differences:

Then the Ai are:

And now move on to the B's (iterate the recursion):

Proceeding in this way, we obtain

And for order n we again have the binomial expansion.  The table of differences (3) takes n steps with n(n+1)/2 subtractions which, for low-order curves, is no problem.

In spreadsheet form, list the points in Column A.  Let B1 = (A2 − A1), and apply the rule to the whole column.  The computer will automatically understand Bk = Ak+1 − Ak.  At column N, there will at last be one entry left (M2 − M1).  Then take the first row.
Multiply each entry by the coefficient and power of t from Pascal's triangle, order N−1.  For example, if there are five points, the last column is E (which is the fifth letter), and 5−1 = 4.  So this is order 4, and Pascal's triangle will give
1 + 4t + 6t² + 4t³ + t⁴,
and the curve has parametric equation
Curve = 1•(A1) + 4t•(B1) + 6t²•(C1) + 4t³•(D1) + t⁴•(E1)

I call it subtraction.  About the names given to it by mathematicians, I remain in the thrall of bewilderment.
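The spreadsheet recipe translates directly to code.  A sketch, working one coordinate at a time just as a spreadsheet column does (function names are mine):

```python
from math import comb

def first_row(points):
    """Column A holds the points; each later column is the difference of the one
    before it.  Collect the top entry of every column: A1, B1, C1, ..."""
    row, col = [points[0]], list(points)
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        row.append(col[0])
    return row

def curve_at(points, t):
    """Curve = sum over k of C(n, k) * t^k * (top entry of the kth column)."""
    d = first_row(points)
    n = len(points) - 1
    return sum(comb(n, k) * t**k * d[k] for k in range(n + 1))
```

For five points this reproduces the order-4 equation above: Curve = A1 + 4t·B1 + 6t²·C1 + 4t³·D1 + t⁴·E1.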

There is a trade-off here.  We can make the Curve a linear equation with constant coefficients, in powers of t, but the subtractions get out of control as n increases.  The relationship may be most easily illustrated with matrix notation.

This example uses a different range of t, which I will present momentarily, but the point is this: using the differences between points (d0, d1, d2), we reduce the order of the matrix by one.  If we carry on and calculate the e's, the matrix order is reduced again.  Again, for low orders this is powerful leverage; for high orders, both the matrix order and the number of steps to reduce it are prohibitive.

But this is a linear system.  There must be some way to leverage the matrix.  Of course there is.  Remember that the curve reads the same both ways: in the matrix notation above, flipping the column of P's upside down results in the same curve.  And what about that matrix?  The binomial coefficients currently read in all three directions: row, column, diagonal.  Let me run the problem from both ends at once and see what I can tie up to make it smaller.  As always, the answer is to

...begin again.
Let A0  go from P0 to P1, starting at the midpoint of P0P1, and take 1 second to reach either point.
     Midpoint : (P1+ P0)/2,   Path to be crossed: (P1– P0)/2.
Then  A0 = ½((P1 + P0) + (P1 − P0)t)
               = ½(P0(1−t) + P1(1+t)),   −1 ≤ t ≤ 1
For simplicity, let
     σ = 1–t ,   τ = 1+t
Then I have A0 = ½(P0σ + P1τ)
The next set of points follow,
     B0 = ½(A0σ + A1τ) =   ¼(P0σ² +2P1στ + P2τ²)
And the
    Curve = ½(B0σ + B1τ)
(6a)            =  1/8 (P0σ³ + 3P1σ²τ + 3P2στ² + P3τ³)
I am running the curve from both ends at once. The form is identical.  But now from the powers
     (1+t)³ = 1 + 3t + 3t² + t³
     (1+t)²(1–t) = 1 + t – t² – t³
     (1+t)(1–t)² =1 – t – t² + t³
     (1–t)³ = 1 – 3t + 3t² – t³
we can join like terms in the form (a – bt²):
where   E = P3 + P0,   F = P3 − P0,   G = P2 + P1,   H = P2 − P1,

Pretending that I got this as a result of a matrix product:

I can arrange the middle matrix to suit the hardware.  The important business is done.  Even in the worst case scenario (all serial operations), this is fewer operations than the algebraic statement in (6a).  But this can all be done in parallel.  For example:

In the rightmost matrix (the column vector (1, t)), changing t to −t changes the direction of the curve: the locus is the same, but a point traversing it in positive t will begin at P3 and end at P0.
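Expanding (6a) in powers of t (my own regrouping of the four products listed above) gives ((E + 3G) + 3(F + H)t + 3(E − G)t² + (F − 3H)t³)/8.  A sketch checking it against the standard form, one coordinate at a time; the reparametrization u = (t + 1)/2 is my bookkeeping:

```python
def bernstein3(p, u):
    """Standard cubic on 0 <= u <= 1, for comparison."""
    v = 1.0 - u
    return v**3*p[0] + 3*v*v*u*p[1] + 3*v*u*u*p[2] + u**3*p[3]

def symmetric3(p, t):
    """The curve run from both ends at once, -1 <= t <= 1, using E, F, G, H."""
    E, F = p[3] + p[0], p[3] - p[0]
    G, H = p[2] + p[1], p[2] - p[1]
    return ((E + 3*G) + 3*(F + H)*t + 3*(E - G)*t*t + (F - 3*H)*t**3) / 8.0

p = [0.0, 1.0, 3.0, 4.0]  # one coordinate of four control points
```

Both forms trace the same locus, and symmetric3(p, −t) traverses it backwards, exactly as reversing the control points does.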

I am still proceeding by assumption.  After experimenting with curves, I found they were less and less flexible (dependencies!) at higher orders, so I decided to use lower order ones.  I want to simplify, and I know that there are such things as vector engines.  A matrix is a two dimensional container.  A polynomial of order three has four terms.  So I stopped at t³ to break the polynomial into two parts.  How else?

Now I want to be able to wave my wand and increase the order by one, without changing the curve.  To do that, I shall

...begin again.  Consider
     a, b, c, d are linear combinations of P0, P1, P2, P3.  Suppose I want to convert her to order four.  I will ask the question this way:

I'm looking for

Where A, B, C, D, α, β, γ, δ are linear combinations of five points Q0, Q1, Q2, Q3, Q4.
Distinct powers of t are linearly independent.  Comparing Mimi and Chad, we find:

This will be easier if I go back to representation (4) above.  I can then convert the result to a more linearized form such as (7).   Once the core problems are untangled, I can always perform these conversions (transformations) by matrix manipulation.  That is, there is a finite number of elementary matrix operations which will convert any one representation of the curve into another, and among them a unique triangular matrix (do not be alarmed).  This is not a conclusion drawn from matrix theory.  I already know all the representations are equivalent; I have defined them to be so.  The triangular matrix obtained by computing successive differences is unique by construction.

Now, using (4),

Using (4) and (8) to put the P's and Q's together:

...where the lonely blue Q4 over there must equal P3.  Beginning at the top and iterating through, we obtain the equalities:

Before moving back to the A's and α's, let me work this out and determine the exact form of this pattern.  Consider the general case, where I have a curve of order N-1,

and would like to convert it to a curve of order N.

I will use induction.  I suspect there is no reason for any given Q to depend on more than two of the Pi.  If that is not the case, this could get complicated. (This suspicion turns out to be correct.)

Beginning with the first four terms and remembering that Q0 = P0, solving by ordinary algebra gives:

I have three consecutive terms of the form

And I would like to know if Qk+1 is also in this form.  After some thought, I decided upon the following course.  Together, Q0 and Q1 eliminate all of the P0.  Then Q1 and Q2 eliminate all of the P1, and so on.  The general case I wish to prove is that I may choose any power of t and, in its coefficient, two sequential values of Q, Qi and Qi+1, cancel all of the Pi from both sides of the equality.
Choose an arbitrary power of t, tᵏ, 3 < k < n.  The coefficient of tᵏ is a linear combination of
     Q0, Q1, . . . Qk,
      P0, P1, . . . Pk
on the left and right hand sides of (10.2), respectively.
Choose two adjacent points:

If the Pr cancel from the kth term on both sides of the equality, the induction process is complete.

For, if two consecutive Q, Qr, Qs in the form (11) cancel all of the Pr from every term on both sides of (10.2), then nothing can prevent me from writing Qt in the form (11) as well, and then Qs and Qt will cancel all of the Ps.  Then, as long as I in fact have a sequence of consecutive Q's in the form (11), starting at one end of the curve, I can iterate through the whole list.
I already have the sequence Q0 .... Q3, and so I will be done.
Again, the tricky part is remembering that I am not up against any formal difficulty;  I will go in circles if I ask the algebra for an answer. I am defining a procedure -- I need to check that it is consistent.  If, for example, I was already certain that raising the order in this manner could not introduce any dependencies on the Qi not present in the original interpolation procedure (sliding points),  then I could skip this part in its entirety.

Let Ci,k be the coefficient of the ith term in the expansion of (1−t)ⁿ:
Then, equating the tk terms from both sides of (10.2) gives

Simplifying a bit,

From the left hand sum I am only interested in the consecutive terms Cr,kQr, Cs,k Qs:

After a bit of rearranging, and minding the equality (−1)ˢ = −(−1)ʳ, this becomes

By (11),

From the right side of (12),  I am only interested in the term Cr,kPr.  This term is
The Pr's cancel, and the relationship is proved.  That is, to convert a Bézier curve of order n−1 to an identical curve of order n, the new points can invariably be given by,
   ...and initial conditions Q0 = P0, Qn = Pn−1.

If I wish, I can supplement the definition to include 0 and n in the sequence.  How I do this may depend on what I wish to do with the curves.
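The relation that the induction lands on is the standard degree-elevation rule, Qk = (k/n)Pk−1 + (1 − k/n)Pk, which satisfies the stated conditions Q0 = P0 and Qn = Pn−1.  A sketch checking that the elevated control points trace the identical curve, in exact rational arithmetic so the comparison is not at the mercy of rounding:

```python
from fractions import Fraction

def elevate(P):
    """Order n-1 (n points) -> order n (n+1 points):
    Q_k = (k/n) P_{k-1} + (1 - k/n) P_k, with Q_0 = P_0 and Q_n = P_{n-1}."""
    n = len(P)
    return ([P[0]]
            + [Fraction(k, n) * P[k - 1] + Fraction(n - k, n) * P[k]
               for k in range(1, n)]
            + [P[-1]])

def de_casteljau(pts, t):
    """Evaluate by repeated linear interpolation (one coordinate at a time)."""
    pts = list(pts)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

P = [0, 1, 3, 4]
Q = elevate(P)  # five points, same curve
```

Elevation changes the control polygon but not the locus, so the two evaluations agree at every t.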

Okay.  I probably should have done that last step with matrices.  It should be relatively easy to find the unique transformation matrix from upper-triangular Mimi (n−1)×(n−1) to Chad n×n.  I will convert my work over to matrix representation for the last steps, but I include an example to illustrate what those steps will be.
Now go back to Mimi (3×3) = Chad (4×4), 4Eva.  What are the values of A, B, C, D, α, β, γ, δ?  From (8),
With the values of Q in hand, I can distribute those values between the first and second matrix however I wish.  If I keep using representation (4), drilling down the differences between the points, I will have precomputed the recursions, and I might as well let the second matrix be zero:
I don't want the recursions; I want to linearize them.  This is the first reason to introduce the matrix in t.  I have worked out an identity case for two curves whose orders differ by one: they are the same curve.  By imposing no restrictions on the distribution of the coefficients between t⁰ = 1 and t, I can now treat increase-of-order as a general procedure, involving a linear combination of 2×2 matrices whose coefficients are the given points.  This is exactly what I want.

I should now have all the pieces necessary to carry my approach one step further.
Going back to a curve represented by simultaneous powers of (1− t) and (1+t), iterating again from  (6a),  the fourth order curve is:

Unpacking into powers of t, and temporarily setting aside the leading 1/16:
This is the expression I wanted.  For the fourth-order curve to equal a curve of order three, the blue terms must sum to zero.  That is, according to (8),

The general problem is solved.  The crux is this: suppose we write this in the form of (7).  Leave the 1/16 and the vectors [1, t2], [1, t] aside and consider only the 2×2 matrix.  It is now the sum:

So, what are the values of A, B, C, D, α, β, γ, δ?  We can still arrange them however we want, as long as the values of A and δ and the sums
    α + B
    β + C
    γ + D
don't change.  So, for example, I have highlighted one pair of corresponding boxes in red.  We can dump numbers back and forth between the two, filling in whatever numbers we want, as long as their sum remains −4.  So how about, say...

Does it itch?  It should.
There remain a few things to be done.  The itch should be scratched.  I need to revisit the relationship between powers of t and increase/decrease of order, now that I have settled on a default starting order and representation.  I have converted the curves to several matrix representations, but I have not fully examined the possible linear combinations of terms in combined powers of t and (1 ± t).

In the last matrix multiplication above, I would like the linear combination of matrices and Qi in the second line to be exactly the negative of those in the first line, so the whole problem of changing the order of a curve can be reduced to a multiplication by (1−t).  Ordinarily, adding a root to a polynomial is the same thing as increasing the order.  However, we are making up a representation, and whether or not it is possible to construct one where polynomial order can be changed without affecting the curve is not a meaningful question.  Of course it can be, as I have done above.  How to do it most efficiently is the right question, and clearly the remaining task is to resolve the whole question of curve order by making an unambiguous, elementary operation which allows us to step freely between curves of arbitrary order.  In particular, we should be able to represent LOWER order curves with a linear combination of higher-order terms, so that it is inessential for us to know what order of curve we are working with -- as long as we can specify the curve we want.  Going from lower order to higher is in fact quite simple: I will concatenate curves and use the first and second derivatives to align their tangents and conform their curvature.

The remaining details are fiddly but purely mechanical.  I therefore consider the fundamental problem of representing arbitrary splines to be SOLVED, using a pencil and notebook paper, and the methods of Newton.

This was an honest test.  I have looked nothing up.

It bothers me that mathematicians often speak as if problems cannot be solved in this way.  Difficult, abstract, modern theories are presented as the groundwork necessary for the solution of problems which simply do not obey the stated dependencies.  Stranger to me is that I am to pretend these methods were not already known.  I find regularly that these assertions are not merely false, but embarrassing.  For example,

I say that the second-order Bézier curve is the parabola section enclosed by the Archimedes triangle whose vertices are the given control points.  That is, the two curves are not only identical, but given by precisely the same method of construction.
**This is called a scalar or dot product.  Multiply corresponding elements of the two lists, and keep a running sum of these products.  Like this:
In the text, each entry is not a scalar: Pk has x, y, and z components.  Each component of the resulting point is a scalar product.
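A sketch of the footnote's recipe:

```python
def dot(u, v):
    """Multiply corresponding elements of the two lists; keep a running sum."""
    total = 0
    for a, b in zip(u, v):
        total += a * b
    return total
```

Applied per component: the x of the resulting point is the dot product of the row vector with the x's of the control points, and likewise for y and z.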