Monday, September 10, 2012

II. The Homogeneous Case

[Linear Differential Equations of Order n]

II. THE HOMOGENEOUS CASE, CONSTANT COEFFICIENTS
If Q(x) = 0, the equation is homogeneous.
     (19.31)     fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = 0

■ If the fk(x) are arbitrary functions of x, solutions of (19.31) are rarely expressible in terms of elementary functions.  When they are, it is generally extremely difficult to find them.  If the fk are constants, the general solution of (19.31) can always be found.

The homogeneous linear differential equation of order n with constant coefficients is thus
     (20.1)     an y(n) + an−1 y(n−1) + . . . + a1 y' + a0 y = 0,
with the ak constant and an ≠ 0.
Now, the derivative of e^x is e^x.  Also, e^x is never ever ever ever zero, so we can divide it out of any equation, for all x.  Using the Hey Wait a Minute... Postulate**, if we let  y = e^(mx),  then
          y' = m e^(mx),   y'' = m^2 e^(mx),   . . . ,   y(n) = m^n e^(mx).
Plugging this trial solution into (20.1) we have
          an m^n e^(mx) + an−1 m^(n−1) e^(mx) + . . . + a1 m e^(mx) + a0 e^(mx) = 0.
Divide through by e^(mx),
     (20.14)     an m^n + an−1 m^(n−1) + . . . + a1 m + a0 = 0.
Hey, aren't those all constants?  Any value of m which satisfies this equation makes y a solution of (20.1).  Our machinations are full of win.

Definition: Equation (20.14) is the characteristic equation of (20.1).

Sooooooo..... this is a regular-ass algebraic polynomial in m.  It has at least one and not more than n distinct roots.  We have n solutions of the form  y = e^(mx),  with at least one, and not more than n, distinct values of m:
     y1 = e^(m1 x),   y2 = e^(m2 x),   . . . ,   yn = e^(mn x),
where the mk need not all be distinct.

We are confronted with three cases:
     1. All roots real and distinct
     2. All roots real, some multiple
     3. All roots imaginary
All other cases are linear combinations of these three.  The cases can be treated separately, and added together to form a composite solution yc.   (The proofs are in Part I).

CASE 1: Roots of the Characteristic Equation Real
If all n roots m1, m2, . . . , mn are distinct (extending the proofs in Part I), the n solutions y1, y2, . . . , yn are linearly independent.  The general solution of (20.1) is then
     yc = c1 e^(m1 x) + c2 e^(m2 x) + . . . + cn e^(mn x).

In this case, the procedure is straightforward.  Solve for the roots m, and construct the solution.  "Solve for the roots m" can be arbitrarily difficult, but it is at least a separate problem.  Rad.
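A numerical sanity check (my own sketch, not from Tenenbaum and Pollard): take y'' − 3y' + 2y = 0, whose characteristic equation m^2 − 3m + 2 = 0 has the real, distinct roots m = 1 and m = 2.

```python
import math

# My own sanity check (not from the book): y'' - 3y' + 2y = 0.
# Characteristic equation: m^2 - 3m + 2 = 0, roots m1 = 1, m2 = 2.
a, b, c = 1.0, -3.0, 2.0
disc = math.sqrt(b * b - 4 * a * c)
m1 = (-b - disc) / (2 * a)   # 1.0
m2 = (-b + disc) / (2 * a)   # 2.0

# General solution yc = c1*e^(m1 x) + c2*e^(m2 x); constants chosen arbitrarily.
c1, c2 = 2.0, -5.0

def y(x):   return c1 * math.exp(m1 * x) + c2 * math.exp(m2 * x)
def dy(x):  return c1 * m1 * math.exp(m1 * x) + c2 * m2 * math.exp(m2 * x)
def d2y(x): return c1 * m1 ** 2 * math.exp(m1 * x) + c2 * m2 ** 2 * math.exp(m2 * x)

# The residual y'' - 3y' + 2y should be zero (to rounding) for every x.
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(d2y(x) - 3 * dy(x) + 2 * y(x)) < 1e-9
```

Any choice of c1, c2 leaves the residual at rounding-error level, as Case 1 promises.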


CASE 2: Roots of the Characteristic Equation Real, Some Multiple
{Coming back to this, I have an elementary example of Case I to work out.  Simple but for the approach taken}

CASE 3:  Roots of the Characteristic Equation Imaginary
{To Do.}

__________________________
**Squint one eye, make a serious face, and put your hands on your hips.
At some point you may need to lean back and cross both arms over your chest.

Linear Differential Equations of Order n: I

Oppenheim is telling stories about linear differential equations; I review the material necessary to right myself.
Following  Tenenbaum and Pollard,

■ A linear differential equation of order n is an equation which can be written in the form
     (18.11)     fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = Q(x),
where f0(x), f1(x),  . . .  , fn(x) and Q(x) are continuous functions of x defined on a common interval I, and fn(x) is nonzero somewhere in I.   Note that y(k) is the kth derivative of y.  For example,  y''' = y(3).

Definition: Let the functions f1(x), f2(x),   . . .  , fn(x), be defined on a common interval I. Then the functions are linearly dependent  if there exist  constants c1, c2, . . . , cn, not all zero, such that
          c1 f1(x) + c2 f2(x) + . . . + cn fn(x) = 0
for every x in I.
     The functions are linearly independent if no such set of constants exists.


■ Theorem 19.3:  If f0(x), f1(x), . . . ,  fn(x) and Q(x) are continuous functions of x on a common interval I and fn(x) ≠ 0 when  x is in I, then
 1. The homogeneous linear differential equation
     (a)     fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = 0
has n linearly independent solutions y1(x), y2(x), . . . , yn(x)
2. The linear combination of these n solutions
     yc(x) = c1y1(x) + c2y2(x) + . . . + cnyn(x),
c1, c2, . . . , cn arbitrary constants,  is an n-parameter family of solutions of (a).
3. The function
     y(x) = yc(x) + yp(x) = c1y1(x) + c2y2(x) + . . . + cnyn(x) + yp(x),
where yp is a particular solution of the nonhomogeneous equation (with Q(x)  ≠ 0),  is an n-parameter family of solutions of (18.11).

(Chapter 19, pg. 211:)
"It is extremely important that you prove the statements in Exercises 5 to 7 below."

■  5. If yp is a solution of
     (19.5)     fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = Q(x),
then Ayp is a solution of (19.5) with Q(x) replaced by AQ(x).

Proof:
If yp is a solution of (19.5), we have
          fn(x)yp(n) + fn−1(x)yp(n−1) + . . . + f1(x)yp' + f0(x)yp = Q(x).
Multiply through by A:
          A fn(x)yp(n) + A fn−1(x)yp(n−1) + . . . + A f1(x)yp' + A f0(x)yp = A Q(x).
And since
          A yp(k) = (A yp)(k),   k = 0, 1, . . . , n,
with the A's pulled into the derivatives, we have
          fn(x)(Ayp)(n) + fn−1(x)(Ayp)(n−1) + . . . + f1(x)(Ayp)' + f0(x)(Ayp) = A Q(x).
And Ayp is a solution of (19.5) with Q(x) replaced by AQ(x).
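A concrete check of Exercise 5 (my own example, not the book's): take the first-order equation y' + y = Q(x) with Q(x) = e^(2x). Then yp = e^(2x)/3 is a particular solution, and Ayp should solve y' + y = A e^(2x).

```python
import math

# Exercise 5 on a concrete equation (my example): y' + y = Q(x), Q(x) = e^(2x).
# yp = e^(2x)/3 is a particular solution: yp' + yp = (2/3 + 1/3)e^(2x) = Q(x).
A = 7.0  # any constant

def yp(x):  return math.exp(2 * x) / 3
def dyp(x): return 2 * math.exp(2 * x) / 3

for x in (-1.0, 0.0, 1.5):
    Q = math.exp(2 * x)
    assert abs((dyp(x) + yp(x)) - Q) < 1e-9              # yp solves the original
    assert abs((A * dyp(x) + A * yp(x)) - A * Q) < 1e-9  # A*yp solves Q -> A*Q
```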


■ 6. Principle of Superposition.  If yp1 is a solution of (19.5) with Q(x) replaced by Q1(x) and yp2 is a solution of (19.5) with Q(x) replaced by Q2(x), then yp = yp1+ yp2 is a solution of
          fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = Q1(x) + Q2(x).
Proof:
Add the two equations in yp1 and yp2:
          fn(x)[yp1(n) + yp2(n)] + . . . + f1(x)[yp1' + yp2'] + f0(x)[yp1 + yp2] = Q1(x) + Q2(x).
And since differentiation distributes:  (u' + v') = (u + v)',
          fn(x)(yp1 + yp2)(n) + . . . + f1(x)(yp1 + yp2)' + f0(x)(yp1 + yp2) = Q1(x) + Q2(x).
After the substitution yp = [yp1 + yp2],
          fn(x)yp(n) + . . . + f1(x)yp' + f0(x)yp = Q1(x) + Q2(x).
Done.
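Superposition can be checked numerically on a small example of my own choosing: y' + y = Q, where yp1 = x solves it for Q1 = 1 + x and yp2 = e^(2x)/3 solves it for Q2 = e^(2x).

```python
import math

# Superposition on y' + y = Q (my example):
#   yp1 = x        solves y' + y = 1 + x    (Q1)
#   yp2 = e^(2x)/3 solves y' + y = e^(2x)   (Q2)
# so yp1 + yp2 should solve y' + y = Q1 + Q2.
def yp1(x):  return x
def dyp1(x): return 1.0
def yp2(x):  return math.exp(2 * x) / 3
def dyp2(x): return 2 * math.exp(2 * x) / 3

for x in (-2.0, 0.0, 0.7):
    Q1, Q2 = 1 + x, math.exp(2 * x)
    yp  = yp1(x) + yp2(x)
    dyp = dyp1(x) + dyp2(x)
    assert abs((dyp + yp) - (Q1 + Q2)) < 1e-9
```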

 7. If yp(x) =  u(x) + iv(x) is a solution of
      fn(x)y(n) + fn−1(x)y(n−1) + . . . + f1(x)y' + f0(x)y = Q1(x) + iQ2(x),
where f0(x), . . . , fn(x) are real functions of x, then
     (a) the real part of yp, i.e. u(x), is a solution of
               fn(x)y(n) + . . . + f1(x)y' + f0(x)y = Q1(x),
     (b) the imaginary part of yp, i.e. v(x), is a solution of
               fn(x)y(n) + . . . + f1(x)y' + f0(x)y = Q2(x).
Proof:
Writing yp(x) as u + iv,
          fn(x)(u + iv)(n) + . . . + f1(x)(u + iv)' + f0(x)(u + iv) = Q1(x) + iQ2(x),
differentiate the terms separately,
          fn(x)[u(n) + iv(n)] + . . . + f1(x)[u' + iv'] + f0(x)[u + iv] = Q1(x) + iQ2(x),
and collect real and imaginary parts on the left-hand side:
          [fn(x)u(n) + . . . + f0(x)u] + i[fn(x)v(n) + . . . + f0(x)v] = Q1(x) + iQ2(x).
Two complex quantities are equal iff their real parts are equal and their imaginary parts are equal.  That is,
          fn(x)u(n) + . . . + f0(x)u = Q1(x)   and   fn(x)v(n) + . . . + f0(x)v = Q2(x).
AND THE RIGHTEOUSNESS IS COMPLETE.
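Exercise 7 in miniature (my own example): y' + y = e^(ix) has the particular solution yp = e^(ix)/(1 + i), since yp' + yp = (i + 1)e^(ix)/(1 + i) = e^(ix). Its real part should then solve y' + y = cos x and its imaginary part y' + y = sin x.

```python
import cmath
import math

# Exercise 7 on y' + y = e^(ix) = cos x + i sin x  (my example).
# yp = e^(ix)/(1+i):  yp' + yp = (i + 1)e^(ix)/(1 + i) = e^(ix).
def yp(x):  return cmath.exp(1j * x) / (1 + 1j)
def dyp(x): return 1j * cmath.exp(1j * x) / (1 + 1j)

for x in (0.0, 1.0, 2.5):
    u, v   = yp(x).real, yp(x).imag     # real and imaginary parts of yp
    du, dv = dyp(x).real, dyp(x).imag
    assert abs((du + u) - math.cos(x)) < 1e-9  # u solves y' + y = cos x
    assert abs((dv + v) - math.sin(x)) < 1e-9  # v solves y' + y = sin x
```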

__________________________
Notes: To make the material easier for me to find, I use the numbering from Tenenbaum and Pollard's Ordinary Differential Equations, Dover Press.  If you find my personal math blog useful, I am happy to change the numbering and layout for easier navigation.  Let me know.

Thanks as always to CodeCogs for the Latex Equation Editor.

Tuesday, September 4, 2012

The Convolution Sum I

{UNDER CONSTRUCTION
 Note: I do not like this representation of the Dirac Delta.  If I use the left-handed rectangle, I incur an offset of a single sample under certain operations.  In the limit, this extra offset vanishes, but for practical computation it is an error and requires fiddly accounting that is better resolved another way.  I think the Delta is best represented as symmetric about the origin, with sample positions centered in the rectangles.  Below I use Oppenheim's approach in Signals and Systems, but I will change it shortly }

TASK:  Reconstruct an arbitrary continuous function x(t) from discrete samples.
  • Assume x(t) is differentiable and integrable.
  • Sample at evenly spaced intervals in time:  Between two adjacent samples k and k+1, the change in time is then Δ = tk+1 − tk.
  • For convenience, assume a sample falls exactly at time t = 0.  Then the kth sample falls at time t = kΔ.  
  • Let each sample be represented by a rectangle of width Δ and height x(kΔ).  Again, for convenience, let the rectangles be left-handed.
  • Let a time tk fall between two adjacent samples, t = (k−1)Δ  and   kΔ.
If  x(t) is continuous and differentiable, there is a point in time between every two samples where the slope of x(t) is the same as that of the line connecting those two samples.  Between two samples, (k−1)Δ and kΔ, call this time ξk.  Formally,
     (1)     x'(ξk) = [x(kΔ) − x((k−1)Δ)] / Δ,     (k−1)Δ < ξk < kΔ.
And we also have, for any t,
     (2)     x(t) = x(kΔ) + x'(ξ)(t − kΔ),   for some ξ between t and kΔ.

THE DASTARDLY DEEDS:
Assumptions which permit formal approximation.
  • Let tk be any time in this same interval. Then, arguing from (1) and (2), x(tk) ≈ x((k−1)Δ), with an error that vanishes as Δ → 0.
  • I want to write my approximation as 'x(t)' and operate on it formally, as a function.  Consider the sum
     (3)     'x(t)' = Σ_{k=−n}^{n} x(kΔ)
    All terms of x(t) are added together.  But for any given time t, I only need one of these terms.  I have the following curious puzzle:
              
    ...where the ∘'s indicate composition, and not addition.  Similarly,
              
    The discrete terms are made continuous by holding each value until the next sample (multiplying each sample by Δt = Δ) .
  • I require:
         1.  A memory system.  I have written down evenly spaced values of x(t), and I would like to carry them around in a container.  Given a particular value of t, say t = t0, I want my function to go to the sheet of paper, look up the value I have written there, and retrieve only the value x(t0) from the list.
      2. The approximation of x(t) to be formally differentiable and integrable.
Assumptions are independent of argument.  I have made an assumption which permits me arbitrary, and wrongful, inference.   What is it?**
The rest is by construction.  I will not introduce symbols and arguments to "understand" their behavior or marvel over the results.  I will define them purposefully: assigning them the properties necessary to carry out the desired tasks.
I am interested in sound.  The assumption:
    that x(t) is a composition of functions over a homogeneous medium, each of whose behavior is reducible to a disturbance propagating at a constant rate in all directions, from a fixed point of origin

accurately describes my data.

MAKE AN INDEX FUNCTION
Let
          δΔ(t) = 1/Δ  for 0 ≤ t < Δ,    δΔ(t) = 0  otherwise;        δ(t) = lim_{Δ→0} δΔ(t)
          uΔ(t) = 0  for t < 0,    uΔ(t) = t/Δ  for 0 ≤ t < Δ,    uΔ(t) = 1  for t ≥ Δ;        u(t) = lim_{Δ→0} uΔ(t)
Using the FTC, by direct manipulation of the sums, we can verify that
          u(t) = ∫_{−∞}^{t} δ(τ) dτ,        δ(t) = du(t)/dt.
In words, u(t) is the unit step function. δ(t) is the unit impulse function, also called the Dirac Delta.
O I am a clever monster, said the Dirac Delta.  Just you wait.


Fig. 1: Unit impulse (left) and step (right) functions
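A numerical sketch of my own: with δΔ the left-handed rectangle of height 1/Δ on [0, Δ), its running integral tightens to the unit step as Δ shrinks.

```python
# My numerical sketch of the index function: deltaD is the left-handed
# rectangle of height 1/D on [0, D); its running integral approximates u(t).
def deltaD(s, D):
    return 1.0 / D if 0.0 <= s < D else 0.0

def u_approx(t, D, dt=1e-4):
    # crude Riemann sum of deltaD from -1 up to t
    total, s = 0.0, -1.0
    while s < t:
        total += deltaD(s, D) * dt
        s += dt
    return total

D = 0.01
assert abs(u_approx(-0.5, D)) < 1e-9        # u(t) = 0 for t < 0
assert abs(u_approx(0.5, D) - 1.0) < 2e-2   # u(t) = 1 once t >= D
```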

Say I wish to know the value of x(t) at some time t = mΔ.  Consider the expression:
          x(mΔ) δΔ(t − mΔ) Δ
Evaluated at t = mΔ,
          x(mΔ) δΔ(0) Δ = x(mΔ) (1/Δ) Δ = x(mΔ).
By the definition of δΔ(t), x(t) has the value x(mΔ) for the whole interval  mΔ ≤ t < (m+1)Δ.  For all other values of t,  x(t) = 0.   I can now retrieve individual terms from the sum (3):

          'x(t)' = Σ_{k=−n}^{n} x(kΔ) δΔ(t − kΔ) Δ

If I am not manipulating audio these labels are likely irrelevant.  Understand it who will. Fair warning.
Now let Δ → 0 and n → ∞:
     (6)     x(t) = lim_{Δ→0, n→∞} Σ_{k=−n}^{n} x(kΔ) δΔ(t − kΔ) Δ
For any given t, there is only one nonzero term in the sum (see below).  In the limit, the difference between the area under the approximation and the original function vanishes, if the limit exists.

   ■ kΔ is an index: at a fixed time ti, the (kΔ)th term contributes x(kΔ)δΔ(ti − kΔ)Δ to the total sum.

Let kΔ = τ, Δ = dτ.  Then
          x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
In the limit, as in equation (6), there is exactly one instant when the sum is nonzero, for any fixed ti.  As a dummy variable, τ is not just a formality: τ iterates through the terms of summation.  For ∫δ(t − τ)dτ all nonzero values occur when t = τ, but for convolution in general, the distinction is meaningful.

Example:
Let x(t) = u(t).
          u(t) = ∫_{−∞}^{∞} u(τ) δ(t − τ) dτ
And we can formally integrate stepwise functions.

   ■ Proposition: x(t) can be taken outside the summation, i.e. the integral sign:
          ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t) ∫_{−∞}^{∞} δ(t − τ) dτ

Formal Proof:
Note that, however we vary t, δ(t − τ) is nonzero only when t = τ.  Hence,
          ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = ∫_{−∞}^{∞} x(t) δ(t − τ) dτ.
The variable of integration is τ; x(t) is now constant for the integration:
          ∫_{−∞}^{∞} x(t) δ(t − τ) dτ = x(t) ∫_{−∞}^{∞} δ(t − τ) dτ = x(t).

The Long Way Around
Shouldn't I check all of this?  Yes.  Consider the summation of δΔ(t) alone:
     (8)     Σ_{k=−n}^{n} δΔ(kΔ) Δ
Note that
     -The sum has a single nonzero term, at k = 0.
     -The sum is a constant.  As Δ → 0, the limit of Δ(1/Δ) exists and is 1.
     -If n = 0, there is a single term, δΔ(0)Δ = 1.
     -For finite n > 0,
               Σ_{k=−n}^{n} δΔ(kΔ) Δ = δΔ(0) Δ = 1.
     -Letting n→ ∞ has no effect on the sum; all new terms are 0.
That is, (8) is unchanged for the simultaneous limits n→∞, Δ→0.
But,
          ∫_{−∞}^{∞} δ(t) dt = lim_{Δ→0, n→∞} Σ_{k=−n}^{n} δΔ(kΔ) Δ = 1.
Likewise, for the summation of δΔ(t−τ),
          Σ_{k=−n}^{n} δΔ(ti − kΔ) Δ = δΔ(ti − kiΔ) Δ = 1,
where kiΔ < ti < (ki+1)Δ.
Again, the sum is insensitive to the simultaneous limits n→∞, Δ→0, and
          ∫_{−∞}^{∞} δ(t − τ) dτ = 1
for all t.

Which is to say, with a lot of squiggles,
          lim_{Δ→0, n→∞} Σ_{k=−n}^{n} δΔ(t − kΔ) Δ = ∫_{−∞}^{∞} δ(t − τ) dτ = 1.

I told you I was a clever monster.  Oh yes,  said the Dirac Delta.
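The Long Way Around admits a direct numerical check (my own sketch): the sum of δΔ(t − kΔ)Δ over k has exactly one nonzero term and equals 1 for every Δ.

```python
# The Long Way Around, numerically (my sketch): sum_k deltaD(t - k*D) * D has
# exactly one nonzero term and equals 1, no matter how small D gets.
def deltaD(s, D):
    return 1.0 / D if 0.0 <= s < D else 0.0

t = 0.7321
for D in (0.5, 0.05, 0.005):
    n = int(5 / D)
    terms = [deltaD(t - k * D, D) * D for k in range(-n, n + 1)]
    nonzero = [v for v in terms if v != 0.0]
    assert len(nonzero) == 1           # single nonzero term: ki*D <= t < (ki+1)*D
    assert abs(sum(terms) - 1.0) < 1e-12
```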


■ Substitution of variable:
{To DO}

____________________
** Assumption: x(t) is a function.
This is called begging the question.  The question was, are the data before me a function?  I pretend it has been answered.  Once I have decided this, all formal obstacles can be brushed aside.   In general, calling the data an unknown function is a confession.

I am satisfied that there exist phenomena which satisfy the wave equation.  The above arguments are ideally suited to the description of such phenomena, and I will use them.  I find the arguments, and the math, beautiful, often bewildering.