Sunday, July 29, 2012

Everything You Need to Know About NP-Completeness: 1

Let's play a game.
THE RULES
Number of players: 2
Materials needed: 2 pencils, scratch paper.

ORDER OF PLAY:
■To begin the game, one of the two players names a mathematical (counting) problem, and how many tries it will take her to solve it.  She may generalize the number of tries as an equation.  The problem may be as general or specific as she likes.
She should write this information down on a piece of paper, and give it to the second player.
She is now Player 1.
This ends her turn.

Player 2 now has at least two choices.**
     1) She can change one and only one element of the problem by making the necessary alterations to the text.  She returns the paper to Player 1.
     2) She can challenge Player 1 to solve an instance of the problem in the stated time.
She may not do both.  This ends her turn.

Player 1
    1)  If the problem has been changed,  she must  offer a new number, or equation, to describe the amount of time it will take her to solve the revised version of the problem.
She may also Dispute the Change, as described below.
She then returns the paper to Player 2.
This ends her turn.

Play returns to Player 2, who has the same choices as her previous turn.

     2) If Player 1 has been challenged, she must then solve the instance of the problem.  If the solution is an equation, it must be of the same order as what she has written on the previous turn.
If she is successful, she WINS, and if not she LOSES.

■Ending the Game
Play repeats until  Player 2 challenges Player 1, at which point one of the players will WIN and the other will LOSE.

**I have selected two choices, to demonstrate a winning strategy.   Players 1 and 2 are free to invent any number of alternatives.  Player 2 may, for example, decide to play airplane instead.  Though it is unlikely to help Player 2 win the game, I will not argue with the merits of this choice.  If you do not know what I mean, then when you reach the end of this sentence, I ask you to stop reading, step away from the computer, and play airplane.
There is no wrong way to play.

SPECIAL RULES
  • Disputing a change:  When Player 2 changes the problem, there must be agreement that only one element has been changed.  There need not be a precise definition of "element", but a successful dispute must be mathematically sound: she must name an alternative change which satisfies the following sentence.
     
    "The change made by Player 2 is composed of this change, and at least one other."  She is not required to identify the other change(s).  She does not have to offer a time in which she can solve her change.  If her dispute satisfies the sentence, and the composition is mathematical, then the dispute carries.  
       a) The dispute is successful: the problem is returned to Player 2, who may agree to the change made by Player 1, or offer a new one, following the same rules as before.  In either case, this ends her turn.

    It is now Player 1's turn.
     
          If Player 2 agrees: Player 1 cannot dispute her own change. She must now solve the problem.
          If Player 2 offers a new change: Play passes to Player 1 in the usual way and she has the right to dispute the change. 

       b) The dispute is unsuccessful:  Player 1 resumes her turn and must offer a solution time in the usual way.
      
    c) If no agreement can be reached, the game is DRAWN.  A draw is not a permanent state.  If one player demonstrates a resolution which is accepted by both players, play resumes.   
  • Time limits:  There are no time limits on any action.

The rules are incomplete. The game can be improved by playing it. If a rule fails to help us identify the structure of a problem, we should change it.

But the rules are just for fun; an invitation.

A WAGER
I will be player 1.  Would you like to be Player 2?  Have a seat.  See?  I even have cookies and tea.

I say that you, Player 2, can always win.
What is the winning procedure?  {My answer}
__________________________________
In the attic of the theater: Anna, in periwinkle tights. 
Tucked in the prow of a small boat, his room meanders on and off stage.  Boys and girls in white shirts and blue caps, blue ribbons around the waist, march down the tiny stairwells; left, right, zigzag.  Pointed newspaper hats and painted wood swords.  Anna, perched in the rafters.  Two seamstresses, green eyes, and dense, blanket legs, sew shut her shoulders.  The cloth falls aside and she is again mottled pink-and-white.  And the blue, crawling up her sides.  Two cloth frowns, the seamstresses drift away.
 When time is akilter, his room hovers translucent over the crowd, he threads the muscles of his arms and legs through the pulleys, stretches his back along the walls, crouches behind the sets, each with a small network of baskets and chairs and roomless corners for him. He watches the attic, invisible.
The bows.
The cast walks on and off, and she among them.  In a clatter and clump, toy trumpets, they proceed through the left wall of his kitchen and vanish.  Anna stops.  When she puts her hands on him, the muscles unhook from the pulleys.  Concave limbs bound in circles unbind, fatten.  Into his right hand she places the slow winding and unwinding of her skin --in and out of blue--,  and breathes it into his mouth.  The two pour from finger to tongue, in the attic of the theater.  Her fingers bloom. His wrists fall open in orange circles of sleep, and in the evening their skin falls as sunlight on the faces below.  Leaf-tongued, his words spiral her thighs, and her mouth flowers around his neck, murmurs in the deepening furrow of his spine.       

....Answers

← Part I 
← Part II

{UNDER CONSTRUCTION}

From Part I :
What is the winning Procedure?
On every turn, Player 2 should undefine an element of the problem.

Example: If the problem is a version of the Traveling Salesman, each city connects to some number of other cities.  This number determines the dimension of the problem. As a first step, Player 2 should change this number to n, so that the dimension scales with the input size.  There is now no polynomial which can bound the running time.  She should then hunt for a way to unlatch the direct connection between dimension and input size.  e^n is still a running time.

Example 2: In general, if each of the following three elements is undefined or unbounded, or if no logical connection exists between any pair of them:
  1) the dimension of the problem
  2) the structure of the problem space
  3) the search function
the problem cannot be solved in any defined amount of time, short of enumerating all of the possible searches one at a time.

But even this rule overstates the difficulty of  identifying, constructing, and avoiding NP-Complete problems.  Any arrangement whatever of the elements of the problem where the relationship between an individual search and the existence of a solution is a memory read, or the same thing as enumerating every possible case in arbitrary order, is NP-Complete because we have offered no method for solving the problem.

Surely it cannot be that easy.
Can't it?


From Part II:

Q1:  If I have asked my question in such a way that a positive answer tells me, "yes there is such a tour", shouldn't a negative answer tell me, "no, there is no such tour"?
According to Computers and Intractability: Yes, it should, and it appears that problems of this kind violate the law of logical complement.   However, the answer is No.  Why?

A:  There are two mistakes.  First, to answer the question:
The authors have the following search query:
(1. Is there a solution to this problem?)  AND (2. is this particular instance such a solution?)

There are two questions, joined with a logical AND.  To construct the logical complement of any statement, single or composed, take the sentence, put parentheses around it, and print the words "It is not true that" in front of the whole thing.   If we do this, we have,

  • Positive result:(There is a solution to this problem) AND (this instance is such a solution)
  • Negative result:  It is not true that{  (there is a solution to this problem) AND (this instance is such a solution)  }
Both results are true statements; the negative result is the logical complement, and everything is fine.
In fact, by definition we may immediately state an important

   ■Corollary:  Invariably, the logical complement of a particular solution is the totality of all other particular solutions.

The authors wanted a negative result to give them this:
{ It is not true that  there is a solution to this problem  }  
                      AND (this instance is not such a solution)

A simple mistake.  But that is not their principal error.  They identify that this is not the result, which means they have made a mistake.  However, rather than using logic to check themselves for error, the authors assume their argument is valid, and that the structure of logic itself has magically broken.   So armed, they carry forward.
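The gap between the true complement and the wanted form can be checked by brute force over the four truth assignments. A minimal sketch in Python; the variable names A and B are mine:

```python
# A = "there is a solution to this problem"
# B = "this instance is such a solution"
for A in (False, True):
    for B in (False, True):
        correct = not (A and B)        # the logical complement of (A AND B)
        wanted = (not A) and (not B)   # the form the authors wanted
        print(A, B, correct, wanted)

# The two columns disagree whenever exactly one of A, B is true:
# with A=True, B=False the complement is True, but the wanted form is False.
```

Four rows are enough: the complement of a conjunction is a disjunction of negations, not a conjunction of them.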

Second: The relationship between the two questions is a memory read.
In other words, the only way to answer the question, is to already know the answer.
This is a fundamental distinguishing characteristic of NP-Complete problems, and for most problems can be checked in a few seconds, without any knowledge of the theory.

Let me convince you.  Take another look at the pair of questions:
  1. Is there a solution to this problem?
  2. is this particular instance such a solution?

We search one instance at a time; only question (2) is tested.  For example, flip the order of the questions; it should be apparent there was only one honest question all along:
"Is this search a solution to the problem,  AND  Is there a solution to the problem?"

The questions are yes/no.  Testing only (2), there are only two possible cases.  What are they?
POSITIVE:  If the current search is a solution, then  there is a solution to the problem.  The two are the same question.  
NEGATIVE:  If the current search is not a solution, we do not know whether a solution exists; there is no relationship between the two questions.  We can construct such a relationship by trying every possible case, and recording the answers. If there is no solution, we won't know until we have tried every case and they all fail.

What is the logical form of question (1), in relation to the search algorithm?  There is no such question in the search algorithm.    And there is the crux.  Ask the question explicitly:

What is the exact logical procedure for answering question (1)?  The answer is recorded in advance, and read from memory.  If we know the answer to the question already, then the question can be answered.  Hilarious.  This relationship can be demonstrated in pure machine logic.  Let us construct an algorithm which performs an individual search and then asks the question,
(1. Is there a solution to this problem?)  AND (2. is this particular instance such a solution?)

After performing the search, we record the result as a temporary variable in the loop itself and then perform the AND test.  
This test will either return FALSE every time, or throw an exception.  We never test (1), and have no plans to do so.  There is no part of our algorithm which outputs a value for the first half of the AND.  The computer will never check (2), because the AND cannot be met even if (2) is true.

 How does the information reach (1)?  The only way for this test to return TRUE is if we already know we have the answer.  We test (2), by itself.  If the result is TRUE, we flip (1) to TRUE.  Now the test '(1) AND (2)?' returns TRUE.  There is no other way.  This is the only way for the two questions to return TRUE, joined with an AND: we cheat. If we already know (1) is true, we tell it so.  '(1) AND (2)' embedded in the loop itself, as part of the algorithm, will return FALSE or return an error for every problem where we have no definition of the first question.

There is an additional subtlety.  I have stored the result of (2) as a temporary variable, nested in the search loop.  Within this loop, (2) is tested.  Then the loop exits, and the result is discarded.  At the next level up, I can now test the composition  (1) AND (2).  Suppose I freely admit that the only way I am going to answer (1) is by checking the truth of (2).  Here, again, the algorithm will never give a positive answer, even if the current search is a solution, because the response, "TRUE" was discarded when the loop exited.  I must define a variable at the current loop depth (composition), which retrieves the truth value of the search, (2).  That is, the parentheses  in the statement 
(1. Is there a solution to this problem?)  AND (2. is this particular instance such a solution?)

unambiguously require the addition of memory one level above the search itself, in order to be evaluated to anything but an error message.  In a similar way, it is easy to show that we require either 
*a predefined search procedure which guarantees in advance that every solution will be tried, and defines the index at which the last possible solution will be evaluated, or
*a global memory array one level above composition, which is written to and read from for every individual search loop and evaluation of the question

Otherwise, it is impossible to define a termination time for the algorithm.  In both cases, the choice is entirely ours, and is completely independent of the logical structure of the problem in question.  These requirements are needed to meet the assumptions we have made independently of the problem.
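The loop, the temporary variable for (2), and the memory for (1) one level above it can be written out directly. A minimal sketch, with all names mine; the squares problem is a stand-in for any search:

```python
def search(candidates, is_solution):
    # is_solution(x) is question (2): "is this particular instance a solution?"
    found = False              # question (1), stored one level above the loop
    for x in candidates:
        two = is_solution(x)   # temporary result, local to this pass of the loop
        # The embedded test '(1) AND (2)': no value for (1) is ever produced
        # inside the loop, so this test can only pass after we have cheated.
        if found and two:
            return True
        # The cheat: if (2) is TRUE, flip (1) to TRUE by hand.
        if two:
            found = True
    return found               # (1) is answered only from the memory of (2)

print(search(range(10), lambda x: x * x == 49))   # True  (x = 7 is a solution)
print(search(range(10), lambda x: x * x == 50))   # False, known only after all ten tries
```

If `found` lived only inside the loop body (as in a block-scoped language), its value would be discarded on every pass and the composed test could never be met.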

And that is my point.  The dilemma is completely made up.  Throughout the book, the authors puzzle over these curious logical relationships, without ever investigating them.  They assume throughout that the relationships are a result of the mathematical problem itself, when it is demonstrably untrue.  Further, the relationships are elementary, reducing in most cases to accounting for the nesting order of loops, and where the memory reads must be, to obtain an answer to the question posed.
Referring to them as classes of problems seals the deal.  We might as well pretend we have no responsibility to solve problems at all, if we can show that it is possible to frame them in a way that we can't answer them.  And that is exactly the position the authors take.

But even vetting the results in pure logic is unnecessary.  All of these relationships are  straightforward within the Algebra.  For example, I have just plotted a function in SAGE.  It is in front of me.  For reals.  Now, I will tell you that it is NOT
    y = 2x + 14
According to Garey and Johnson (Computers and Intractability), this should also entail the conclusion that I did not plot an equation of the form  y = 2x + b.... or in fact any general form that you may choose to make up right now. 
Which is obviously ridiculous.  We don't make mistakes like this, because they don't make any sense, and that is why I love the Algebra, and use it to solve counting problems.

Question 2:  Can the law of logical complement be violated?
No.
"Logical complement" describes a dichotomy.  You make up a group, and divide it in two.  For example:
Group:  People with blue hair.
Division:  either the person owns a SAAB or the person does not own a SAAB.

Select one of the divisions, and then the logical complement is the other one.  It may or may not be useful or descriptive to do this.  A logical complement exists where and when we decide to construct one.  To say it has "broken" is exactly the same thing as saying we have changed our own definition. 
And that is its function: we can use it to make sure our arguments are consistent.

It is difficult to think in pure argument.  We lead ourselves around with the words, and believe it is the accuracy of our arguments which makes them so convincing to us.  We miss simple relationships, and carry on at arbitrary lengths into nonsense.  By propping up tests for ourselves, we can check that everything which held together before, still holds together now, independent of our conviction.

Please don't take my word for it.  Walk around with this odd little pair:
{ grbobble  }  &     NOT {grbobble } 
What can you do with it? 

For example:
Suppose I have a number of flowers, and two boxes.   I write down the name of each flower on a list, and then put each flower into one of the two boxes.  Every flower on my list is now either in Box A or Box B.   If I pick a flower from the list and look into one of the boxes, I know where the flower is.  If I don't see it, it's in the other box.  Likewise, I can list the complete contents of Box B by  looking in Box A, and crossing out the name of every flower I see.  Whatever is left on the list, is in Box B.
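The same bookkeeping in code: the list is a set, and Box B is its complement relative to Box A. The flower names are made up:

```python
flowers = {"rose", "peony", "tulip", "iris", "daisy"}   # the list
box_a = {"rose", "iris"}                                # what I see in Box A
box_b = flowers - box_a                                 # cross those off the list

# Locate a flower by looking in one box only:
print("peony" in box_a)     # False -- so it must be in the other box
print("peony" in box_b)     # True

print(sorted(box_b))        # ['daisy', 'peony', 'tulip']
```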

But,
If I know the Peony is in the first box, that doesn't mean the second box is empty.

Everything You Need to Know About NP-Completeness: 2

          ← Part I 


A good abstraction makes it easier to think clearly about a problem, and/or makes the problem mechanically easier to solve.   Does NP-Completeness make problems easier to solve? Consider the following observations made by the authors of Computers and Intractability:
  •    Within NP-Completeness it is very difficult to distinguish between similar problems; i.e., two problems with different running times and structure are nearly indistinguishable. The difference between NP-Complete problems and non-NP-Complete problems is difficult to discern.

  • Distinguishing types of problems cannot be reliably done using the theory.   That is, we must resort to enumerating them in an indexed list, offering different representations of the problem, and waiting for someone who solves problems to update the list.
  • The relationship between particular solution and general case becomes blurry and confusing.
The book is a seminal work in NP-Completeness theory.  The authors did not seem to think these observations were a problem.  I think they are.  Consider a classic problem called

The Traveling Salesman:  A Salesman must visit each of n cities.  The Salesman has a route:  he visits each city in turn.  If he starts in Baltimore, he will return to Baltimore once he has visited every other city exactly once.  He would like to make this route more efficient.  Can we help our salesman out?  Given a list of cities, the distances between them, and a map of how the cities connect, can we find the shortest route he can take which visits every city once, and leads back to the place he started?


 To solve the problem, say I start calculating combinations at random.  Consider two possible outcomes:
   1) On the first try I have the answer: Each city connects to each other by the two shortest connections; there is no way to have a shorter route, and this route satisfies my other criteria.  Answer in hand, I am done.
   2) Partway through, I reach a dead-end.  He is in Denver, has cities left to visit, but has already visited every city with a direct connection to Denver.  Any way he leaves, he will pass through a city for the second time, before reaching a new one.  He is trapped.
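Both outcomes appear in even a tiny brute-force sketch. The map below is made up; note there is no direct road between Reno and Baltimore, so some routes dead-end exactly as described:

```python
from itertools import permutations

# Symmetric distances; a missing pair means no direct road.
dist = {
    ("Baltimore", "Denver"): 4, ("Baltimore", "Omaha"): 2,
    ("Denver", "Omaha"): 3,     ("Omaha", "Reno"): 5,
    ("Denver", "Reno"): 6,
}
def d(a, b):
    return dist.get((a, b)) or dist.get((b, a))   # None = no direct connection

cities = ["Denver", "Omaha", "Reno"]
best = None
for order in permutations(cities):                # enumerate every possible tour
    tour = ["Baltimore", *order, "Baltimore"]
    legs = [d(a, b) for a, b in zip(tour, tour[1:])]
    if None in legs:                              # outcome 2: a dead end
        continue
    length = sum(legs)                            # outcome 1: a complete tour
    if best is None or length < best[0]:
        best = (length, tour)

print(best)   # (17, ['Baltimore', 'Denver', 'Reno', 'Omaha', 'Baltimore'])
```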

In the second case, I still don't know if such a tour is possible.  Which leads me to the following 

TWO QUESTIONS
Q1:  If I have asked my question in such a way that a positive result for a single instance tells me, "yes there is such a solution and this instance is such a solution", shouldn't a negative answer to an instance tell me, "no, there is no such solution"?

According to Computers and Intractability: Yes, it should, and it appears that problems of this kind violate the law of logical complement.

 
However, the answer is No. 
 Why?  {The answer can be found here}

■Q2:  Can the law of logical complement be violated?

Friday, July 27, 2012

The Heine-Borel Theorem, 3

                      ←  Part II  

{CLEANUP TO FOLLOW}
I swear it's got to be in here somewhere.  I think it's safe to move on now.

I say:
INTERVAL WITH OPEN END AT INFINITY
(1) The Postulate of Eudoxus:
If I say, "n is the greatest number", then you say
"n plus one", and I lose.
As stated by Euclid, the postulate itself is *cough* a series of interleaved ratios.

(2) Bob Hughes' Fishing Lemma:
"I went fishing and caught a fish this big."  He shows the size with a gesture.
Bob Hughes has one arm.

INTERVAL WITH END AT ZERO
(3) Postulate of Eudoxus:
If I say, "p is the closest point to c," then
You say, "p minus c over two",  and I lose again.

(4) Imagine a Film Reel of a woman throwing a ball. The frames are time-stamped.  Call the time on the first frame a, and the last, b.

  • (a) Say the recording begins the instant the ball leaves her hand.
    When did the ball leave her hand?      Go to the first frame:     t = a. 
  • (b) Instead, say we must find the moment the ball leaves her hand.  We find the last frame where the ball is still touching her hand.  In the next, there is sky between her fingers.  The moment lies between these two frames.
    Suppose the recording is very fast:  there are as many frames per second as we care to examine. You start on the left where there is contact and  I begin on the right, with sky, and we move toward one another.     We can make the time between the two adjacent frames   --touching the ball, then sky-- as narrow as we want.  We can make it narrower than any given number, no matter how small.

    When did the ball leave her hand?

    We may answer in several ways.  Any finite number is an approximation.  But we can make our error vanish.  I suggest the following way to answer:
  • (c) Suppose at time t0 there is a frame which appears to be the last moment the ball is touching her hand.  In every frame which follows, there is sky between her fingers and the ball.  For convenience, cut the film so this frame is first and label it t=0.
      Consider a small strip of film from t = 0 to t = δ.   δ is very small, so we pick the halfway point and say, "the ball leaves her hand at t = δ/2".  But when we check, there is already sky at t = δ/2.   And at t =  δ/4,  δ/8.... δ/n.  We can find no n large enough to give us another image of the ball in contact with her hand.
    The ball is touching her hand at t=0.  But the first frame with sky does not exist.  We can name no number small enough to identify it.  Measuring from the right, the first frame of sky is lim{δ → 0}δ = 0.  And the instant we are looking for, squeezed between the two zeros!

    Then we can answer, When did the ball leave her hand?   with   "t = 0."

    And we have defined a way to build an interval which "begins at the instant.....".
I suspect the difference between (b) and (c) is the crux of the Heine-Borel Theorem.
Note that we must begin with the assumption of the limit point in order to measure this way.  In (b) the limit point is not known, and cannot necessarily be identified.  The procedure has no algebraic limit;  the limit remains a question of time, and our willingness to look at film.  For example, a clever rule which gives us alternating frames of sky and contact might at first seem good enough... or perhaps it closes on a nearby point instead, and we have not let it narrow in closely enough for the frames to stop alternating.  Our rule was a guess just like any other.
  In Widder's example of the open interval at 0, the limit point is known and the dilemma is pretend: measuring without the endpoint gives the algebraic limit anyway, and so we cut the first frame.
The behavior of a function near limit points is a different question.
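The two-reader procedure in (b) is an ordinary bisection on a contact test. A sketch; the release time of 0.3 is made up, and I assume the frames do not alternate (contact is True up to the release and False after):

```python
def contact(t):
    # Assumed stand-in for looking at a frame: True while the ball
    # touches her hand, False once there is sky. Release is at t = 0.3.
    return t <= 0.3

left, right = 0.0, 1.0           # left reader sees contact, right reader sees sky
while right - left > 1e-9:       # narrower than any number we care to name
    mid = (left + right) / 2
    if contact(mid):
        left = mid               # still touching: the left reader moves up
    else:
        right = mid              # already sky: the right reader moves down

print(left, right)               # both readers have closed on ~0.3
```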


(5) Define COVER:
I have a line segment and some opaque junk.  Let's pile some junk on top of the line segment. I say,

Definition:  the line segment is covered in junk if no point of the segment is visible. (there is at least one piece of junk on top of every point of the segment)

Definition:  the line segment is exactly covered if
     *no point is visible and
     *no point beyond the endpoints of the segment is covered by junk.
That is, [a,b],  and only [a,b], is covered in junk.

The closed interval can be exactly covered by a finite number of subintervals.  We can divide it up however we want; it is a whole measure.    It is also possible to exactly cover a whole measure using an infinite series of bits of junk.  Just divide in a way that always shrinks the distance between, but also never arrives.  In other words, just divide, period.
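"Just divide, period" can be tallied. Halving pieces [1/2, 1], [1/4, 1/2], [1/8, 1/4], . . . cover everything to the right of zero; the total length approaches 1, but no finite number of pieces reaches the left endpoint:

```python
pieces = [(1 / 2**(k + 1), 1 / 2**k) for k in range(50)]   # fifty halving pieces

total = sum(right - left for left, right in pieces)
leftmost = min(left for left, right in pieces)

print(total)      # 1 - 2**-50: always shrinking the gap, never arriving
print(leftmost)   # strictly greater than 0 for any finite number of pieces
```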


MISCELLANY
(6)  The Obvious:  It is impossible to measure the exact length of an open interval because, by definition,  it doesn't have one.   There is no left endpoint of (a, b].

If we remove the limit point, we can't get to it by division any more than by offset, because it's not there.
Division's nice because it keeps me from popping out on the other side.


(7) The Magician
A magician asks you to draw a card and look at it.  What's my card?  She says,  "The card you drew is the one you're holding!"
  • Choose any number, and don't tell me which one.  Let's call it r.  How far is r from the origin?  Why, "r is r units from the origin."  
  • Now, choose two positive integers.  Call the smaller one p and the larger, q.  Consider the number α = q/(q − p).  How far is α from the origin?  q/(q − p) units.
  • Now, using the same p and q as above, let β = 1 + (p/q) + (p/q)² + (p/q)³ + · · ·  This limit is q/(q − p).  So how far is β from the origin?  q/(q − p) units.  So α = β.
 The same number obtained in different ways.  The limit exists if we agree that nothing can stop us from adding more terms.  
Neither finite division nor continuity is the consequence of infinite division.  Buying a really fast clock and watching the digits spin by guarantees that I can get from one second to the next?  I get there if I get there.  You check.  Should I just skip ahead to limit tests?
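Taking β as the geometric series in p/q (my reading of the elided formula, since its limit is stated to be q/(q − p)), the partial sums can be watched directly:

```python
p, q = 3, 7                      # any two positive integers, p < q
alpha = q / (q - p)              # 1.75

partial, term = 0.0, 1.0
for _ in range(60):              # nothing stops us from adding more terms
    partial += term
    term *= p / q                # next term of 1 + (p/q) + (p/q)**2 + ...

print(alpha)
print(partial)                   # agrees with alpha to machine precision
```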

Continuity exists if we agree that nothing can stop us from dividing.  We agree to measure continuously.  No argument can force physical objects to skip from place to place if they do not. No argument can prevent me from putting any number howsoever chosen, into the equation y = f(x).
So  f(x) is continuous.  
Unless, you know.... it isn't.
Get it?  The determining factor is not the property of the interval itself.  Proceeding by proof at this point is stupid. This is the time to go by definition.  Define the tools we need, to examine functions and retain the measurable characteristics.


(8) Slide the Subinterval:
Take a subinterval I' of some finite width.  Slide it all the way to the left.  Stop when the left edge of the subinterval covers the endpoint.

Suppose the left end of the interval  is open.
   Good luck with that.
   You can get closer and closer to the edge, but no matter where you stop, there is more interval to your left.  Don't overshoot.  The task is impossible.

Suppose the left end of the interval is closed.
   *slide*...  *click*.  The subinterval snaps into place when the left edge hits a.
Now we can continue: Holding a fixed, shrink I' to zero.  Now, scale (*cough*) it back up until the right edge coincides with b.  Holding b fixed, shrink I' to zero.

The test which fails for the open interval is offset.  Once this test fails, we cannot resolve it by division, or we could have done it with addition in the first place.  

It may seem as if division "keeps" me from popping out on the other side, but that is an illusion.  It only does so if I already know the limit, and measure my divisions from it.


(9) Diagrams:
Begin with a closed interval [a, b] and call it I.
I would like to prove that we can cover the whole interval [a, b] with a finite number of subintervals.  The center of each subinterval must be in [a,b].

Select  points on the interval and  label them in ascending order:
We may choose   c1 = a   and/or   c2 = b,  but we do not have to.
 I'll choose one of my points at random, and call it c. Regular-ass c.  It can be any of the ci.  In the picture above, see how the points are shown as filled circles. Zoom in on one of these circles.  It represents a point on the line, at the center of the circle.  Call the center C and the radius δc:

Q: Is there more than one point of the number line inside the circle?
A: Yes.    There are infinitely many as long as the radius is not zero.  As long as the radius is not zero.  For suppose I claim there is some δc > 0 so small that only c lives inside the circle.
Then you name ε = δc/2, and 0 < ε < δc, and I lose.
We can play this game forever.

So we can cover an arbitrary segment of the number line with a circle.  Ignoring the y-axis, an interval  [c − δc,  c + δc], with center at c.

Q: So what?
A: If we can guarantee that every point c in the whole interval a ≤ c ≤ b is the center of such a circle, then the rest follows.  Because every point is also then inside the circle of some other point.  Whatever the distance is between these two points, call it η.  Then we can cover the whole interval a ≤ c ≤ b with  (b− a)/η subintervals.
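The count (b − a)/η can be checked with made-up numbers; here η is a uniform spacing I chose, and the grid test stands in for "every point":

```python
import math

a, b = 0.0, 1.0
eta = 0.15                                # assumed uniform spacing between centers
n = math.ceil((b - a) / eta)              # roughly (b - a)/eta subintervals
centers = [a + eta * k for k in range(n + 1)]

# every grid point of [a, b] lies inside some center's circle of radius eta
covered = all(any(abs(x - c) <= eta for c in centers)
              for x in [a + (b - a) * i / 1000 for i in range(1001)])

print(n, covered)   # 7 True
```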

Again, we could just agree to divide and be done with it. All the rest ... ALL OF IT, is unnecessary.



So what is the Heine-Borel theorem?
 It seems that
  •  It must relate several points:
     *Different functions behave differently in the absence of an endpoint.
     *Division is bounded from below by zero, but multiplication is not bounded from above.  For example, in all the cases above, the interval is finite and the lower bound is zero, for both a finite and an infinite number of Ik, and for both closed and open endpoints.  The same cannot be said for an endpoint open at infinity.
  • . . . Where we want:
     *δc to be a function of c.
     *To use the same definition when the interval involves zero.
     *To distinguish limits with intervals.
I am left with the question,
What is a correct statement of the Heine-Borel Theorem?

The Heine-Borel Theorem, 2

←  Part I                                           Part III →

Reiterating, we have from David Widder's Advanced Calculus:
" Let there correspond to each point c of the closed interval a ≤ x ≤ b a number δc and an interval Ic of length 2δc with c the center point,

     Ic :  c − δc < x < c + δc.      (1)

The Heine-Borel theorem states that a finite number of the intervals (1) can be chosen which will cover the whole interval a ≤ x ≤ b.  That is, every point of a ≤ x ≤ b will be in at least one of the above mentioned finite number of intervals (1). In order to emphasize the need for proving this result, let us give an example to show it false if the interval (a, b) were open instead of closed.
   Let a = 0,  b = 1, and define Ic, for 0 < c < 1, as

     Ic :  c/2 < x < 3c/2.      (2)

That is, δc = c/2. No finite set of the intervals (2) will cover the interval 0 < x < 1.  For, consider such a set, Ic1, Ic2, . . . , Icn, where 0 < c1 < c2 < . . . < cn.  Of these, the interval Ic1 reaches farthest to the left.  Hence no point to the left of c1/2 is covered by the set.  "   (pg. 170)
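Widder's counterexample can be poked at numerically: on (0, 1), Ic reaches only down to c/2, so any finite set of centers leaves an uncovered strip at the left. The centers below are my choice:

```python
centers = [0.5, 0.25, 0.125, 0.0625]          # any finite choice of c's in (0, 1)

leftmost_reach = min(c / 2 for c in centers)  # the set covers nothing below this
print(leftmost_reach)                         # 0.03125

# halving again always produces a point of (0, 1) the set misses
missed = min(centers) / 4
print(0 < missed < leftmost_reach)            # True
```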
Now
 "Theorem 5:  To each c,  a ≤ c ≤ b, corresponds an interval (1).  There exist points  c1, c2, . . ., cn of a ≤ x ≤ b, such that every point of the interval a ≤ x ≤ b is in at least one of the intervals Ic1, Ic2, . . . , Icn.
Call a point A of the interval I, a ≤ x ≤ b, accessible if the interval a ≤ x ≤ A can be covered by a finite sequence of the intervals Ic.  Clearly, if A is accessible, every point of I to the left is also.  Hence, there must either be a point B of I dividing accessible points from inaccessible ones, or else all points of I are accessible.  (Some points are accessible, since all points of I in Ia are covered by the single interval Ia.)  But the existence of the dividing point B, B < b, is impossible.  For, if Ic1, Ic2, . . . , Icn is a set of intervals covering (a, B − δ), δ = δB/2, then Ic1, Ic2, . . . , Icn, IB covers a ≤ x ≤ B + δ, so that there are accessible points to the right of B.  This is a contradiction, so that b must be accessible."
         −David Widder, Advanced Calculus; Dover Press, 1989; pg. 171


OBJECTIONS
1)  Consider the closed interval [a, b].  Let c = a.  Using Definition (1), to the point a of the closed interval a ≤ x ≤ b there corresponds a number δa and an interval Ia of length 2δa with a the center point,

     Ia :  a − δa < x < a + δa.
We are in a dilemma.  There are two possibilities:
   A) δa is nonzero.   Specifically,
      a − δa < a  (Ia extends to the left of a), and
      the inequality a − δa < a < a + δa of (1) is satisfied.
   B) δa is zero, as in the example (2).

CASE A: Our cover may exceed the endpoints of [a, b].  Then we can also cover an open interval:
   Let δc be any function of c and a such that lim|c−a|→0 δc > 0.

Clearly, we may do this with an offset.  What is not so clear is that, if we must proceed by division... well, I think we need to have a talk about limits and power series.

CASE B:  This allows us to cover the interval [a, b] precisely.  However,  (1) cannot be satisfied:
      0 < 0 < 0  DNE.
We also lack a definition of the δc which can give us a finite cover and a zero radius at a.  I'm happy to make one up, and pick the δc however I please, but then I'll just cover [a, b] with *cough*  Ic :  a ≤ x ≤ b.
We need to clarify how we intend, or are allowed, to derive δc .

If, in addition, our cover is not permitted to exceed the endpoints, we must approach both endpoints by division.   But then, as mentioned before, we cannot reach either endpoint by any finite process of division.  

2)  Every point of a bounded, open interval  a < x < b satisfies (1).   It is nevertheless not possible to precisely cover an open interval with a finite number of subintervals. To me, (1) is a working definition of open.  Every point in the interval is surrounded on both sides by other points of the interval.   Exactly two points in any closed interval fail the test: the endpoints.
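The "test" in question can be phrased as a one-line computation.  A sketch of my own (the helper name is invented):

```python
# For a <= x <= b, the largest d with (x - d, x + d) still inside [a, b] is
# min(x - a, b - x): positive for every interior point, zero exactly at the
# two endpoints.  This is the sense in which (1) acts as an openness test.

def max_inner_radius(x, a, b):
    """Largest radius d such that (x - d, x + d) stays within [a, b]."""
    return min(x - a, b - x)

a, b = 0.0, 1.0
print(max_inner_radius(0.5, a, b))  # 0.5 -- interior point passes
print(max_inner_radius(a, a, b))    # 0.0 -- endpoint fails the test
print(max_inner_radius(b, a, b))    # 0.0 -- so does the other endpoint
```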

3) Proof by contradiction requires us to establish that the two cases are mutually exclusive.  Widder argues that, given an interval I, if we can cover a ≤ x ≤ A  with a finite  number of strips of paper, and every point in the interval satisfies (1) then we have to be able to reach b with a finite number.  Is that true?  There are several issues here:
How did we get to A in the first place?
  i) We agreed to cover a ≤ x ≤ A.  We have already thrown out open intervals by implicit assumption; a is not in the interval (a, b).  All other criteria are now irrelevant and can be arbitrary.
 ii) We made up point A and assumed we can reach it.  That's silly.  This is the question we were supposed to answer: can this be assumed?  If it can be assumed for A, it can be assumed for B.  We are begging the question.  At this point we ought to just make up that we can reach b and be done with it.

It is possible to use this argument, but not the way it is presented, so I'd like to examine it:
 iii) If we can reach A, we can reach b.  We have restated the question, as a conclusion in the positive.  Can we fix it?
   Consider the closed interval [a, A].  Suppose a point β,   a < β < A,   is accessible from a. ...  We will then prove that A is accessible.  But we used A to prove that b was accessible!  If we insist on working this way, we will again require an infinite procedure to rediscover continuity: we will divide until our distance from a is zero, and then congratulate ourselves.  That's stupid and intellectually lazy.  The limit exists, we know it, and Widder takes it for granted without checking, but runs the argument as if the procedure is finite.

IF we proceed in this way, then the procedure is infinite.  We may, at any time, just lay off a finite number of intervals. There is no proof, any more than there was a proof that the points a, b and A, and the magic bridge  making A accessible from a exist.  They all arose by assumption.  A much simpler assumption is addition.  We get to the heart of the matter right away:
Choose an arbitrary point A in the closed interval [a, b], and a corresponding IA.  Now OFFSET  IA until its leftmost edge coincides with the leftmost point of the interval.
The distinction between an open and closed interval is immediately obtained.
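The offset construction sketched above is pure addition.  Assuming a single fixed radius d > 0 is admissible, a finite cover falls out immediately (names and numbers mine, illustrative only):

```python
import math

# Cover [a, b] by translating one strip of radius d left to right.  The count
# ceil((b - a) / (2 * d)) is finite by arithmetic alone; note the last strip
# may extend past b, as in CASE A above.

def offset_cover(a, b, d):
    """Centers of closed intervals of radius d covering [a, b] by translation."""
    n = math.ceil((b - a) / (2 * d))          # strips needed
    return [a + d + 2 * d * k for k in range(n)]

a, b, d = 0.0, 1.0, 0.15
centers = offset_cover(a, b, d)
samples = [a + i * (b - a) / 1000 for i in range(1001)]
assert all(any(c - d <= x <= c + d for c in centers) for x in samples)
print(len(centers))   # 4 closed strips cover [0, 1]
```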


Now.  To establish that reaching a from A and failing to reach a from A are not mutually exclusive, let me add
iv)"Prove that some definite point A cannot be reached from a"
By assumption, each point of the interval a < x < A is the center of an interval (1).  Now, to ensure that we do not make any leaps, let us divide the distance A − a in such a way that we are certain the next accessible point lies in the range of the previous interval.
    Case 1:  Start at a:
    The interval (a − δa) < x < a either does not exist, or we have a − δa ≤ x ≤ a with δa = 0.  In both cases, we cannot leave a.  There is no next interval.  We will obtain the same result starting from A, and measuring from there.  So,
    Case 2: Start at A, but measure from a:
  Choose the point c = 2(A − a)/3 + a, and let δc = (c − a)/2.    The range (c − δc) < x < (c + δc) is
      (2a + A)/3  <  x  <  A.
Whew! Just made it.  Now, according to Widder, definition (1) renders it impossible, having reached c from A,  for there to exist a dividing point which prevents us from reaching a.
What could possibly go wrong?  Let us continue.  The next interval has center at (2a + A)/3.
We have δc = (A−a)/6, and our new left endpoint (c−δc) is  (5a+A)/6.  Iterate again.  Now...
Let us say we reach a point arbitrarily close to a,  c = a + 1/p,   p arbitrarily large.  We find that δc = 1/(2p), and the interval (c − δc) < x < (c + δc) is
      a + 1/(2p)  <  x  <  a + 3/(2p).
 The dividing point which is supposed not to exist is inescapably a.  We will never reach a.  We may now use a as the endpoint of an interval α < x < a, and find accessible points on the other side of a.  Once again, the open interval satisfies (1).  Given any point in the semi-open interval (a, b], we can name a value of c whose interval Ic contains the point, and a finite number of intervals which reach all the way to b.
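The iteration above, in which each new center is the previous interval's left endpoint, is simply c → (c + a)/2.  A simulation, assuming the stated choice δc = (c − a)/2 (the tolerance and function name are mine):

```python
# Each step moves from center c to the left endpoint (c + a) / 2 of I_c, so
# the distance to a is halved every time: geometric approach, no arrival.

def next_center(c, a):
    """Left endpoint of I_c when delta_c = (c - a) / 2; it becomes the next center."""
    return (c + a) / 2

a, A = 0.0, 1.0
c = (a + 2 * A) / 3      # the starting center c = 2(A - a)/3 + a from the text
steps = 0
while c - a > 1e-12:     # "arbitrarily close" still leaves another step
    c = next_center(c, a)
    steps += 1

print(steps)   # 40 halvings to get within 1e-12 of a
print(c > a)   # True -- a itself is never reached
```

Every pass through the loop lands strictly to the right of a; the dividing point the proof forbids is sitting at a the whole time.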

  Of course, we are doing this to ourselves.  We are repeating Zeno's mistake.  It is essential to understand that the quoted text is not a proof of anything.  It is the description of a choice: among the ways we may choose to measure, this one in particular has the stated properties.  When the argument is structured in a way that it admits all agreed methods of measure, and obtains conclusions which do not depend on the manner in which we approach it, then we can speak meaningfully of proof.  Otherwise, we fail to distinguish the relationship among assumptions, facts, and argument.  Widder's error is informal: the assumption that there is a paradox which requires resolution.  There are as many ways to make the difficulty go away as we wish; all we have to do is pick one.  There is no logical proof.  For we may always proceed in this way, never arriving.
.... and we may also always choose a larger interval.  For example, if we try I = [1, 3/2], and fail to arrive at a = 1 by division, we can instead use the interval [0, 2] and we will have no difficulty at all covering [1, 3/2] with a finite number of subdivisions.
... And we may also always skip the whole dilemma and just divide the damn interval up however we want, slide subintervals left and right, and when we don't have defined endpoints, infinity takes care of itself, because we cannot name a leftmost and rightmost point.  DONE.

None of these are mutually contradictory.

4) He is everywhere taking continuity for granted.  GOOD.   But we all know where this is heading, right? He is going to play a little game where I am to pretend he is not taking continuity for granted, because he has never said it, and then continuity will spring forth from intervals by logical proof.
But an interval can't be defined without taking continuity for granted.  I mean,  literally, it is logically impossible.  There's no free lunch.  If you divide your way infinitely to zero from 1, it takes an infinite number of steps to get there, whether you are allowed to include the limit point or not.  If you let the cover exceed the whole interval, both open and closed intervals are covered; if you require the cover to reach the endpoint, the only way to get from one end to the other in a finite number of steps is by offset (addition and subtraction), in which case you need only ask "do I have endpoints to measure from?" in order to measure out a finite subdivision irrespective of the logic of this proof, or the properties of division.  The measure between need not even be continuous!  We imagine it so, when we have covered it, and the pretend picture in the mind is the only meaningful part of this proof.  If that's how I decide to accept continuity and identify all the singular points when I don't have it, fine.  I promise to have the courtesy not to tell you it's a proof.
(1) can't be satisfied by the endpoints of a closed interval.
←  Part I                                           Part III →