{UNDER CONSTRUCTION}
What is the winning Procedure?
On every turn, Player 2 should undefine an element of the problem.
Example: If the problem is a version of the Traveling Salesman, each city connects to some number of other cities. This number determines the dimension of the problem. As a first step, Player 2 should change this number to n, so that the dimension scales with the input size. There is now no polynomial which can bound the running time. She should then hunt for a way to unlatch the direct connection between dimension and input size: e^n is still a running time.
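To put a number on that, here is a minimal sketch in Python (my own illustration, not part of the game itself) comparing the count of tours a brute-force search must enumerate for n fully connected cities against an arbitrary polynomial bound:

from math import factorial

def brute_force_tour_count(n):
    # Distinct tours of n fully connected cities, fixing the start city
    # and ignoring direction: (n - 1)! / 2
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    # n**3 stands in for "some polynomial bound"
    print(n, brute_force_tour_count(n), n ** 3)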
Example 2: In general, if each of the following three elements is undefined or unbounded, or if no logical connection exists between any pair of them:
1) the dimension of the problem
2) the structure of the problem space
3) the search function
the problem cannot be solved in any defined amount of time, short of enumerating all of the possible searches one at a time.
But even this rule overstates the difficulty of identifying, constructing, and avoiding NP-Complete problems. Any arrangement whatever of the elements of the problem in which the relationship between an individual search and the existence of a solution is a memory read (that is, the same thing as enumerating every possible case in arbitrary order) is NP-Complete, because we have offered no method for solving the problem.
Surely it cannot be that easy.
Can't it?
From Part II:
Q1: If I have asked my question in such a way that a positive answer tells me, "yes there is such a tour", shouldn't a negative answer tell me, "no, there is no such tour"?
According to Computers and Intractability, Yes, it should, and it appears that problems of this kind violate the law of logical complement. However, the answer is No. Why?
A: There are two mistakes. First, to answer the question:
The authors have the following search query:
(1. Is there a solution to this problem?) AND (2. is this particular instance such a solution?)
There are two questions, joined with a logical AND. To construct the logical complement of any statement, single or composed, take the sentence, put parentheses around it, and print the words "It is not true that" in front of the whole thing. If we do this, we have,
- Positive result: (There is a solution to this problem) AND (this instance is such a solution)
- Negative result: It is not true that { (there is a solution to this problem) AND (this instance is such a solution) }
Both results are true statements; the negative result is the logical complement, and everything is fine.
In fact, by definition we may immediately state an important
■Corollary: Invariably, the logical complement of a particular solution is the totality of all other particular solutions.
The authors wanted a negative result to give them this:
{ It is not true that there is a solution to this problem }
AND (this instance is not such a solution)
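The difference between the two forms is easy to check mechanically. A truth-table sketch in Python (my own illustration, not anything from the book):

from itertools import product

# A: "there is a solution to this problem"
# B: "this particular instance is such a solution"
for a, b in product([True, False], repeat=2):
    true_complement = not (a and b)      # the actual logical complement of (A AND B)
    wanted_form = (not a) and (not b)    # what the authors wanted a negative result to mean
    print(a, b, true_complement, wanted_form)

# The two columns disagree whenever exactly one of A, B is true.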
A simple mistake. But that is not their principal error. They identify that this is not the result, which means they have made a mistake. However, rather than using logic to check themselves for error, the authors assume their argument is valid, and that the structure of logic itself has magically broken. So armed, they carry forward.
Second: The relationship between the two questions is a memory read.
In other words, the only way to answer the question, is to already know the answer.
This is a fundamental distinguishing characteristic of NP-Complete problems, and for most problems can be checked in a few seconds, without any knowledge of the theory.
Let me convince you. Take another look at the pair of questions:
1. Is there a solution to this problem?
2. Is this particular instance such a solution?
We search one instance at a time; only question (2) is tested. For example, flip the order of the questions; it should be apparent that there was only one honest question all along:
"Is this search a solution to the problem, AND Is there a solution to the problem?"
The questions are yes/no. Testing only (2), there are only two possible cases. What are they?
POSITIVE: If the current search is a solution, then there is a solution to the problem. The two are the same question.
NEGATIVE: If the current search is not a solution, we do not know whether there is a solution to the problem; there is no relationship between the two questions. We can construct such a relationship by trying every possible case, and recording the answers. If there is no solution, we won't know until we have tried every case and they all fail.
What is the logical form of question (1), in relation to the search algorithm? There is no such question in the search algorithm. And there is the crux. Ask the question explicitly:
What is the exact logical procedure for answering question (1)? The answer is recorded in advance, and read from memory. If we know the answer to the question already, then the question can be answered. Hilarious. This relationship can be demonstrated in pure machine logic. Let us construct an algorithm which performs an individual search and then asks the question,
(1. Is there a solution to this problem?) AND (2. is this particular instance such a solution?)
After performing the search, we record the result as a temporary variable in the loop itself and then perform the AND test.
This test will either return FALSE every time, or throw an exception. We never test (1), and have no plans to do so. There is no part of our algorithm which outputs a value for the first half of the AND. The computer will never check (2), because the AND cannot be met even if (2) is true.
How does the information reach (1)? The only way for this test to return TRUE is if we already know we have the answer. We test (2), by itself. If the result is TRUE, we flip (1) to TRUE. Now the test '(1) AND (2)?' returns TRUE. There is no other way. This is the only way for the two questions, joined with an AND, to return TRUE: we cheat. If we already know (1) is true, we tell it so. '(1) AND (2)' embedded in the loop itself, as part of the algorithm, will return FALSE or return an error for every problem where we have no definition of the first question.
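Here is that algorithm as a sketch in Python. The names candidate_searches and is_solution are my own hypothetical stand-ins for "enumerate the individual searches" and "test this particular instance"; nothing else is assumed.

def naive_search(candidate_searches, is_solution):
    for candidate in candidate_searches:
        q2 = is_solution(candidate)   # question (2), tested inside the loop
        # Question (1) is never assigned a value anywhere in this algorithm,
        # so the next line raises NameError; if we instead default q1 to False,
        # the test returns False on every pass, even when q2 is True.
        if q1 and q2:
            return True
    return False

def cheating_search(candidate_searches, is_solution):
    for candidate in candidate_searches:
        q2 = is_solution(candidate)   # test (2) by itself
        if q2:
            q1 = True                 # the cheat: we answer (1) because we already know it
            if q1 and q2:
                return True
    return False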
There is an additional subtlety. I have stored the result of (2) as a temporary variable, nested in the search loop. Within this loop, (2) is tested. Then the loop exits, and the result is discarded. At the next level up, I can now test the composition (1) AND (2). Suppose I freely admit that the only way I am going to answer (1) is by checking the truth of (2). Here, again, the algorithm will never give a positive answer, even if the current search is a solution, because the response "TRUE" was discarded when the loop exited. I must define a variable at the current loop depth (the composition), which retrieves the truth value of the search, (2). That is, the parentheses in the statement
(1. Is there a solution to this problem?) AND (2. is this particular instance such a solution?)
unambiguously require the addition of memory one level above the search itself, in order to be evaluated to anything but an error message. In a similar way, it is easy to show that we require either
- a predefined search procedure, which guarantees in advance that every solution will be tried, and defines the index at which the last possible solution will be evaluated, or
- a global memory array one level above the composition, which is written to and read from for every individual search loop and evaluation of the question.
Otherwise, it is impossible to define a termination time for the algorithm. In both cases, the choice is entirely ours, and is completely independent of the logical structure of the problem in question. These requirements are needed to meet the assumptions we have made independently of the problem.
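As a sketch of the second option (again my own construction, with hypothetical names): the flag that finally answers (1) lives one level above the search loop, and the loop only terminates in a defined time because the list of candidates was fixed in advance.

def search_with_memory(candidate_searches, is_solution):
    # candidate_searches plays the role of the predefined search procedure:
    # a finite enumeration that fixes, in advance, the index of the last
    # possible solution to be evaluated.
    found = False                     # memory one level above the search itself
    for candidate in candidate_searches:
        if is_solution(candidate):    # question (2), tested inside the loop
            found = True              # written upward, so the answer survives the loop
            break
    # Only here, outside the loop, can question (1) be read off at all,
    # and only because 'found' was written to on the way through.
    return found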
And that is my point. The dilemma is completely made up. Throughout the book, the authors puzzle over these curious logical relationships, without ever investigating them. They assume throughout that the relationships are a result of the mathematical problem itself, when this is demonstrably untrue. Further, the relationships are elementary, reducing in most cases to accounting for the nesting order of loops, and for where the memory reads must sit, in order to obtain an answer to the question posed.
Referring to them as classes of problems seals the deal. We might as well pretend we have no responsibility to solve problems at all, if we can show that it is possible to frame them in a way that we can't answer them. And that is exactly the position the authors take.
But even vetting the results in pure logic is unnecessary. All of these relationships are straightforward within the Algebra. For example, I have just plotted a function in SAGE. It is in front of me. For reals. Now, I will tell you that it is NOT
y = 2x + 14
According to Garey and Johnson (Computers and Intractability), this should also entail the conclusion that I did not plot an equation of the form y = 2x + b.... or in fact any general form that you may choose to make up right now.
Which is obviously ridiculous. We don't make mistakes like this, because they don't make any sense, and that is why I love the Algebra, and use it to solve counting problems.
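The same check takes a couple of lines. A sketch, with a made-up function of my own:

def f(x):
    return 2 * x + 5                  # the plotted function (hypothetical)

# f is NOT y = 2x + 14 ...
print(all(f(x) != 2 * x + 14 for x in range(100)))   # True
# ... and yet it is still of the general form y = 2x + b (here b = 5).
print(all(f(x) == 2 * x + 5 for x in range(100)))    # True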
■Question 2: Can the law of logical complement be violated?
No.
"Logical complement" describes a dichotomy. You make up a group, and divide it in two. For example:
Group: People with blue hair.
Division: either the person owns a SAAB or the person does not own a SAAB.
Select one of the divisions, and then the logical complement is the other one. It may or may not be useful or descriptive to do this. A logical complement exists where and when we decide to construct one. To say it has "broken" is exactly the same thing as saying we have changed our own definition.
And that is its function: we can use it to make sure our arguments are consistent.
"Logical complement" describes a dichotomy. You make up a group, and divide it in two. For example:
Group: People with blue hair.
Division: either the person owns a SAAB or the person does not own a SAAB.
Select one of the divisions, and then the logical complement is the other one. It may or may not be useful or descriptive to do this. A logical complement exists where and when we decide to construct one. To say it has "broken" is exactly the same thing as saying we have changed our own definition.
And that is its function: we can use it to make sure our arguments are consistent.
It is difficult to think in pure argument. We lead ourselves around with the words, and believe it is the accuracy of our arguments which makes them so convincing to us. We miss simple relationships, and carry on at arbitrary lengths into nonsense. By propping up tests for ourselves, we can check that everything which held together before, still holds together now, independent of our conviction.
Please don't take my word for it. Walk around with this odd little pair:
{ grbobble } & NOT { grbobble }
What can you do with it?
For example:
Suppose I have a number of flowers, and two boxes. I write down the name of each flower on a list, and then put each flower into one of the two boxes. Every flower on my list is now either in Box A or Box B. If I pick a flower from the list and look into one of the boxes, I know where the flower is. If I don't see it, it's in the other box. Likewise, I can list the complete contents of Box B by looking in Box A, and crossing out the name of every flower I see. Whatever is left on the list is in Box B.
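In code, the whole example is a set difference. A sketch with made-up flower names:

flowers = {"rose", "tulip", "peony", "daisy", "iris"}   # the full list
box_a = {"rose", "peony"}         # what I see when I look into Box A
box_b = flowers - box_a           # whatever is left on the list is in Box B

print(box_b)                                   # {'tulip', 'daisy', 'iris'}
print("peony" in box_a, "peony" in box_b)      # True False -- exactly one box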
But,
If I know the Peony is in the first box, that doesn't mean the second box is empty.