How can you deal with an NP complete problem if you can't avoid it and a simple algorithm doesn't work? Well, unless P=NP, you know there's not going to be a polynomial time algorithm for your problem. Now, that statement actually allows for some loopholes, which we'll investigate in this unit and the following units. The first loophole is to try and find intelligent algorithms that allow you to search through an exponential number of solutions, still in exponential time, but so efficiently that sometimes you actually get a perfect solution. The second is to accept that your solution is not going to be the best possible one, but maybe within a certain range of the best possible one, and to see if you can then find faster algorithms to solve your problem. For each of these approaches, we'll be discussing the main ideas, I'll give you some examples, and of course, we'll also be discussing a bit of the theory behind them.
So far, our units have focused on showing that problems are hard, or NP complete to be more precise. Actually, what I have very often observed in practice, and even in scientific papers where the authors did not have a solid background in theoretical computer science, is this: once a problem is shown to be NP complete, many people will just tend to give up, or come up with some, well, honestly terrible algorithm that they justify with, "Well, the problem was NP complete." And that is why we're now going to turn our attention to countermeasures against NP completeness. So, this unit and the following units are your first aid kit to help you overcome NP completeness, or at least take a bit of the sting out of it. Because imagine you find yourself in a situation like the four computer scientists that we have gotten to know throughout this course: Alice, Bob, Carol, and Dave. Solving tough problems was part of their job. Alice was solving vertex cover for telecommunications; Bob was working in biotech, developing new medicines and analyzing genes; Carol was working in finance, designing secure investments; and Dave was working in logistics, where he had to solve the shortest tour problem--and actually there are logistics companies who have developed algorithms with which they save millions every day, and also fuel. So, those problems are practically relevant although they are NP complete. So, when your company or your boss asks you to solve an NP complete problem, what should you do? Should you say, "I want another problem"? Should you make her or him take this course so that they finally understand NP completeness? Should you say "It's impossible to do that"? Should you say "It's proven impossible"? Or should you say "It's proven difficult, but we can try"?
Of course, this was a bit of a simple quiz here. Asking for another problem--yeah, probably that's not a good idea. Making anybody take this course is a very good idea, but of course, you're the computer scientist, so you're supposed to be the one knowing about NP completeness, especially after taking this course. You probably shouldn't say it's impossible. A slightly better answer would be to say, "It's been proven to be impossible," but I think that after this unit, you will agree that the best possible answer is actually to say, "Well, it's been proven difficult, or extremely difficult, but we can actually try to solve this problem even though it is NP complete," and I will show you how to approach that. The thing is this--there are so many NP complete problems that are practically relevant that over the past years, extremely clever algorithms have been developed for these problems. They have been developed for vertex cover, for clique, for independent set, for the shortest tour problem, for SAT, and so on. Knowing about these algorithms can really make you stand out as a problem solver, because many people, as I said, will give up once they know their problem to be NP complete. You should respect NP completeness--so maybe don't be all smiles when you encounter an NP complete problem--but when you're solving your problem, be respectful; there's no need to be unhappy or scared of NP completeness.
How can you deal with an NP complete problem other than avoiding it? Well, I think you should think of yourself as kind of a lawyer who has to work with the laws of NP completeness, and the main law of NP completeness is this: unless P=NP, there will be no polynomial time algorithm for your NP complete problem. Now, what do good lawyers do with the law? They find loopholes, and that is exactly what we are going to do for this law of NP completeness. We'll actually be looking at two loopholes. The loophole that we will be exploring in this unit is using intelligent exponential time algorithms. We're going to work in exponential time, but we're going to do it in a smart way. We're going to avoid things like 2^n or 3^n; rather, we're going to have algorithms that, for example, run in something like 1.3^n, which is already much better and works for many practical cases. What we are then going to look at in the next units is what I like to call sloppy solutions. In all proofs of NP completeness, we always require the computer to provide the exact answer to our problem. One chance to circumvent NP completeness might be to say, "Well, we do not want the best possible solution, but we're happy with, say, one that is close to optimal," or for a decision problem, for example, we might say, "Well, it's okay if we get the right answer about 80% of the time." For each of these approaches, we will discuss the main ideas, and also, of course, give you some examples to make them clearer, and talk a bit about the theory behind these approaches, because these are techniques where you really have to understand the theory if you want to be successful in practice. So let's start out with smart exponential time algorithms.
And we are going to kick this off with a little quiz. So let's assume that we have an algorithm with a running time of 2^n·n²--not O(2^n·n²), but really 2^n·n². You can add any constant you want if that makes you happy, but that's not the point of this quiz here. And now we're going to look at an algorithm that is 10 times faster, so it has a running time of 2^n·n²/10. And if you think about it, finding an algorithm that is 10 times faster is actually huge in practice. So imagine, for example, that whenever you start your computer, it would start 10 times faster than it does now--a huge improvement. But what I now want to show you is that such a huge improvement is virtually useless when you're dealing with exponential time. So assume, for example, that for this algorithm here, we have a computer where the maximum input size that it can handle is 30. What I would like to know from you now is: if we run this algorithm which is 10 times faster--huge improvement--on the same machine, what is the maximum input size we can handle then? And I would like you to give me your answer here in this box.
And the answer here is in some way surprising. Although we have a 10 times improvement, we actually cannot handle an input that is much larger. We can only handle an input where n is 33 or smaller, and the way you can figure this out is as follows: if the maximum input size we can handle is 30, then the number of time steps this algorithm over here takes is 2^30·30², which is 966,367,641,600 time steps--so already a rather powerful computer at work here, I would presume. What if we have an algorithm that is 10 times faster? Now, 2^31·31²/10 is about 206 billion, and 2^32·32²/10 is about 440 billion--that is still smaller than this--and 2^33·33²/10 is about 935 billion, so slightly smaller still than this one here. If you increase n to 34, then this number here will become larger than this one over here.
We've put lots of effort into our algorithm to make it 10 times faster, and we have gained only very little in terms of the input size that we can handle. Now, what if we were even bolder and had an algorithm that was 100 times faster? Imagine 100 times--imagine your computer booting up 100 times faster than it does today, or your computer game not running at 30 frames per second but at 3000 frames per second. So 100 times faster--a huge speedup. What input size can we handle now?
The answer here is almost depressing. Now we can handle a maximum input size of 36, even though we have a 100 times speedup, and the argument is basically the same as before. Again, you compare this number here for n=30 to this number here, and then see when this one gets larger than this one over here. And that is why it's so important to understand algorithms and improve algorithms, because any optimization that gains only a constant factor, even a huge constant factor, doesn't really increase the input sizes your algorithm can handle if you're dealing with exponential time algorithms. So you really have to dive down, understand your algorithm, and then improve it.
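You can check these break-even input sizes directly by computing when the sped-up algorithm exceeds the original machine's budget. Here is a small sketch; the function name and the notion of a "budget" in time steps are illustrative, not from the course:

```python
# Sketch: how far does a constant-factor speedup get a 2^n * n^2 algorithm?
def max_input_size(speedup, budget):
    """Largest n such that 2**n * n**2 / speedup still fits in the budget."""
    n = 1
    while 2 ** (n + 1) * (n + 1) ** 2 / speedup <= budget:
        n += 1
    return n

# Budget = what the original machine can do, i.e. n = 30 with no speedup.
budget = 2 ** 30 * 30 ** 2          # 966,367,641,600 time steps

print(max_input_size(1, budget))    # 30: the original algorithm
print(max_input_size(10, budget))   # 33: ten times faster
print(max_input_size(100, budget))  # 36: a hundred times faster
```

A 10x speedup buys three extra input elements, and a 100x speedup only six--the doubling per element swallows any constant factor almost immediately.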
Now, before we dig deeper into the more sophisticated algorithms, I would like to briefly tell you a little about a technique known as brute force. So what does brute force mean? Does it mean that we take a hammer and beat the answer out of our computer? Of course not. Brute force is the kind of algorithm that Alice, Bob, and Carol were using initially. So you remember, for example, that for Bob, we were proposing an algorithm--not really using it, but proposing it at least--that simply went through all possible assignments of zero and one to the vertices, or, at that time, to the genes. The weakness of this algorithm was that it had a pretty unacceptable running time as soon as you got to more than 20 or 30 vertices in the network. But this algorithm actually also has two strengths. One is that it's very simple, and the other is that, because it's so simple, you can actually also implement it rather fast. Under certain conditions, using an algorithm like this might actually not be that bad an idea. First of all, we're not using the hammer here. This method of going through all possible assignments, without any more sophisticated algorithmic techniques, is known as brute force. And as I've said, there are certain conditions under which brute force is actually desirable. So let's have you figure out what those conditions could be--and by brute force, of course, we mean trying all possibilities. Could it be okay when you don't have much time to design and program a more sophisticated algorithm? Could it be okay when the instances--the inputs for which you're trying to solve your problem--are small, say n, the size of the problem, is smaller than 20? Or could it be okay when you need to solve the problem only once, and the input might be a little bit bigger?
For each one, I'm going to give you two choices for whether, under these conditions, using brute force could be okay: the first one is a maybe, and the second one is a definitive no. I'm not going to give you a clear yes choice because, obviously, since we're dealing with NP complete problems here, brute force is of course never generally advisable.
And, well, it's a bit of an opinion here, but I would say that actually for all three of these, you could say maybe. So if you don't have much time to design and program an algorithm, then brute force might be okay under certain conditions--for example, when your instances are small and you have ample time to run the algorithm. If it's your time that counts, and not computer time, then doing brute force is sometimes actually a good solution, because it's your time that you would invest into programming a more sophisticated algorithm. Whenever the instances are small, there are two reasons to use brute force. One is that you can implement it very fast, but the other good reason to use brute force in these cases is that, because brute force algorithms are so simple, they can be programmed with very little overhead. What I mean by that is that the constants that are involved in the O notation are rather small, and for small instances, such speedups actually play a role. So for small instances, it could be a good idea to use brute force--but again, even for small instances, it might not be fast enough. When you need to solve a problem only once, that might also be a condition where brute force could be okay--it might mean that you have a lot of computational time available. Say you need to have this problem solved within the next 3 months or 6 months; maybe then brute force is fast enough. Of course, this last case down here is, I think, the most debatable. You might want to try some of the more sophisticated techniques that I'm about to show you.
But first of all, let's play a little bit more with the numbers. Let's say I make the following table here. This is the number of instances you need to solve--say 1, 100, or 10,000--and this is the size of the instances: smaller than 10, smaller than 20, smaller than 30, or larger than 100. And now let's say you have an algorithm that has running time 2^n. We'll simplify here, so the running time is exactly 2^n--no big O, no polynomial involved. All I want you to get here is a feeling for the numbers. Now, what I would like you to think about is in which cases you think brute force would be something to try out. If you think that you should try brute force--either you're sure about it, or it's at least something to think about--then I would like you to make a check mark in these boxes here, and otherwise, leave the box blank. And of course, this is very subjective. If we don't agree, just click next and let's discuss where we don't agree.
So for 2^n, if instances are smaller than 10, that means that the running time will be around 1000 time units--or 1024 time units to be precise. If it's 20, you will have about 1 million time units. If it's 30, you will have about 1 billion time units--so 1 thousand times 1 thousand times 1 thousand. If it's larger than 100, then of course you have a huge running time here: a running time of about 10^30. It's clear that when your instances get larger than 100, you would not be able to use brute force. And on the other hand, if you just need 1000 time units, I think you don't even care--your computer runs at gigahertz speeds, so it can perform several billion operations per second. Even solving 10,000 instances where each one is small is usually not an issue, unless you're working in a very special environment. What about a million time units? I would clearly say that solving that once is okay. Again, your computer can perform several billion operations per second. Solving 100 instances is usually okay, and even solving 10,000 instances--so 10,000 times 1 million, that's 10 billion, which sounds like a lot, but actually, again, desktop computers can handle this in most cases. Now what about this case here? Well, 1 billion is okay; we just said that over here. And we said even 10 billion was okay. So what about 100 billion? Well, this here I think is kind of debatable, but I would still say it's probably worth a try. You may have to wait two or three or four days until all instances are solved, or maybe even a month, but in most cases that should be okay. You wouldn't attempt to solve an NP complete problem in real time, for example. This one here I think is the most debatable: 10,000 billion--I would say in the general case, you wouldn't want to do that.
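You can reproduce the rough numbers behind this table yourself. This is a sketch that assumes, purely as a hypothetical, a machine performing 10^9 operations per second:

```python
# Rough feasibility check for a pure 2**n brute-force algorithm, assuming
# (hypothetically) a machine that performs 10**9 operations per second.
OPS_PER_SEC = 10 ** 9

for n in (10, 20, 30):
    for instances in (1, 100, 10_000):
        seconds = instances * 2 ** n / OPS_PER_SEC
        print(f"n<{n}: {instances:6d} instances take about {seconds:,.4g} seconds")
```

Everything up to n around 30 stays in the range of seconds to hours on this hypothetical machine, while n larger than 100 means 2^100, about 10^30 operations--hopeless at any clock speed.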
I mean, there might be special cases--if you work for the secret service and you have a huge cluster of computers available--but here you're kind of reaching the limits of brute force, and here, of course, as we just said, you've already reached them. So again, brute force can work if the instances are small, or maybe even moderate, but this approach is bound to fail at some point, and that's when we apply intelligence instead of brute force--using intelligent force.
So let's have a look back at the laws of NP completeness--and the laws of NP completeness, of course, are a book that only holds unless P=NP. If anybody ever proves P=NP, then we can recycle this book here, or delete the e-book. So what do the laws of NP completeness say? They say, "No polynomial time algorithm." So now it's time to think like lawyers and try to poke loopholes into this statement here, and there are two loopholes that you can poke into it. One is that we only say "no polynomial time"--we do not say "exponential time." What do I mean by that? Well, what about a running time like 2^√n? It's not exponential time--it's called sub-exponential, because you don't have n itself appearing in the exponent--but it's also not polynomial time, because you still have some growing function of n up here in the exponent. A running time like this actually conforms with the law, and it gives an excellent speedup. So let's think about this for a minute. Let's say n=50. If n=50, what is the speedup of 2^√n versus 2^n? Of course, we're going to do this as a quiz. Is it over a million, is it over a billion, or is it over a trillion? I only want you to select one answer here, the best possible one.
The speedup here is dramatic, just because we write the square root here. It's over a trillion, and the reason is this: 2^50 is about 10^15, whereas for 2^√50, the √50 is about 7, so 2^√50 is about 134. You have 134 versus 10^15, which is a speedup of over a trillion, and we're still within the laws of NP completeness using this function here. We can call this loophole 1 in the laws of NP completeness.
Now, I'll show you another function that is actually exponential time but also isn't that bad. What about 1.1^n? Absolutely not a polynomial time algorithm, but still a huge speedup compared to 2^n. Let's do another quiz here--the speedup of 1.1^n versus 2^n--and again, we're going to say that n=50, so you don't have to recalculate this here; you already know that 2^50 is about 10^15. And I'll give you the same choices here: is the speedup bigger than a million, bigger than a billion, or bigger than a trillion?
So 1.1^50 is about 117, which again is a speedup of over a trillion. So what we now have found is a second loophole: where the first loophole was sub-exponential time, the second loophole is small bases for the exponent--not having a 2 here, but a 1.1 or maybe a 1.2. So now, of course, the question is: can we actually exploit those loopholes? Can we find algorithms that have these running times here? The answer is yes, we can, and that is exactly the point of using intelligent force instead of brute force. I'm actually going to show you how this second loophole here works, and the problem for which we're going to show this is vertex cover.
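Both quiz answers are a one-liner to verify. A quick sketch comparing the two "loophole" running times against plain 2^n at n=50:

```python
# Checking the two "loophole" running times against plain 2**n at n = 50.
n = 50
brute = 2.0 ** n              # about 1.1e15
subexp = 2.0 ** (n ** 0.5)    # 2^sqrt(n): sqrt(50) is about 7, so about 134
small_base = 1.1 ** n         # about 117

print(brute / subexp)         # loophole 1 speedup: over a trillion
print(brute / small_base)     # loophole 2 speedup: over a trillion
```

Both ratios come out around 10^13, comfortably over a trillion, even though neither running time is polynomial.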
It's been a while since we last met Alice. She was working on vertex cover, and actually, the last time we left off, Alice wasn't happy at all--Alice was quite unhappy, because we had proved vertex cover to be NP complete. The last time we left off, there was little hope for her to ever solve the vertex cover problem that she was working on. Now, I promised to show you an intelligent algorithm for vertex cover--at least one that's more intelligent than brute force. But in order to do that, let's first start out with a brute force algorithm. Let's say Alice is running her algorithm on a very simple graph like this: just four vertices and five edges. What her simple algorithm, or brute force algorithm, was doing was considering all possible assignments of 0 and 1 to the vertices. For four vertices, we had a total of 16 possibilities that her algorithm needed to look at. Her algorithm starts out, for example, by saying, "Oh, let's put no vertices into the vertex cover." And of course this leaves all edges uncovered, so it's not even a valid solution. Then it would maybe continue by saying, "Oh, let's put one vertex into the vertex cover." There are actually four different ways of doing this, and at least this time we have some edges covered. Still, none of these solutions is a valid solution, because we still have uncovered edges--I've drawn them in red here so you can better see them. No valid solution for any of those. The algorithm continues and says, "Oh then, let's put two vertices into the vertex cover." Finally, the algorithm has found a valid vertex cover for our small network, but of course it has also looked at many, many vertex covers that are actually invalid, because not all edges are covered--so again, redundant, redundant, redundant.
And so the algorithm proceeds, and although it has already found a solution where two vertices are enough for the vertex cover, it now puts three vertices into the vertex cover, which has the advantage that it will always find a valid vertex cover, but the disadvantage that it's totally redundant and unnecessary work. Same thing if it puts all four into the vertex cover: valid solutions, but not the best possible ones. This is what your brute force algorithm is doing, and you can see why brute force is so stupid. First of all, it considers many solutions where you could have immediately said, "Look, this is not valid"--you've taken an edge, and neither of its two endpoints is in the vertex cover. Why even consider all the other possibilities? As soon as you have an edge that is not covered, you can basically scrap that solution and go looking for another one. And the other thing that her algorithm did that wasn't very smart was that once it had found a solution with two vertices, it still tried all the solutions with three vertices and four vertices. It could've stopped right here, because it knew that it would not be able to find any vertex cover that is smaller. And that is the difference between brute force and intelligent force. In brute force, we go through all possible assignments of the vertices into or out of the vertex cover, whereas what we're now going to do is avoid stupid choices. And that is the basis for intelligent force.
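The brute force algorithm just described can be sketched in a few lines. This is a hedged illustration, not Alice's actual code; the small example graph is an assumption chosen to mirror the four-vertex, five-edge network from the lecture:

```python
from itertools import product

# Brute force vertex cover: try every 0/1 assignment to the vertices
# and keep the smallest assignment that covers all edges.
def brute_force_vertex_cover(vertices, edges):
    best = None
    for assignment in product((0, 1), repeat=len(vertices)):
        cover = {v for v, bit in zip(vertices, assignment) if bit}
        # A valid cover touches at least one endpoint of every edge.
        if all(u in cover or v in cover for u, v in edges):
            if best is None or len(cover) < len(best):
                best = cover
    return best

# Illustrative graph: 4 vertices, 5 edges.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(brute_force_vertex_cover([0, 1, 2, 3], edges))  # a cover of size 2
```

Note that the loop blindly walks through all 2^n assignments, including ones that are obviously invalid and ones that are obviously too large--exactly the two kinds of stupidity the search tree will avoid.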
So the idea is to avoid stupid choices in the algorithm, and actually we can make Alice a little bit happier--not totally happy, but at least neutral--because we're now going to show her a better technique than brute force. That better technique is known as a search tree, and you will soon see why it's called a tree. If you were solving vertex cover by yourself, you would probably take the following approach. You would not decide for all vertices together if they are in the vertex cover or not, but rather you would look at a single vertex, say this one here, and then split into two possibilities. In one, you're going to say, "Yes, let's make this vertex part of the vertex cover," and in the other, you would say, "No, I do not want this vertex to be part of the vertex cover." And then you're going to split down further, so on this side here and on this side here, let's look at this vertex here next. We're just going to draw it a little bit smaller so that we don't run out of space. We already know that this vertex here is in the vertex cover, and we're now deciding for this vertex here on the left. So we'll have one case where we do put it into the vertex cover and another one where we don't. And then for each of these cases, again we can split into two possibilities, and this time we're going to look at the vertex down here. Same here: one case where we put it into the vertex cover and another case where we don't. Now here, for this case, we can already stop. We have found a solution of size 3--all edges are covered. We do not even need to consider the possibilities for this vertex here. Over here, we still have to consider two possibilities, so we still need to make a decision here on the right side: we can either put it into the vertex cover or not. And so this here is not a solution, because it's not valid.
And here again we have found a solution of size 3, or a vertex cover of size 3, so I'm just going to leave this check mark here and have it count for both. Now we still have to continue here for these four points where we left off earlier, and again we're going to do the same thing. So we're going to consider one possibility where this vertex down here is not in the vertex cover--and again we see, oops, this is invalid, so we do not need to continue any further. Next, we're going to put this vertex into the vertex cover, and yes, we have a valid solution of size 2. Continuing over here, we're going to consider this vertex here again. One possibility would be to not put it into the vertex cover, but that is already invalid. Another one would be to put it into the vertex cover, which again leads us to further possibilities to check, and we're going to check them for this one down here. So, same game: invalid solution over here, still valid over here, and we need to consider two more cases. You already know that this case here is going to be invalid, and over here we're finding a solution of size 3. So that's it. Originally, with brute force, we had 16 assignments of 0 and 1 to the vertices that we needed to check, and now this algorithm here only considered 9 assignments. Now, I told you at the beginning of this unit that constant factor speedups probably don't matter that much when we're dealing with exponential time algorithms, so we will have to do some further analysis on this one. And you may also have noticed that there are certain other techniques we could use for speedups--for example, here we had already found a solution of size 2, so we probably could've stopped our algorithm even a little bit earlier over here--but we're going to get into this. The main thing that I would like you to understand is the technique of the search tree, and we're now going to analyze this search tree further and try to improve it.
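The single-vertex search tree just walked through can be sketched as a short recursion. This is a hedged illustration under the same assumed example graph as before, not the course's reference implementation:

```python
# Search tree sketch: branch on one vertex at a time ("in the cover" / "not in
# the cover") and prune any branch where some edge has both endpoints excluded,
# since that edge can never be covered further down the tree.
def search_tree(vertices, edges, cover=frozenset(), out=frozenset()):
    if any(u in out and v in out for u, v in edges):
        return None                        # an uncoverable edge: prune here
    if all(u in cover or v in cover for u, v in edges):
        return cover                       # valid cover found: stop this branch
    undecided = [v for v in vertices if v not in cover and v not in out]
    if not undecided:
        return None
    v = undecided[0]
    branches = [search_tree(vertices, edges, cover | {v}, out),
                search_tree(vertices, edges, cover, out | {v})]
    valid = [b for b in branches if b is not None]
    return min(valid, key=len) if valid else None

# Same illustrative 4-vertex, 5-edge graph as before.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(search_tree([0, 1, 2, 3], edges))    # a smallest cover, size 2
```

The two pruning rules at the top are exactly the two stopping cases from the lecture: stop when a branch is already invalid, and stop a branch as soon as all edges are covered.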
And of course, I should also tell you why it is called a search tree. To show you this, I'll draw lines along where the algorithm was searching, and then we're going to rotate this picture by 180 degrees--and now here you can see a beautiful tree, with the solutions as the leaves, or the fruit, or whatever you like. This is why it's called a search tree: it's basically a tree that hangs upside down. I don't know what that terminology tells you about computer scientists, but that's not my concern here.
Using the search tree, for practical purposes, we may have already gained some efficiency. But now the question is: does it really affect the running time of the algorithm when we use O notation and worst-case running time, or is it just something that gives us some constant factor savings in practice? It's hard to tell right now, because this tree, as we've considered it, is highly dependent on the structure of the graph that we're analyzing, and we need to think a little bit more about this. One interesting thing to consider is the following: there were only two cases when we stopped further exploration in this tree here. One case was that we had found a valid vertex cover; the other was when we encountered an edge that could not be covered any more. Actually, doing it this way is not the smartest possible way, because for vertex cover we already know one thing. If you look at an edge such as this one here, then this edge has two endpoints, and we want to assign each endpoint to be in the vertex cover or not. So far we have looked at the vertices individually, but we could also look at both vertices at the same time--so not go into two different possibilities, but actually go into three different possibilities, because there are only three cases that make sense here when assigning the endpoints to be in the vertex cover or not. We know this edge needs to be covered somehow, and there are just three different possibilities of doing that: one is you take this endpoint here into the vertex cover, another is you take this vertex here into the vertex cover, or of course you can also take both. But you can ignore the case where you put neither of the two endpoints into the vertex cover, because then you already know that your solution doesn't make sense.
This, of course, also covers other edges as well, and now our algorithm can actually come to a solution very quickly, because when we look at this edge here, there's only one possible choice that remains, and that is taking this vertex here into the vertex cover. All edges are covered; we have a solution of size 2. So now let's do a worst-case analysis and say that this is the edge that the algorithm considers next. In all of the three cases, the edge is uncovered, so it again branches into three possibilities here, three possibilities here, and three possibilities here. Of course, if we had been lucky and had chosen this edge here, the algorithm would have had a much easier choice--and actually, we're later going to look at such optimizations for the algorithm. Again, we're going to branch into the three cases that make sense for that edge. So this one here would make sense for that edge, but of course it leads to an invalid solution; this one here is actually the best possible solution; and this one here is also a solution, but a larger one. And we're going to do the same thing over here. Now, what you might be thinking is, "Oh no! Now we're trying nine assignments." But the good thing is that we can now do a worst-case analysis of this.
I would like you to think about a few properties of constructing the search tree this way--so taking an edge that is not yet covered, where both endpoints are not part of the vertex cover, and then branching into 3 different possibilities: putting 1 endpoint into the vertex cover, putting the other endpoint into the vertex cover, or both. So will the search tree always find the smallest vertex cover? Of course it can also find larger ones, but will it always find the smallest possible one for any given graph? Or could there be special cases where the search tree is wrong and we still need to fix that? Is it the case that the search tree at each level--and by level I basically mean: if this is the tree here, then this would be a level, and this would be the next level--determines the assignment, so whether a vertex is in the vertex cover or not, for at least 1 vertex? And this one will require some thought, so this is a challenging one down here: can we construct the search tree so that at each level of the search tree we determine the assignment of at least 2 vertices? Of course you could just guess here, but I would really like you to spend some time and think about the answers, because this will be very important for you to really understand how you can build better search trees.
The algorithm is correct--or the search tree is correct--because it still tries all possibilities; it's just smarter about avoiding stupid ones. What I mean by this is basically that it deliberately avoids the case here where you have both endpoints of an edge not being in the vertex cover. Are there special cases where the algorithm could be wrong? No. It still searches through all possibilities; again, it just does so in an intelligent way to avoid making any stupid choices. Does the search tree at each level determine the assignment of at least 1 vertex? Yes, that one is definitely true, because if all edges were covered, then the algorithm would be done; the search tree would not go any further. The usual case is that we actually do an assignment for 2 vertices, and this also tells you something about the answer down here: usually, when we have an edge that is not covered, we determine the assignment for 2 vertices. There is 1 special case, and if you have found that special case yourself, then you have truly understood search trees--but if you haven't, then just listen closely and don't worry too much about it. There might be the following case, where you have an edge that is not yet covered, but you've already made the decision for 1 of the endpoints. The question is now what to do with this one here. But this is actually a case that is rather easy, because here we don't really have to make a choice anymore: we know this edge must be covered. So whenever we decide for 1 vertex that it is not going to be part of the vertex cover, then we can put all of its neighbors into the vertex cover automatically. We do not need to make a choice here anymore, and this is why we can construct the search tree so that at each level we determine the assignment of at least 2 vertices. Just to make this very clear, let me show you this again here in this example.
Let's go back 1 level. When we made the choice here to not put this vertex into the vertex cover, we could immediately also have said that this vertex here must be part of the vertex cover because otherwise this edge here cannot be covered and we would already have been done. And the same thing up here. So if we decide that this vertex is not part of the vertex cover, then we immediately know that this vertex must come into the vertex cover because otherwise this edge here would not be covered, and we also know because of this edge here that we have to put this vertex here into the vertex cover, so we are also done, having found a slightly larger solution. The only thing where we still need to make a decision is this part down here. There we can still go into 3 cases, either putting that one into the vertex cover, that one, or both. And this is of course something that an algorithm can do as well. So actually, we get down to 5 assignments here, which already makes this search tree better than the one we started out with. But now comes the really cool part because once we know that we can construct the search tree so that at each level we determine the assignment of at least 2 vertices, we can do a pretty, pretty cool worst case analysis of that search tree. So let me show you.
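The branching rule just described can be sketched in code. The following is a minimal Python sketch, not the exact algorithm from the course: it assumes the graph is given as a set of edges (each a frozenset of 2 vertices), and the names `vertex_cover_search`, `in_cover`, and `out_of_cover` are my own.

```python
def vertex_cover_search(edges, in_cover, out_of_cover):
    """Bounded search tree for minimum vertex cover (a sketch).

    edges:        set of frozenset({u, v}) pairs
    in_cover:     vertices already decided to be in the cover
    out_of_cover: vertices already decided to be out
    Returns the smallest cover found, or None on a contradiction.
    """
    in_cover, out_of_cover = set(in_cover), set(out_of_cover)
    # Forced moves: if one endpoint of an edge is out, the other must be in.
    changed = True
    while changed:
        changed = False
        for e in edges:
            u, v = tuple(e)
            if u in out_of_cover and v not in in_cover:
                if v in out_of_cover:
                    return None  # both endpoints out: edge cannot be covered
                in_cover.add(v)
                changed = True
            elif v in out_of_cover and u not in in_cover:
                in_cover.add(u)
                changed = True
    # Find an edge whose endpoints are both still undecided and branch.
    for e in edges:
        u, v = tuple(e)
        if (u not in in_cover and u not in out_of_cover
                and v not in in_cover and v not in out_of_cover):
            # 3 possibilities: u in / v out, v in / u out, or both in.
            best = None
            for add_in, add_out in (({u}, {v}), ({v}, {u}), ({u, v}, set())):
                result = vertex_cover_search(edges, in_cover | add_in,
                                             out_of_cover | add_out)
                if result is not None and (best is None or len(result) < len(best)):
                    best = result
            return best
    return set(in_cover)  # every edge is covered
```

Note how the forced-move loop is exactly the special case from above: once a vertex is declared out, all of its neighbors go into the cover automatically, so every level of the tree decides at least 2 vertices.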
We started out with a brute force algorithm, and the brute force algorithm basically considered 2 to the power of N assignments. Now let's look at our smart search tree or intelligent search tree and see how that is structured. We're going to do a worst case analysis here. We start out with a graph where we don't have any assignments, and then we branch into 3 different possibilities. Up here we don't know anything, so we have 0 vertices assigned. Now, since we know that we can construct it so that in the next level we will have at least 2 more vertices assigned, here we know that we have at least 2 vertices assigned. We can go 1 level deeper, and again we will branch into 3 possibilities each time. Of course we could already be done, but because we're doing a worst case analysis, let's say we just continue. In the next level we have again at least 2 more, so we have at least 4 vertices assigned and so on. So let's say we continue the search tree in this manner, so it will always get wider and wider and wider. Then we know 2 things. One is that the number of levels that we have can be, at most, N/2 because every time we assign at least 2 vertices. So N/2 levels is the maximum number of levels that we can have because after that, all vertices have been assigned. And the other thing that we know is that we start out with 1 possibility, and each time we go 1 level deeper, again doing a worst case analysis here, the number of possibilities that we consider triples. The tree basically gets wider by a factor of 3 at each level. My question for you is if you look at the lowest level of the search tree, each of these ones down here is an assignment of 0 and 1 to the vertices. What I would like you to think about is how many different assignments do we have at this level down here, level N/2? You can assume for simplicity that N is an even number, so N/2 will be some integer. I'll give you a number of choices.
Is it 2 to the power of N assignments that we have down here? Is it 3 to the power of N/2 assignments that we have down here? Is it 3 to the power of N? Is it 2 to the power of 3 times N? Or is it 2 to the power of N/2 times 3? You might have to think about this for a little bit, but just keep in mind the facts: the number of assignments that we are considering triples at each level, and we have N/2 levels.
The correct answer here is 3 to the power of N/2 because if you look at this tree, it starts out here at the topmost level, level 0, where it's considering just 1 possibility. At level 1 it's considering 3 possible assignments, so the number triples each time. At level 2 it's considering 9. At level 3 it will be considering 27 and so on and so on and so on. At level X it's considering 3 to the power of X different possibilities. The maximum level is N/2, so it's 3 to the power of N/2. So the correct solution here is 3 to the power of N/2.
So, wait. Brute force, 2 to the power of N, and now we have a 3 in the base of the exponent? Well, we probably need to do a little calculation here. So what I would like you to tell me is what 3 to the power of N/2 can actually also be expressed as. So 3 to the power of N/2 is a certain number to the power of N, and I would like you to enter this number with 3 digits after the decimal and please round it up.
The answer here is that 3 to the power of N/2 is the same as (3 to the power of ½) to the power of N, which is the same as the square root of 3 to the power of N, which, if you use a calculator, is about 1.733 to the power of N once you round up. Instead of 2 to the power of N assignments, we now only have to check 1.733 to the power of N. We have just squashed the exponent of our algorithm by using a smart search tree instead of brute force.
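If you want to verify the rewriting 3^(N/2) = (√3)^N numerically, a couple of lines of Python will do (this is just my own illustration of the calculation):

```python
import math

base = 3 ** 0.5                            # 3^(1/2) = 1.7320508...
rounded_up = math.ceil(base * 1000) / 1000 # round UP to 3 decimals
print(rounded_up)                          # 1.733

# Sanity check: 3^(N/2) really equals (3^(1/2))^N, e.g. for N = 10
n = 10
assert math.isclose(3 ** (n / 2), base ** n)
```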
You might not be too impressed here. So 2 to the power of N, 1.733 to the power of N, hmm, doesn't sound too good. But there are 2 things to keep in mind here. First of all, this is a worst case analysis, so in practice we might actually be doing better. And the second thing is that even in a worst case analysis, because we are dealing with exponential time, this improvement here is actually huge. So let's say that we're looking at an input of size 50. Then 2 to the power of N assignments would be about 1.13 times 10 to the power of 15. So this is almost infeasible. Maybe with a very, very, very, very, very powerful computer, but you probably wouldn't want to do this and probably also wouldn't want to do this for more than a single graph. If, however, you have the smarter search tree, then you have 1.733 to the power of N, which is 1.733 to the power of 50, and that is about 8.71 times 10 to the power of 11. And the difference here is a speed-up of over 1000. That speed-up will become larger as N grows larger. And the other thing is again that this down here is only a worst case analysis. In many practical cases, the search tree will be much smaller because first of all, as we saw, it's often the case that you assign more than just 2 vertices, and in that case the tree doesn't become as tall. And also you can apply various other optimizations, some of which we're soon going to talk about, to make the search tree even smaller, both in practical cases and in the worst case. Now, should Alice be happy about this? Probably not yet, because already for N equal to 50, the running time of the algorithm here gets rather large. So if she is solving the problem for a telecommunications network with, say, 500 vertices, it will still be out of the question to find an optimal solution using the search tree. But of course there's a little hope already because we haven't really fully explored search trees yet. Maybe it's going to get even better for her.
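As a quick sanity check of these numbers--again just my own back-of-the-envelope computation--you can let Python do the arithmetic for N = 50:

```python
brute = 2 ** 50        # brute force: about 1.13e15 assignments
smart = 1.733 ** 50    # smart search tree: about 8.71e11 assignments
speedup = brute / smart

# The ratio comes out to roughly 1300, i.e. a speed-up of over 1000.
print(f"{brute:.2e} vs {smart:.2e}, speed-up ~ {speedup:.0f}")
```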
But before we continue helping Alice, let's have a look at Bob and Carol and see if we can apply smart search trees to their problems as well: independent set and clique.
So let's see now if we can design a better search tree for Bob and Carol as well so that they also can look happy. As you'll remember, Bob was working on the clique problem, finding a set of vertices in a graph that are all connected to each other, so a set something like this. All vertices are connected to all other vertices. And Carol was looking for the opposite thing, so a number of vertices that were not connected to each other at all in a network. So I'm going to draw this dashed line here to show that there's no connection. We're actually just going to look at independent set because, as you'll remember, clique and independent set are so closely connected that once you have an algorithm for independent set, it's not that difficult to design a very similar algorithm for clique. You can basically use the same techniques. So in order to design a good search tree for Carol, how do we do this? Actually, it's quite similar to vertex cover. Remember how we defined an independent set: in an independent set, no 2 vertices can be connected to each other. We can actually play the same game that we played for vertex cover in a search tree. So let's say you have 2 vertices, and we're going to call this one here v and this one u, and the 2 vertices are connected by an edge, and we're now trying to assign values of 0 and 1 to those vertices. So 1 and 0 here, 1 and 0 here, same as for vertex cover, only this time we are looking at assignments that would be valid for independent set as opposed to vertex cover.
Let's do a little quiz here. What is true for any valid--and by valid I mean with respect to independent set-- 0, 1 assignment to the vertices v and u if they are connected by an edge? Is it that, at most, 1 of the 2 vertices can be in the independent set-- and I'm going to write independent set as IS here so as not to run off the screen. Is it that both v and u can be in the independent set? Is it that at least 1 of the 2 vertices must be in the independent set because we're looking for the largest possible independent set? Could it be that although we're looking for the largest possible independent set, they are both not contained in that independent set? And finally here, because both vertices are connected by an edge, is it the case that we would have to exclude both of these vertices from the independent set? Please check each one of those here that are correct.
The first answer I hope wasn't that difficult to find. This one is true. The rule of an independent set is that 2 vertices cannot both be part of the independent set if they are connected by an edge, and that means if we were to put this one into the independent set as well as this one here, then we would have an error here, which automatically also answers the second question. It's not the case that both vertices can be in the independent set, because they are connected to each other. This one here might have been a little bit more tricky. So at least 1 of the 2 vertices must be in the independent set. You might be inclined to think that, because we are looking for a maximum size independent set. But actually, that is not the case, and I will give you a little example for that. Say we have a very small network like this one here, and here you have v, here you have u. Then this network here has an independent set of size 2. Actually, it has a number of independent sets of size 2, but there is also 1 independent set of size 2 where v and u are both excluded. This is a bit different from vertex cover because you can actually have the case that both vertices are not part of the set even though they are connected by an edge. So this is false. Both v and u could be 0. Yes, this is exactly what is also shown by this example. You can have the case that both are not part of the independent set, so this is definitely a yes here. Do we have to set both vertices to 0? No. This is also not the case because, for example, you could also have an independent set here that you construct in this way. You put u in and this one out. So now we have also found a maximum size independent set, but we have included 1 of the 2 vertices. So this one here is clearly also not true. If you take all of those observations together, it's pretty cool because this now gives us a new search tree strategy that is actually quite similar to vertex cover. So we have our 2 vertices, v and u, here.
Just as we did with vertex cover, we can branch into 3 different possibilities. The first one is we take v into the independent set but not u. The second one is exactly the other way around: we do not take v, but we take u. And finally, it is possible that both are excluded. As long as we find an edge between 2 vertices, v and u, for which neither v nor u has an assignment yet, we can branch into exactly 3 possibilities. Now, what happens if either v or u already has an assignment? For vertex cover I told you what would happen then. This time I will let you figure this out.
So how can we deal with the case where either v or u already has an assignment? Do we have to modify the search tree for that? Do we have to initiate a brute force search in that case? Or is it the case that we do not need to worry at all because it's actually quite obvious what to do with those vertices? Please select whichever you think is correct.
The correct answer here is that, similar to vertex cover, those cases are actually the easy ones. And I'll show you why. Either v or u already has an assignment, and we can just consider 1 of those cases. So let's say v already has an assignment, here is u, and there are 2 cases. Either we said yes, v is in the independent set, or we said no, v is not in the independent set. This case up here is very easy to see because if v is in the independent set and there's an edge here between those vertices, then u cannot be in the independent set. So that one is very clear. What about the case down here? That's a case that requires a little more thought. By itself right now, it could either be that u is in the independent set or it's not. And we don't know, so you might have been inclined to think that we need to modify the search tree or initiate some other form of brute force search, but actually, that's not necessary, and I will show you why. For this vertex u here, there are 2 possibilities. If u is not connected to any other vertices-- so let's say all other vertices in the network are independent of u-- then we can put u into the independent set without having to think because we're looking for a maximum size independent set. This is the only connection that u has to other vertices, so why not just take it? It doesn't really cause any other conflicts. Now, what if u is connected to other vertices? Here there are 3 different possibilities. The first possibility is that u has some neighboring vertex-- so one that is connected by an edge-- that we have already assigned to be in the independent set. And in that case we can immediately say no, u cannot be in the independent set. Otherwise, we can assume that none of these vertices here is already in the independent set, and then there are 2 different cases. One is that 1 of these vertices here has not yet received an assignment.
And in that case we don't have to worry about modifying the search tree because now we're looking at a case like this here again. So we can use our standard search tree to search through assignments for these 2 vertices here. If, on the other hand, every vertex here already has an assignment, then, because we've just dealt with the other case, all of these assignments here must be 0. So all of these vertices here--and there can be more, of course-- are not part of the independent set, and that means that we can take u into the independent set again. It might not have been obvious at first what to do in this case, but we can design the algorithm in a way so that it always knows what to do. First of all, it looks for edges where both vertices do not have an assignment. And once it cannot find such edges anymore, it will know how to deal with the remaining vertices. And this of course means that for independent set we can use a search tree that, just like the one for vertex cover, always finds an assignment for at least 2 vertices. So the height of the search tree is, so to speak, N/2. The size of the layers multiplies by a factor of 3 at each level. So the size of the overall tree is 3 to the power of N/2. For independent set we have a search tree of size 1.733 to the power of N, just like vertex cover; the calculation here is the same. And because independent set and clique are so similar--we just have to transform the network to the inverse network, as you hopefully remember from the first unit-- we also have a search tree of size 1.733 to the power of N for clique. Just like Alice, there's no reason to be super happy, but Bob and Carol can be a little happier now. And of course we're going to go on investigating.
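Putting the pieces together, here is a minimal Python sketch of such a search tree for independent set. It is my own illustration, not the course's reference implementation: the graph is assumed to be given as an adjacency dictionary, and the names `max_independent_set`, `in_set`, and `out_set` are hypothetical.

```python
def max_independent_set(adj, in_set=frozenset(), out_set=frozenset()):
    """Bounded search tree for maximum independent set (a sketch).

    adj: dict mapping each vertex to the set of its neighbors.
    Returns the largest independent set found, as a set of vertices.
    """
    in_set, out_set = set(in_set), set(out_set)
    # Forced moves: a neighbor of an in-vertex can never join the set.
    for v in in_set:
        out_set |= adj[v]
    # Look for an edge whose endpoints are both still undecided.
    for v in adj:
        if v in in_set or v in out_set:
            continue
        for u in adj[v]:
            if u not in in_set and u not in out_set:
                # Branch into the 3 valid assignments for edge (v, u):
                # v in / u out, u in / v out, or both out.
                candidates = [
                    max_independent_set(adj, in_set | {v}, out_set | {u}),
                    max_independent_set(adj, in_set | {u}, out_set | {v}),
                    max_independent_set(adj, in_set, out_set | {v, u}),
                ]
                return max(candidates, key=len)
    # No such edge left: every remaining undecided vertex has only
    # decided neighbors, none of which is in the set, so take them all.
    undecided = set(adj) - in_set - out_set
    return in_set | undecided
```

The final step is exactly the observation from above: once no edge with 2 undecided endpoints remains, any still-undecided vertex has all of its neighbors excluded, so it can go into the independent set without branching.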
So far we have only been able to touch on the very basic concepts of designing intelligent search trees, and research to date is much more advanced. So you might be wondering, considering the state of the art, how low can you go? What are the best search tree algorithms to date that we know of? Actually, I'm not going to tell you anything about vertex cover for now because that problem will deserve a little more attention later in this unit. I will show you the state of the art for some other problems. So for independent set there are currently 2 algorithms that could kind of be considered to have the smallest search trees. One has size 1.189 to the power of N, and the other one has size 1.211 to the power of N. They are 2 different algorithms. This one here has a somewhat larger search tree, but the algorithm itself is probably a little bit faster in the worst case than this one here because of the polynomial factors that are involved. But this here is just the search tree size. For clique it's the same as independent set, as always, because of the close connectedness of those 2 problems. In one of the previous units you also got to know 3 SAT, which is a satisfiability problem where every clause has exactly 3 variables, and that can be solved with a search tree of size 1.496 to the power of N, last time I checked. This is a very advanced search tree; it basically requires half a book of proofs to show this here. Now, what about shortest tour or, more generally, the traveling salesman problem, which is shortest tour, except that you must visit each city exactly once? This is a very interesting one because the best-- well, it's not even really a search tree--but the best algorithm has an exponential factor of 2 to the power of N. So here it's still an open research problem to find out if you can design better search algorithms for this problem.
By the way, in case you're wondering, do the laws of NP completeness, as far as we know them, say anything about how low the exponent can go, so really put a limit on this space here? Say, for example, unless P equals NP, there is no algorithm for independent set with a search tree that is smaller than 1.1 to the power of N or something like that. And the answer here is we don't know. So it might be that you can solve independent set, for example, with an algorithm that has 1.01 to the power of N as its search tree size. There is nothing in the laws of NP completeness that would speak against that.
So what does this mean for the problem sizes that we can expect to handle? I'm going to give you a little quiz here so that you can develop a feeling for that. Assume that we want to consider, at most, 1 billion possible solutions for an NP complete problem, 1 billion different assignments of 0 and 1 to the vertices. So what I would like you to do is to figure out the maximum problem size that we can handle using these best known search trees under this condition here. We want to consider, at most, 1 billion different solutions. So please enter your answer here in these boxes. What I would like you to do for this box here is use this algorithm. For this box here I would like you to use this one here. And for comparison I would like you to figure out 2 more. One is 1.1 to the power of N, and one is 2 to the power of N. So there's actually 5 different answers here.
The general way to calculate this is using logarithms. So you say 2 to the power of N, for example, must be smaller than 1 billion, which means that N must be smaller than the logarithm base 2 of 1 billion, so N must be smaller than or equal to 29, which is exactly what we're going to put here. And of course you can do the same calculation for the other ones as well. So you get that here the maximum problem size we can handle would be N equals 217. Here it's 119, 108, and 51. The interesting thing to see here, I think, is how even small changes in the base of the exponent can make a difference. So here the change is really small, from 1.211 down to 1.189, and already we can handle a problem size of 10 more. And of course this also shows you that the problem size that you can handle, at least in a worst case scenario, is about 4 times as much as you could handle if you used simple brute force. And if you could find a really, really good search tree-- and there are some problems for which you can come close to 1.1, actually-- then you can handle problem sizes that are almost twice as much as this one over here. So it's really a significant improvement. It's not dramatic in the sense that now we could solve instance sizes of thousands of vertices or thousands of variables, but it's a significant improvement through using sophisticated algorithms.
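You can reproduce these maximum problem sizes with a short computation (my own illustration); the rule is N ≤ log(10^9) / log(base):

```python
import math

budget = 10 ** 9  # consider at most 1 billion assignments
for base in (2, 1.496, 1.211, 1.189, 1.1):
    max_n = math.floor(math.log(budget) / math.log(base))
    print(f"base {base}: N <= {max_n}")
    # base 2: 29, base 1.496: 51, base 1.211: 108,
    # base 1.189: 119, base 1.1: 217
```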
What I already mentioned to you is that this is of course just a worst case view. So this is what you just calculated here if we were to consider a billion different possible solutions. And the numbers would of course get much higher if we said we are ready to consider 10 billion or 100 billion possible solutions, but still they would not be in the thousands. And that's actually the surprising thing, because if we look at how well those algorithms perform in practice, they perform dramatically better. So for independent set and clique you can usually solve instances far over 1000 vertices, and for 3 SAT there are even annual competitions in which participants solve instances with, I would say, almost tens of thousands of variables. This varies by competition, so there are some competitions where it's in the hundreds or in the thousands, but you can solve instances with up to 10000 variables. And the reason for that is that it seems like most of the instances or inputs to NP complete problems that we encounter in practice are, in a way, well behaved. They rarely tend to be worst case instances that force our algorithms to run in their maximum time. And besides search trees, there are also other very interesting techniques that you can use to further improve the running time of your algorithms. I would now like to introduce to you preprocessing as another technique for dealing with NP completeness.