vladimirVenkov

Algorithm strategies:

Jun 25th, 2018
Brute force: this strategy involves going through every combination/route/element to find the optimal solution. It will always work, but it can take a lot of time and computation. Example: a primitive search through an array for a specific value; the program looks at every element until it finds the value. In the worst case, this example has a complexity of O(n), where "n" is the length of the array.

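A minimal Python sketch of this brute-force search (the function name is my own):

```python
def linear_search(values, target):
    """Brute force: inspect every element until the target is found."""
    for i, v in enumerate(values):
        if v == target:
            return i  # index of the first match
    return -1  # target is not in the array

print(linear_search([7, 3, 9, 3, 5], 9))  # prints 2
```

In the worst case (target absent or in the last position), the loop visits all n elements, which is the O(n) bound mentioned above.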
Backtracking: this strategy is similar to brute force, but smarter: as soon as it realises a partial 'candidate' cannot lead to a valid solution, it 'backtracks' and abandons it. Example: consider the "8 Queens Puzzle", which asks you to place 8 queens on an 8x8 chessboard so that none of the queens threaten each other. A common backtracking approach checks whether the entry at row 0, column 0 (0x0) of the chessboard is 'safe' for a queen, which it is, since there are no other queens in play. The queen is placed, the column variable is incremented by 1, and the row variable is reset to 0. The code then checks whether it is safe to place a queen at the 0x1 entry, which it isn't (the previous queen threatens this square), so the row is incremented by 1. The 1x1 entry is also unsafe, so the row is incremented again. The 2x1 entry is safe, so a queen is placed there, the column is incremented by 1 (making it 2), and the row variable is reset to 0. The algorithm then checks whether the 0x2 entry is safe, and so on. If the row variable runs past the last row of a column without finding a safe square (and there are not yet 8 queens in play), the algorithm backtracks: it removes the queen that was last placed on the board, sets the row variable to that queen's row + 1, and sets the column variable back to that queen's column. This process continues until there are 8 queens on the board, and the program terminates.

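The column-by-column placement and backtracking described above can be sketched in Python like this (a minimal version; function names are my own):

```python
def solve_queens(n=8):
    """Place n queens column by column; backtrack when a column has no safe row."""
    positions = []  # positions[c] = row of the queen placed in column c

    def safe(row, col):
        for c, r in enumerate(positions):
            # unsafe if an earlier queen shares the row or a diagonal
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def place(col):
        if col == n:
            return True  # all n queens are on the board
        for row in range(n):  # try rows 0..n-1, as in the walkthrough
            if safe(row, col):
                positions.append(row)
                if place(col + 1):
                    return True
                positions.pop()  # backtrack: remove the last queen placed
        return False  # no safe row in this column; caller must backtrack

    place(0)
    return positions

print(solve_queens())  # one valid arrangement, as rows indexed per column
```

The explicit row/column counters from the walkthrough are replaced here by the recursion: returning `False` from `place` plays the role of "remove the last queen and try that queen's row + 1".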
Another example for backtracking: say you have an nxn matrix, and the task is to find the shortest path from 0,0 to n,n. This can be done with recursion. The function takes parameters "r", "c", and "n", which represent the row (starting at 0), the column (starting at 0), and the size of the nxn matrix. The base case is when "r" and "c" both equal "n", meaning the algorithm has reached the end of the matrix and found a path; the number of steps can then be returned and the call terminates. A second check then tests whether "r" or "c" is greater than "n", and if so, the function returns (the path has stepped outside the matrix). If neither the base case nor the bounds check is satisfied, a "step" counter is incremented by 1, and the function calls itself with parameters "r", "c" + 1, and "n". Once "c" exceeds "n", the bounds check is satisfied, which makes the function return once (and backtrack), so that a second recursive call below the first one can run: it calls the function with parameters "r" + 1, "c", and "n".

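A small Python sketch of this recursive right/down exploration (note that with only right and down moves every complete path takes exactly 2n steps, so this mainly illustrates the backtracking mechanics):

```python
def shortest_path(n):
    """Recursively explore right/down moves from (0, 0) to (n, n),
    backtracking whenever a move leaves the matrix."""
    best = [float("inf")]  # shortest step count found so far

    def walk(r, c, steps):
        if r == n and c == n:
            best[0] = min(best[0], steps)  # base case: reached the goal
            return
        if r > n or c > n:
            return  # bounds check: stepped outside, so backtrack
        walk(r, c + 1, steps + 1)  # first recursive call: move right
        walk(r + 1, c, steps + 1)  # second recursive call: move down

    walk(0, 0, 0)
    return best[0]

print(shortest_path(3))  # prints 6, i.e. 2n steps for n = 3
```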
Backtracking should be used when the path to the solution is more important than the solution itself, i.e. when there are multiple possible solutions to a given problem.

Branch and bound: in this problem, we are given four positions which are all connected to each other (1, 2, 3, 4). However, the distance between each pair of positions is different, and we are tasked with finding the shortest path from 1 back to 1 (traversing all positions).

Consider the image below: https://i.gyazo.com/e779472f9bfc2b40f05d035966872308.png

This is how the problem can be solved with the brute force method: all possible paths are generated, and the path with the smallest total length is chosen. This works, but perhaps we are exploring paths we do not need to explore?

This is another approach using the branch and bound method: https://i.gyazo.com/939d6f3a3b132c9b5a1ffaa2bf02673c.png

In this example, the program finds a complete path, calculates its length, and stores this length as a bound. While the next path is being calculated, every time the program traverses a position it checks the current length of the partial path. If this is already greater than the bound, the program stops extending the path, since the finished path could only end up longer than the bound, never shorter. If the program finds a complete path shorter than the bound, the bound is updated. The end result is that the problem is solved in fewer steps than brute force, which saves time, and produces a tree which, when compared to the tree of the brute force method, looks as if some of its 'branches' have been cut!

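Since the linked image is not reproduced here, the sketch below uses made-up distances between positions 1–4 purely for illustration; the pruning logic is the point:

```python
# Hypothetical symmetric distances between the four positions (illustrative only).
dist = {
    (1, 2): 10, (1, 3): 15, (1, 4): 20,
    (2, 3): 35, (2, 4): 25, (3, 4): 30,
}

def d(a, b):
    return dist[(min(a, b), max(a, b))]

def branch_and_bound():
    best = [float("inf")]  # length of the best complete tour found so far

    def extend(path, length):
        if length >= best[0]:
            return  # prune: this partial path can never beat the bound
        if len(path) == 4:
            best[0] = min(best[0], length + d(path[-1], 1))  # return to 1
            return
        for nxt in (2, 3, 4):
            if nxt not in path:
                extend(path + [nxt], length + d(path[-1], nxt))

    extend([1], 0)
    return best[0]

print(branch_and_bound())  # prints 80 for these made-up distances
```

With the distances above, the shortest round trip is 1 → 2 → 4 → 3 → 1 (10 + 25 + 30 + 15 = 80); branches whose partial length already reaches 80 are cut without being finished.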
NOTE: a bound (a lower bound, or an upper bound when we have to find the maximum of something) may also be calculated before the program starts running the traversals, to reduce the running time further.

A downside to branch and bound is that, like brute force, it can still be slow depending on the size of the data.

Divide and conquer: this is where we look at the problem, break it up into smaller subproblems, then combine the solutions of these pieces together. This strategy can only be used if the problem is easy to divide into pieces. Divide and conquer strategies often use recursion (as in the two given examples), and this may lead to a huge recursion stack depending on the size of the problem and how far it is broken down, making it very inefficient in some cases. Examples: merge sort and quick sort!

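Merge sort, the first of the two examples, can be sketched like this: divide the list in half, sort each half recursively, then combine the sorted halves.

```python
def merge_sort(values):
    """Divide the list in half, sort each half recursively, merge the results."""
    if len(values) <= 1:
        return values  # a list of 0 or 1 elements is already sorted
    mid = len(values) // 2
    left = merge_sort(values[:mid])    # conquer the left half
    right = merge_sort(values[mid:])   # conquer the right half
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # combine: merge two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whichever half has leftovers

print(merge_sort([5, 2, 8, 1, 9, 3]))  # prints [1, 2, 3, 5, 8, 9]
```

The recursion depth here is about log2(n), which is modest; the "huge recursion stack" concern applies more to divide-and-conquer schemes that only shrink the problem by one element per call.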
Transform and conquer: essentially the process of transforming a problem into a more easily solvable one! For example, finding the upper/lower bound of a program's complexity can be considered a method of transform and conquer, as you are modifying the problem to make the complexity easy to find. Another example is finding a number in a list. A brute force strategy can already solve this by looking at every value, but in the worst case it would need to look at every value. Instead, we can transform the problem by sorting the array and applying a binary search, reducing the run time! The main caveat of transform and conquer is that the transformation itself has a cost (here, the sort), and some transformations replace the problem with an approximation of it, which makes them unsuitable when an exact solution is specifically required.

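A sketch of the sort-then-binary-search transformation, using Python's standard `bisect` module for the binary search (the function name `contains` is my own):

```python
from bisect import bisect_left

def contains(values, target):
    """Transform the problem: sort once, then binary-search in O(log n)."""
    ordered = sorted(values)           # the transformation step
    i = bisect_left(ordered, target)   # binary search for the insertion point
    return i < len(ordered) and ordered[i] == target

print(contains([42, 7, 19, 3, 88], 19))  # prints True
print(contains([42, 7, 19, 3, 88], 20))  # prints False
```

Note that this only pays off when many lookups are made against the same data: one lookup on unsorted data is still cheaper with brute force, since the sort itself costs O(n log n).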
Dynamic programming: consider computing the nth Fibonacci number for input "n" (the Fibonacci numbers are the following sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, etc.). The traditional approach uses recursion: a function takes input "n" and calculates the nth Fibonacci number (if n = 4, for example, the output should be 3). The base case checks whether "n" is less than or equal to 1, and if it is, "n" is returned. If the base case is not met, the function returns the result of calling itself with parameter n - 1 plus the result of calling itself with parameter n - 2. While this function works, it re-computes Fibonacci numbers unnecessarily. Consider the call tree:
fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)
│       ├── fib(1)
│       └── fib(0)
└── fib(3)
    ├── fib(2)
    │   ├── fib(1)
    │   └── fib(0)
    └── fib(1)

Note how many times it computes fib(3), fib(2), fib(1) and fib(0). This is very inefficient, and dynamic programming gives us a way to fix it.

A vector or a list can store the values of already computed Fibonacci numbers: once a Fibonacci value is calculated, it is stored in the array. The next time it is needed, instead of computing the value again recursively, the corresponding value from the array is used in its place, which saves computation time and makes the program more efficient!

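A minimal memoized version in Python (here a dictionary plays the role of the array of stored values):

```python
_memo = {0: 0, 1: 1}  # base cases: fib(0) = 0, fib(1) = 1

def fib(n):
    """Return the nth Fibonacci number, caching each value once computed."""
    if n not in _memo:
        # computed at most once per n; later calls hit the cache instead
        _memo[n] = fib(n - 1) + fib(n - 2)
    return _memo[n]

print(fib(5))   # prints 5
print(fib(50))  # prints 12586269025, instantly thanks to the cache
```

Each fib(k) is now computed exactly once, so the running time drops from exponential to linear in n. (Python's standard `functools.lru_cache` decorator achieves the same effect without a hand-rolled dictionary.)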
Another example could involve finding the factorial of a number!

NOTE: whenever a greedy algorithm is applicable, it will always be faster than the dynamic programming algorithm.

Greedy algorithms: essentially, when we want to find an optimal solution to a problem that involves a series of choices, a greedy algorithm always picks the best-looking choice at each step of the choice tree. For example, suppose we have a number of 20, 10, and 5 dollar coins, and we wish to use the fewest coins that sum to 35 dollars. At each step, the algorithm picks a coin and adds it to the running sum, always choosing the largest coin whose value does not exceed 35 minus the current sum. The current sum is 0, so 20 is added, making the sum 20, and the coin counter is incremented. In the next step, the algorithm cannot pick 20 anymore, as this would exceed 35 when added to the 20 from the previous step, so it picks the second largest, 10. In the final step, it can only pick 5. When the greedy algorithm works, it is extremely fast. However, it won't work for most problems.

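The coin example above can be sketched like this (a minimal version; the function name is my own):

```python
def greedy_change(coins, amount):
    """At each step, take the largest coin that still fits in the remainder."""
    coins = sorted(coins, reverse=True)  # consider largest coins first
    total, used = 0, []
    while total < amount:
        for coin in coins:
            if total + coin <= amount:
                used.append(coin)  # greedy choice: biggest coin that fits
                total += coin
                break
        else:
            return None  # no coin fits the remainder: greedy fails here
    return used

print(greedy_change([20, 10, 5], 35))  # prints [20, 10, 5]
```

This denomination set happens to be "greedy-friendly"; with coins like [25, 10, 1] and amount 30, the greedy answer (25 + 1×5 coins) is worse than the optimal 10 + 10 + 10, which illustrates why greedy doesn't work for most problems.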
Heuristic algorithms: essentially involve finding a 'shortcut' for solving a problem. A heuristic will not always find the exact solution, but it gives an approximate solution extremely quickly. Heuristics are used when a problem must be solved quickly, or when no other method works. One example is making a problem easier to solve by transforming it or relaxing the problem's demands (much like in transform and conquer); by making it easier to solve, it can be solved more quickly.