- PROBLEM SOLVING BY SEARCHING
- - Search is a core tool in AI: a systematic way to explore a solution space
- ex. ... y= 2x + 5
- Solution space (1,7) (0,5) ... etc..
- Solution is a pair of numbers (x, y)
- Solution space has an infinite amount of candidates... so ...
- The search algorithm's job in AI is to find a correct solution
- (A shitty algorithm will check every single option.. so a good search alg is needed)
- - 2 Types of search
- - Blind Search
- Uninformed Search algorithm
- - Heuristic Search
- Informed Search Algorithm
- Which do you use?
- Analyze the problem to see if you have additional information...
- (no additional information) --> Blind
- This doesn't mean check every option.. just no additional information
- (Additional information) --> Heuristic
- You're given a map with a bunch of cities and roads
- Want to make a driving plan to get from 1 city -> last with shortest path
- If you have a map.. you can make an informed guess as to what the ideal method is
- This additional info makes for a more efficient search
- Blind would be ''any path'' Heuristic is ''optimal path''
- We're gonna do a lot of tree and graph searches
- -but mainly trees.. Chang likes trees
- TERMS:
- Root - Node w/ no Parents
- Leaf - Node w/ no Children
- Internal Node - Node that !(leaf)
- Graph - Tree with Loops
- A Graph can have multiple parents to a node (where as a tree is 0 or 1)
- Directed Graph vs. Undirected Graph
- A---->B directed is one way
- A-----B undirected can go back and forth
- 99% of any real world problem can be modeled as a graph
- Path - "Action plan to achieve the desired goal"
- If you have a problem to solve... we "REDUCE" problem into a search problem
- To do that.. you need..
- STATES - (states have to be complete)
- State should represent all and only the relevant aspects of a problem
- ex. Finding cheapest flight plan (Harrisburg -> NYC)
- - Knowing the city of the airport (not enough info yet)
- - Address of the airport (too little)
- - Designator (code) of the airport (now you have enough info)
- ACTIONS - (next state is determined by action and each action is discrete)
- (discrete, it's an independent action)
- GOAL TEST - (can have multiple goals)
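- [not from the lecture: a minimal Python sketch of the states/actions/goal-test reduction; the route-finding names are made up for illustration]

    # A search problem reduced to STATES / ACTIONS / GOAL TEST.
    class RouteProblem:
        def __init__(self, graph, start, goal):
            self.graph = graph      # dict: state -> list of neighbor states
            self.start = start      # start state S
            self.goal = goal        # could be a set if there are multiple goals

        def actions(self, state):
            # ACTIONS: each discrete action leads to one next state
            return self.graph.get(state, [])

        def is_goal(self, state):
            # GOAL TEST (generalize to "state in self.goals" for multiple goals)
            return state == self.goal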
- Graph Search ---REDUCED---> Tree search
- From a tree, you can create a search tree
- In a search tree .. each vertex is a node
- S VS S
- / \ / \
- AS BS A B
- / \ / \ / \ / \
- CAS DAS DBS GBS C D D G
- Difference is each node has ancestor information and how to get to each node
- First when converting a graph to a tree, you gotta be careful and get rid of cycles
- (Trees aren't allowed to have cycles)
- Later we will discuss.. how to avoid loops and how to avoid extra visits
- TYPES OF SEARCH - (TREES)
- | BLIND | HEURISTIC|
- ----------------------------------
- any path|depth first |BEST FIRST|
- |breadth first | |
- -----------------------------------
- optimal |uniform cost | A* | <-- 10 years ago every game engine used A*
- | | |
- -----------------------------------
- GENERIC SEARCH ALGORITHM... THEN CUSTOMIZING SAID ALGORITHM TO BE REALLY GUD
- [note: it's easy to customize in scheme.. that's why we scheme]
- 1. Init Q = [(S)], V = [S]
- 2. If Q is empty, search fails <--- 1&2 init
- else, pick a node N from Q
- 3. If state(N) is a goal, search succeeded, return N <-- Goal test
- 4. Remove N from Q
- 5. Find all the children of state(N) not in V, and create all one-step extensions of N to each child <- Extend
- 6. Add the extended paths to Q and add the children of state(N) to V <- Merge
- 7. Goto 2
- (a Python transcription of these steps follows the notation below)
- Notation
- - node: a path from start state to some state X
- - State: Most recent state of the path (node)
- - Q : list of search nodes
- - S : start state
- - V : visited states
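- [sketch, not Chang's code: the 7 steps above in Python on the RouteProblem interface from earlier; pick and merge are parameters because steps 2 and 6 are what vary between algorithms]

    def generic_search(problem, pick, merge):
        Q = [[problem.start]]            # 1. Q = [(S)]; a node is a path, newest state first
        V = {problem.start}              #    V = [S]
        while True:
            if not Q:                    # 2. Q empty -> search fails
                return None
            N = pick(Q)                  #    else pick a node N from Q
            if problem.is_goal(N[0]):    # 3. goal test on state(N)
                return N
            Q.remove(N)                  # 4. remove N from Q
            children = [c for c in problem.actions(N[0]) if c not in V]
            extensions = [[c] + N for c in children]   # 5. one-step extensions
            merge(Q, extensions)         # 6. merge extensions into Q, mark children visited
            V.update(children)           # 7. loop = goto 2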
- ex.
- S
- / \
- BS AS
- / \
- CBS DBS
- state = car(node) .. the most recent state is the head of the path list
- Step 3 and step 5 are both problem specific
- Step 2 and step 6 are both algorithm specific
- Q : [(CAS)(DAS)(BS)], how you add new paths to Q and how you pick the next one to remove is what
- determines the algorithm
- ex. Depth first, you treat Q like a stack
- ex. CAS gets kicked out to add, say FCAS and ECAS
- ex. breadth first, treat Q like a QUEUE
- ex. [(A)(B)(C)]
- A gets kicked out from the front, and its extensions get added to the back
- ex. Best first add heuristic value to the node (ex.[(11CAS)(10DAS)(5BS)])
- ex. you'd pick 11CAS because it has best value
- S
- / \
- B C
- / \
- D E
- Q:[(S)] ---> Q: [ (BS) (CS) ] ----> Q[ (EBS) (DBS) (CS) ] --> Q[ (DBS) (CS)] DEPTH FIRST
- Q:[(s)] ---> Q: [ (BS) (CS) ] ----> Q[ (CS) (DBS) (EBS) ] --> Q[ (DBS) (EBS) ] BREADTH FIRST
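- [sketch: depth first vs. breadth first fall out of the same generic_search skeleton above just by changing the merge policy]

    def push_front(Q, ext):              # depth first: Q behaves like a stack
        Q[0:0] = ext

    def push_back(Q, ext):               # breadth first: Q behaves like a queue
        Q.extend(ext)

    def dfs(problem):
        return generic_search(problem, pick=lambda Q: Q[0], merge=push_front)

    def bfs(problem):
        return generic_search(problem, pick=lambda Q: Q[0], merge=push_back)

    # The S/B/C/D/E tree from the trace above:
    tree = {'S': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}
    # dfs(RouteProblem(tree, 'S', 'E'))  ->  ['E', 'B', 'S']  (path, newest first)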
- HEURISTIC .. what is it?
- -Rule of thumb, no guaranteed value, but it's hopefully helpful
- VISITED.. what is it?
- - A state is visited when a path to that state is added to Q
- ex. Q[(s)] V[S] --> Q[(BS) (CS)] V[S, B, C]
- EXPANDED.. what is it?
- - A state is expanded when a path ending at that state is picked from Q and its one-step extensions are created
- ex. Q[(S)] V[S] --> Q[(BS) (CS)] V[S, B, C] -> at this point S is expanded .. but B and C are only visited
- What do you use the visited list for?
- - So when you reach a state again along a different path,
- you know you don't have to re-add it, since it was already visited
- |how to pick N from Q|How to add extensions to Q|
- ====================================================================
- DEPTH |pick first N | front of Q |
- ---------------------------------------------------------------------
- BREADTH |pick first N | back of Q |
- ---------------------------------------------------------------------
- BEST FIRST |search for best val.| anywhere in Q |
- alt |pick 1st |sort in order (best first)|
- ---------------------------------------------------------------------
- BREADTH FIRST WILL ALWAYS FIND THE SHORTEST PATH
- S
- / \
- AS BS
- / \ / \
- CAS DAS DBS GBS
- / \ / \
- CDAS GDAS CDBS GDBS
- - Depth first likes to go deep from left to right, doesn't care how deep something is
- - Breadth first likes to go level by level, will always find the shallowest (ie shortest) path
- to the node
- BEST FIRST SEARCH
- - Heuristic any path
- - [PLS REFER TO GRAPH IN 'PICNOTES-1'] *note.. G should be 0
- - When adding to the list, sort the list
- 1. Q - (10 S) V - S
- 2. Q - (2 AS) (3 BS) V - S A B
- 3. Q - (1 CAS) (3 BS) (4 DAS) V - S A B C D
- 4. Q - (3 BS) (4 DAS) V - S A B C D
- 5. Q - (0 GBS) (4 DAS) V - S A B C D G
- GOAL FOUND
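- [sketch: best first with a priority queue keyed on the heuristic alone; assumes h is a dict from state to heuristic value like the numbers in the trace above]

    import heapq

    def best_first(problem, h):          # h: state -> heuristic value
        Q = [(h[problem.start], [problem.start])]
        V = {problem.start}
        while Q:
            _, N = heapq.heappop(Q)      # pick the lowest-heuristic node
            if problem.is_goal(N[0]):
                return N
            for c in problem.actions(N[0]):
                if c not in V:           # visited list, as above
                    V.add(c)
                    heapq.heappush(Q, (h[c], [c] + N))
        return None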
- UNIFORM COST SEARCH
- - optimal blind search
- - we need to have a metric for how ''good'' each path is (hence optimal)
- - Usually the measure is path length
-        A ---2---> G
-      4/           ^
-      S           1|
-      2\           |
-        B ---------+
- - Each path has an ACTUAL cost (distance, $$ doesn't matter)
- - REFER TO PICNOTES2 for this
- - first thing in list is the cost of that path
- Q - (0 S)
- Q - (2 AS) (5 BS)
- Q - (4 CAS) (5 BS) (6 DAS)
- Q - (5 BS) (6 DAS)
- Q - (6 DAS) (6 DBS) (10 GBS) <- 2 D's with the same value, they're equivalent so it doesn't matter which
- Q - (6 DAS) (8 GDBS) (9 CDBS) (10 GBS)
- Q - (8 GDAS) (8 GDBS) (9 CDAS) (9 CDBS) (10 GBS)
- Q - GOAL FOUND, G w/ value 8
- GDBS (GDAS works fine too but they're tied so w/e)
- You can't do the goal test when a goal path is first added to Q.. otherwise it might catch a less optimal path
- Only goal-test when you pick the lowest-value path from Q: if a goal path is the cheapest thing left,
- every other path is equal or worse, so you've found the optimal goal
- Notice: D was expanded 2x .. Is this okay?
- - Not really, we use another type of list to keep track of these things
- EXPANDED LISTS
- - Unlike visited lists, we aren't trying to avoid revisiting a node, we just
- don't want to expand the same node more than once ....
- A* SEARCH
- - Uses Heuristic AND actual information
- - Add extensions anywhere, pick the best path based off of (heuristic + actual)
- - Heuristic MUST NOT overestimate the actual cost... why?
- - A heuristic that overestimates can make things go wrong (the true best path gets skipped)
- Admissible Heuristic - A HEURISTIC THAT NEVER OVERESTIMATES
- - - REFER TO PICNOTES3.png
- 1. Q= (0 S)
- 2. Q= (4 AS) (8 BS) *4 and 8 = actual cost so far + heuristic
- 3. Q= (5 CAS) (8 BS) (7 DAS) *5 and 7 because you only count the heuristic of the newest node (plus the full actual cost)
- 4. Q= (8 BS) (7 DAS)
- 5. Q= (10 CDAS) (8 GDAS) (8 BS)
- Pick GDAS -> GOAL FOUND
- But.. I wanna make Optimal Search and A* more efficient .. how do?
- - it's inefficient because you expand the same node more than once (visiting != expanding)
- - Expanding = pulling something out of the queue, finding its children, and adding the extensions to Q
- - You might have to touch a same node again.. but you don't need to expand it again... EX
- - We need to modify the algorithm a bit here...
- 1. Initialize Q=[(0 S)] E= []
- 2. If empty(Q) - Fail
- else.. pick best N from Q
- 3. if state(N) is goal return N
- 4. Remove N from Q
- 5. If state(N) is in E goto 2
- else.. add N to E
- 6. Find all children of state(N) not in E
- create one step extension
- 7. add extensions to Q --> If the state is already in Q, keep the better one
- 8. Goto 2
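- [sketch of the 8 steps above in one function; with h(s)=0 everywhere it's uniform cost, with an admissible + consistent h it's A*; cost(a, b) is an assumed edge-cost function on top of the earlier RouteProblem sketch]

    import heapq

    def extended_list_search(problem, cost, h=lambda s: 0):
        Q = [(h(problem.start), 0, [problem.start])]   # 1. Q = [(0 S)], E = []
        E = set()
        while Q:                                       # 2. empty Q -> fail
            f, g, N = heapq.heappop(Q)                 #    else pick best (lowest g+h) N
            if problem.is_goal(N[0]):                  # 3. goal test when REMOVED, not when added
                return N, g
            if N[0] in E:                              # 5. already expanded -> goto 2
                continue
            E.add(N[0])                                #    else add state(N) to E
            for c in problem.actions(N[0]):            # 6. children not in E
                if c not in E:
                    g2 = g + cost(N[0], c)
                    heapq.heappush(Q, (g2 + h(c), g2, [c] + N))   # 7. add to Q
        return None
    # (instead of step 7's "keep the better one", the worse duplicate simply
    #  gets skipped by step 5 when it eventually comes off the heap)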
- UNIFORM COST w/ Extended List
- -PICNOTES3 minus H values here
- 1. Q= (0 S) E = []
- 2. Q= (2 AS) (5 BS) E = S
- 3. Q= (4 CAS) (6 DAS) (5 BS) E= S A
- 4. Q= (6 DAS) (5 BS) E = S A C
- 5. Q= (6 DAS) (6 DBS) (10 GBS) E = S A C B *BUT WE ONLY KEEP ONE OF DBS / DAS, whichever, doesn't matter*
- 5ACTUAL. Q = (6 DAS) (10 GBS) E = S A C B
- 6. Q= (8 GDAS) E = S A C B D *we don't add CDAS since C was expanded already .. and we kick 10 GBS out
- since 8 GDAS is lower value*
- 7. GOAL FOUND 8 GDAS
- A* w/ EXTENDED LIST - PICNOTES4
- 1. Q= (0 S) E=[]
- 2. Q= (101 AS) (3 BS) E = S
- 3. Q= (101 AS) (94 CBS) E = S B
- 4. Q= (101 AS) (104 GCBS) E= S B C
- 5. Q= (104 GCBS) E =S B C A
- GOAL: GCBS: 104!
- But wait... GCAS is better (102)...
- This is caused by B's Heuristic being way too small, which screws the algorithm up
- If B's Heuristic was more accurate.. we wouldn't have this problem
- - Heuristic MUST be consistent otherwise A* (w/ an extended list) breaks
- - To be consistent you need....
- a <= b + c
- b <= a + c
- c <= a + b
- ie.. 2 heuristics + actual cost must form a triangle
- But how do we fix this?
- - Get a consistency (monotonicity) condition
- - H(G) = 0
- - H(A) <= C(A,B) + H(B)
- So in PICNOTES4 .. change H(C)-> 100, H(B)-> 90
- With these changes.. the algorithm does this
- 1. Q= (0 S) E=[]
- 2. Q= (101 AS) (92 BS) E = S
- 3. Q= (104 CBS) (101 AS) E = S B
- 4. Q= (102 CAS) E = S B A *we kicked 104CBS out because 102CAS is better
- 5. Q = (102 GCAS) E= S B A C
- GOAL: S->A->C->G with a value of 102
- THE HEURISTIC HAS TO BE CONSISTENT (MONOTONIC)
- EFFICIENCY
- - We wanna make shit go fast in terms of time (using V or E)
- - But what about efficiency with Space?
- - Say you have a tree, with each node having 1Byte and each node having 10 children
- LEVEL 0 - 1 node - 1 byte
- LEVEL 1 - 10 nodes - 10 bytes
- LEVEL 2 - 100 nodes - 100 bytes (10^2)
- You see where this is going...
- if you do breadth first search you really start getting fucked
- ... your ram is gonna be nibbled up pretty quick
- SOOOOOOOOOOOOOOO
- ITERATIVE DEEPENING DEPTH FIRST SEARCH
- - DFS is more space efficient, whereas breadth first is guaranteed to find the shallowest goal
- **********************************************
- * *
- * |h(A) - h(B)| <= c(A,B) * <-- Rule for good heuristics
- **********************************************
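- [sketch: checking the boxed rule over an edge list; edges = [('A','B',2), ...] and the h dict are assumed formats, not from the notes]

    def is_consistent(h, edges, goals):
        # H(G) = 0 for every goal, and |h(A) - h(B)| <= c(A,B) on every edge
        if any(h[g] != 0 for g in goals):
            return False
        return all(abs(h[a] - h[b]) <= c for a, b, c in edges)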
- BINARY SEARCH TREE EFFICIENCY
- - Much more efficient than a normal Search Tree
- - #nodes
- level 0 - 1
- 1 - 2
- 2 - 2^2
- 3 - 2^3
- You get the point .. at level d the # of nodes is 2^d (and 2^(d+1) - 1 nodes total)
- IDDFS
- - 1. Init D=1
- - 2. Do DFS to max depth of D
- - 3. D = D + 1
- - 4. Goto 2
- Searches depth first over a depth-limited subtree, letting the subtree get deeper each round
- BUT WAIT .. you ask.. you search the same tree over and over again this is gross
- This shit literally re-searches the shallow levels over and over, like doing double the work of a breadth first search
- ... Not really.. it's about the same time, but it saves a lot of space in memory
- The repeated work is cheap because the deepest level dominates the total cost, and the memory
- saved compared to breadth first's worst case more than makes up for it
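- [sketch of IDDFS on the RouteProblem interface from earlier]

    def iddfs(problem, max_depth=50):
        def dls(path, limit):                    # depth-limited DFS
            if problem.is_goal(path[0]):
                return path
            if limit == 0:
                return None
            for c in problem.actions(path[0]):
                if c not in path:                # don't loop along this path
                    found = dls([c] + path, limit - 1)
                    if found:
                        return found
            return None

        D = 1                                    # 1. init D = 1
        while D <= max_depth:
            found = dls([problem.start], D)      # 2. DFS to max depth D
            if found:
                return found
            D += 1                               # 3. D = D + 1;  4. goto 2
        return None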
- So.. Say we have a problem we don't know how to solve
- - But we can form a solution space with possible solutions
- - We search the solution space systematically to find the best solutions
- - Worst case is checking every solution.. and if the space is huge... there's a problem
- - Fuck man.. what if it's infinite.. we can't search that..
- So to solve this... we have OPTIMIZATION / Local Search
- - We don't look for a path to the goal, we don't care about the path, just wanna get there
- - N Queen Problem
- -4 x 4 grid .. with 4 queens and none of them are in check
- - Say we have a solution space.. it's continuous (read: infinite)
- - we calculate the cost for each solution, creating a 3d cost function (like a mountain / hill)
- - you move around the x-y space, watching z (the cost), trying to reduce it
- - when you reach a point where no move lowers the cost .. you found it
- "HILL CLIMBING SEARCH"
- HILL CLIMBING SEARCH
- - 1. current = init state
- 2. find the neighbor with the lowest cost
- 3. if cost(current) <= cost(neighbor)
- - DONE
- 4. else current = neighbor
- 5. goto 2
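- [sketch of steps 1-5 above; neighbors() and cost() are problem-specific stand-ins, not from the notes]

    def hill_climb(init, cost, neighbors):
        current = init                               # 1. current = init state
        while True:
            best = min(neighbors(current), key=cost) # 2. neighbor w/ lowest cost
            if cost(current) <= cost(best):          # 3. nothing improves -> DONE
                return current                       #    (could be a local minimum!)
            current = best                           # 4. else move;  5. goto 2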
- there's a problem with this algorithm though
- - A flat plateau with the decline somewhere else (no neighbor improves, so you stop on the flat area)
- - Getting trapped in a local minimum
- - this is the big boy problem
- SIMULATED ANNEALING
- - Word comes from metallurgy.. heating then slowly cooling metal to remove defects
- 1. Init T <- init temperature
- x <- init state (location)
- v <- Energy(x) (cost)
- 2. Do while T > finalTemperature
- Do N times
- x' <- move(x)
- v' <- Energy(x')
- if v' < v then
- x <- x', v <- v'
- else
- x <- x' w/ probability P < 1
- T = d*T , d < 1
- P = e^(-(v'-v)/(kT)) <--- acceptance probability
- e = natural e
- as temperature goes up... the closer the probability gets to 1
- as temperature goes down, the probability goes closer to 0
- This alleviates getting caught on small bumps .. but you can still get caught in a local min.
- - you bounce around more than hill climbing.. if the local min is small enough you can bounce out
- In reality.. you don't care too much about the potential bigger minimums.. you can't
- guarantee they exist.. so you shouldn't go looking too deep
- you just have to mitigate the unimportant tiny bumps (which is what annealing does)
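- [sketch of the annealing loop above; move() is a problem-specific stand-in, and k is the constant in the probability formula]

    import math, random

    def anneal(x, energy, move, T=100.0, T_final=0.01, N=100, d=0.95, k=1.0):
        v = energy(x)                            # 1. init T, x, v
        while T > T_final:                       # 2. loop until final temperature
            for _ in range(N):                   #    do N moves per temperature
                x2 = move(x)
                v2 = energy(x2)
                if v2 < v:                       # downhill: always take it
                    x, v = x2, v2
                elif random.random() < math.exp(-(v2 - v) / (k * T)):
                    x, v = x2, v2                # uphill: take it w/ prob P < 1
            T *= d                               # T = d*T, d < 1  (cooling)
        return x, v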
- BUT I WANNA FIND THE REALLY GOOD MINIMUMS
- WELL THEN JUST USE A BUNCH OF FUCKERS TO FIND IT DUH
- instead of just using 1 agent... use a bunch of them.. one of them might find the best solution
- So that creates.. the genetic algorithm
- GENETIC ALGORITHM
- - Tries to Simulate evolution
- - Mutation, Natural Selection, cross over, population and fitness
- - we need to simulate these things
- ALGORITHM: - Can't give one specific algorithm, the details are design choices per problem
- - Create Chromosomes
- 1. Init. Population
- - Select a few from population and duplicate them and apply mutation to them
- - Cross over pairs of the mutated copies
- - Apply fitness algorithm to kill some bad ones and keep (most of) the good ones
- *ENCODING -> depends on the problem but you need to encode the DNA
- ex. PATH
- - City: A B C D
- - Paths: A->B->C->D *encoded as* 00 01 10 11
- A->C->B->D *encoded as* 00 10 01 11
- - Binary Encoding
- A: 00
- B: 01
- C: 10
- D: 11
- - The binary encoding is the population .. so we need to do the genetic operations
- - Reproducing = just copying the bits
- - Mutation: all you gotta do is modify one chromosome randomly
- - 00 01 10 11 ----[randomly pick a bit.. say 4] ---> 00 01 00 11
- (# mutation varies from one program to another)
- - Crossover: pick a random spot (say.. 3) and swap the parts
- - 00 01 10 11 and 00 11 10 01 ----> 00 01 10 01 and 00 11 10 11
- - Natural selection - Probability that a chromosome survives to the next generation
- - Prob(Ci) = Q(Ci) / SIGMA_j(Q(Cj))
- - Q is quality (ie. Fitness Measure)
- -> Fitness measure is problem specific
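- [sketch: the three operators on bit-string chromosomes like '01101' (no spaces); formats are mine]

    import random

    def mutate(chrom):
        i = random.randrange(len(chrom))         # randomly pick one bit
        flipped = '1' if chrom[i] == '0' else '0'
        return chrom[:i] + flipped + chrom[i + 1:]

    def crossover(a, b):
        i = random.randrange(1, len(a))          # pick a random spot
        return a[:i] + b[i:], b[:i] + a[i:]      # swap the tails

    def survival_probs(pop, quality):            # Prob(Ci) = Q(Ci) / SIGMA Q(Cj)
        total = sum(quality(c) for c in pop)
        return [quality(c) / total for c in pop]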
- SIMPLE GENETIC ALGORITHM
- 1. Init Population of 10 individuals
- 2. Select 6 out of 10 *6 is the ''mating'' population*
- 3. Apply Crossover in 3 pairs
- 4. Apply Random mutation on some
- 5. Add offspring to population
- 6. Apply Natural selection on population, Goto 2
- Ex. FUNCTION OPTIMIZATION
- F(x) = x^2, [0,31]
- 1. init population of 4 using binary encoding
- -> need 5 bits because 32 possible solutions .. ex. 01100 (for 12)
- initial population | Fitness of each (apply F(x)) | share | selected to mate
- 01101              | 169                          | 14.4% | 1
- 11000              | 576                          | 49.2% | 2
- 01000              | 64                           |  5.5% | 0
- 10011              | 361                          | 30.9% | 1
- ----------
- SIGMA = 1170
- 2. Determine fitness of each individual (above.. using Prob(Ci) equation)
- 3. Select 4 individuals using that probability
- 4. Create Mating Group from those 4 and apply cross over
- 5. Apply random mutation
- 6. Get Fitness of each
-        4          5          6
- 01101 --> 01100 --> 01100 --> 144
- 11000 --> 11001 --> 11001 --> 625
- 11000 --> 10000 --> 10010 --> 324
- 10011 --> 11011 --> 11011 --> 729
- AVERAGE: 455.5 <-- better than the population average before (292.5)
- 7. Add back to population
- 01101
- 11000
- 01000
- 10011
- 01100
- 11001
- 10010
- 11011
- 8. Groom population w/ Fitness algorithm
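- [sketch of one generation of the worked example, using the operators sketched earlier; truncation to the fittest stands in for the final "groom" step]

    import random

    def fitness(chrom):
        return int(chrom, 2) ** 2                      # F(x) = x^2, 5-bit x

    def generation(pop, mating=4):
        probs = survival_probs(pop, fitness)           # fitness-proportional selection
        parents = random.choices(pop, weights=probs, k=mating)
        kids = []
        for i in range(0, mating, 2):                  # crossover in pairs
            kids.extend(crossover(parents[i], parents[i + 1]))
        kids = [mutate(c) for c in kids]               # random mutation
        grown = pop + kids                             # add offspring back
        grown.sort(key=fitness, reverse=True)          # groom: keep the fittest
        return grown[:len(pop)]

    pop = ['01101', '11000', '01000', '10011']
    for _ in range(10):
        pop = generation(pop)
    # the population drifts toward '11111' (x = 31, fitness 961)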
- You can apply this to a hill climb algorithm
- Make a program that gets really fucking good at making an expression that generates the number 43
- CSP -> Constraint Satisfaction Problem
- ex. Scheduling Classes
- Need to take 10 courses in the next 2 years, but that has the following constraints
- - Need to Satisfy Pre-reqs
- - Semesters offered
- - no overlap
-        B
-       /|\
-      A | D
-       \|
-        C
- We use graphs similar to normal search graphs.. but the vertices represent variables
- - Each variable has a certain domain of values: Da, Db, Dc, Dd
- - Each edge is a constraint
- - Goal: to find the value of each variable such that all constraints are satisfied
- ex. N-Queen Problem for example
- 4X4 - each spot on the board is numbered 1->16
- Vi = Position on board (1 -> 16)
- Di = {Y, N}
- Constraints: V1-V2, V1-V3.. so on and so forth, it would be annoying to do this manually..
- - So CSP would suck eggs if you model it like this..
- Vi = Column on board (column1 = v1, etc.)
- Di = Rows on Board (row1 = D1, etc.)
- Constraint: 1 Queen per column and per row and per diagonal
- - V1-V2, V1-V3, V1-V4
- You need to model it this way to keep the graph small
- ex. Graph Coloring: PicNotes5
- - Given a small amount of color, assign a color to each region, no 2 neighbors have same color
- - How many colors do you need?
- Vi - Regions
- Di - Colors
- Constraints
- - A-B A-E B-E B-C B-D E-D C-D
- CSP is really good for assembly line, airports, compilers using the registers
- BOOLEAN SATISFIABILITY PROBLEM (SAT)
- Given a conjunctive normal form w/ 3 variables per clause
- (A or B or C) and (A or C or D) and ....
-   ^ clause        ^ clause
- Can you assign a True or false to make the whole statement true?
- NP-Complete (no known efficient algorithm for this)
- Solving CSP
- - Arc Consistency - Vi ---> Vj
- - Arc(Vi, Vj) is consistent iff for all x in Di there exists a y in Dj such that (x, y) satisfies the constraint
- - Ex. Graph Coloring
- V1 --> V2 ----> V3
- R/G R/G G
- ^ this arc is inconsistent (V2 = G has no legal partner in D3)
- V1 --> v2 --> v3
- r/g r/g b
- This is consistent
- Constraint Propagation
- Vi ------> Vj
- Remove values from Di that do not have a consistent partner in Dj
- * Given a constraint graph w/ N arcs and Di values
- - How many arc-consistency checks do I need to make every arc consistent?
- - Graph w/ N arcs, with each node having M values
- - on the order of N*M^3 ... this is literally unusable (at large scale.. anyway)
- *ex.
- V1 (R,G,B)
- / \
- (R,G)V2 -- V3 (G)
- ARC | REMOVE
- ---------------
- v1 - v2 | no
- v1 - v3 | G from D1
- v2 - v3 | G from D2
- -----------------------------------
- v1 - v2 | R from D1
- v1 - v3 | no
- v2 - v3 | no
- -----------------------------------
- v1 - v2 | no
- v1 - v3 | no
- v2 - v3 | no
- we good now..
- V1 = B, V2 = R, V3 = G
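- [sketch: constraint propagation for the coloring example just worked above; the != (coloring) constraint is hard-coded, formats are mine]

    def revise(domains, vi, vj):
        # remove values in D_i with no consistent (unequal) partner in D_j
        removed = {x for x in domains[vi]
                   if not any(x != y for y in domains[vj])}
        domains[vi] -= removed
        return bool(removed)

    def propagate(domains, arcs):
        changed = True
        while changed:                 # keep sweeping until a clean pass
            changed = False
            for vi, vj in arcs:
                if revise(domains, vi, vj):
                    changed = True
        return domains

    domains = {'V1': {'R', 'G', 'B'}, 'V2': {'R', 'G'}, 'V3': {'G'}}
    arcs = [(a, b) for a in domains for b in domains if a != b]  # triangle, both directions
    # propagate(domains, arcs) -> V1={'B'}, V2={'R'}, V3={'G'}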
- * ex2.
- V1 (B G)
- / \
- (R G) V2-----v3 (R G)
- Multiple possible solutions... so we need to find the solution ... with a SEARCH
- - Depth First Search
- V1 (RGB)
- / \
- (RG)V2 --- V3 (RG)
- Search tree... Level 1 = V1 assignment, Level 2 = V2 assignment.. etc etc, and depth first search
- (btw. adjacent variables can't get the same value, like V1 can't be G if V2 is gonna be G)
- Only Way to increase search efficiency is to reduce the width of the tree
- How do you do that?
- Apply Constraint Propagation!
-               *
-           R / | \
-            *  .  .
-         R / \ G
-          X   *          <- (don't expand the X node)
-            R/ \G
-            X   X
- Basically, if a node is doomed to fail, don't expand it, making the tree much smaller: BackTracking w/ Forward Checking
- EX. PICNOTES6
- - 1 = R/Y
- 2 = G/B/Y
- 3 = R/B/Y
- 1. For this, you would pick the most CONSTRAINED VARIABLE (so you would pick 1) <- "reduce branching factor"
- 2. You would pick Yellow, since we wanna leave as many possible combinations for the other squares <- "least constraining value"
- this is called...
- BackTracking + Forward Checking w/ dynamic ordering
- _______________________________________EXAM ABOVE THIS POINT (1) ____________________________________________________
- GAME: a well-DEFINED game... what on earth is Chang talking about
- Like chess
- - 2 person game, well defined rules
- - Chess players
- - 1950's: Shannon -> Father of Information Theory *very important guy*
- - Laid out the principles behind modern game playing programs
- - 1953: Turing:
- - 1996 IBM DEEP BLUE: Deep Blue vs. Kasparov .. Deep Blue lost (Philly)
- - 1997 2nd Match (NY) ... Deep Blue won
- - 1998 3rd Match (Kasparov was pissed) .. IBM didn't wanna have a rematch
- Kasparov was pissed still
- - IBM WATSON - Literally won Jeopardy!
- - Game playing
- - application of search
- - state - board configuration
- - action - legal moves
- - goal - winning configuration
- - heuristic - scoring function (produces % chance of winning)
- - c*#Queens + c*#Kings .. etc.
- - Path is not important
- - Search Tree: Tic Tac Toe
- [ ][ ][ ]
- [ ][ ][ ] Init-state
- [ ][ ][ ]
- |
- | Next level is all possible "my" moves
- []
- / \
- This level is possible opponent moves
- The Goal is to: Search this path to pick the best move that will make it most likely to win
- MINIMAX ALGORITHM (Shannon, 1949):
- - Look ahead in the game tree to depth of N (N-Ply)
- - Evaluate score (heuristic) of each leaf node
- ex. 2 ply
- [] <-- current state (my turn)
- / | \
- []-5 []1 []7 <-- Possible my Turns (pick one to maximize your score)
- / \ / \ / \
- [] [] [] [] [] [] <-- Adversary's move (minmizing your score)
- Heur: -3 -5 2 1 7 8
- The opponent will try to minimize your score, so you choose the branch whose
- worst case (the minimum) is best for you
- The silly thing is ... it gets inefficient the more legal moves you can make
- If the branching factor is like.. 20.. at depth of 10 .. you have 20^10 leaves
- and then you just kill yourself
- ... so you speed that shit up by making the tree smaller.. but ya can't just make the game smaller, can ya
- so we need.. the Alpha-Beta Pruning algorithm
- ALPHA BETA PRUNING -> Optimization of min-max by reducing the # of leaves to evaluate
- ex ..
- []2
- / \
- []2 1[]
- / \ / \
- [] [] [] []
- 2 7 1 7
- So you're guaranteed at least 2 after evaluating the first branch there..
- then in the second branch the first leaf evaluates to 1 .. so you skip its sibling node
- since the opponent can already hold you to 1 there, and you can get at least 2 elsewhere
- Actual Algorithm
- [] alpha = -infinity---> update to 2, beta= infinity ---> 2, infinity
- /
- [] -infinity, infinity<-- update to 2---> -infinity, 2
- /
- []2
- basically, once a branch's beta value drops to (or below) your alpha value, you stop searching that branch
- - you always have to evaluate at LEAST the first branch fully
- Minimax normal = O(b^d) leaves ... minimax w/ perfect-ordering pruning = O(b^(d/2))
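- [sketch: minimax w/ alpha-beta pruning on an explicit game tree; leaves are heuristic scores, nested lists are move choices, as in the 2-ply example above]

    import math

    def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
        if not isinstance(node, list):         # leaf: return its heuristic score
            return node
        if maximizing:                         # my move: raise alpha
            for child in node:
                alpha = max(alpha, alphabeta(child, False, alpha, beta))
                if beta <= alpha:              # opponent already has a better option
                    break                      # -> prune the remaining siblings
            return alpha
        else:                                  # opponent's move: lower beta
            for child in node:
                beta = min(beta, alphabeta(child, True, alpha, beta))
                if beta <= alpha:
                    break
            return beta

    # the pruning example above: the left branch guarantees 2, so in the
    # right branch the 7 after the 1 is never evaluated
    # alphabeta([[2, 7], [1, 7]])  ->  2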