CMPSC441 - Exam 1 Notes
Oct 11th, 2018
PROBLEM SOLVING BY SEARCHING
- Search is a main tool in AI, a systematic way to explore a solution space

ex. ... y = 2x + 5
Solution space: (1,7) (0,5) ... etc.
A solution is a pair of numbers
The solution space contains infinitely many candidates... so...
a search algorithm in AI is how we find the correct solution
(A naive algorithm would check every single option, so a good search algorithm is needed)
- 2 types of search
- Blind search
  Uninformed search algorithm
- Heuristic search
  Informed search algorithm
Which do you use?
Analyze the problem to see whether you have additional information...
(no additional information) --> blind
  This doesn't mean check every option.. just that no additional information is available
(additional information) --> heuristic
ex. You're given a map with a bunch of cities and roads
Want to make a driving plan to get from the first city to the last with the shortest path
If you have a map.. you can make an informed guess about the ideal route
This additional info makes for a more efficient search
Blind finds "any path"; heuristic aims for the "optimal path"
We're going to do a lot of tree and graph searches
- but mainly trees.. Chang likes trees

TERMS:
Root - node w/ no parents
Leaf - node w/ no children
Internal node - node that is !(leaf)

Graph - like a tree, but loops are allowed
A graph can have multiple parents per node (whereas a tree has 0 or 1)
Directed graph vs. undirected graph
A---->B  directed: one way only
A-----B  undirected: can go back and forth

99% of real-world problems can be modeled as a graph
Path - "action plan to achieve the desired goal"

If you have a problem to solve... we "REDUCE" the problem into a search problem
To do that.. you need..
STATES - (states have to be complete)
A state should represent all and only the relevant aspects of the problem
ex. Finding the cheapest flight plan (Harrisburg -> NYC)
- Knowing the city of the airport (not enough info yet)
- Address of the airport (too little)
- Designation of the airport (now you have enough info)
ACTIONS - (the next state is determined by the action, and each action is discrete)
(discrete: it's an independent action)
GOAL TEST - (there can be multiple goals)
Graph search ---REDUCED---> tree search

From a tree, you can create a search tree
In a search tree, each vertex becomes a node

           S                          S
         /   \            VS        /   \
       AS     BS                   A     B
      /  \   /  \                 / \   / \
   CAS  DAS DBS  GBS             C   D D   G

Difference: each search-tree node carries its ancestor information, i.e. how to get to that node

When converting a graph to a tree, you have to be careful and get rid of cycles
(trees aren't allowed to have cycles)
Later we will discuss how to avoid loops and how to avoid extra visits

TYPES OF SEARCH - (TREES)

         |     BLIND     | HEURISTIC  |
---------+---------------+------------+
any path | depth first   | best first |
         | breadth first |            |
---------+---------------+------------+
optimal  | uniform cost  | A*         |  <-- 10 years ago every game engine used A*
---------+---------------+------------+

GENERIC SEARCH ALGORITHM... THEN CUSTOMIZING SAID ALGORITHM TO BE REALLY GOOD
[note: it's easy to customize in Scheme.. that's why we use Scheme]
1. Init Q = [(S)], V = [S]
2. If Q is empty, search fails              <-- 1 & 2: init
   else, pick a node N from Q
3. If state(N) is a goal, search succeeded, return N   <-- goal test
4. Remove N from Q
5. Find all the children of state(N) not in V, and create all one-step extensions of N to each child   <-- extend
6. Add the extended paths to Q and add the children of state(N) to V   <-- merge
7. Goto 2

Notation
- node: a path from the start state to some state X
- state: the most recent state of the path (node)
- Q: list of search nodes
- S: start state
- V: visited states
ex.
        S
       / \
     BS   AS
    /  \
  CBS   DBS
state = car(node)   (the most recent state is the first element of the path)

Steps 3 and 5 are problem specific
Steps 2 and 6 are algorithm specific

Q: [(CAS)(DAS)(BS)] -- how new things are added to the queue and how the current thing
is removed is what determines the algorithm
ex. depth first: treat Q like a STACK
    CAS gets popped, and, say, FCAS and ECAS get pushed onto the front
ex. breadth first: treat Q like a QUEUE
    [(A)(B)(C)]: A gets popped, new paths go on the back
ex. best first: attach a heuristic value to each node (ex. [(11 CAS)(10 DAS)(5 BS)])
    and pick the node with the best value

     S
    / \
   B   C
  / \
 D   E
Q:[(S)] ---> Q:[ (BS) (CS) ] ---> Q:[ (EBS) (DBS) (CS) ] ---> Q:[ (DBS) (CS) ]    DEPTH FIRST
Q:[(S)] ---> Q:[ (BS) (CS) ] ---> Q:[ (CS) (DBS) (EBS) ] ---> Q:[ (DBS) (EBS) ]   BREADTH FIRST
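
A minimal runnable sketch of this generic algorithm (Python here rather than Scheme; the names generic_search / is_goal and the adjacency-dict graph format are illustrative, not from lecture). Treating Q as a stack gives depth first; treating it as a queue gives breadth first:

    def generic_search(graph, start, is_goal, depth_first=True):
        Q = [[start]]              # each node is a path, most recent state first
        V = {start}                # visited states
        while Q:                   # step 2: Q empty -> search fails
            N = Q.pop(0)           # steps 2 & 4: pick N from the front, remove it
            if is_goal(N[0]):      # step 3: goal test on state(N)
                return N
            extensions = [[child] + N                       # step 5: one-step
                          for child in graph.get(N[0], [])  # extensions to each
                          if child not in V]                # child not in V
            V.update(p[0] for p in extensions)              # step 6: merge into V...
            Q = extensions + Q if depth_first else Q + extensions  # ...and into Q
        return None

    graph = {'S': ['B', 'C'], 'B': ['D', 'E']}
    print(generic_search(graph, 'S', lambda s: s == 'E'))   # -> ['E', 'B', 'S']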

HEURISTIC.. what is it?
- A rule of thumb; not a guaranteed value, but hopefully helpful
VISITED.. what is it?
- A state is visited when a path to that state is added to Q
ex. Q[(S)] V[S] --> Q[(BS) (CS)] V[S, B, C]
EXPANDED.. what is it?
- A state is expanded when a path to that state is picked from Q and its one-step extensions are created
ex. Q[(S)] V[S] --> Q[(BS) (CS)] V[S, B, C] -> at this point S is expanded.. but B and C are only visited

What do you use the visited list for?
- So when you reach a state again through a different node, you know it was
already visited and you don't have to process it again

           | how to pick N from Q  | how to add extensions to Q |
===========+=======================+============================+
DEPTH      | pick first N          | front of Q                 |
-----------+-----------------------+----------------------------+
BREADTH    | pick first N          | back of Q                  |
-----------+-----------------------+----------------------------+
BEST FIRST | search for best value | anywhere in Q              |
  alt      | pick 1st              | sort in order (best first) |
-----------+-----------------------+----------------------------+

BREADTH FIRST WILL ALWAYS FIND THE SHORTEST PATH
               S
             /   \
          AS       BS
         /  \     /  \
      CAS   DAS DBS   GBS
           /  \      /  \
       CDAS GDAS  CDBS  GDBS
- Depth first likes to go deep, left to right; it doesn't care how deep something is
- Breadth first goes level by level, so it will always find the shallowest (i.e. shortest)
path to the node

  162. BEST FIRST SEARCH
  163. - Heuristic any path
  164. - [PLS REFER TO GRAPH IN 'PICNOTES-1'] *note.. G should be 0
  165. - When adding to the list, sort the list
  166. 1. Q - (10 S) V - S
  167. 2. Q - (2 AS) (3 BS) V - S A B
  168. 3. Q - (1 CAS) (3 BS) (4 DAS) V - S A B C D
  169. 4. Q - (3 BS) (4 DAS) V - S A B C D
  170. 5. Q - (0 GBS) (4 DAS) - S A B C D G
  171. GOAL FOUND
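
A sketch of best first: the same loop, but Q is kept ordered by the heuristic value of each node's most recent state (heapq keeps the lowest value at the front; the h dict is assumed given):

    import heapq

    def best_first(graph, h, start, goal):
        Q = [(h[start], [start])]           # (heuristic value, path)
        V = {start}
        while Q:
            _, N = heapq.heappop(Q)         # pick the node with the best h value
            if N[0] == goal:
                return N
            for child in graph.get(N[0], []):
                if child not in V:
                    V.add(child)
                    heapq.heappush(Q, (h[child], [child] + N))
        return None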

UNIFORM COST SEARCH
- optimal blind search
- we need a metric for how "good" each path is (hence optimal)
- usually the measure is path length

        A ---2---> G
       ^           ^
      4|           |1
       |           |
       S ---2----> B

- Each edge has an ACTUAL cost (distance, $$, doesn't matter)
- REFER TO PICNOTES2 for this
- the first thing in each entry is the cost of that path

Q: (0 S)
Q: (2 AS) (5 BS)
Q: (5 BS) (4 CAS) (6 DAS)
Q: (5 BS) (6 DAS)
Q: (6 DBS) (10 GDBS) (6 DBS)   <- 2 D's with the same value; they're the same, doesn't matter
Q: (9 CDBS) (8 GDBS) (10 GDBS) (10 GBS) (6 DAS)
Q: (9 CDBS) (8 GDBS) (10 GDBS) (9 CDAS) (8 GDAS)
Q: GOAL FOUND, G w/ value 8
GDBS (GDAS works fine too, but they're the same cost so whatever)
You can't do the goal test too early.. otherwise it might catch a less optimal path:
only declare the goal once the goal path you picked has the lowest value, i.e. all
other paths in Q cost as much or more
Notice: D was expanded 2x.. is this okay?
- Not really; we use another type of list to keep track of these things

EXPANDED LISTS
- Unlike visited lists, we aren't trying to avoid revisiting a state, we just
don't want to expand the same state more than once...

A* SEARCH
- Uses heuristic AND actual information
- Add extensions anywhere; pick the best path based on (heuristic + actual)
- The heuristic MUST be smaller than the actual cost... why?
- A wrong heuristic can make things go wrong
Admissible heuristic - A HEURISTIC THAT ALWAYS UNDERESTIMATES
- REFER TO PICNOTES3.png

1. Q: (0 S)
2. Q: (4 AS) (8 BS)             *4 and 8 because heuristic + actual
3. Q: (5 CAS) (8 BS) (7 DAS)    *5 and 7 because we only count the heuristic of the current (last) state
4. Q: (8 BS) (7 DAS)
5. Q: (10 CDAS) (8 GDAS) (8 BS)
Pick GDAS -> GOAL FOUND

But.. I wanna make optimal search and A* more efficient.. how?
- It's inefficient because you expand the same state more than once (visiting != expanding)
- Expanding is when you pull something out of the queue, find its children, and add the children to the queue
- You might have to touch the same state again.. but you don't need to EXPAND it again
- We need to modify the algorithm a bit here...

1. Initialize Q = [(0 S)], E = []
2. If empty(Q): fail
   else pick the best N from Q
3. If state(N) is a goal, return N
4. Remove N from Q
5. If state(N) is in E, goto 2
   else add state(N) to E
6. Find all children of state(N) not in E,
   create one-step extensions
7. Add the extensions to Q --> if a state is already in Q, keep the better one
8. Goto 2

UNIFORM COST w/ EXTENDED LIST
- PICNOTES3 minus the H values here
1. Q: (0 S)                        E: []
2. Q: (2 AS) (5 BS)                E: S
3. Q: (4 CAS) (6 DAS) (5 BS)       E: S A
4. Q: (6 DAS) (5 BS)               E: S A C
5. Q: (6 DAS) (6 DBS) (10 GBS)     E: S A C B   *but we only keep one of DBS / DAS, whichever doesn't matter*
5actual. Q: (6 DAS) (10 GBS)       E: S A C B
6. Q: (8 GDAS)                     E: S A C B D   *we don't add CDAS since C was already expanded..
   and we kick 10 GBS out since 8 GDAS has a lower value*
7. GOAL FOUND: 8 GDAS
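
A sketch of uniform cost with an expanded list, assuming edge costs come as a nested dict (the format is my own, not from lecture). Note the goal test happens when a path is PICKED, not when it is added; duplicate states in Q are tolerated, the E check discards the worse copies later:

    import heapq

    def uniform_cost(costs, start, goal):
        Q = [(0, [start])]                 # (actual cost so far, path)
        E = set()                          # expanded list
        while Q:
            cost, N = heapq.heappop(Q)     # best N = cheapest path in Q
            if N[0] == goal:               # goal test on pick, so it's optimal
                return cost, N
            if N[0] in E:                  # state already expanded: skip (step 5)
                continue
            E.add(N[0])
            for child, step in costs.get(N[0], {}).items():
                if child not in E:         # step 6: only children not in E
                    heapq.heappush(Q, (cost + step, [child] + N))
        return None

    costs = {'S': {'A': 4, 'B': 2}, 'A': {'G': 2}, 'B': {'G': 1}}
    print(uniform_cost(costs, 'S', 'G'))   # -> (3, ['G', 'B', 'S'])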

A* w/ EXTENDED LIST - PICNOTES4
1. Q: (0 S)                    E: []
2. Q: (101 AS) (3 BS)          E: S
3. Q: (101 AS) (94 CBS)        E: S B
4. Q: (101 AS) (104 GCBS)      E: S B C
5. Q: (104 GCBS)               E: S B C A
GOAL: GCBS: 104!
But wait... GCAS is better (102)...
This is caused by B's heuristic being way too small, which throws the algorithm off
If B's heuristic were more accurate.. we wouldn't have this problem
- The heuristic MUST be consistent, otherwise A* w/ an extended list breaks
- To be consistent you need....
a <= b + c
b <= a + c
c <= a + b
i.e. the 2 heuristics + the actual cost must form a triangle
But how do we fix this?
- Enforce the consistency (monotonicity) condition:
- H(G) = 0
- H(A) <= C(A,B) + H(B)
So in PICNOTES4.. change H(C) -> 100, H(B) -> 90
With these changes.. the algorithm does this:
1. Q: (0 S)                    E: []
2. Q: (101 AS) (92 BS)         E: S
3. Q: (104 CBS) (101 AS)       E: S B
4. Q: (102 CAS)                E: S B A   *we kicked 104 CBS out because 102 CAS is better
5. Q: (102 GCAS)               E: S B A C
GOAL: S->A->C->G with a value of 102
THE HEURISTIC HAS TO BE CONSISTENT (MONOTONIC)
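
A* is the same sketch as uniform cost with the heuristic added on top of the actual cost when ordering Q; with a consistent h, the expanded list stays safe (graph format as in the sketch above):

    import heapq

    def a_star(costs, h, start, goal):
        Q = [(h[start], 0, [start])]       # (actual + heuristic, actual, path)
        E = set()
        while Q:
            f, g, N = heapq.heappop(Q)     # best N by actual + heuristic
            if N[0] == goal:
                return g, N
            if N[0] in E:                  # already expanded once: skip
                continue
            E.add(N[0])
            for child, step in costs.get(N[0], {}).items():
                if child not in E:
                    heapq.heappush(Q, (g + step + h[child], g + step, [child] + N))
        return None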

EFFICIENCY
- We want search to run fast in terms of TIME (using V or E)
- But what about efficiency in SPACE?
- Say you have a tree where each node takes 1 byte and has 10 children
LEVEL 0 - 1 node    - 1 byte
LEVEL 1 - 10 nodes  - 10 bytes
LEVEL 2 - 100 nodes - 100 bytes (10^2)
You see where this is going...
if you do breadth first search, memory blows up fast
... your RAM is gonna be nibbled up pretty quick
SOOOOOOOOOOOOOOO
ITERATIVE DEEPENING DEPTH FIRST SEARCH
- DFS is more space efficient, whereas breadth first finds the shallowest goal first
**********************************************
*                                            *
*   |h(A) - h(B)| <= c(A,B)                  *   <-- rule for good heuristics
*                                            *
**********************************************

BINARY SEARCH TREE EFFICIENCY
- Much more efficient than the wide tree above (smaller branching factor)
- # of nodes:
level 0 - 1
      1 - 2
      2 - 2^2
      3 - 2^3
You get the point.. at level d the # of nodes is 2^d (a whole tree of depth d has 2^(d+1) - 1 nodes)
IDDFS
1. Init D = 1
2. Do DFS to a max depth of D
3. D = D + 1
4. Goto 2
Searches a subtree depth first, making the subtree deeper and deeper
BUT WAIT.. you ask.. you search the same tree over and over again, this is gross
Doesn't this do double the work of a breadth first search?
... Not really.. it's about the same time, but it saves a lot of space in memory:
since each level is ~10x bigger than the last, re-searching the shallow levels costs
little compared to the deepest level, and we avoid breadth first's worst-case memory
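
A minimal IDDFS sketch: a recursive depth-limited DFS restarted with a growing limit (graph format as in the earlier sketches; max_depth is just a safety cap I added):

    def depth_limited(graph, state, goal, limit, path):
        if state == goal:
            return path
        if limit == 0:                 # hit the depth cap for this round
            return None
        for child in graph.get(state, []):
            if child not in path:      # don't loop back along this path
                found = depth_limited(graph, child, goal, limit - 1, path + [child])
                if found:
                    return found
        return None

    def iddfs(graph, start, goal, max_depth=50):
        for D in range(1, max_depth + 1):     # init D = 1, then D = D + 1, ...
            found = depth_limited(graph, start, goal, D, [start])
            if found:
                return found
        return None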

So.. say we have a problem we don't know how to solve
- But we can form a solution space of possible solutions
- We search the solution space systematically to find the best solutions
- Worst case is checking every solution.. and if the space is huge... there's a problem
- And what if it's infinite.. we can't search that..
So to solve this... we have OPTIMIZATION / LOCAL SEARCH
- We don't look for a path to the goal; we don't care about the path, we just wanna get there
- N-Queens problem
  - 4x4 grid.. with 4 queens such that none of them are in check
- Say we have a solution space.. it's continuous (read: infinite)
- we calculate the cost of each solution, creating a 3D cost function (like a mountain / hill)
- you move around the x-y space, watching z (cost), trying to reduce the cost
- when you reach the least-cost point.. you found it
"HILL CLIMBING SEARCH"

HILL CLIMBING SEARCH
1. current = init state
2. find the neighbor with the lowest cost
3. if cost(current) <= cost(neighbor): DONE
4. else current = neighbor
5. goto 2
There are problems with this algorithm though:
- A flat surface with a decline somewhere (you won't move off the flat area, you just stop)
- Getting trapped in a local minimum
  - this is the big problem (see the sketch below)
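
A sketch of the loop above; neighbors() and cost() are assumed problem-specific callbacks:

    def hill_climb(init_state, neighbors, cost):
        current = init_state
        while True:
            best = min(neighbors(current), key=cost)   # neighbor w/ lowest cost
            if cost(current) <= cost(best):            # nobody beats us: done,
                return current                         # possibly only a LOCAL min
            current = best                             # else move downhill, repeat
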
SIMULATED ANNEALING
- The word comes from metallurgy.. heating and cooling to smooth out metal
1. Init T <- initial temperature
   x <- init state (location)
   v <- Energy(x) (cost)
2. Do while T > finalTemperature:
     Do N times:
       x' <- move(x)
       v' <- Energy(x')
       if v' < v then
         x <- x'
       else
         x <- x' w/ probability P < 1
     T = d*T, d < 1
P = e^(-(v'-v)/(kT))   <--- acceptance probability
e = natural e
As the temperature goes up, the probability gets closer to 1;
as the temperature goes down, it gets closer to 0
This alleviates getting caught on small bumps.. but you can still get caught in a local min.
- you bounce around more than hill climbing; if the local min is small enough you can bounce out
In reality.. you don't care too much about potentially bigger minima.. you can't
guarantee they exist.. so you shouldn't go looking too deep
you just have to mitigate the unimportant tiny bumps (which is what annealing does)
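
A sketch following the pseudocode above; move() and energy() are problem-specific, and k, T, d, N here are illustrative defaults:

    import math, random

    def anneal(x, move, energy, T=100.0, T_final=0.01, N=100, d=0.95, k=1.0):
        v = energy(x)
        while T > T_final:
            for _ in range(N):
                x2 = move(x)
                v2 = energy(x2)
                # always accept an improvement; accept a worse move with
                # probability P = e^(-(v'-v)/(kT)) < 1
                if v2 < v or random.random() < math.exp(-(v2 - v) / (k * T)):
                    x, v = x2, v2
            T = d * T      # cool down: bad moves get accepted less and less
        return x, v
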
BUT I WANNA FIND THE REALLY GOOD MINIMUMS
WELL THEN JUST USE A BUNCH OF SEARCHERS TO FIND THEM
instead of just using 1 agent... use a bunch of them.. one of them might find the best solution
That idea leads to.. the genetic algorithm

GENETIC ALGORITHM
- Tries to simulate evolution
- Mutation, natural selection, crossover, population, and fitness
- we need to simulate these things

ALGORITHM: - Can't give one specific algorithm, since each one is a design decision
- Create chromosomes
1. Init. population
- Select a few from the population, duplicate them, and apply mutation to them
- Cross over the mutations
- Apply a fitness function to kill the bad ones and keep (most of) the good ones

*ENCODING -> depends on the problem, but you need to encode the DNA
ex. PATH
- Cities: A B C D
- Paths: A->B->C->D  *encoded as* 00 01 10 11
         A->C->B->D  *encoded as* 00 10 01 11
- Binary encoding:
A: 00
B: 01
C: 10
D: 11
- The binary encodings are the population.. so we need the genetic operations:
- Reproducing = just copying the bits
- Mutation: all you gotta do is randomly modify one chromosome
  - 00 01 10 11 ---[randomly pick a bit.. say bit 4, counting from 0]---> 00 01 00 11
  (# of mutations varies from one program to another)
- Crossover: pick a random spot (say.. after pair 3) and swap the parts
  - 00 01 10 11 and 00 11 10 01 ----> 00 01 10 01 and 00 11 10 11
- Natural selection - probability that a chromosome survives to the next generation
  - Prob(Ci) = Q(Ci) / SUM_j Q(Cj)
  - Q is quality (i.e. the fitness measure)
  -> the fitness measure is problem specific
SIMPLE GENETIC ALGORITHM
1. Init a population of 10 individuals
2. Select 6 out of 10   *6 is the "mating" population*
3. Apply crossover in 3 pairs
4. Apply random mutation on some
5. Add the offspring to the population
6. Apply natural selection on the population, goto 1

Ex. FUNCTION OPTIMIZATION
F(x) = x^2 on [0,31]
1. Init a population of 4 using binary encoding
-> need 5 bits because there are 32 possible solutions.. ex. 01100 (for 12)

initial population | fitness of each (apply F(x)) | # selected to mate
01101              |  169 -> 14.4%                | 1
11000              |  576 -> 49.2%                | 2
01000              |   64 ->  5.5%                | 0
10011              |  361 -> 30.9%                | 1
                      ----
                      SUM = 1170

2. Determine the fitness of each individual (above.. using the Prob(Ci) equation)
3. Select 4 individuals using those probabilities
4. Create a mating group from those 4 and apply crossover
5. Apply random mutation
6. Get the fitness of each

     (4 crossover) (5 mutation) (6 fitness)
01101 --> 01100 --> 01100 --> 144
11000 --> 11001 --> 11001 --> 625

11000 --> 10000 --> 10010 --> 324
10011 --> 11011 --> 11011 --> 729
AVERAGE: 455.5 <-- better than the population average before (292.5)
7. Add the offspring back to the population:
01101
11000
01000
10011
01100
11001
10010
11011
8. Groom the population w/ the fitness function
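
A toy version of the worked example: maximizing F(x) = x^2 on [0, 31] with 5-bit chromosomes (population size, mutation rate, and generation count are illustrative):

    import random

    def fitness(chrom):                  # chromosome = 5-char bit string
        return int(chrom, 2) ** 2        # F(x) = x^2

    def select(pop):                     # Prob(Ci) = Q(Ci) / SUM_j Q(Cj)
        total = sum(fitness(c) for c in pop)
        if total == 0:                   # degenerate case: keep population as is
            return pop[:]
        weights = [fitness(c) / total for c in pop]
        return random.choices(pop, weights=weights, k=len(pop))

    def crossover(a, b):                 # pick a random spot, swap the tails
        spot = random.randint(1, 4)
        return a[:spot] + b[spot:], b[:spot] + a[spot:]

    def mutate(chrom, rate=0.1):         # flip each bit w/ a small probability
        return ''.join(b if random.random() > rate else '10'[int(b)] for b in chrom)

    pop = [format(random.randint(0, 31), '05b') for _ in range(4)]
    for generation in range(20):
        pop = select(pop)                             # natural selection
        a, b = crossover(pop[0], pop[1])              # mate one pair
        pop = [mutate(c) for c in (a, b, pop[2], pop[3])]
    print(max(pop, key=fitness))                      # tends toward 11111 (31)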

You can apply this to a hill climbing algorithm

Exercise: make a program that gets really good at producing an expression that evaluates to 43

CSP -> CONSTRAINT SATISFACTION PROBLEM
ex. Scheduling classes
Need to take 10 courses in the next 2 years, with the following constraints:
- need to satisfy pre-reqs
- semesters offered
- no overlap

        B
       /|\
      A | D
       \|
        C

We use graphs similar to normal search graphs.. but each vertex represents a VARIABLE
- Each variable has a certain domain of values: Da, Db, Dc, Dd
- Each edge represents a constraint
- Goal: find a value for each variable such that all constraints are satisfied
ex. the N-Queens problem
4x4 - each spot on the board is numbered 1 -> 16
Vi = position on the board (1 -> 16)
Di = {Y, N}
Constraints: V1-V2, V1-V3.. so on and so forth; it would be annoying to write these out by hand..
- so modeling the CSP like this would be awful..

Vi = column on the board (column 1 = V1, etc.)
Di = rows on the board (row 1, row 2, ...)
Constraint: 1 queen per column, per row, and per diagonal
- V1-V2, V1-V3, V1-V4
You model it this way to keep the constraint graph small

ex. GRAPH COLORING: PICNOTES5
- Given a small number of colors, assign a color to each region so that no 2 neighbors share a color
- How many colors do you need?
Vi - regions
Di - colors
Constraints:
- A-B A-E B-E B-C B-D E-D C-D
CSP is really good for assembly lines, airports, compilers allocating registers

BOOLEAN SATISFIABILITY PROBLEM (SAT)

Given a conjunctive normal form w/ 3 variables per clause:
(A or B or C) and (A or C or D) and ....
\--clause--/     \--clause--/
Can you assign true/false values that make the whole statement true?
NP-complete (no known efficient algorithm for this)

SOLVING CSP
- Arc consistency: Vi ---> Vj
- Arc(Vi, Vj) is consistent iff for all x in Di there exists a y in Dj that satisfies the constraint
- ex. graph coloring:
V1 --> V2 --> V3
R/G    R/G    G
^ the arc V2 -> V3 is inconsistent
V1 --> V2 --> V3
R/G    R/G    B
This is consistent
CONSTRAINT PROPAGATION
Vi ------> Vj
Remove values from Di that do not have a consistent partner in Dj
* Given a constraint graph w/ N arcs and M values per node:
- How many arc-consistency checks do I need to make every arc consistent?
- N*M^3 ... this is literally unusable (at large scale.. anyway)

* ex.
         V1 (R,G,B)
        /  \
 (R,G) V2 -- V3 (G)

   ARC   | REMOVE
---------+-----------
V1 - V2  | no
V1 - V3  | G from D1
V2 - V3  | G from D2
---------+-----------
V1 - V2  | R from D1
V1 - V3  | no
V2 - V3  | no
---------+-----------
V1 - V2  | no
V1 - V3  | no
V2 - V3  | no

we're good now..
V1 = B, V2 = R, V3 = G
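
A sketch of the propagation loop for the example above (AC-1 style: sweep all arcs until nothing changes; the not-equal coloring constraint is hard-coded):

    def propagate(domains, arcs):
        changed = True
        while changed:               # keep sweeping until nothing is removed
            changed = False
            for i, j in arcs:
                for x in list(domains[i]):
                    if not any(y != x for y in domains[j]):
                        domains[i].remove(x)   # x has no consistent partner in Dj
                        changed = True
        return domains

    domains = {'V1': {'R', 'G', 'B'}, 'V2': {'R', 'G'}, 'V3': {'G'}}
    arcs = [('V1', 'V2'), ('V2', 'V1'), ('V1', 'V3'),
            ('V3', 'V1'), ('V2', 'V3'), ('V3', 'V2')]
    print(propagate(domains, arcs))  # -> V1 = {'B'}, V2 = {'R'}, V3 = {'G'}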

* ex2.
        V1 (B,G)
       /  \
(R,G) V2 -- V3 (R,G)

Multiple possible solutions... so we need to find a solution... with a SEARCH
- depth first search

        V1 (R,G,B)
       /  \
(R,G) V2 -- V3 (R,G)

Search tree... level 1 = V1 assignment, level 2 = V2 assignment.. etc. etc., searched depth first
(btw, adjacent variables can't share a value: V1 can't be G if V2 is going to be G)
The only way to increase search efficiency is to reduce the width of the tree
How do you do that?
Apply constraint propagation!

            *
         R / \
          *   ...
       R / \ G
        X    *
          R / \ G
           X   X
(X = don't expand this node)
Basically, if a node is going to fail, don't expand it, making the tree much smaller:
BACKTRACKING w/ FORWARD CHECKING

EX. PICNOTES6
1 = R/Y
2 = G/B/Y
3 = R/B/Y
1. Pick the most CONSTRAINED VARIABLE first (so you would pick 1)   <- "reduce the branching factor"
2. Then pick the LEAST CONSTRAINING VALUE, e.g. yellow, since we wanna leave as many options as possible for the other squares
this is called...
BACKTRACKING + FORWARD CHECKING w/ DYNAMIC ORDERING
_______________________________________ EXAM 1 COVERS EVERYTHING ABOVE THIS POINT _______________________________________
GAME PLAYING: a WELL-DEFINED game.. like chess
- 2-person game, well-defined rules
- Chess milestones:
- 1950s: Shannon -> father of information theory *very important guy*
  - laid out the guiding principles of modern game playing
- 1953: Turing: wrote a chess-playing program
- 1996: IBM DEEP BLUE: Deep Blue vs. Kasparov.. Deep Blue lost (Philly)
- 1997: 2nd match (NY)... Deep Blue won
- 1998: Kasparov wanted a 3rd match.. IBM didn't wanna have a rematch
  Kasparov was pissed, and stayed pissed
- IBM WATSON - literally won Jeopardy!
- Game playing:
  - an application of search
  - state - board configuration
  - action - legal moves
  - goal - winning configuration
  - heuristic - scoring function (estimates the chance of winning)
    - c1*#Queens + c2*#Kings.. etc.
  - the path is not important
- Search tree: tic tac toe
[ ][ ][ ]
[ ][ ][ ]   init state
[ ][ ][ ]
    |
    |   next level is all possible "my" moves
   [ ]
   / \
        the level after that is all possible opponent moves
The goal: search this tree to pick the move that makes it most likely to win

MINIMAX ALGORITHM (Shannon, 1949):
- Look ahead in the game tree to a depth of N (N-ply)
- Evaluate the score (heuristic) of each leaf node
ex. 2-ply:
               []              <-- current state (my turn)
            /   |   \
        -5[]   1[]   7[]       <-- my possible moves (pick one to MAXIMIZE my score)
        /  \   /  \   /  \
      []   [] []  [] []   []   <-- adversary's moves (MINIMIZING my score)
      -3   -5  2   1  7    8
The opponent will try to minimize my score, so each of my moves is worth the MINIMUM of
its children, and I pick the move that maximizes that minimum (here: 7)
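
A minimal minimax sketch over an explicit tree: a leaf is a number (its heuristic score), an internal node is a list of children:

    def minimax(node, my_turn):
        if isinstance(node, (int, float)):      # leaf: return its score
            return node
        scores = [minimax(child, not my_turn) for child in node]
        return max(scores) if my_turn else min(scores)

    # the 2-ply example above: the opponent minimizes each pair, I take the max
    tree = [[-3, -5], [2, 1], [7, 8]]
    print(minimax(tree, True))                  # -> 7 (take the third move)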

The silly thing is... it gets inefficient the more legal moves you can make
If the branching factor is, like, 20.. at a depth of 10.. you have 20^10 leaves
and that's hopeless
... so you speed it up by making the tree smaller.. but you can't change the game, can you..
so we need.. the ALPHA-BETA PRUNING algorithm

ALPHA-BETA PRUNING -> optimization of minimax that reduces the # of leaves to evaluate
ex...
         []2
        /   \
     []2     []1
     / \     / \
   []   [] []   []
   2    7  1    7
After evaluating the first branch you're guaranteed at least 2..
then in the second branch the first leaf evaluates to 1.. so you just don't evaluate its sibling:
the opponent can already hold you to 1 there, and you can get at least 2 elsewhere
Actual algorithm:
[] alpha = -infinity ---> updated to 2, beta = infinity  --->  (2, infinity)
/
[] (-infinity, infinity) ---> beta updated to 2 --->  (-infinity, 2)
/
[]2

Basically, when you find a beta value at or below your alpha value, you stop searching that branch
- you have to evaluate at LEAST the first branch fully

Normal minimax = O(b^d) leaves ... minimax w/ alpha-beta pruning = O(b^(d/2)) with good move ordering
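
The same tree walk with alpha-beta cutoffs; on the example above, the second 7 leaf is never evaluated:

    def alphabeta(node, my_turn, alpha=float('-inf'), beta=float('inf')):
        if isinstance(node, (int, float)):
            return node
        if my_turn:
            for child in node:
                alpha = max(alpha, alphabeta(child, False, alpha, beta))
                if beta <= alpha:
                    break              # opponent will never let us get here
            return alpha
        else:
            for child in node:
                beta = min(beta, alphabeta(child, True, alpha, beta))
                if beta <= alpha:
                    break              # we already have a better branch elsewhere
            return beta

    tree = [[2, 7], [1, 7]]            # the example above: after seeing the 1,
    print(alphabeta(tree, True))       # the last 7 is pruned; result is 2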