CheatGYP.bas
krusader74, Jan 2nd, 2023 (edited)

Rem CheatGYP version 1.0.
Rem The simplest AI example imaginable using ML.
Rem https://pastebin.com/AuDdj1RQ

Rem Developed and tested in QBjs:
Rem   https://qbjs.org/

Rem We want our system to compute answers to the formula
Rem      y = m * x
Rem where x is the input,
Rem       m is a multiplier that we need to learn from the data,
Rem   and y is the output.

Rem Step 1: Train the system

Rem Our training data is just the three times table we
Rem typically learn in second grade.
Rem Each line of data is: sample input, sample output.

TrainingData:
Data 1,3
Data 2,6
Data 3,9
Data 4,12
Data 5,15
Data 6,18
Data 7,21
Data 8,24
Data 9,27
Data 10,30
Data 11,33
Data 12,36

Screen 0
Cls
Randomize 0 ' fixed seed so runs are reproducible
m = Rnd(10) ' start with a random weight in [0, 1); a positive argument to Rnd does not scale the result
gamma = 1 ' learning rate
Print "i", "m" ' output a table showing iteration, weight
Print 0, m

Restore TrainingData
For i = 1 To 12 ' loop through the training data
    Read x, y
    erf = (y - m * x) ^ 2 ' error function: \DeclareMathOperator\erf{erf}
    DerfDm = -2 * x * (y - m * x) ' gradient: \frac{d \erf}{d m}
    If DerfDm <> 0 Then ' guard against division by zero
        m = m - gamma * erf / DerfDm ' update weight: a Newton-style step, not the plain gradient step
    End If
    Print i, m
Next

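Rem Why the loop converges so quickly: with \erf(m) = (y - m x)^2
Rem and \frac{d \erf}{d m} = -2 x (y - m x), the update above is
Rem      m \leftarrow m - gamma \frac{\erf}{d \erf / d m}
Rem        = m + gamma \frac{y - m x}{2 x},
Rem which for gamma = 1 reduces to
Rem      m \leftarrow \frac{m + y / x}{2}.
Rem That is Newton's method applied to \erf: it averages m with the
Rem exact ratio y / x, halving the error on every sample. Plain
Rem gradient descent would instead step
Rem      m \leftarrow m - gamma \frac{d \erf}{d m}.
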
Rem Step 2: Now use the "AI" to answer multiplication questions.

Input "Enter a number or q to quit"; i$
While (i$ <> "q") And (i$ <> "Q")
    x = Val(i$)
    Print "The answer is ", m * x
    Input "Enter a number or q to quit"; i$
Wend

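Rem A sample session (m lands so close to 3 after training that the
Rem answers are effectively exact; exact digits and spacing may vary):
Rem      Enter a number or q to quit? 7
Rem      The answer is            21
Rem      Enter a number or q to quit? q
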
Rem Things to consider:
Rem
Rem Gradient descent requires a lot of examples to learn.
Rem Gradient descent converges slowly, if at all.
Rem Gradient descent only produces approximate results.
Rem We can fix these issues with various tricks and cheats.
Rem
Rem Gradients get hard to compute when m is not just a scalar
Rem but a matrix of weights or the composition of lots
Rem of functions and transformations!
Rem (A small two-weight sketch follows these remarks.)
Rem
Rem This example only learns the multiplier;
Rem it doesn't actually learn multiplication.
Rem In fact, it exploits the fact that multiplication
Rem is built right into the programming language and CPU.
Rem
Rem ML learns very differently from how schoolchildren do:
Rem In grammar school, we just memorize these tables.
Rem We later learn an algorithm to multiply bigger numbers.
Rem We also learn simplifying rules like:
Rem   0 * n = 0 <-- multiplication by zero
Rem   1 * n = n <-- multiplicative identity
Rem   m * n = n * m <-- commutativity
Rem   (a + b) * n = a * n + b * n <-- distributivity
Rem   \bar{n} * 10 = \bar{n0} <-- multiplication by 10 (append a zero)
Rem   \bar{n} * 11 = \bar{nn} <-- multiplication by 11 (for a single digit n)
Rem   digital root of 9 * n = 9 <-- for any positive integer n
Rem We could also just count on our fingers when we got stuck!

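Rem What follows is a minimal sketch of the two-weight case mentioned
Rem above: plain gradient descent fitting y = m1 * x1 + m2 * x2.
Rem The data, learning rate, and epoch count are made up for
Rem illustration; this demo runs after you quit the loop above.

TwoWeightData:
Data 1,1,5
Data 2,1,8
Data 1,2,7
Data 3,2,13

m1 = 0: m2 = 0 ' start both weights at zero
eta = 0.05 ' small learning rate; bigger steps can diverge
For epoch = 1 To 200 ' four samples are few, so pass over them many times
    Restore TwoWeightData
    For j = 1 To 4
        Read x1, x2, y
        r = y - (m1 * x1 + m2 * x2) ' residual for this sample
        m1 = m1 + eta * 2 * x1 * r ' m1 <- m1 - eta * \frac{d \erf}{d m1}
        m2 = m2 + eta * 2 * x2 * r ' m2 <- m2 - eta * \frac{d \erf}{d m2}
    Next
Next
Print "Two-weight demo: m1 ="; m1; " m2 ="; m2; " (true values 3 and 2)"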