# CGSC Test 2

# Chapter 3.3
_Extending Computational Modeling to the Brain_
    - A new set of algorithms: Rumelhart, McClelland, and the PDP Research Group, `Parallel Distributed Processing: Explorations in the Microstructure of Cognition` (1986)
    - Pattern recognition in neural networks: Gorman and Sejnowski's mine/rock detector

Rumelhart, McClelland, and the PDP Research Group pursued mathematical tools for modeling cognitive processes: *Artificial Neural Networks*

Features of neural networks:
    - `Parallel Processing` - Information spreads through the network as activation of the units. Processing is parallel because the flow of information depends on all the units in a layer at once, even though the units within a layer are not connected to each other.
    - `Connectivity` - Each unit has connections from the previous layer and connections forward to the next layer. The weights of these connections are modifiable through learning.
    - `Homogeneity` - There is no intrinsic difference between one unit and another; the differences lie in the connections between the units.
    - `Loss` - The network trains/learns from examples by trying to reduce the mistakes it makes.
    - Training is done via `backpropagation`.

How to detect whether a sonar echo comes from a mine or a rock:
    - Encode the external stimulus as a pattern of activation values.
    - Feed training examples forward through the network, and train with backpropagation until you get optimal answers.
Pattern
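
A minimal feed-forward sketch in this spirit (not the actual Gorman and Sejnowski network: the layer sizes, random weights, and the fake echo are illustrative, and in practice the weights would be learned by backpropagation rather than drawn at random):

```python
import numpy as np

# A sonar echo encoded as a pattern of activation values is fed forward,
# layer by layer, and the two output units are read as "mine" vs. "rock".
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_hidden = rng.normal(0, 0.5, (13, 8))   # input layer  -> hidden layer weights (sizes made up)
W_output = rng.normal(0, 0.5, (8, 2))    # hidden layer -> output layer weights

def feed_forward(echo):
    hidden = sigmoid(echo @ W_hidden)    # all hidden units update in parallel
    return sigmoid(hidden @ W_output)    # activation spreads forward to the output units

echo = rng.random(13)                    # stand-in for an encoded sonar echo
mine_score, rock_score = feed_forward(echo)
print("mine" if mine_score > rock_score else "rock")
```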


-------------------------------------------------------------------------------
# Chapter 3.4
_Mapping the stages of lexical processing_
    - Functional neuroimaging
    - Petersen et al., "Positron emission tomographic studies of the cortical anatomy of single-word processing"

**Functional neuroimaging**
- A tool that allows brain activity to be studied noninvasively
- Topic: How is information about individual words processed?
- Tool: Positron Emission Tomography (PET). Studies the function of brain areas by measuring blood flow in the brain; an injection of radioactive oxygen-15 provides a tracer that can be tracked for about a minute.

**Petersen et al.**
- Interested in understanding how linguistic information is processed in the human brain
- Does silently reading a word to oneself involve processing information about how the word sounds? Does simply repeating a word involve recruiting information about what the word means?
- Two leading information-processing models of single-word processing (lexical access) answer these questions differently:
`Neurology model` (derived from observing brain-damaged patients) holds that the processing of individual words in normal subjects follows a single, largely invariant path:
    - Info travels through a fixed series of information-processing 'stations' in a fixed order.
    - Sensory areas: auditory information about a word is processed in a separate brain region from information about the word's visual appearance.
    - Visual info about a word's appearance must be phonologically recoded before it undergoes further processing.
    - To access the semantic information of a word, the brain must first work out what the word sounds like.
    - Similarly, reading a word and then pronouncing it recruits info about what the word means.
`Cognitive model` (derived mainly from experiments on normal subjects rather than brain-damaged patients) denies that the processing of individual words follows a single serial path:
    - Lexical information processing is parallel. The brain can perform multiple lexical processes at once. Several channels feed into semantic processing. There is no single route to phonological output.

Petersen et al. designed an experiment with a series of hierarchical conditions, each processing the information more abstractly than its predecessor:

1. Ask the subject to focus on a small point. (Sets a baseline for visually attending to a non-word.)
2. Measure brain activity as participants are passively presented with words flashed on the screen at 40 words/min.
Img(2) - Img(1) = filters out the brain activation responsible for sensory processing in general, leaving the activation specific to word perception.
3. Measure brain activation while the subject says the word on the screen aloud.
Img(3) - Img(2) = the brain areas involved in speech production.
4. Present the subject with nouns and ask them to utter an associated verb.
Img(4) - Img(3) = the brain areas involved in semantic processing.
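
A toy sketch of the subtraction logic only (the arrays stand in for the four PET activation images; their shapes and values are made up):

```python
import numpy as np

# Toy stand-ins for the four PET activation images (values are fabricated).
rng = np.random.default_rng(1)
img1 = rng.random((4, 4))   # 1. fixation point (baseline)
img2 = rng.random((4, 4))   # 2. passive presentation of words
img3 = rng.random((4, 4))   # 3. reading the word aloud
img4 = rng.random((4, 4))   # 4. generating an associated verb

word_perception     = img2 - img1   # removes general sensory activation, leaves word perception
speech_production   = img3 - img2   # removes perception, leaves articulation / motor output
semantic_processing = img4 - img3   # removes perception and speech, leaves semantic association
print(semantic_processing.shape)
```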

Results:
    - Repeating visually presented words resulted in no activation of the regions associated with auditory processing.
    - This suggests a direct information pathway from the visual areas associated with visual word processing to the distributed network of areas responsible for articulatory coding and motor programming, plus a parallel and equally direct pathway from the areas associated with auditory word processing.
    - The areas associated with semantic processing were not involved in any of the other tasks, which suggests that those direct pathways do not proceed via the semantic areas.


-------------------------------------------------------------------------------
# Chapter 8.2
_Single-layer networks and Boolean functions_
    - Learning in single-layer networks:
        The perceptron convergence rule
    - Linear separability and the limits of perceptron convergence

*Mapping functions*: Each item in the domain is mapped onto exactly one item in the range.
*Binary Boolean functions*: Take truth values as input and give truth values as output (see the sketch after these definitions).
*Truth tables*: List a Boolean function's output for every combination of input truth values.
*Hebbian learning*: Neurons that fire together, wire together. (Unsupervised learning)
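
A small Python sketch of the first three definitions (the choice of AND and XOR is just for illustration):

```python
from itertools import product

# Two binary Boolean functions: each maps a pair of truth values to exactly
# one truth value (a mapping from {True, False}^2 to {True, False}).
def AND(p, q):
    return p and q

def XOR(p, q):   # exclusive or; it reappears below as the non-linearly-separable case
    return p != q

# Printing every input combination with its output gives the truth table.
for p, q in product([False, True], repeat=2):
    print(f"{p!s:5} {q!s:5} | AND={AND(p, q)!s:5} XOR={XOR(p, q)!s:5}")
```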
Frank Rosenblatt created single-layer networks that he called *perceptrons*.
`The Perceptron Convergence Rule`:
    - Like Hebbian learning, but supervised: it requires feedback about the correct solution to the problem it is solving.
    Setup:
        - Single-layer network
        - Binary threshold activation function, output 0 or 1
        - Learns by reducing error (δ)
        - δ = intended output - actual output
        - ε = learning rate
        - Δ indicates an adjustment
        - T is the threshold
        - I_i = the i-th input
        - W_i = the weight attached to the i-th input
    ΔT = -ε * δ
    ΔW_i = ε * δ * I_i
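
A minimal sketch of the convergence rule above, trained on Boolean AND; the learning rate, initial values, and number of passes are arbitrary choices rather than anything from the text:

```python
import random

# Perceptron convergence rule on Boolean AND (a linearly separable function).
# Binary threshold unit: output 1 if the weighted sum of inputs exceeds T, else 0.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
W = [random.uniform(-1, 1) for _ in range(2)]   # W_i: weight on the i-th input
T = random.uniform(-1, 1)                       # T: threshold
epsilon = 0.25                                  # ε: learning rate

for _ in range(100):                            # repeated passes over the examples
    for I, intended in examples:
        actual = 1 if sum(w * i for w, i in zip(W, I)) > T else 0
        delta = intended - actual               # δ = intended output - actual output
        T += -epsilon * delta                   # ΔT   = -ε * δ
        W = [w + epsilon * delta * i for w, i in zip(W, I)]   # ΔW_i = ε * δ * I_i

print(W, T)   # a weight/threshold setting that computes AND
```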
*Linear Separability* - Problem: The class of Boolean functions that can be computed by a single-unit network is precisely the class of linearly separable functions. XOR, for example, is not linearly separable, so no single-unit network can compute it.
*Multilayer Networks* - Solution: hidden layers. Multilayer networks can compute any computable function.

-------------------------------------------------------------------------------
# Chapter 8.3
_Multilayer networks_
    - The backpropagation algorithm
    - How biologically plausible are neural networks?

1. Total input to unit i = the sum over j = 1 to N of w_ij * a_j
2. Transform the total input into an activity level a_i with the activation function
3. Transmit the activity level to the units in the next layer
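
A small sketch of those three steps for a single unit i (the weights, activity levels, and the choice of a sigmoid activation function are made up for illustration):

```python
import numpy as np

# Step-by-step forward pass for one unit i, with made-up numbers.
w_i = np.array([0.2, -0.5, 0.8])   # w_ij: weights from the N units j in the previous layer
a_j = np.array([1.0, 0.3, 0.7])    # a_j: activity levels of those units

total_input = np.dot(w_i, a_j)               # 1. total input = sum over j of w_ij * a_j
a_i = 1.0 / (1.0 + np.exp(-total_input))     # 2. activation function (a sigmoid here)
print(a_i)                                   # 3. a_i is what gets transmitted to the next layer
```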

Paul Werbos's `Backpropagation Algorithm`
    Error is propagated backwards from the output units to the hidden units.
    With supervised learning, we can calculate the degree of error in a given output unit.
    Each hidden unit connected to an output unit bears a degree of 'responsibility' for the error of that output unit.
    Because the error level of a hidden unit is a function of the extent to which it contributes to the error of the output units, it becomes possible to tune the weights between that unit and the output to decrease the error.
    Rinse and repeat backwards through the network so that the error propagates back toward the input layer.
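
A compact sketch of the idea rather than Werbos's original formulation: a network with one hidden layer trained by backpropagation on XOR, the function that defeated the single-layer perceptron in 8.2. The layer sizes, learning rate, and epoch count are arbitrary choices:

```python
import numpy as np

# Backpropagation sketch: the error is computed at the output units, each hidden
# unit is assigned its share of 'responsibility' for that error, and the weights
# are nudged to reduce it. Trained here on XOR.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))   # input  -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # supervised learning: we know the intended output, so we can compute the error
    out_error = (y - out) * out * (1 - out)

    # each hidden unit's share of responsibility for the output error, propagated backwards
    hidden_error = (out_error @ W2.T) * h * (1 - h)

    # tune the weights in proportion to the error they contributed to
    W2 += lr * h.T @ out_error
    b2 += lr * out_error.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ hidden_error
    b1 += lr * hidden_error.sum(axis=0, keepdims=True)

print(np.round(out).ravel())   # typically [0. 1. 1. 0.] once training has converged
```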

|                  Biological                  |        Neural Networks                       |
|----------------------------------------------|----------------------------------------------|
| Many different kinds of neurons              | Homogeneous units (no intrinsic differences) |
| Connected to a roughly constant # of neurons | Very parallel                                |
| MANY more neurons (cortical column of 200k)  | Rarely more than 5k                          |
| Less training, we'd like to think            | LOTS of training                             |
| Unsupervised                                 | Supervised                                   |
| Local algorithms                             | Backpropagation                              |

*Competitive networks* - Unsupervised; each output node competes with and inhibits the other outputs, so that each output ends up classifying a set of similar inputs.
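
A small winner-take-all sketch of the idea (the two-dimensional inputs, their two groupings, and the learning rate are invented for illustration):

```python
import numpy as np

# Competitive (winner-take-all) learning: the output unit whose weight vector
# is closest to the input wins, suppresses the others, and moves toward that
# input, so each output unit ends up classifying one group of similar inputs.
rng = np.random.default_rng(0)

inputs = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (20, 2)),   # one group of similar inputs
    rng.normal([1.0, 1.0], 0.1, (20, 2)),   # another group
])
rng.shuffle(inputs)

W = rng.random((2, 2))   # one weight vector per output unit
lr = 0.1

for x in inputs:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # the competition
    W[winner] += lr * (x - W[winner])                    # only the winner learns (unsupervised)

print(np.round(W, 2))   # each row should end up near one of the two group centres
```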



-------------------------------------------------------------------------------
# Chapter 9.1
_Language and rules: The challenge for information-processing models_
    - What is it to understand a language?
    - Language learning and the language of thought: Fodor's argument

`The physical symbol system hypothesis (particularly in its language of thought incarnation) is the only way of making sense of the complex phenomenon of linguistic comprehension and language learning. In the next section we will test the power of those arguments by looking at connectionist models of specific aspects of language learning.`


The hypothesis that understanding a language is mastery of linguistic rules can be understood in different ways.
    - One extreme: Understanding a language is fundamentally a matter of mastering rules.
    - Other extreme: Users are capable of using words in accordance with linguistic rules because they explicitly represent those rules.

**Fodor**
    In order to learn a language, you have to come up with truth rules for how the words relate to things. We speak by applying truth rules to the words in the sentences we construct. He argues that these rules can't be 'written' in the language being learned, so there must be a language of thought.



-------------------------------------------------------------------------------
# Chapter 9.2
_Language learning in neural networks_
    - The challenge of tense learning
    - Neural network models of tense learning

Neural networks can model complex linguistic skills (such as predicting the next letter or the next word quite well) without having any explicit linguistic rules encoded in them.
We know a good deal about how children learn languages; most follow a similar trajectory. We have seen that learning neural networks reproduce some of the same errors and behaviors.
Tenses in language are hard to learn.
Children go through three distinct stages in learning the past tense.