// I'm just going to make an example that demonstrates my point. We could add quite a number of different properties here for a more robust system, like a list of memorized previous inputs
// that weigh on the output of this neuron and so on, but again, the goal here is a proof of concept. We could also list per-connection properties in this list so each connection between
// neurons can have its own individual characteristics with this model. A sketch of one possible extended record follows.

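// A minimal sketch (an assumption for illustration, not part of the model used below) of one way
// such an extended neuron record could look, with made-up memory slots for previous inputs:
// [bias, strength, mem0, mem1, mem2, connection count, connection indexes...]
var extendedNeuron = [0.5, 0.1, 0.0, 0.0, 0.0, 3, 0, 1, 2];
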
// In this model each layer is calculated into a single output array, which isn't realistic compared to the biological model, but I think it would still work just fine, since with typical NN processing
// cycles we just care what the resulting output is once all input is propagated through the network, not what happens in between, and I think this accomplishes that.

// Keep in mind the first layer's connection indexes will not refer to a particular neuron's output in a previous layer, since there is none; they are indexes into the input array instead.

// This represents the processing of a 3 -> 5 -> 2 feedforward network.

// Again, to be completely clear, the connection indexes reference the outputs of the neurons that connect to the current neuron, not from it. No backwards hardware dilemma.

var input = [0.5, 0.7, 0.2];
var brain = [
  [ // Layer 1: (input)
    [0.5, 0.1, 1, 0], // Neuron: bias [0], strength [1], connection count [2] (preprocessed for efficiency); all values after that are connection indexes into the previous layer (here, the input array).
    [0.3, 0.8, 1, 1],
    [0.7, 0.7, 1, 2]
  ],
  [ // Layer 2: (hidden)
    [0.5, 0.1, 3, 0, 1, 2],
    [0.3, 0.8, 1, 1],
    [0.7, 0.2, 2, 0, 2],
    [0.6, 0.3, 2, 1, 2],
    [0.1, 0.9, 1, 0]
  ],
  [ // Layer 3: (output)
    [0.5, 0.1, 3, 0, 1, 3],
    [0.3, 0.8, 3, 1, 2, 4]
  ]
];

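// For reference, a plain-JavaScript CPU version of the per-neuron rule used by the kernel below
// (a sketch added for clarity; propagateLayerCPU is a made-up helper name, not part of the GPU model):
function propagateLayerCPU(previousLayerOutput, layer) {
  return layer.map(function(neuron) {
    var bias = neuron[0];
    var strength = neuron[1];
    var connectionCount = neuron[2]; // index 2 holds the connection count
    var output = 0;
    for (var i = 0; i < connectionCount; i++) {
      // neuron[3 + i] is the index of a connected output in the previous layer
      output = (previousLayerOutput[neuron[3 + i]] + (bias * strength)) / (1 + strength);
    }
    return output;
  });
}
// e.g. propagateLayerCPU(input, brain[0]) ≈ [0.5, 0.522, 0.406]
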
var propagate = gpu.createKernel(function(previousLayerOutput, thisLayer) { // we would realistically make a kernel for each different layer size
  var output = 0;
  for (let i = 0; i < thisLayer[this.thread.x][2]; i++) { // [2] holds this neuron's connection count
    // During this loop we could add to memory and other properties in a more robust neuron model so all the interactions matter, but in this ultra-simple model they get overwritten.
    // Nevertheless, I'm looping through to show the interactions can be accounted for, and I think this loop could itself be replaced by a kernel to support even bigger networks.
    output = (previousLayerOutput[thisLayer[this.thread.x][3 + i]] + (thisLayer[this.thread.x][0] * thisLayer[this.thread.x][1])) / (1 + thisLayer[this.thread.x][1]);
    // So you don't need to dissect it: that just returns a weighted output based on the results from the previous layer and which of the neurons in the previous layer this neuron
    // should receive input from.
  }
  return output;
}).setOutput([layerSize]); // layerSize = the target layer's neuron count; see the end for generating one kernel per layer size

// Now, obviously we would do this dynamically, but for the sake of laziness:

var layer1Output = propagate(input, brain[0]); // input
var layer2Output = propagate(layer1Output, brain[1]); // hidden
var layer3Output = propagate(layer2Output, brain[2]); // output

// Now, obviously we need to generate a propagate kernel for each layer's correct length. For the sake of laziness we skipped that above; a possible sketch follows.
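
// A minimal sketch of doing that dynamically, assuming the same gpu.js-style API as above
// (makePropagate is a made-up helper name, and real gpu.js would also need each layer's neuron
// rows padded to equal length, since it expects rectangular 2D inputs):
function makePropagate(layerSize) {
  return gpu.createKernel(function(previousLayerOutput, thisLayer) {
    var output = 0;
    for (let i = 0; i < thisLayer[this.thread.x][2]; i++) {
      output = (previousLayerOutput[thisLayer[this.thread.x][3 + i]] + (thisLayer[this.thread.x][0] * thisLayer[this.thread.x][1])) / (1 + thisLayer[this.thread.x][1]);
    }
    return output;
  }).setOutput([layerSize]);
}

var output = input;
for (var l = 0; l < brain.length; l++) {
  output = makePropagate(brain[l].length)(output, brain[l]); // one kernel sized per layer
}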