// I'm just going to make an example that demonstrates my point. We could add quite a number of different properties here for a more robust system, like a list of memorized
// previous inputs that weigh on the output of this neuron and so on, but again, the goal here is a proof of concept. We could also list different connection properties so each
// connection between neurons can have its own individual characteristics with this model.
// In this model each layer is calculated into a single output array, which isn't realistic to the biological model, but I think it would still work just fine: with typical NN
// processing cycles we only care what the resulting output is once all input is propagated through the network, not what happens in between, and I think this accomplishes that.
// Keep in mind the first layer's connection indexes will not refer to a particular neuron output in the previous layer, since there is none, but rather to an index into the input array.
// This represents the processing of a 3 -> 5 -> 2 feed-forward network.
// Again, to be completely clear, the connection indexes reference the output of neurons that connect to the current neuron, not from it. No backwards hardware dilemma.
var { GPU } = require('gpu.js'); // gpu.js v2; in the browser, GPU is available globally from the script bundle
var gpu = new GPU();

var input = [0.5, 0.7, 0.2];
var brain = [
    [ // Layer 1: (input)
        [0.5, 0.1, 1, 0], // Neuron: bias at [0], strength at [1], connection count at [2] (from the previous layer, preprocessed for efficiency); every value after that is a connection index.
        [0.3, 0.8, 1, 1],
        [0.7, 0.7, 1, 2]
    ],
    [ // Layer 2: (hidden)
        [0.5, 0.1, 3, 0, 1, 2],
        [0.3, 0.8, 1, 1],
        [0.7, 0.2, 2, 0, 2],
        [0.6, 0.3, 2, 1, 2],
        [0.1, 0.9, 1, 0]
    ],
    [ // Layer 3: (output)
        [0.5, 0.1, 3, 0, 1, 3],
        [0.3, 0.8, 3, 1, 2, 4]
    ]
];
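// For reference, here is the same per-neuron rule in plain JS, which can be handy for sanity-checking the kernel's output on small inputs.
// This is only an illustrative sketch based on the encoding described above; the helper name propagateCPU is made up.
function propagateCPU(previousLayerOutput, layer) {
    return layer.map(function (neuron) {
        var bias = neuron[0];
        var strength = neuron[1];
        var connectionCount = neuron[2]; // connection indexes start at position 3
        var output = 0;
        for (var i = 0; i < connectionCount; i++) {
            // Same weighting as the kernel below: combine the connected neuron's previous output with this neuron's bias and strength.
            output = (previousLayerOutput[neuron[3 + i]] + bias * strength) / (1 + strength);
        }
        return output;
    });
}
// e.g. propagateCPU(input, brain[0]) should match the first layer's kernel output.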
var propagate = gpu.createKernel(function (previousLayerOutput, thisLayer) { // we would realistically make a kernel for each different layer size (see the sketch at the end)
    var output = 0;
    for (let i = 0; i < thisLayer[this.thread.x][2]; i++) { // [2] holds this neuron's connection count; connection indexes start at [3]
        // During this loop we could update memory and other properties in a more robust neuron model so all the interactions matter, but in this ultra simple model they get overwritten.
        // Nevertheless I'm looping through to show the interactions can be accounted for, and perhaps this loop could itself be replaced by a kernel to support even bigger networks.
        output = (previousLayerOutput[thisLayer[this.thread.x][3 + i]] + (thisLayer[this.thread.x][0] * thisLayer[this.thread.x][1])) / (1 + thisLayer[this.thread.x][1]);
        // So you don't need to dissect it: that is just a weighted output based on the result from the previous layer and which neurons in the previous layer this neuron
        // should receive input from.
    }
    return output;
}).setOutput([5]); // lazily sized to the largest layer; realistically each layer gets its own kernel sized to its neuron count
// Now obviously we would do this dynamically, but for the sake of laziness:
var layer1Output = propagate(input, brain[0]);        // input
var layer2Output = propagate(layer1Output, brain[1]); // hidden
var layer3Output = propagate(layer2Output, brain[2]); // output
// Now, obviously we need to generate a propagate kernel for each layer's correct length, but for the sake of laziness, assume we did. A rough sketch of that follows below.
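// Here is a rough sketch of what that per-layer kernel generation could look like with gpu.js. This is only illustrative: the helper name makePropagateKernel is made up,
// and gpu.js will likely want the 2D layer argument to have rows of equal length, so the neuron arrays would probably need padding to the longest row in practice.
function makePropagateKernel(gpu, layerSize) {
    return gpu.createKernel(function (previousLayerOutput, thisLayer) {
        var output = 0;
        for (let i = 0; i < thisLayer[this.thread.x][2]; i++) {
            output = (previousLayerOutput[thisLayer[this.thread.x][3 + i]] + (thisLayer[this.thread.x][0] * thisLayer[this.thread.x][1])) / (1 + thisLayer[this.thread.x][1]);
        }
        return output;
    }).setOutput([layerSize]); // one kernel per layer, sized to that layer's neuron count
}

// Usage: build one kernel per layer up front, then feed each layer's output into the next.
// var kernels = brain.map(function (layer) { return makePropagateKernel(gpu, layer.length); });
// var output = input;
// for (var l = 0; l < brain.length; l++) {
//     output = kernels[l](output, brain[l]);
// }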