- [21:55] == GMpow2_2 [~5@2a02:810a:83c0:cb14:50eb:211d:9215:4379] has joined #netention
- [21:56] <GMpow2_2> [14:10:38] <patham9> Seth used events like a. :|: since a long time
- [21:56] <GMpow2_2> [14:11:00] <patham9> while I always resisted on statements such as <{pixel0} --> [on]>. :|:
- [21:56] <GMpow2_2> now you finally got it
- [21:56] <GMpow2_2> after like ~1 year
- [21:56] == GMpow2 [~5@2a02:810a:83c0:cb14:879:569d:ea01:547f] has quit [Ping timeout: 264 seconds]
- [21:56] <GMpow2_2> or even more...hard to tell
- [21:57] == GMpow2_2 has changed nick to GMpow2
- [21:57] <GMpow2> yes most of my ideas are crap
- [21:57] <GMpow2> and tony handwaves too much
- [21:58] <GMpow2> my rant got spammed away
- [21:58] <GMpow2> point was, tony just says "nars can learn anything if you teach it like a child, use your imagination !!!one one eleven"
- [21:58] <GMpow2> while in practice NARS is probably rather limited
- [21:58] <patham9> yes I agree here
- [21:59] <patham9> not limited when we compare it to other AI systems, but still not the holy grail Tony likes it to be
- [21:59] <sseehh__> https://github.com/deepstupid/sphinx5/blob/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/search/SortingActiveList.java this is nothing more than a nars bag described with different words
- [21:59] <sseehh__> and a focus on the top priority elements
- [21:59] == sseehh__ [~me@c-67-186-58-124.hsd1.pa.comcast.net] has quit [Excess Flood]
- [21:59] == sseehh__ [~me@c-67-186-58-124.hsd1.pa.comcast.net] has joined #netention
- [21:59] <GMpow2> <patham9> not limited when we compare it to other AI systems
- [22:00] <GMpow2> i dont even agree there
- [22:00] <sseehh__> motherfucking rate limit wtf
- [22:00] <sseehh__> spam coming
- [22:00] <patham9> :D
- [22:00] <sseehh__> https://www.youtube.com/watch?v=U0KTL62BsKI
- [22:00] <sseehh__> i think you guys need to watch this video
- [22:00] <sseehh__> and apply everything this guy says about the energy generator to nars
- [22:00] <sseehh__> how he says that there are a number of different devices that work etc
- [22:00] <sseehh__> just like there would be a number of different nars that work
- [22:00] <sseehh__> i saw a number of different parallels between his team's research in energy generation, and how the same sort of innovation is happening in AI
- [22:00] <sseehh__> so i dont expect convergence on all points, maybe none
- [22:00] <sseehh__> probably some important ones like you said
- [22:00] <sseehh__> but dont expect full convergence or design agreement
- [22:00] <patham9> i saw it :D
- [22:00] <GMpow2> yes the freenode got retarded again
- [22:00] <sseehh__> symvision or something like it is still needed
- [22:00] <sseehh__> more helpful if it's woven into the nars control mechanism more completely
- [22:00] <sseehh__> just like i imagine now how sphinx voice recognition could also be
- [22:00] <sseehh__> since i see similarities in the data structures they use or could be "unified" to use
- [22:00] <sseehh__> mainly the bag-like priority queue
- [22:00] <sseehh__> ill show u an example
- [22:00] <sseehh__> https://github.com/deepstupid/sphinx5/blob/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/search/SortingActiveList.java this is nothing more than a nars bag described with different words
- [22:00] <patham9> i know your plans about unifying NARS with this
- [22:00] <sseehh__> and a focus on the top priority elements
- [22:00] <GMpow2> like deepminds RL AI can do a better job than nars i think...
- [22:01] <patham9> looks reasonable if it operates like this anyway
- [22:01] <GMpow2> <patham9> i know your plans about unifying NARS with this
- [22:01] <GMpow2> ?
- [22:01] <GMpow2> no
- [22:01] <patham9> was talking to seth not you
- [22:01] <sseehh__> yeah i mean if it cna work for voice recognition it can work for symvision
- [22:02] <GMpow2> cna?
- [22:02] <sseehh__> can
- [22:02] <GMpow2> oh
- [22:02] <GMpow2> :D
- [22:02] <patham9> "[22:00] <GMpow2> like deepminds RL AI can do a better job than nars i think..." im not convinced, compare for example the time google's deepmind needs to train their systems on something like Pong with how long Seth needs
- [22:02] <patham9> seconds versus days
- [22:02] <sseehh__> when i see them using millions of training steps i have to think that nars is potentially much faster
- [22:02] <sseehh__> because i run things no where near that lon
- [22:02] <sseehh__> long
- [22:02] <GMpow2> ah seh's version, i talked about 2.x.y....
- [22:03] <patham9> we will both come there.
- [22:03] <GMpow2> hm this is true...
- [22:03] <GMpow2> (that seh's is more efficient)
- [22:03] <sseehh__> my intuition says that NAL can learn so much faster than any numerical connectionist learning
- [22:03] <GMpow2> effective...whatever
- [22:03] <sseehh__> that its ridiculous how ignored it is
- [22:03] <sseehh__> but i havent bothered to demonstrate this factually
- [22:03] <sseehh__> i figured that's something someone with a more academic approach should try
- [22:04] <sseehh__> in the meantime im happy with my new bags and index
- [22:04] <sseehh__> HijackBag
- [22:04] <sseehh__> fuck CurveBag
- [22:04] <sseehh__> lol CurveBag still has its place somewhere
- [22:04] <sseehh__> but HijackBag is like the sawed off shotgun of bags
- [22:05] <patham9> yes thats also my intiution about NAL, seh
- [22:05] <sseehh__> it should be possible to mathematically prove why its the case that NAL can learn faster and represent solutions more compactly
- [22:05] <sseehh__> aside from the most important fact which is that it maintains a human readable symbol grounding
- [22:05] <sseehh__> not a big matrix of floats
- [22:06] <GMpow2> it could be seen as a boolean sparse matrix
- [22:07] <GMpow2> a big matrix :D
- [22:07] <GMpow2> i was thinking about using this as a representation for the terms...its not that fruitful i think
- [22:07] <sseehh__> a deep learning network just shits out a big turd of arbitrarily calculated 0's and 1's
- [22:08] <sseehh__> nars on the other hand shits out perfectly crystalline term logic formulas
- [22:08] <GMpow2> its still incomprehensible at some point :S
- [22:09] <sseehh__> ((is&this)~incomprehensible)?
- [22:09] <GMpow2> no, but the results of this with term complexity > 9000 :D
- [22:09] <sseehh__> (--,(this --> incomprehensible)).
- [22:10] <sseehh__> yeah but those are like the subconscious thought underpinnings
- [22:10] <GMpow2> (--,({this} --> incomprehensible)).
- [22:10] <sseehh__> if it cant produce a simple output then its no better than DL
- [22:10] <GMpow2> makes more sense...
- [22:10] <sseehh__> but even in the mean time before it forms that simple conclusion, it still is symbol grounded
- [22:11] <sseehh__> and potentially lossless with regard to the input it's fed
- [22:11] <sseehh__> or at least lossy in some logically explainable way
- [22:11] <sseehh__> relative to the input budgeting and truth confidences
- [22:11] <sseehh__> where is the relative confidence adjustment for a DL network input
- [22:11] <sseehh__> it cant do such a thing
- [22:12] <GMpow2> maybe its just another MIC strategy to point "researchers" in the wrong direction
- [22:12] <sseehh__> i dont doubt that possibility
- [22:12] <GMpow2> or maybe we need an AGI explosion for it to get recognized
- [22:13] <sseehh__> i dont care about proving any of this stuff right now
- [22:13] <sseehh__> just to get it to work
- [22:13] <sseehh__> if "they" dont want to help its their loss
- [22:13] <GMpow2> like a conscious nars which has hacked itself into all google cars and does a synchronized nars dance with the new bodies...
- [22:13] <GMpow2> :D
- [22:13] <GMpow2> "not in my lifetime"...yes i know
- [22:14] <patham9> ^^
- [22:14] <sseehh__> i have radiation levels for it to monitor, security cameras for it to watch. if this isnt AGI i dont care
- [22:14] <sseehh__> and likewise if there is a tool out there that can do this already, ill use it
- [22:14] <sseehh__> but i cant find it
- [22:14] <GMpow2> well, it has enough papers and books to read and compress
- [22:15] <sseehh__> i have politicians and an entire space administration to prosecute for crimes against humanity
- [22:15] <sseehh__> with a robo lawyer
- [22:15] <sseehh__> etc
- [22:15] <GMpow2> and youtube "workers" work furiously too all the time
- [22:15] <sseehh__> companies to bankrupt, etc
- [22:15] <GMpow2> and it could help getting rid of closed source crap like mathematica
- [22:16] <sseehh__> yeah they are definitely asking for it
- [22:16] <sseehh__> "we compute math so well we cant figure out how to optimize our income so we can give u it for free. oh pretend you didnt realize that"
- [22:17] <GMpow2> the irony would be if the AI does it by bootstrapping off mathematica :D
- [22:17] <sseehh__> what good is mathematica if it cant solve its own financial burdens
- [22:17] <sseehh__> that it has to shift that burden to people
- [22:17] <GMpow2> then it goes like "meh i replaced these numerical solvers myself with something better"
- [22:17] <GMpow2> (yes not in my lifetime, im not so sure)
- [22:17] <sseehh__> strong AI will end all proprietary software
- [22:18] <sseehh__> so i dont see why Cyc bothers to pretend to do anything extraordinary they would just end themselves
- [22:18] <GMpow2> hopefully
- [22:18] <sseehh__> instead it just looks like a joke
- [22:18] <GMpow2> and i'll know how big games work before i die...
- [22:19] <GMpow2> wishful thinking
- [22:19] <GMpow2> then there is software which is closed source and used by .gov
- [22:20] <GMpow2> like firewall, vpn... etc
- [22:20] <GMpow2> can't remember what they used
- [22:20] <sseehh__> https://github.com/automenta/narchy/blob/skynet2/doc/meme/belief_and_antidesire.jpg
- [22:23] <sseehh__> now that i got done with that AFK shit i will continue the next step after this hijackbag which is to re-use links. i have a very interesting experiment to try
- [22:23] <sseehh__> its like scattering brainwaves
- [22:23] <sseehh__> as a consequence of link re-use
- [22:27] <GMpow2> i still need to get educated about links :D
- [22:27] <GMpow2> someday...
- [22:28] <sseehh__> i think the remaining GC dead weight is held in dead links in inactive concepts
- [22:28] <sseehh__> but if these get re-purposed they will not be dead
- [22:28] <sseehh__> but they will also not necessarily be the same values or even relevant values as to what they originally were
- [22:28] <sseehh__> which would add a degree of unpredictability
- [22:28] <sseehh__> as they are used in premise formation
- [22:28] <sseehh__> but this might help
- [22:29] <sseehh__> to scatter brainwaves where they wouldnt have ordinarily crossed
- [22:29] <sseehh__> but the re-use of the links will be more computationally efficient regardless
- [22:52] <GMpow2> https://www.youtube.com/watch?v=KRGca_Ya6OM
- [22:55] <GMpow2> https://www.youtube.com/watch?v=FflcA85zcOM
- [22:55] <GMpow2> so they are saying that just by driving with a fast vehicle the time dilation could add up to a measurable range...
- [23:14] <GMpow2> hm or not...
- [23:14] <GMpow2> will take a while to add up 5 ps :D
- [23:23] <sseehh__> did u see either of the 2 important youtube videos i posted today
- [23:23] <sseehh__> ive heard the guy in the first one talk about it before
- [23:23] <sseehh__> but its explained better here
- [23:24] <sseehh__> he says nukes can only truly detonate when and where there is a certain configuration of the earth and the sun
- [23:24] <sseehh__> otherwise they explode as just dirty bombs
- [23:24] <sseehh__> then he explained what the launch computer really must have to calculate
- [23:24] <sseehh__> because they cant just detonate a nuke whenever and wherever they want
- [23:25] <GMpow2> hm nope
- [23:25] <sseehh__> it has to do with the point along the line between the sun and center of the earth
- [23:25] <sseehh__> where this occurs on the surface/atmosphere
- [23:26] <sseehh__> that a certain resonance is what enables the nuke to explode
- [23:26] <GMpow2> but this is nothing new
- [23:26] <GMpow2> but i dont believe this ^^
- [23:26] <sseehh__> he calculated it based on all the nuclear blasts
- [23:26] <sseehh__> their location and time
- [23:26] <sseehh__> and found a pattern
- [23:26] <GMpow2> yes i know :D
- [23:26] <sseehh__> how can u argue with that
- [23:27] <GMpow2> "no we can't launch nukes on germany because it doesn't resonate"
- [23:27] <sseehh__> "no we can't launch nukes on berlin germany until 5pm 10/11/2016 that is the next resonance"
- [23:27] <GMpow2> if i look at static in the TV i'll for sure see someday green aliens news
- [23:27] <sseehh__> what i want to know is how this resonance effect affects NPP's
- [23:27] <sseehh__> the reactors
- [23:28] <sseehh__> if they cross this arc
- [23:28] <sseehh__> do they create like a surge in power output or something
- [23:28] <GMpow2> can it be verified or disproven easily...no
- [23:28] <sseehh__> or maybe they know in advance and shield the rods
- [23:28] <sseehh__> i would like to know more about the 2nd video i linked
- [23:28] <sseehh__> that is more practical
- [23:28] <sseehh__> the magnet motor generators
- [23:29] <sseehh__> http://magneticenergysecrets.com/
- [23:30] <sseehh__> looks simple enough
- [23:30] <GMpow2> i dont put a label "BS" on it...i guess my mind's immune system is again a bit tuned against these strange ideas
- [23:30] <sseehh__> it might be easier to pull off with a microcontroller than all that circuitry
- [23:30] <GMpow2> goes up and down and up and down :D
- [23:30] <sseehh__> well i am susceptible to unwarranted gullibility
- [23:30] <sseehh__> so im fact checking
- [23:31] <sseehh__> this isnt the only magnetic motor like this, he says in the video
- [23:31] <sseehh__> that there are several using the same principle
- [23:31] <sseehh__> http://peswiki.com/directory:bedini-sg:exhaustive-summary
- [23:34] <GMpow2> order now...order now
- [23:34] <GMpow2> seems like a good idea to get money for nothing...uhh...free energy
- [23:35] <sseehh__> yeah but they admit that there are other magnet generators based on the same principle
- [23:47] <patham9> re
- [23:47] <patham9> howsit going seh? :)
- [23:47] <patham9> narchy world domination already? ^^
- [23:48] <GMpow2> huh?
- [23:48] <sseehh__> no, narchy promises anarchy
- [23:49] <sseehh__> i made this new bag yesterday
- [23:49] <sseehh__> it is my favorite so far
- [23:49] <sseehh__> https://github.com/automenta/narchy/blob/skynet2/nal/src/main/java/nars/bag/impl/experimental/HijackBag.java
- [23:49] <GMpow2> patrick knows how my bag looks like :S
- [23:49] <sseehh__> its a leaky hashtable
- [23:50] <GMpow2> lets see
- [23:50] <sseehh__> but the leak is determined by relative budget strength of the potential item replacement
- [23:50] <sseehh__> float probNext = den > Param.BUDGET_EPSILON ? np / den : 0.5f;
- [23:50] <sseehh__> if (rng.nextFloat() < probNext) {
- [23:50] <sseehh__> dPending = np - dp; //keep the new value
- [23:50] <sseehh__> range(np);
- [23:50] <sseehh__> } else {
- [23:50] <sseehh__> remove(x);
- [23:50] <sseehh__> put(displaced.get(), displaced); //reinsert what was removed
- [23:50] <sseehh__> it is represented entirely with one map
- [23:50] <sseehh__> which i use nonblocking hashmap for
- [23:51] <sseehh__> so its locklessly concurrent
- [23:51] <sseehh__> its like a bloom filter using nars budget
- [23:51] <sseehh__> for sampling i use a linear probing that selects based on priority
- [23:51] * GMpow2 looked at the code and understood nothing :D
- [23:51] <sseehh__> this i need to tune a bit better with some summary statistics
- [23:51] <sseehh__> if ((r < p) || (r < p + tolerance((((float)j) / jLimit)))) {
- [23:51] <sseehh__> if (target.test(x)) {
- [23:51] <sseehh__> n--;
- [23:51] <sseehh__> r = curve();
- [23:51] <sseehh__> thats because you have to understand the superclass too
- [23:52] <sseehh__> public class HijackBag<X> extends HijacKache<X,BLink<X>> implements Bag<X> {
- [23:52] <sseehh__> HijacKache is essentially an open indexed Map<>
- [23:52] <sseehh__> but it is a modification of NonBlockingHashMap so it can do all the super concurrency
- [23:52] <sseehh__> any open-indexed / cuckoo hashtable can work for this
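The probabilistic-displacement idea in the pasted fragment can be condensed into a self-contained sketch. This is not the narchy HijackBag; all names here are invented for illustration, and it keeps only the core rule: an incoming item "hijacks" its hash slot with probability proportional to its relative priority, so no sorting is ever needed.

```java
import java.util.Random;

/**
 * Minimal sketch (NOT the narchy implementation) of the "hijack" idea:
 * an open-addressed table where an incoming item may probabilistically
 * displace the occupant of its slot, weighted by relative priority.
 */
public class HijackSketch {
    final float[] priority;   // priority of the item in each slot; NaN = empty
    final Object[] item;
    final Random rng;

    public HijackSketch(int capacity, long seed) {
        priority = new float[capacity];
        item = new Object[capacity];
        java.util.Arrays.fill(priority, Float.NaN);
        rng = new Random(seed);
    }

    /** Attempt to insert x with priority p; returns true if x holds its slot afterward. */
    public boolean put(Object x, float p) {
        int slot = Math.floorMod(x.hashCode(), item.length);
        float occupant = priority[slot];
        if (Float.isNaN(occupant)) {             // empty slot: free insert
            item[slot] = x; priority[slot] = p;
            return true;
        }
        // probabilistic hijack: chance proportional to relative priority
        float den = p + occupant;
        float probNew = den > 0 ? p / den : 0.5f;
        if (rng.nextFloat() < probNew) {
            item[slot] = x; priority[slot] = p;  // newcomer displaces the occupant
            return true;
        }
        return false;                            // occupant survives
    }
}
```

With this rule a zero-priority newcomer can never displace a full-priority occupant, and a full-priority newcomer always evicts a zero-priority one; everything in between is left to chance.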
- [23:53] <patham9> cool
- [23:53] <patham9> so it builds on this nonblocking hashmap?
- [23:53] <patham9> does it have a selection curve still?
- [23:53] <sseehh__> yes it extends it but it could also be rewritten to work for other Map implementations like UnifiedMap from eclipse collections
- [23:53] <sseehh__> yes in a way
- [23:53] <sseehh__> but since there is no sorting, it has to do it statistically
- [23:54] <sseehh__> when it probes the entries
- [23:54] <patham9> ah i see, interesting
- [23:54] <sseehh__> i measured some of the selection behavior, and it approximates what i can get with curvebag
- [23:54] <patham9> nice1
- [23:54] <sseehh__> so yes any arbitrary curve can still be defined
- [23:54] <patham9> if sorting can be avoided this is great
- [23:54] <sseehh__> yah and also keeping two data structures synchronized was a headache
- [23:54] <sseehh__> the map and the list
- [23:54] <patham9> oh i see. ^^
- [23:55] <sseehh__> http://i.imgur.com/BXaZsI4.png
- [23:55] <sseehh__> B and C are two different executions with different random seeds
- [23:55] <sseehh__> to show that it would get smoother
- [23:56] <patham9> nice
- [23:56] <sseehh__> this is for a 32 item bag without any forgetting
- [23:56] <sseehh__> if forgetting were involved, things would shift
- [23:56] <sseehh__> and the curve would become smoother
- [23:56] <sseehh__> but i think if they are stuck in place it has the jagged edges
- [23:56] <sseehh__> ill analyze it more
- [23:57] <sseehh__> after i make the better probing selector
- [23:57] <sseehh__> it uses a 'beam' threshold which widens
- [23:57] <sseehh__> the closer it gets to the end of the probe
- [23:57] <sseehh__> so that it tries to get a more precise result early on, starting from a random index
- [23:57] <sseehh__> but as it nears the end of the cycle through them all, it becomes more willing to accept anything
- [23:57] <sseehh__> i use a curve for this too
- [23:57] <sseehh__> also i randomize the iteration order that it will probe, either forward or backward
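The widening-"beam" probe described here can be sketched roughly as follows. This is an illustrative reconstruction from the pasted fragment (`if ((r < p) || (r < p + tolerance(...)))`), not the actual code; the linear tolerance curve and all names are assumptions.

```java
import java.util.Random;

/** Sketch of a widening-"beam" probe: scan from a random start in a random
 *  direction, preferring high-priority slots early, and becoming willing
 *  to accept almost anything near the end of the cycle. */
public class BeamProbe {
    /** Returns the index of the sampled slot, or -1 if every slot was empty (NaN). */
    public static int sample(float[] pri, Random rng) {
        int n = pri.length;
        int start = rng.nextInt(n);
        int dir = rng.nextBoolean() ? 1 : -1;   // randomized scan direction
        float r = rng.nextFloat();              // selection threshold draw
        int fallback = -1;
        for (int j = 0; j < n; j++) {
            int i = Math.floorMod(start + dir * j, n);
            float p = pri[i];
            if (Float.isNaN(p)) continue;       // empty slot
            fallback = i;                       // remember something acceptable
            float tolerance = (float) j / n;    // beam widens as the probe progresses
            if (r < p + tolerance)
                return i;                       // priority beats the (shrinking) bar
        }
        return fallback;                        // end of cycle: take whatever was seen
    }
}
```

Early in the scan only a slot whose priority exceeds the random draw is accepted; by the final step the tolerance has grown enough that any non-empty slot passes, which mirrors the "more willing to accept anything" behavior described above.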
- [23:58] <sseehh__> in some ways this is like levelbag
- [23:58] <sseehh__> in how it probed levels
- [23:58] <sseehh__> except here each level is length 0 or 1
- [23:58] <sseehh__> and unsorted
- [23:58] <patham9> i see
- [23:59] <sseehh__> if it can measure the min/max range better then it can be more precise in the threshold value it samples with
- [23:59] <sseehh__> anyway this is all fine-tuning
- [23:59] <sseehh__> it works here as a drop-in alternative for CurveBag
- [23:59] <sseehh__> now im using both
- [23:59] <sseehh__> CurveBag for the concepts bag, and this for all concepts' termlink and tasklink bags
- [23:59] <sseehh__> memory usage is lowered, bag access is faster
- [23:59] <GMpow2> <sseehh__> but since there is no sorting, it has to do it statistically
- [00:00] <GMpow2> i always wondered why the sorting is required at all
- [00:00] <sseehh__> it isnt here
- [00:00] <GMpow2> yes i know why but it sucks
- [00:00] <sseehh__> thats why i said its like the sawed off shotgun of bags, or more accurately its like a machine gun
- [00:00] <sseehh__> which sprays bullets everywhere
- [00:01] <GMpow2> its like as if pei was like "meh, that will be no performance issue"
- [00:01] <patham9> Peis bag didn't sort
- [00:01] <sseehh__> im still using curvebag for concepts, which is sorted
- [00:01] <sseehh__> it "binned" like a histogram
- [00:01] <patham9> indeed
- [00:02] <GMpow2> hm i never understood this
- [00:02] <GMpow2> because i never looked into it
- [00:02] <sseehh__> it was like N lists
- [00:02] <GMpow2> hm...right
- [00:02] <sseehh__> and each one corresponded to a range of the priority 0..100%
- [00:02] <GMpow2> i can remember now :D
- [00:02] <sseehh__> like 0..0.1, 0.1..0.2
- [00:02] <patham9> indeed
- [00:02] <sseehh__> the problem with this is that the ranges cant be assumed to be balanced
- [00:03] <sseehh__> so sampling from them isnt fair without some accounting for their distribution
- [00:03] <sseehh__> which could be possible
- [00:03] <sseehh__> we may make a better level bag at some point with adaptive binning
- [00:03] <patham9> yes which sometimes gave items in sparse levels of low priority a higher selection chance than items in dense levels of high priority
- [00:03] <sseehh__> like quartiles or something
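One way to read the "accounting for their distribution" fix: weight each level's selection chance by the total priority mass it holds, so a sparse low level no longer gives its few items an outsized chance. A hypothetical sketch (class and method names invented, not from any NARS codebase):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Sketch: a level-bag where each level's selection weight is its total
 *  priority mass, so sparse low levels are not over-sampled per item. */
public class MassWeightedLevels {
    final List<List<Float>> levels;   // each level holds the priorities of its items

    public MassWeightedLevels(int nLevels) {
        levels = new ArrayList<>();
        for (int i = 0; i < nLevels; i++) levels.add(new ArrayList<>());
    }

    /** Bin an item by its priority range, like the classic level bag. */
    public void put(float p) {
        int lvl = Math.min(levels.size() - 1, (int) (p * levels.size()));
        levels.get(lvl).add(p);
    }

    /** Pick a level with probability proportional to its summed priority mass; -1 if empty. */
    public int sampleLevel(Random rng) {
        double total = 0;
        for (List<Float> l : levels) for (float p : l) total += p;
        if (total <= 0) return -1;
        double r = rng.nextDouble() * total, acc = 0;
        for (int i = 0; i < levels.size(); i++) {
            for (float p : levels.get(i)) acc += p;
            if (r < acc) return i;
        }
        return levels.size() - 1;
    }
}
```

Adaptive binning (e.g. quartile boundaries recomputed from the live priority distribution) would be the complementary fix, keeping the bins themselves balanced rather than correcting for imbalance at sampling time.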
- [00:03] <GMpow2> hm
- [00:03] <patham9> this might work yes
- [00:04] <GMpow2> is it actually a problem that the priorities run against zero
- [00:04] <sseehh__> no, the dynamic range can be whatever
- [00:04] <sseehh__> in curvebag you have a well known min and max
- [00:04] <sseehh__> so you can normalize any sampling to these if you want
- [00:04] <GMpow2> i have no idea how float behaves but i think it will round down to zero at some point
- [00:04] <sseehh__> like dynamic range compression in audio
- [00:04] <sseehh__> thats why we use an epsilon minimum value below which is considered zero
- [00:05] <patham9> curvebag seems to be the simplest way to get the bag right. but i agree its not the most performant option
- [00:05] <sseehh__> /**
- [00:05] <sseehh__> * minimum difference necessary to indicate a significant modification in budget float number components
- [00:05] <sseehh__> */
- [00:05] <sseehh__> public static final float BUDGET_EPSILON = 0.0002f;
- [00:05] <GMpow2> ah i see
- [00:05] <sseehh__> this can be reduced arbitrarily within float precision limits
- [00:05] <sseehh__> but this gives enough room at the bottom for interesting dynamics
- [00:05] <sseehh__> no i think this hijackbag is simpler
- [00:05] <sseehh__> well it is algorithmically simpler
- [00:05] <patham9> algorithmically yes
- [00:05] <sseehh__> curvebag is more complex because it involves keeping two distinct data structures in sync
- [00:06] <sseehh__> and while this has been solved, even in the concurrent case (which is a headache) i wish i had thought of this approach earlier
- [00:07] <sseehh__> what i mentioned before, my next step is to try to re-use links
- [00:07] <sseehh__> but in a way which allows links, wherever they may be, to receive the new value rather than just get deleted
- [00:08] <sseehh__> do you see what this will do
- [00:08] <sseehh__> lets say a concept A gets a tasklink X
- [00:08] <sseehh__> er,
- [00:09] <sseehh__> maybe im confusing it
- [00:09] <sseehh__> ill see what the code turns out to do
- [00:09] <sseehh__> basically using a fixed set of link instances
- [00:09] <sseehh__> so anywhere that also uses them will be affected
- [00:09] <patham9> sounds like you want to avoid creating new objects for new links
- [00:10] <sseehh__> yes thats all
- [00:10] <sseehh__> but im trying to think if there are any side effects this will have
- [00:10] <sseehh__> but i think theres only one situation where i keep a reference to a link
- [00:10] <sseehh__> so i can just clone them there
- [00:10] <sseehh__> and then the bag will be free to re-use its own links
- [00:11] <sseehh__> the link objects themselves can eventually get merged into the Map's data structure
- [00:11] <sseehh__> so it can be one array
- [00:11] <sseehh__> not one array referencing N link objects
- [00:12] <sseehh__> the problem with the extra instances is not just memory consumption but it puts pressure on the garbage collector
- [00:12] <sseehh__> while profiling i can see there are typically millions of these
- [00:12] <sseehh__> and they are relatively small objects each
- [00:12] <sseehh__> like < 50 bytes
- [00:12] <sseehh__> but its in the number of them that complicates the memory
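The link re-use plan amounts to object pooling: recycle displaced link instances instead of allocating new ones, trading GC pressure for the "scattering" unpredictability mentioned earlier (a stale holder of a recycled link now sees its new values). A minimal hypothetical sketch, with invented names:

```java
import java.util.ArrayDeque;

/** Sketch of link re-use: instead of allocating a new link object per
 *  insertion (millions of small objects pressuring the GC), recycle
 *  released links from a free pool and overwrite their fields. */
public class LinkPool {
    static class Link { Object target; float pri; }

    final ArrayDeque<Link> free = new ArrayDeque<>();
    int allocated = 0;   // counts actual heap allocations, for illustration

    public Link acquire(Object target, float pri) {
        Link l = free.poll();
        if (l == null) { l = new Link(); allocated++; }  // allocate only when pool is empty
        l.target = target;   // a re-purposed link takes on new values;
        l.pri = pri;         // anything still referencing it sees them, hence "scattering"
        return l;
    }

    public void release(Link l) { l.target = null; free.push(l); }
}
```

The step beyond this, as described, is to flatten the pooled links into the map's own backing array so there are no separate link objects at all.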
- [00:14] <sseehh__> after i address this then the next step will be to revisit the deriver and optimize that further
- [00:14] <sseehh__> then ill evaluate what the remaining hotspots are
- [00:14] <sseehh__> this is specifically implementation side of things
- [00:14] <sseehh__> there are other areas to concentrate on wrt theory, user interface etc
- [00:17] <patham9> i see
- [00:18] <patham9> another question
- [00:18] <patham9> when you give feedback to links
- [00:18] <patham9> will this preference be used by tasks of different content? or is the budget increase just for native tasks?
- [00:18] <patham9> consider the concept cat
- [00:19] <patham9> ehm no consider this one:
- [00:19] <patham9> light --> on
- [00:19] <patham9> a task like switch --> on ==> light --> on can also be in this concept light --> on in your implementation right?
- [00:21] <sseehh__> any tasklink can be in any concept
- [00:21] <sseehh__> i should document where this happens, and also find if i'm missing any links which should be added
- [00:22] <patham9> im not talking about the object
- [00:22] <patham9> any task can be in any concept?
- [00:22] <sseehh__> any tasklink can be in any concept
- [00:22] <patham9> <light --> on> can be in <fruit --> [green]> ?
- [00:22] <sseehh__> in its tasklink table
- [00:23] <sseehh__> as a tasklink yeah
- [00:23] <sseehh__> not belief table
- [00:23] <patham9> why would you allow this?
- [00:23] <sseehh__> im just saying its possible for this to happen
- [00:23] <sseehh__> im not sure every case that a tasklink is inserted
- [00:23] <patham9> yes, and i am asking why you would allow this ^1
- [00:23] <patham9> ^^
- [00:23] <patham9> i claim its a bug if its happening
- [00:23] <sseehh__> well, we allow termlinks from anywhere to anywhere like in STM link
- [00:24] <sseehh__> if a strange tasklink is for some reason inserted into another concept, it could be for a purpose similar to this. again i need to check if any of my plugins do this
- [00:24] <patham9> termlinks? they link term to sub or superterms in your design
- [00:24] <sseehh__> but the net effect will be that a novel premise may be generated
- [00:24] <sseehh__> both
- [00:25] <patham9> yes but there wont be a termlink between <fruit --> [green]> and <{tim} --> human>
- [00:25] <sseehh__> due to STMLinkage maybe
- [00:25] <patham9> and task <fruit --> [green]> has no place in concept human
- [00:25] <sseehh__> again due to something like STMLinkage
- [00:25] <sseehh__> but ordinary concept activation, no
- [00:25] <patham9> i see
- [00:25] <patham9> there is an issue though
- [00:26] <patham9> for concept <light --> on> both tasks <light --> on>. and <switch --> on ==> light --> on>. will go into this concept
- [00:26] <patham9> when one is used, the link feedback will just reward the termlink
- [00:26] <patham9> where the information of which task was used, will be lost
- [00:26] <sseehh__> no my feedback works like this:
- [00:27] <patham9> so termlink budget is an ambiguity problem for different task contents
- [00:27] <sseehh__> each derived task records the termlink and tasklink and concept that formed it
- [00:27] <patham9> *has
- [00:27] <sseehh__> this is known because the premise contains this and the premise is what forms the task
- [00:27] <patham9> yes, and the feedback then strenghtens what?
- [00:27] <sseehh__> if there is feedback to apply i go to this concept, find this termlink and this tasklink
- [00:27] <sseehh__> and apply the feedback to them if they exist
- [00:27] <sseehh__> one or both may or may not exist anymore
- [00:28] <patham9> so you apply the feedback to tasklink-termlink combinations and not termlink alone?
- [00:28] <sseehh__> so its only if they still exist
- [00:28] <sseehh__> to both individually
- [00:28] <sseehh__> if possible
- [00:28] <sseehh__> or only one individually
- [00:28] <sseehh__> or neither
- [00:28] <sseehh__> and thats as far as it goes
- [00:28] <patham9> so both the task and the termlink will be strenghtened
- [00:28] <sseehh__> task, termlink, and tasklink
- [00:29] <patham9> yes
- [00:29] <patham9> i dont think that this can work very well
- [00:30] <sseehh__> compared to what?
- [00:30] <patham9> hm but on the other hand its not that much differently than rewarding the task-belief pair
- [00:30] <patham9> *differenzt
- [00:30] <sseehh__> it is different because it doesnt affect the task or belief
- [00:30] <sseehh__> it jus affects the links which chose it
- [00:30] <patham9> yes
- [00:31] <sseehh__> negative feedback is also generated if the task already existed, so in this way it also functions as a novelty filter
- [00:31] <patham9> problem here is that a termlink can also be chosen from another task of different content in the same concept
- [00:31] <patham9> leading to an ambiguity that becomes an issue
- [00:31] <sseehh__> true but thats why the feedback's effect is tunable
- [00:31] <sseehh__> it could be 1%, 5%, 100% whatever
- [00:31] <sseehh__> its in the aggregate activity of this that i expect results
- [00:31] <patham9> this doesn't resolve this ambiguity issue
- [00:31] <sseehh__> with low affect %
- [00:32] <patham9> the feedback strength doesn't resolve it
- [00:32] <sseehh__> if a termlink consistently receives negative feedback then it will be suppressed generally
- [00:32] <patham9> one way to resolve is having different termlink budgets for different task contents
- [00:32] <patham9> for each termlink!
- [00:32] <sseehh__> if you want to make the feedback based on pairs of termlink and tasklink then you need a tasklink record, and where is all this memory coming from to hold this effectively?
- [00:33] <sseehh__> what is the point of having termlink and tasklink if they are fused together into a big matrix
- [00:33] <patham9> we dont have them
- [00:33] <patham9> but we are currently resolving this ambiguity problem
- [00:34] <patham9> and i found two ways, having task-term-specific termlink budgets is one way
- [00:34] <sseehh__> it comes down to what you want the system to remember to improve the prediction ability for future premises
- [00:34] <sseehh__> but this is limited to memory
- [00:34] <patham9> the other one is giving up the idea of foreign tasks
- [00:34] <sseehh__> there will always be design tradeoffs like this
- [00:34] <sseehh__> a compromise might be to use a vector of termlink values, one for each corresponding operation type of a task applied with it
- [00:35] <sseehh__> then it will be a vectorized termlink model which is less expensive than a Map<TaskLink,Budget> sort of memory in each termlink
- [00:35] <patham9> there will be some tradeoffs, but it might later turn out that this ambiguity problem is so fundamental that the system will never overcome it
- [00:35] <patham9> without a fundamental change like the two i am currently investigating
- [00:36] <sseehh__> so far i havent seen a need to address this
- [00:36] <sseehh__> one other reason is how i budget premises
- [00:36] <sseehh__> which is in how termlink and tasklink budget result in the premise's budget
- [00:36] <sseehh__> there are a few options
- [00:36] <sseehh__> AND, OR, AVERAGE, PLUS etc
- [00:36] <sseehh__> if OR is used, then a premise will be budgeted by either the termlink or tasklink
- [00:37] <sseehh__> so it has half the chance of being affected by the ambiguity
- [00:37] <sseehh__> of being suppressed by ambiguity i mean
- [00:37] <patham9> i see
- [00:37] <sseehh__> because it will be optimistic
- [00:38] <sseehh__> but if AND is chosen then it will be pessimistic
- [00:38] <sseehh__> etc
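Assuming the combiners follow the usual NAL-style extended-boolean definitions on [0,1] priorities (an assumption; the chat only names them), the four options look like this:

```java
/** Sketch of the premise-budget combiners named above, using the
 *  usual NAL-style extended boolean operators on [0,1] priorities. */
public class PremiseBudget {
    public static float and(float a, float b)  { return a * b; }                    // pessimistic: both links must be strong
    public static float or(float a, float b)   { return 1f - (1f - a) * (1f - b); } // optimistic: either link suffices
    public static float avg(float a, float b)  { return (a + b) / 2f; }
    public static float plus(float a, float b) { return Math.min(1f, a + b); }      // saturating sum (assumed clamping)
}
```

Since or(a, b) stays high as long as either input is high, a premise budgeted this way is only suppressed when both its termlink and its tasklink have been driven down, which is why OR roughly halves the exposure to the ambiguity effect discussed here.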
- [00:38] <sseehh__> but i dont see this as being a problem here
- [00:38] <sseehh__> because this feedback should only manifest such drastic consequences in the aggregate
- [00:38] <sseehh__> and even then its effect is only short-term
- [00:39] <sseehh__> as new links replace the old ones, this information is lost anyway
- [00:39] <sseehh__> also there are many ways to form the same premise
- [00:39] <sseehh__> from different termlink, tasklink pairs
- [00:40] <sseehh__> so if one fails, another wont
- [00:40] <sseehh__> via another concept etc
- [00:40] <sseehh__> there is a lot of redundancy built into NAL
- [00:40] <sseehh__> for better or worse
- [00:41] <sseehh__> er, many ways to form the same derivation from multiple premises
- [00:41] <sseehh__> and consistent merging of their results will have the same net effect
- [00:42] <sseehh__> even down to the same durability and quality calculations if careful
- [00:42] <sseehh__> like if there existed a hypothetical "steady state" budget for a derivation given the current system state
- [00:42] <sseehh__> each derivation being a sample which contributes to the result
- [00:42] <sseehh__> you could theoretically calculate every premise possible in the memory
- [00:43] <sseehh__> and the resulting task budget, from all the merging, would have a final value
- [00:43] <sseehh__> nars only tries to approximate this as best as it can
- [00:43] <patham9> yes, indeed
- [00:43] <sseehh__> but imagine how much cpu would be required to get an exhaustive result for one task alone
- [00:44] <patham9> 1000 ^^
- [00:44] <patham9> do you mind if i share this discussion to #nars btw.?
- [00:44] <sseehh__> ok