HTM School
Broadcasted live on Twitch — Watch live at https://www.twitch.tv/rhyolight_
An overview of today’s AI landscape. How AI might apply to games.
Source
What I've been thinking separates classification ability from 'understanding' is the requirement of experience: existing, as a being, in some kind of world, however limited or finite. This is what separates weak from strong AI: experience, and thus worldliness. Sure, once you train an AI within its limited sensory means, whether that's a text input box and the ability to surf Google all day on a whim, or a body with which it learns to manipulate objects and interact with people and things, you can then mass-duplicate its learned synapses into a product on shelves or shipped to homes and businesses via Amazon. But at the end of the day, the experience must be something that occurred, over time, with people helping to train it, or multiple candidates trained and the best ones chosen for duplication into a mass-produced intelligent product. Experience. Until we have something that can experience the way creatures can, we'll be stuck with weak AI.
What do I need to know to create my own deep learning library in Python?
Please stop talking about deep learning. You have no experience with it at all, and your statements are misleading or plainly wrong. Any bachelor's student who has taken a neural network course can bring up strong arguments and evidence against most of your statements. How about you look at the Dota bot, the StarCraft bot, or neural machine translation, just as a start? You know, I've always been interested in HTM and SDC, but the way you relate to or talk about the kind of research I'm doing (machine learning with neural networks, a.k.a. deep learning) is just arrogant and pushes every ML researcher away.
Is there a difference between artificial and "true" intelligence? I would speak of natural intelligence rather than "true" intelligence; artificial intelligence can be true too.
The problem with naming something "machine learning" or "machine intelligence" is that the machine is not learning; the software is. So they could be called "learning models" or "intelligent models", and we can then implement them in hardware and/or software.
Please talk about SNNs vs. HTM.
I am glad you are using my terminology … anomaly 🙂
Let's say I have trained a dog-vs-cat classifier with deep learning, and later I want to add a horse class. If I retrain the already-trained network on horse images alone, it may overfit and start predicting dogs and cats as horses, forgetting what it learned before. To keep good classification, I have to retrain on horse images mixed with some dog and cat images so the model does not overfit to the horse class, because a convolutional or dense network has no time-based, temporal training. So my point is that to add a new class label to a trained ANN, I have to retrain with the already-trained data; human learning is not like that.

So I am asking: does HTM solve this problem through continuous learning? With temporal memory, can it keep previously learned weights so that retraining on new data or a new pattern does not overwrite the previous context? Your video about whether the cake was eaten by a girl or a boy was a good example of temporal memory.

I should also point out that we could train an ANN continuously, one example at a time instead of in batches, but due to time constraints and for higher accuracy we train on batched datasets. An ANN actually generalizes better if trained one example at a time, but that takes much longer to converge to a global minimum, which is why most ML engineers use batch processing. I feel HTM can solve this problem, because humans learn continuously and do not forget previous experience even though they experience new things daily; it takes us a long time to forget old experience, while an ANN forgets previous experience quickly.

I am also excited about HTM's noise tolerance due to sparse distributed representations, because noise is a big problem in ANN models; we prune with various techniques to reduce noise so that the model performs. Noise tolerance is another good thing about HTM. All the best to your HTM team and Numenta.
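The forgetting effect described above can be sketched in a few lines. This is a hypothetical toy, not HTM or Numenta code: a single logistic unit trained online on "task A", then retrained only on a conflicting "task B", loses task A.

```python
import math

def predict(weights, x):
    # Logistic activation over a weighted sum.
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(weights, data, epochs=200, lr=0.5):
    # Plain online gradient descent on log loss, in place.
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(weights, x)
            for i, xi in enumerate(x):
                weights[i] += lr * err * xi

def accuracy(weights, data):
    return sum((predict(weights, x) > 0.5) == (y == 1)
               for x, y in data) / len(data)

# Task A: the label follows the first feature (third feature is a bias term).
task_a = [((1, 0, 1), 1), ((0, 1, 1), 0), ((1, 1, 1), 1), ((0, 0, 1), 0)]
# Task B: the opposite labels for two of the same inputs.
task_b = [((1, 0, 1), 0), ((0, 1, 1), 1)]

w = [0.0, 0.0, 0.0]
train(w, task_a)
acc_before = accuracy(w, task_a)   # task A is learned

train(w, task_b)                   # retrain on task B alone...
acc_after = accuracy(w, task_a)    # ...and task A is overwritten
```

Because the network has one shared set of weights and no replay of old data, the task B updates destroy the task A solution, which is exactly why batch retraining with the old classes mixed in is needed.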
I am eagerly waiting for HTM and spiking neural networks to evolve enough to overcome deep learning's problems, like poor noise tolerance, heavy CPU/GPU power requirements, and forgetting old experience. I am waiting for more resources on HTM and spiking neural networks; if you talk about SNNs vs. HTM, that would be much better. LSTM is a good time-based approach, but its memory still vanishes on very long-term patterns, it is used mainly in text processing rather than image processing, and it is very processing-intensive. If I am wrong, please comment. All the best for the future of HTM.
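The noise tolerance that these comments attribute to sparse distributed representations can be sketched numerically. The sizes below (2048 bits, ~2% active) are the figures often quoted for HTM, used here as assumptions; this is not Numenta code.

```python
import random

random.seed(42)
N, ACTIVE = 2048, 40                 # 2048-bit SDR with 40 active bits (~2%)

sdr = set(random.sample(range(N), ACTIVE))

# Corrupt the SDR: drop 25% of its active bits and add the same number
# of random noise bits elsewhere.
dropped = set(random.sample(sorted(sdr), ACTIVE // 4))
noise = set(random.sample(sorted(set(range(N)) - sdr), ACTIVE // 4))
noisy = (sdr - dropped) | noise

overlap = len(sdr & noisy)           # bits the two representations share
print(overlap)                       # 30 of the 40 bits survive

# An unrelated random SDR overlaps hardly at all, so a 30/40 match is
# still an unambiguous recognition despite 25% corruption.
other = set(random.sample(range(N), ACTIVE))
print(len(sdr & other))
```

The key property is that the expected overlap between two unrelated sparse vectors is tiny (about 40 × 40 / 2048 ≈ 0.8 bits here), so even heavily corrupted patterns remain far above the chance level.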
Is there any video, paper, etc. that explains apical dendrites in HTM theory? I have failed to come across any.