Microsoft Research
One of the main challenges for AI remains unsupervised learning, at which humans are much better than machines, and which we link to another challenge: bringing deep learning to higher-level cognition. We review earlier work on the notion of learning disentangled representations and deep generative models and propose research directions towards learning of high-level abstractions. This follows the ambitious objective of disentangling the underlying causal factors explaining the observed data. We argue that in order to efficiently capture these, a learning agent can acquire information by acting in the world, moving our research from traditional deep generative models of given datasets to that of autonomous learning or unsupervised reinforcement learning.

We propose two priors which could be used by an agent acting in its environment in order to help discover such high-level disentangled representations of abstract concepts. The first is based on the discovery of independently controllable factors, i.e., jointly learning policies and representations such that each of these policies can independently control one aspect of the world (a factor of interest) computed by the representation while keeping the other uncontrolled aspects mostly untouched. This idea naturally brings to the fore the notions of objects (which are controllable), agents (which control objects) and self.

The second prior is called the consciousness prior and is based on the hypothesis that our conscious thoughts are low-dimensional objects with strong predictive or explanatory power (or are very useful for planning). A conscious thought thus selects a few abstract factors (using the attention mechanism which brings these variables to consciousness) and combines them to make a useful statement or prediction. In addition, the concepts brought to consciousness often correspond to words or short phrases, and the thought itself can be transformed (in a lossy way) into a brief linguistic expression, like a sentence. Natural language could thus be used as an additional hint about the abstract representations and disentangled factors which humans have discovered to explain their world. Some conscious thoughts also correspond to the kind of small nuggets of knowledge (like a fact or a rule) which have been the main building blocks of classical symbolic AI. This therefore raises the interesting possibility of addressing some of the objectives of classical symbolic AI focused on higher-level cognition using the deep learning machinery, augmented by the architectural elements necessary to implement conscious thinking about disentangled causal factors.
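To make the consciousness prior's "select a few abstract factors" step concrete, here is a minimal NumPy sketch of a hard top-k attention read over a factor vector. The state size, the elementwise scoring, and the value of k are illustrative assumptions for clarity, not details from the talk.

```python
import numpy as np

# Illustrative sketch only: a "conscious thought" as a hard top-k
# attention read over a large vector of abstract factors.
rng = np.random.default_rng(0)

h = rng.normal(size=512)          # high-dimensional state: all abstract factors
query = rng.normal(size=512)      # what the current thought attends to

scores = np.abs(h * query)        # per-factor relevance (toy elementwise scoring)
k = 4                             # a conscious thought selects only a few factors
top = np.argsort(-scores)[:k]

thought = np.zeros_like(h)
thought[top] = h[top]             # low-dimensional object: only k non-zero factors
print("factors brought to consciousness:", sorted(top.tolist()))
```

The resulting thought is a low-dimensional object in exactly the sense the abstract describes: most factors stay unconscious, and only the k attended ones are combined into a statement or prediction.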
See more at https://www.microsoft.com/en-us/research/video/from-deep-learning-of-disentangled-representations-to-higher-level-cognition/
He is a genius.
Taking deep learning to a higher cognitive level: very good.
Where can this work be found, and who is working on this approach?
Wonderful video. You can't help but admire his view of what AI is, and the way he manages to convey these concepts. Brilliant!
What a great mind… and what a moron – fallen into that religion of "doing good" he and 99.999 percent of great minds (less than 1 pct of humANIMALs) always end up with… giving all these powers (more or less for free and without any control) to the brutal bloodthirsty ruling politico-oligarchical predators that inevitably bring humANIMALs where they deserve to end up: self-destruction (already nukes were too much, and these NewEvil monopolists will be much worse… all these GOOLAGs, AssBooks or MICROshit)
The intuition for why current speech models can't produce good unconditional samples (see WaveNet) is simply mind-blowing. Phonemes occupy a tiny number of bits compared with the overall signal (~10 phonemes/s versus 16,000 samples/s)!
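A quick back-of-the-envelope version of that ratio, using approximate figures (assumptions: ~10 phonemes/s, ~40 phonemes in English, 16 kHz / 16-bit audio):

```python
import math

# Approximate figures only; exact numbers vary by language and recording.
phoneme_rate = 10                     # phonemes per second
bits_per_phoneme = math.log2(40)      # ~5.3 bits if phonemes were uniform
linguistic_bps = phoneme_rate * bits_per_phoneme

raw_bps = 16_000 * 16                 # samples/s * bits/sample = 256,000 bits/s

print(f"linguistic content: ~{linguistic_bps:.0f} bits/s")
print(f"raw waveform:       {raw_bps} bits/s (~{raw_bps / linguistic_bps:,.0f}x more)")
```

So an unconditional model must spend almost all of its capacity on acoustic detail rather than on the few dozen bits per second that carry the words.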
Does anyone have a link to the slides? And come on, camera people, it's not a beauty pageant; it's OK if you show slides instead of the speaker's face 🙂
No matter how much machine learning or data processing we employ, human intervention will always remain. When Trump is in power his popularity soars, even though he loses all Senate re-elections. When Obama was in power, Hillary led Trump in the polls, even though she lost.
When will humans offset the influence of Cambridge Analytica and other manipulations? When will fake news stop? Can the evil demon be banished from the net?
However, disentanglement leading to representations in higher cognition is interesting. I thought Turing predicted machines could never mimic human cognition, let alone consciousness.
Who is the gentleman at 1:09:35 asking a question, and bringing up gradual learning?
Sampling rate * bit depth is a big overestimate of the amount of information in speech audio signals – look at the compression ratios that audio codecs can achieve.
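Rough numbers for that point; the ~12 kbps codec figure below is an assumption (typical of wideband speech codecs), and exact rates vary by codec and quality setting:

```python
# Raw 16 kHz / 16-bit PCM versus a plausible wideband speech-codec bitrate.
raw_bps = 16_000 * 16     # 256,000 bits/s of raw PCM
codec_bps = 12_000        # assumed typical wideband speech-codec bitrate
print(f"codec compression: ~{raw_bps / codec_bps:.0f}x")
# Even the codec bitrate still far exceeds the ~50 bits/s of phonemic content.
```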
Doesn't translation into an abstract space necessitate a loss of information?
Humans use fuzzy approaches, while computers use precise numbers. Which one can work in this complex world?
Adversarial examples are almost always cited as proof of complete AI failure, because it is "obvious" that the object preserves its identity. But one could arguably do the same to us, as was already demonstrated in https://arxiv.org/abs/1802.08195
amazing talk.
At 12:07, are cognitive states low-dimensional? If so, are they sparse? If they are both sparse and low-dimensional, that contradicts what he said in his MSS talk in 2012, where he stated that high-dimensional and sparse is better than low-dimensional.
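One speculative way the two statements might coexist, with all figures invented for illustration: the full representation can be sparse and high-dimensional (huge capacity), while the attended conscious summary stays low-dimensional.

```python
from math import comb, log2

# A k-sparse code in n dimensions still indexes an enormous set of
# patterns, even though any single conscious readout is tiny.
n, k = 10_000, 100             # hypothetical sparse high-dimensional code
print(f"log2 C({n},{k}) = {log2(comb(n, k)):.0f} bits of pattern capacity")

d = 20                         # hypothetical low-dimensional conscious state
print(f"versus only {d} active factors in a {d}-dimensional conscious state")
```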
Someone should write a detailed blog explaining stuff in this
Sounds right to me. But why assume that traditional neural nets and deep learning are the best, or only possible, fundamental structures and processes for a system with these capabilities of disentangled abstractions working together with granular representations?
Would be nice if the camera were on the slides in this video rather than mostly on the speaker. Does anyone know where the slides might be found? Sadly the link posted below is dead. This post has most of the slides, though: https://medium.com/@SeoJaeDuk/archived-post-from-deep-learning-of-disentangled-representations-to-higher-level-cognition-b848fdc0de2c