University of California Television (UCTV)
(Visit: http://www.uctv.tv/) Are intelligent machines possible? If they are, what will they be like? Jeff Hawkins, an inventor, engineer, neuroscientist, author and entrepreneur, frames these questions by reviewing some of the efforts to build intelligent machines. He posits that machine intelligence is only possible by first understanding how the brain works and then building systems that work on the same principles. He describes Numenta's work using neocortical models to understand the torrent of machine-generated data being created today. He concludes with predictions on how machine intelligence will unfold in the near- and long-term future and why creating intelligent machines is important for humanity. Series: "UC Berkeley Graduate Council Lectures" [12/2012] [Science] [Show ID: 24412]
28 thoughts on “Intelligence and Machines: Creating Intelligent Machines by Modeling the Brain with Jeff Hawkins”
Emotions in a human brain are nothing more than hormonal changes produced by the limbic system that you have learned to associate with certain conscious states because of their co-occurrence. The experience of an emotion is not directly linked to conscious processing, as is shown in people with generalized anxiety disorder, whose limbic system is constantly in a state of apprehension and fearfulness despite there being no external factors to produce this anxiety.
When I solve 1+1=, I predict that writing a 2 afterwards will make me happiest. When I decide how to solve a problem, my predictions about the results of various possible actions are used to select a single action.
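In code, that decision rule might look like the following minimal sketch, where predict_satisfaction() is a made-up stand-in for whatever the brain's predictive model actually computes:

```python
# Minimal sketch: pick the action whose predicted outcome scores highest.
# predict_satisfaction() is a hypothetical stand-in for a predictive model.

def predict_satisfaction(action):
    # Toy model: after "1+1=", writing "2" is predicted to satisfy most.
    return 1.0 if action == "2" else 0.0

def select_action(candidates):
    # Compare predicted results of the possible actions; commit to one.
    return max(candidates, key=predict_satisfaction)

print(select_action(["1", "2", "3"]))  # -> 2
```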
Machines need emotions/instinct to decide motor actions. They can only do what we tell them to. If we tell them to defend themselves or their creators, they will without regard for anything else. So emotions would be the cause of Skynet. Pretty easy to avoid though. Just don't equip robots with nuclear missiles. Or make them absolutely hate to kill. Then a super-intelligent vacuum cleaner wouldn't decide to kill people to end the mess.
Grok works with much simpler data than humans do. Therefore, there are far fewer possible interpretations of the data received, so it is safe to make fairly risky assumptions/use more plasticity than the human brain does, in my opinion.
"it needs to ask whether what it thinks it's sensing is what it's sensing." Kind of redundant. Do you mean processing sensory data twice?
What you're talking about seems to be a much larger HTM than Grok. Grok is too small to learn fundamental logic. It cannot verify its ideas using fundamental ideas. It can't check if temperature fluctuations make sense using physics, because it does not know physics. It must simply rely on its knowledge of how temperature fluctuates.
Hello RedNNet,
In terms of what you quote, isn't feedback/self-reference a major property of intelligence?
As for the rest of what you post, I pretty much agree; I've talked to Ray Kurzweil's A.I. talkbot, Roxanne. My immediate observation is that Roxanne has no memory. You can't tell it to shut up, wait five minutes, and then have it ask you some question that came to mind during that time.
As for the rest of my post here, I'm being pretty cryptic, and yes, I do think that Jacob Bronowski had some valuable insights in his "Origins of Knowledge and Imagination."
Well, if you're really interested, check out my Nature and origin of mathematical knowledge post; it's the third post of my Jacob Bronowski Scientific Humanism blog.
What Jeff Hawkins is talking about is a limited predictive machine, not something truly intelligent (without other features). It does have feedback though. Based on what it predicted just before an input is received, it classifies the input.
The input activates columns, and the previous predictions select the causes. Those selections then trigger predictions, so inputs from a while ago indirectly lead to the current selections.
So HTM's memory requires feedback. It uses delayed self-reference. Self-reference without delay is just another name for non-temporal processing.
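A toy illustration of that delayed feedback (not Numenta's actual CLA code; the state names and the two learned sequences are made up) could look like this: the prediction made at step t-1 feeds back to select which cause the ambiguous input at step t represents.

```python
# Toy sketch of delayed self-reference: two learned sequences, "A,B,C" and
# "X,B,D", share the ambiguous input "B". The previous step's prediction
# feeds back to select which "B" (which cause) we are actually seeing.

transitions = {               # sequence memory, assumed already learned
    "A": {"B-after-A"},
    "X": {"B-after-X"},
    "B-after-A": {"C"},
    "B-after-X": {"D"},
}

def interpret(inp, predicted):
    # Among states matching the raw input, prefer one that was predicted.
    matches = {s for s in transitions if s.startswith(inp)}
    chosen = (matches & predicted) or matches or {inp}
    return next(iter(chosen))

predicted = set()
for inp in ["A", "B"]:                  # feed the sequence A, B
    state = interpret(inp, predicted)   # feedback selects the cause
    predicted = transitions.get(state, set())
    print(inp, "->", state, "now predicting", predicted)
# The final step predicts {"C"}, not {"D"}: the delayed feedback remembered
# that this "B" followed an "A".
```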
Jeff Hawkins also has other ideas for feedback. His book On Intelligence explains hierarchies, feedback and other ideas which aren't being used because of limited computing power. I'm pretty sure that his full system would be truly intelligent, meaning that it does what the neocortex does except for goal-oriented functions.
I'll take a look at that book, thanks. I'd also love to look at your blog post.
If you want, you really should learn how HTM works. I think it's fascinating, especially once you get past all the business stuff and learn about his ideas in On Intelligence.
I'm at the John Stillwell math level right now. As for A.I., I don't think I'll ever be in a position to do much there. I've had some ideas about A.I. based on my pursuit of the nature and origin of mathematics. As Jacob Bronowski shows repeatedly throughout his works, whatever the other intellectual activities have, mathematics has it; while each of those other activities shares only one of the common properties, mathematics is the whole thing.
Understand how mathematics works, and you understand how the mind figures out nature – the hallmark, the difference that separates us from the other life on planet Earth.
I find this Jacob Bronowskian understanding to include human psychology also. It seems to me most people's reasonings are fragmentary; they do one or another aspect of the whole (which is what mathematics is). They inevitably either make poor assumptions (not questioning assumptions, or axioms) or make over/under generalisations.
As for my blog post, it's not hard.
If you add a little logic (dunno if that counts as math), you can create neural nets. That's different from psychology, because it deals with the little functional details. It's in a very basic stage, so it's completely different. I think it will result in unexpected explanations for human behavior.
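As a minimal sketch of "add a little logic and you can create neural nets" (my own toy example, not anyone's actual model): a single threshold unit can behave exactly like a logic gate.

```python
# A single McCulloch-Pitts-style threshold unit computing logical AND.

def neuron(inputs, weights, threshold):
    # Fire (output 1) when the weighted input sum reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), weights=(1, 1), threshold=2))
# Only (1, 1) fires, so the unit implements an AND gate.
```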
The foundation of mathematics is logic. Logic is the fundamental rigor behind math and theoretical physics.
Every time I listen to Jeff Hawkins, it reminds me that what causes me to stay at a 9-10 handicap in my golf game is my brain…
I meant binary logic, such as a bitwise AND function. 1011 AND 1100 = 1000, for example.
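That example is easy to check in Python, for instance:

```python
# A bitwise AND keeps a bit only where both operands have it set.
print(bin(0b1011 & 0b1100))  # -> 0b1000
```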
1:11:45 sounds a lot like evolution…
Jeff is great, but he is falling into the same trap as all the other AI techs. The other techs, like the Google car, are programmed for a purpose (to automate driving). His tech is to do predictions. If he wants to produce an AI system based on the principles of the brain/mind, then it should be general and based on the inheritance model of object-oriented programming. Our brains are very flexible with input and output, so AI should be too.
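Reading the commenter's suggestion literally, a hedged sketch might look like this; the class and method names are purely illustrative, not any real AI framework:

```python
# Illustrative only: a general agent base class, with special-purpose
# systems (like a self-driving car) derived from it by inheritance.

class GeneralAgent:
    def perceive(self, data):
        raise NotImplementedError  # any modality: vision, audio, text, ...

    def act(self):
        raise NotImplementedError  # any effector: wheels, speech, ...

class DrivingAgent(GeneralAgent):
    # A Google-car-style system becomes one subclass of the general
    # model rather than a one-off, single-purpose program.
    def perceive(self, data):
        self.scene = data

    def act(self):
        return "steer" if self.scene == "curve ahead" else "cruise"

car = DrivingAgent()
car.perceive("curve ahead")
print(car.act())  # -> steer
```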
If I understand him correctly, HTM CLA illustrates a general principle for how the neocortex works. But his actual application is that of temporal prediction, which is quite specific. If the general principle holds, it should also work in broader traditional machine learning (ML) applications, such as regression, classification, speech recognition, image recognition, data cluster detection, … all of which have well-established formalisms in terms of how they work and how they are evaluated (along with actual working products). How does CLA compare to these in performance? (I realize that in the talk he said Grok was doing something different and that they really shouldn't be compared. It is its own market niche. But if the claim is that of general principle, I'd like to see what these principles mean when they are translated and implemented in other ML applications.)
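One way such a translation could be scored (my sketch, not anything Numenta publishes): treat next-step temporal prediction as an ordinary classification task, so any sequence predictor, CLA-based or not, can be compared on the same metric. The naive predictor below is just a stand-in baseline.

```python
# Score any next-step predictor with plain classification accuracy.

def last_value_predictor(history):
    # Naive baseline: predict that the next symbol repeats the last one.
    return history[-1]

def next_step_accuracy(sequence, predictor):
    hits = sum(predictor(sequence[:t]) == sequence[t]
               for t in range(1, len(sequence)))
    return hits / (len(sequence) - 1)

data = list("AABBBAAB")
print(next_step_accuracy(data, last_value_predictor))  # ~0.571
```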
The second issue pertains to internal representation and the process of simulation. Imagine what it takes for a human to solve a math problem or design a new machine. One engages the imagination and tries out different scenarios in the brain (simulation). One goes back and forth, seeing what would "fit". Here, it's not clear how the principle of temporal prediction applies. It seems we're at least two orders of magnitude away from understanding the relevant principles of how the human mind works. Having said this, the problem applies to the state of the art of all ML technology: it's still very primitive, sticking very close to the senses (the input data stream), and occupied mainly with "pattern recognition" of this stream.
Bacteria are more intelligent machines because they can make autonomous choices (a definition of intelligence), while machines are slaves to their program code.
Intelligence requires subjectivity (consciousness) which is mental, not physical.
Let me explain.
Consciousness – the product of the unique subjective pole found only in Leibniz's metaphysics
Consciousness is what is produced by Mind (the One) as it transforms physical sensory nerve signals into conscious experience. Because consciousness is mental experience, it is subjective and therefore outside of the realm of physical science. Yet we are all conscious.
Most Leibniz sites do not feature the unique subjective pole only found in Leibniz's metaphysics, which is implied by Plato's One or Mind. We can characterize Leibniz's metaphysics as containing the Programmer (Mind), its code (the pre-established harmony or PEH) and the operation of the code as the production and manipulation of an ordered (in space but not marked in time) sequence of perceptions and happenings. This subjective pole being in Plato's Mind or One, it only knows what and where but not when. But because consciousness is a sequence of periodic reports, it is analogous to a movie, which although made up of individual frames, is experienced as continuous.
Each of these events is obtained as a myriad of perceptions as the One or Mind repeatedly scans the universe of perceptions associated with monads, which are returned (mirrored back, each with the proper perspective) periodically as updates to an individual monad's perceptions. This indirect process is required since monads have no windows and cannot directly perceive outside of themselves.
Now it is important to address the operation of the code or PEH, since all we are given is the code itself (the PEH). Leibniz is careful to state that unlike Malebranche's theory, in Leibniz's metaphysics Mind is not interventive. That is, Mind apparently does not operate and cause things to happen at that point in time. But I must add that Mind is in a timeless state, so such a statement has no meaning. The argument vanishes.
Mind controls the physical, not the reverse.
—
Dr. Roger B Clough NIST (retired, 2000).
See my Leibniz site: https://rclough@verizon.academia.edu/RogerClough
For personal messages use rclough@verizon.net
If he could just take a pause. He's been babbling like a non-stop bot for 1.5 hours!
Quantum computer => Brain simulation => Bioorganic Artificial Intelligence
Assholes, get rid of the ad that never gets to the clip. A problem only with Univ of CA TV.
tupperware sales pitch for NSA, no science
If we make enough bricks, we can build this tower all the way up to heaven. Sound familiar? And don't get your panties in a wad. :)
One way to "upload" the brain would be to transfer it slowly by using a brain/computer interface. If you manage to make biological brains work seamlessly with artificial ones, you could slowly kill off one neuron at a time while its functions are gradually reallocated inside the artificial brain.