
If brains are computers, what kind of computers are they? – Daniel Dennett keynote at PT-AI 2013



FHIOxford

Abstract: Our default concepts of what computers are (and hence what a brain would be if it were a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary). But other properties are equally optional yet seldom recognized as such: our familiar computers are composed of millions of basic elements (flip-flops, registers, OR gates) that are almost perfectly alike and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed on the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these properties is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, …) that are no two alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike those so far devised for AI, which are too orderly, too bureaucratic, too efficient.

Source


16 thoughts on "If brains are computers, what kind of computers are they? – Daniel Dennett keynote at PT-AI 2013"
  1. Very long video and very good to know. I'm going to put it on watch later and will have to watch it again, because I don't quite understand some of what he's talking about, but it's very educational and very interesting. Thanks for sharing this.

  2. Brains are individual clueless computers… Darwinism doesn't apply to individual cells… words are memes… and a whole bunch of other stuff.
    Amazing as usual.

  3. I think a main aspect of AI/consciousness is a meshing of hardware and software, and a malleability in both. If an AI computer were made to reproduce, what would it reproduce: its software/programming/mind, or its physical body? Perhaps, if it wanted to create copies of itself, it would need physical copies to be made in order to store the program. Or perhaps it would not worry about reproduction in that split sense, but only about gaining more information capacity and access to more energy and hardware, the way the human mind grows from baby to adult; an ever-increasing neural network may be what an AI wants for itself. An AI's reproduction might be similar to us making multiple digital copies of programs or information, or to how viruses exist, but then once again there would have to be compatibility between software and hardware. Think about DNA: it is pretty much both the software and the hardware. It requires only a relatively small amount of seed-like matter, and from there, given an energy source, it builds the hardware (the body) and the software (the brain, the consciousness-interaction interface of the human observer). Before the brain is really on, it has the potential of being on and storing information, due to evolution, and this process of the body and brain taking in and sorting loads of information takes a relatively long time before the observer 'comes to' consciousness and begins the long journey of learning life.

  4. A ring of tiny calcite crystals on a genome in a cell. Information can be stored in crystals. Real artificial intelligence would be in crystal form. I predict that in new computers, information will be stored on crystals.

  5. The idea of competitive processes in the brain goes back at least to McCulloch and Pitts' model of the reticular formation in the late 1960s. This is the same McCulloch & Pitts who worked out the logical neuron in the 1940s. Minsky's The Society of Mind from the 1980s incorporated competitive processes. So Dennett is a little behind the curve in his change of conceptual direction.

  6. He misses a point consistently: you will never produce a truly intelligent machine until you have first produced a machine that wants and desires, because it needs to be self-motivated. That is, it first needs what came first, before intelligence: it needs emotion, because it needs drive and self-originated goals. Most multicellular animals (I'd say all, actually) have it, and have had it since way back.

  7. Daniel Dennett has a great idea in describing neurons as little agents that began their lives when we were single-celled organisms.

    If we compare a brain to a colony of organized insects, we could say that each ant, a sterile female, is a proud descendant of the small wasp colonies in which the queen and her helping daughters share similar tasks. The ants evolved very specialized individuals: some can only fight (the soldiers), while the others start as cleaners, then become builders, then get promoted to foragers, allowing themselves to explore the world.

    Neurons have never been autonomous single cells. They started as specialized cells, as in jellyfish, without a central nervous system. These neurons continued to do simple tasks and never joined a large group inside a brain. They still exist in mammals in the autonomic nervous system, controlling digestion, blood pressure, and all the other automatic feedback control systems.

    Somewhere between jellyfish and fish, another type of neuron appeared. It is high-speed and controls the high-energy red muscles. These neurons connect in layers, with network connections bypassing some layers and others connecting upper layers to lower ones, allowing high-level negative feedback (as in control theory: this normally results in stable output, but may introduce oscillation when the parameters are improperly calibrated).

    In brief, it is useful to think of neurons as selfish. It helps to understand how the random trial and error of millions of generations ended up as "good engineering": an efficient brain able to memorize so much information despite such a small mass, using almost no energy.

    How can natural selection simultaneously favor the most powerful brain and the most efficient one? This comes from the cyclic variation of priorities from century to century.

    In times of abundance, the best fighters win, displacing populations. This is the time when the best brains create tools and plot complex political alliances.

    When food becomes scarce, the communities that survive best on little food are the winners.

    Yet at other times, when viruses are spreading, natural selection's priority is the performance of the immune system.

    As these events repeat, the populations that survive best through all these cycles effectively accumulate the winning features. The impossible engineering trade-off sooner or later finds its way, despite the random nature of the physical system that controls evolution.
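The parenthetical about negative feedback in the comment above (stable when well calibrated, oscillatory or divergent when not) can be illustrated with a minimal discrete-time sketch. The loop and gain values below are illustrative assumptions for a textbook proportional controller, not anything from the talk:

```python
def simulate(gain, target=1.0, x0=0.0, steps=30):
    """Iterate the discrete proportional feedback law x <- x + gain*(target - x)."""
    x = x0
    trace = [x]
    for _ in range(steps):
        x = x + gain * (target - x)
        trace.append(x)
    return trace

# A moderate gain shrinks the error geometrically and converges smoothly.
stable = simulate(gain=0.5)

# A gain near the stability limit (2.0 for this loop) overshoots the target
# and rings back and forth before settling.
ringing = simulate(gain=1.9)

# A gain past the limit makes each correction overshoot by more than the
# previous error: the output oscillates with growing amplitude and diverges.
unstable = simulate(gain=2.1)
```

The stability condition here is simply that the per-step error factor |1 − gain| be less than 1, which is the one-dimensional analogue of the "improperly calibrated parameters" the commenter mentions.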

