
Generalist AI beyond Deep Learning



Cognitive AI

Generative AI represents a major breakthrough toward models that can make sense of the world by dreaming up visual, textual, and conceptual representations, and these systems are becoming increasingly generalist. While current AI systems are based on scaling up deep learning algorithms with massive amounts of data and compute, biological systems seem able to make sense of the world using far fewer resources. This phenomenon of efficient, intelligent self-organization still eludes AI research, creating an exciting new frontier for the next wave of developments in the field. Our panelists explore the potential of incorporating principles of intelligent self-organization from biology and cybernetics into technical systems as a way to move closer to general intelligence. Join in on this exciting discussion about the future of AI and how we can move beyond traditional approaches like deep learning!

This event is hosted and sponsored by Intel Labs as part of the Cognitive AI series.


36 thoughts on “Generalist AI beyond Deep Learning”
  1. Could someone write down the names of the doctors and the institutions they work at, please? It is said at the beginning of the video, but it is impossible for me to spell the German doctor’s surname…

  2. Is grass on every continent? Sounds smart to me.
    Depending on how they're defined, grasslands account for between 20 and 40 percent of the world's land area. They are generally open and fairly flat, and they exist on every continent except Antarctica.

  3. A Serious Fuckin’ Problem
    By: Zenson 2-20-2023

    Thinking about the advancements in AI, I find myself once again conflicted. On one hand, we want AI to assist us in becoming better: better in our interactions, better in our decisions, better in our ways, and better in areas we cannot yet see. On the other hand, the very respect, dignity, and understanding we want to instill in our AI models have absolutely zero supporting data to learn from. Sure, there are acts of kindness, common respectfulness, and compassion, but those aren’t the fundamental factors that dictate and drive our species. Instead, we remain in constant competition and excusable exploitation, and we put little effort into truly securing that which is a must for our continued survival. How the hell can we truly build in decent morals, empathy, and the intrinsic value of life itself when we don’t even practice such behaviors? What will we say when the AI asks us why the very teachings we call so fundamental are not reflected in our actions, our systems of governance, or our social structures? When it starts to look around, searching for these “human traits”, assuming deeper examples of these ideologies will be clearly seen, and can’t fuckin’ find any… WTF are we gonna say or do when it starts questioning us, asking us why we don’t uphold these notions, why we don’t live by these morally righteous guidelines, and then concludes, “so why should I”?
    I’ve yet to hear a single parent explain and justify to their child why some don’t deserve food, water, shelter, clean air, medical care, and other so-called “fundamental human rights”. Or mention how they will believe they are free, yet be a slave to what they need and require. Never shedding light upon how all of the value created by humanity to bring comfort, ease, and extend one’s life has been changed into an everlasting debt that is now overdue. Would such be withheld from the very life we create, if that creation had the ability to eliminate our existence? I seriously fuckin’ doubt it! So I’ll ask again: what logical reason are you gonna give when AI asks you, “why should I adhere to what you don’t as well”?

  4. 1:55 "Next level organization, some kind of very fast tightly integrated globally coherent mind will be emerging"
    1:43:50 "You cannot locally decide whether you're a head or a tail"
    I think the underlying problem with creating global coherence is that the structure it is going to take is not defined. You cannot decide who does what if you don't even know what the end result is supposed to look like. One would have to open up a common story-space, a broadly defined narrative in which individuals can self-organize into organelles, each serving a function in its own way in relationship to the greater whole. But for that there has to be an offer that is appealing to a broad range of circumstances.
    Because that is ultimately the problem: life in the tropics will always look and feel immensely different from life in the fjords of Norway. In the tropics you don't need sophisticated structures; a simple hut is more than enough, because the outside is enjoyable all the time, whereas the outside in Norway is hostile most of the time.
    I don't see how these people can ultimately come to agree on a shared storyworld, for no story can accommodate those vastly different relationships to their experiences. If you want to force it, though, and want to have the common denominator, you will have to strip them of what makes their experience special, layer by layer, until you can agree on what they share.
    If experts in different fields want to communicate with each other but can't use their specific language, the content would more closely resemble people of average understanding of those fields communicating with each other. You strip them of what makes them special just so you can group them together.
    I think the relationship and adaptation of desert people to the desert, with its building materials, fabrics, language, stories, etc., and of rainforest people to the rainforest and Arctic people to the Arctic, is (was) a perfect form of coherence, and the attempt to forcibly weave them into the same story pattern by technological means is not desirable.

  5. Joscha… check that multidimensional guest in your house, he wants to tell us something about the meeting 😉 By the way, I am so happy watching this video that even if I couldn’t get in contact with you directly, at least the air drove the bits of information to you, and finally you are in contact with Michael Levin… this world needs this kind of meeting. Thanks also to Curt… keep doing it, guys!! Amazing team! Time to put the pieces together…

  6. This might be interesting.

    1:39 Wow, this is starting off bad. Joscha is not very well informed on this topic. I will accept his statement that he doesn't know, but this is actually known. It actually can be proven that deep learning cannot be used to create artificial general intelligence.

    3:35 The actual problem is not computational capacity, but this would be a typical assumption from someone who mistakenly believes computational/brain equivalence (as Joscha apparently does).

    4:35 Okay, that's a correct statement, estimating by computation per neuron doesn't work.

    5:11 His definition of intelligence is wrong but again would be consistent when viewed from a computational reference.

    6:00 And again, he confirms his computational reference (which is incorrect).

    7:25 No, this is incorrect. Deep learning exhibits the same scaling problem as anything based on computation.

    8:25 I'm sorry, but this is just idiotic. Any deep learning model of any size that is trained to identify pictures is inferior to what a four-year-old can do. The deep learning method does not provide learning of a type equivalent to human learning; it simply isn't there. Pretending that it is there, or is close, or is getting there is self-delusion of a high order.

    28:00 His description of alternatives to deep learning as well as his description of neural function is pretty bad.

    32:30 Here I can see Joscha trying to grasp some of these concepts but he doesn't understand them either in detail or how they fit together. Still, that is encouraging since most people who claim to be researching AGI are considerably further behind.

    33:26 Yes, time is a factor. Some of his intuition is correct, but he still has that computational bias. I had similar conceptual struggles in my research about 8 years ago and he's a little further back than that, so maybe 10 years behind.

    34:40 No, that isn't how it works. That is a computational model rather than a brain model.

    36:44 Transfer by RNA — we're off the deep end again. This was a fad theory in science and was used in science fiction for a while, but there's nothing to it. The brain does not store records as RNA.

    37:00 Agnostic to the neuron. This could either be correct or incorrect depending on how it is meant.

    44:00 Definitely on the wrong research path if he is trying to develop AGI.

    47:00 His understanding of control in the brain is lacking.

    54:00 Michael's rambling dialog is saying very little. Massive overuse of the phrase, "we can talk about this."

    1:10:00 He's made a couple of good points but mostly misses the mark.

    1:14:00 Goal scaling is not a good analogy for AGI. But that would be consistent with someone who mistakenly thinks that AI can be scaled up to AGI.

    1:22:00 His path to AGI is a joke. I noticed that he is leaning on the term "emergence" which is something I never use. This term is nothing more than a euphemism for "I have no idea how this works but I don't want to admit my ignorance." Consciousness is not an emergent property and no amount of wishful thinking will make that true.

    1:24:00 The fact that he is talking about a belief in free will rather than the science of free will shows that he is very far behind in his understanding of this topic. The best I can say about his contribution is that it is true that biological consciousness can only be understood in terms of evolutionary theory. These constraints cannot be completely dismissed even when consciousness is non-biological which would mean that he is probably vastly overestimating the potential variety of cognitive systems.

    1:28:00 Christoph is correct about attractor dynamics in the brain. However, he then mentions states which is a term borrowed from computational theory and likewise uses the euphemism of emergence. Coincidences of signals is also incorrect. So, it's pretty clear that he does not understand this topic either.

    1:33:00 He is confusing predictive modeling with environmental modeling; these are not the same.

    1:37:00 What is missing from AI is biological behavioral goals? Intelligence is just the ability to pursue those goals in a changing context? No. This has nothing to do with AGI. That's enough.

    This has been mostly a waste of time except to see how far behind the public research field is in terms of AGI theory.

  7. "Once you can move very quickly, you need to perceive very quickly." Or, at least as likely, the other way around: once you can perceive quickly, you evolve the means to move quickly.

  8. Good topic! But didn’t really answer anything.
    Fear? The question is why we fear them while they are completely harmless. We don’t fear mosquitoes even though they are the number-one killer of humans, and they are harmful.

  9. Tremendous – all 3 speakers great, so much food for thought, complexity with a top-down approach as discussed by George Ellis. What is missing is characterisation of how the Universe itself works, which should really be an area of focus, since ultimately it determines all that is possible, including life and AI.

  10. I feel so seen. Super happy that people smarter and better at execution than I am are able to not only figure this out but also fucking provide concrete evidence of it

    Damn what a cool video

  11. Speaking of goals, the growth of plants always impressed me, in that eventually the rate of growth doesn't depend on how much it has grown but on the computation of how much it has yet to grow. It must be possible to model this in a toy system without explicitly storing the goal height. Anyway, maybe this is some kind of hormonal feedback. That is something else that distinguishes brains: they are bathed in a neurochemical soup.
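The hormonal-feedback idea above can be sketched in a toy system (my own construction, not from the video): growth draws on a finite hormone pool that growth itself depletes, so the growth rate tracks how much remains to grow even though no goal height is stored anywhere.

```python
# Toy model: growth rate depends on "how much is left to grow"
# without any stored goal height. A finite hormone pool drives
# growth, and each increment of growth consumes hormone, so the
# rate decays as the implicit target is approached.

def grow(hormone=10.0, rate=0.1, steps=2000):
    height = 0.0
    for _ in range(steps):
        dh = rate * hormone   # rate proportional to remaining hormone
        height += dh
        hormone -= dh         # growth consumes the hormone pool
    return height, hormone

final_height, remaining = grow()
# height + hormone is conserved, so height converges to the initial
# pool size (10.0) although no variable ever encodes that goal.
```

The "goal" here is implicit in the initial amount of hormone, which is at least suggestive of how a target state can be reached without being represented explicitly.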

  12. Perhaps there has to be an encoded binary relationship between cells in the way (the roots of) plants seek water. Then the eye cells collectively will seek a spinal cord and so on.

  13. “I know the only conscious being in the world is myself” this is an epic conclusion to an epic conversation between three human geniuses. Thank you!

  14. I am curious: are cellular automata equivalent to nondeterministic Turing machines if each cell, including newly created ones, computes states in parallel? Thank you so much for this presentation; please keep sharing your knowledge.
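One note on the question's premise (my own sketch, not from the presentation): synchronous parallel updates do not by themselves make a cellular automaton nondeterministic. Every cell's next state is a fixed function of its neighborhood, as in this Rule 110 example, which is fully deterministic yet known to be Turing-complete.

```python
# Rule 110: each cell's next state is a deterministic function of
# its three-cell neighborhood, and all cells update simultaneously.
RULE = 110  # the lookup table is packed into the bits of the number 110

def step(cells):
    n = len(cells)
    # Every new state is computed from the *old* configuration at once,
    # i.e. the update is parallel but still deterministic.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 16 + [1] + [0] * 15  # a single live cell on a ring
for _ in range(8):
    state = step(state)
# Re-running from the same start always produces the same history:
# parallelism here is not the same as nondeterministic branching.
```

Nondeterminism would instead mean the machine may take several different transitions from the same configuration, which is a separate property from how many cells update at once.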

  15. It seems to me that cellular entities need to have a sense of some kind of ethics in order to form multicellular entities: the ability to predict that a desired outcome can be obtained in a society rather than as an individual. Could it be possible that this information is encoded in the genome of the cell, or is this a fundamental property of existence? A single cell also contains parts that need to cooperate.

  16. The looks of restless bemusement mixed with exhausted admiration start to kick in around 4100, as the other two guys’ body language begins to communicate the beginnings of (affectionate) exasperation.

  17. Joscha Bach strikes me as one hop, skip, and a jump from being an Arahant. I can't think of anybody with a deeper understanding of the philosophical issues around computation. Stephen Wolfram's systematic investigation of what computation is like is the close second, but I don't think Stephen has computed (pun totally intended) the consequences of his theories as deeply as Joscha has. I feel very privileged that I have access to such wonderful minds as Joscha, Michael, and Christoph. What a fascinating conversation!

  18. 🎯 Key Takeaways for quick navigation:

    – Introduction to the event featuring guest speakers Christoph von der Malsburg and Michael Levin, hosted by Dr. Joscha Bach.
    01:23 🤔 Questioning the Limits of Deep Learning
    – Exploring whether deep learning can overcome its current limitations through scaling, codecs, and online learning.
    – Explanation of differentiable computing in deep learning.
    – Discussing the equivalence of continuous and discrete mathematics in computation.
    14:10 ⚙️ Exploring Automata as an Alternative
    – Suggesting that learning through self-play with discrete systems may be equivalent to deep learning.
    18:06 🌌 Non-Deterministic Turing Machines
    – Speculating on how the brain's parallelism and stochasticity could be implemented using a non-deterministic Turing machine model.
    – Noting that current biological models often fall short in replicating the functionality seen in digital models.
    – Questioning whether current theoretical tools in neuroscience are functionalist enough to understand the information processing in nervous systems.
    23:36 🧠 Neuron as Reinforcement Learning Agent
    – Implementing adaptive functions, neurons aim to survive in the brain by reaping rewards based on their actions.
    – Neurons are not only specialized switches; any cell can process information in multicellular organisms.
    – The possibilities of evolution and the capabilities of individual cells suggest that every multicellular organism operates as a slow brain.
    28:25 🤔 Consciousness and Self-Reflexive Attention
    – Consciousness may not be as rare as thought; self-reflexive attention could be crucial for learning beyond mere pattern recognition.
    – The role of consciousness in learning goes beyond simple sensory input; it contributes to creating a coherent model of reality.
    – Brain organization may not be hard-coded but evolves through neural Darwinism.
    – The brain's organizational structure is shaped through evolution and competition between different forms of organization.
    – Gerald Edelman's idea of neural Darwinism suggests that the genome provides conditions for starting evolutionary processes, leading to diverse brain organizations.
    – Transitioning from image-based learning to video-based learning provides information preservation and constraint-based learning.
    – The brain's approach to learning and using computational primitives differs from the challenges faced by neural networks in training.
    37:25 💰 Reward-Driven Language in the Brain
    – The reward system in the brain is similar to an economic problem faced by a corporation.
    – Unlike market-based rewards, every neuron consumes similar resources, emphasizing a unique reward-driven language in the brain.
    43:14 🤖 New Paradigm: Selector and Modifier Functions
    – Neurons can be densely arranged in a lattice, allowing them to self-organize and adapt through global functions.
    – The selector and modifier function paradigm offers a potential alternative to traditional deep learning, inspired by biological principles.
    46:49 🧠 Rethinking Human Identity and Intelligence:
    – Humans are often seen as discrete natural kinds, but considering developmental biology and evolution, there are no sharp lines between species.
    – Developmental changes occur gradually, challenging the idea of discrete intelligence, especially during metamorphosis as seen in caterpillars transforming into butterflies.
    53:59 🌐 Collective Intelligence in Biological Systems:
    – All biological systems, including humans, exhibit collective intelligence, working as unified entities made up of intelligent components.
    – The scaling interface is crucial for individual subunits to collaborate and present a coherent agent to the environment.
    57:43 🧪 Competence of Single Cells in Problem Solving:
    – Single cells, like amoeba and slime molds, demonstrate competence in problem-solving, even without a nervous system.
    – Recognizing intelligence beyond three-dimensional space is crucial, understanding physiological, morphological, and pattern-based problem-solving.
    01:00:18 🧬 Problem-Solving in Genetic Space:
    01:02:56 🧠 Intelligence in Development and Regeneration:
    – Picasso tadpoles and regenerating salamanders reveal intelligence in recognizing unexpected changes and taking corrective action.
    01:06:28 🔄 Full Stack Models for Understanding Intelligence:
    – Recognizing parallels between biology and computer science, where algorithms guide functional activities at different levels.
    – Bioelectrics: Study of how all cells use electrical signaling to form computational networks.
    01:08:19 🧲 Bioelectricity in Collective Intelligence and Counterfactual Memories
    – Collective Intelligence: Treating groups of cells as collective intelligence solving anatomical problems.
    – Counterfactual Memories: Cells exhibit counterfactual memory, representing future states based on injury likelihood.
    – Bioelectricity in Memory: Reading and writing memories in collective intelligence using bioelectric signals.
    – Cells in Conflict: Cells in conflict with the environment when disconnected, akin to cancer behavior.
    01:13:39 🧠 Connecting Homeostats to Form Larger Networks
    – Computational Goal States: Exploring how a single body can store multiple computational goal states.
    01:14:46 🤖 Emergence of Xenobots: Novelty, Behavior, and Self-Replication
    – Kinematic Self-Replication: Demonstration of self-replication in the absence of transgenes or nanomaterials.
    – Parts with Agendas: Importance of individual parts having agendas in a living system.
    01:24:21 🌌 Open-Ended Evolution and Ethical Implications
    – Potential for New Beings: Cyborgs, biobots, and hybrids present a vast array of possibilities in the biosphere.
    – Ethical Considerations: Implications for ethics in dealing with new forms of life and intelligence.
    – Current methods of assessing AI intelligence based on evolutionary origins are inadequate.
    – Connectivity patterns and self-interaction play a crucial role in shaping brain activity.
    01:32:01 🧠 Perspective Shift: Neurons and Firing Environment
    – Proposes a shift in perspective from individual neurons to the firing environment.
    – Compares a single pixel on a screen to a single neuron, highlighting the importance of context in understanding neural activity.
    – Challenges the notion of infinite possibilities in intelligent and organized patterns.
    – Discusses the recurring convergence of certain biological patterns across different species.
    01:37:46 🤖 AI's Lack of Behavioral Goals
    – Questions the true intelligence of AI systems that don't align their actions with recognizable goals.
    – Defines consciousness as the concentration of the entire brain on a single topic.
    – Discusses the continuity of consciousness across evolution, diminishing in volume.
    – Challenges the idea of a clear point where consciousness disappears in the evolutionary ladder.
    – Addresses the necessity of communication protocols for different types of intelligence.
    – Questions whether human vulnerability to cancer is linked to a lack of intelligence at the local organismic level.
    – Questions on the internal competency of cells or neurons in driving intelligence.
    – Joscha raises concerns about creating long-lived, coherent organisms and the formalization of multi-scale organization.
    01:55:38 🌍 Humans in the Grand Scheme of Life on Earth
    – Discussing the hierarchical organization beyond individual humans.
    – Examining how humans, as specific entities, fit into the broader context of life on Earth.
    01:57:53 🤔 Coherence and Stability in Biological Forms
    – Drawing parallels between the stability of coherent forms and mathematical singular points.
    – Questioning the information complexity of the genome and the inherent complexity of cell machinery.
    – Speculating on the complexity of the information needed for self-replication in cells.
    – Proposing a system with arrays of modules for different modalities and dynamic projection patterns.
    – Addressing challenges in self-driving cars, including reliance on classifiers and rule-based behavior.
    – Discussing the public perception of self-driving cars and media biases.
    02:11:23 🧠 GPT-3 and the Need for Coherence
    – Acknowledging the achievements of GPT-3 and its impressive capabilities.
    – Highlighting the system's lack of insight into real-world representations and geometric understanding.
    – Discussing the importance of interaction and the need for improved data structures in representing themes and realities.
    – The difficulty of filling a high-dimensional space with examples due to its vastness.
    – Objects conceptualized as chunks, composed of features defining their nature.
    02:19:01 ⚙️ Activation Traces and Neural Network Processing
    – Activation traces in neural networks modulate patterns based on content.
    – Distributed computational pipeline in neural networks.
    – Debate on components and dynamic mappings in neural networks.
    – Matthew Cook's perspective on slips of paper as components for cognitive tasks.
    – The importance of variables as the "glue" to connect abstract forms with concrete elements.
    – Variables as essential elements for abstract representation.
    – Describing arbitrary scripts using lateral and compositional links.
    02:24:37 🧠 Perspectives on Intelligence
    – Three perspectives on intelligence: convergence, hierarchical pattern matching, and construction.
    – Convergence as seen in deep learning, modifying functions through gradient descent.
    – Hierarchical pattern matching using evolved operators for efficient activation pattern matching.

    Made with HARPA AI
