
Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | AI Podcast



Lex Fridman

Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research; he is now a professor at Columbia University. His work has been cited over 200,000 times. This conversation is part of the Artificial Intelligence podcast.

The associated lecture that Vladimir gave as part of the MIT Deep Learning series can be viewed here: https://www.youtube.com/watch?v=Ow25mjFjSmg

This episode is presented by Cash App. Download it & use code “LexPodcast”:
Cash App (App Store): https://apple.co/2sPrUHe
Cash App (Google Play): https://bit.ly/2MlvP5w

INFO:
Podcast website:
https://lexfridman.com/ai
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 – Introduction
2:55 – Alan Turing: science and engineering of intelligence
9:09 – What is a predicate?
14:22 – Plato’s world of ideas and world of things
21:06 – Strong and weak convergence
28:37 – Deep learning and the essence of intelligence
50:36 – Symbolic AI and logic-based systems
54:31 – How hard is 2D image understanding?
1:00:23 – Data
1:06:39 – Language
1:14:54 – Beautiful idea in statistical theory of learning
1:19:28 – Intelligence and heuristics
1:22:23 – Reasoning
1:25:11 – Role of philosophy in learning theory
1:31:40 – Music (speaking in Russian)
1:35:08 – Mortality

CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman



22 thoughts on “Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | AI Podcast”
  1. I really enjoyed this conversation with Vladimir. Here's the outline:
    0:00 – Introduction
    2:55 – Alan Turing: science and engineering of intelligence
    9:09 – What is a predicate?
    14:22 – Plato's world of ideas and world of things
    21:06 – Strong and weak convergence
    28:37 – Deep learning and the essence of intelligence
    50:36 – Symbolic AI and logic-based systems
    54:31 – How hard is 2D image understanding?
    1:00:23 – Data
    1:06:39 – Language
    1:14:54 – Beautiful idea in statistical theory of learning
    1:19:28 – Intelligence and heuristics
    1:22:23 – Reasoning
    1:25:11 – Role of philosophy in learning theory
    1:31:40 – Music (speaking in Russian)
    1:35:08 – Mortality

  2. My two cents on Vapnik's astute observation about symmetry: brains inherently recognize symmetry because of the natural noisy connections they're born with, which can relate any aspect of perception to any other, virtually invariant to sensory mode or to locality within a sensory mode. As it stands, creatures possessed of vision are able to learn all about symmetry early in their development – around the time they are able to recognize anything at all – because their visual cortex has learned to encode whenever any part of their vision corresponds with any other part, in color, shade, saturation, texture, motion, etc. This lends itself extremely well to the recognition of simple symbols (albeit not so simple in terms of machine learning). Experiencing everything, to my mind, is what defines the predicates that enable us to recognize digits, letters, voices, faces, shapes, and sounds, and even to have volition predicates for controlling not just our bodily/motor actions but our own attention, decision-making, and thought processes.

  3. Getting through this was an intense experience.

    Did I get the ending right?
    When solving a problem of interest, do not solve a more general problem as an intermediate step?

  4. I didn't totally get the difference between a heuristic and a predicate.
    They seem to do the same thing: reduce the number of possible explanations for a problem.

  5. Building neural networks can give some ideas about how neural nets in the brain work, and so guide neuroscience research to look in that direction; if that is confirmed, we have evidence that the direction is good because it works.

  6. I feel like Vladimir purposely doesn't state the point of his argument clearly. I hear a lot of mumbling instead of to-the-point statements. Is he not confident in his ideas, or what else could be the cause of my feeling this way?

  7. I wouldn't like to work for Vladimir. I feel like he would push me to work on things he believes are the way to go, without caring whether I share the same view.

  8. Here are some examples of image predicates based on photography:
    Is the image sharp?
    Is the image in color?
    Is the image black and white?
    Does the image contain objects?
    Does the image have a subject?
    Does the image convey a story?
    Is the image overexposed?
    Is the image underexposed?
    Etc…
    These are all very general concepts that could be applied to judge the quality of an image and to compare images.
    I love the conceptual ideas from Vladimir. That is the level of abstraction he is referring to.
    I think he actually outlined, in a formal way, how to identify the conditions for selecting admissible functions.
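    As a rough, purely illustrative sketch (my own, not something from the conversation), these could be written as boolean functions over a hypothetical image summary; the feature names and thresholds below are invented just to make the idea concrete:

        # Illustrative only: photography-style predicates as boolean functions.
        # ImageSummary and its thresholds are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class ImageSummary:
            sharpness: float        # 0.0 (blurry) .. 1.0 (sharp)
            saturation: float       # 0.0 (grayscale) .. 1.0 (vivid color)
            mean_brightness: float   # 0.0 (black) .. 1.0 (white)
            object_count: int        # number of detected objects

        def is_sharp(img: ImageSummary) -> bool:
            return img.sharpness > 0.6

        def is_black_and_white(img: ImageSummary) -> bool:
            return img.saturation < 0.05

        def is_overexposed(img: ImageSummary) -> bool:
            return img.mean_brightness > 0.9

        def is_underexposed(img: ImageSummary) -> bool:
            return img.mean_brightness < 0.1

        def contains_objects(img: ImageSummary) -> bool:
            return img.object_count > 0

        # Compare images crudely by counting how many "quality" predicates hold.
        def quality_score(img: ImageSummary) -> int:
            checks = [is_sharp(img), contains_objects(img),
                      not is_overexposed(img), not is_underexposed(img)]
            return sum(checks)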

  9. Having a formal idea of whatever you perceive as ideal (predicate(unit)), i.e. an abstract representation of the ideal implementation detail for a given function, would already in itself be the ultimate solution. This structure wouldn't necessarily have anything to do with intelligence, since its solution is entirely based on human reasoning and it cannot reason for itself.

  10. Thanks a lot for this conversation, Lex. I always admired Vapnik, and his original work on Statistical Learning and VC dimension was the first pathway into AI in the late 1990s that made sense to me since my training was in mechanics, mathematical analysis, and control theory.

    On the other hand, your broad and exploratory approach to AI appeals to me as a means of synthesizing the multitude of opinions and results.

    The counterplay between Platonic rigor and a sort of scientific empiricism is so valuable, and yet rare.

  11. Great conversation as always; the short exchange in Russian was truly invaluable for understanding Vladimir's thinking. I wish it were longer, but subtitles don't work for audio-only listeners, and I suspect not many of your listeners understand Russian.

  12. What an outstanding conversation! It was also very nice to hear your little conversation in Russian 🙂 Both of you are amazing people.

