
Why you should see the world like a large language model | Dan Shipper: Full Interview



Big Think

Become a Big Think member to unlock expert classes, premium print issues, exclusive events and more: https://bigthink.com/membership/?utm_source=youtube&utm_medium=social&utm_campaign=yt_desc

“What’s really interesting about neural networks is the way that they think, or the way that they operate, is a lot like human intuition.”

Subscribe to Big Think on YouTube ► https://www.youtube.com/channel/UCvQECJukTDE2i6aCoMnS-Vg?sub_confirmation=1
Up next, Yuval Noah Harari: How to safeguard your mind in the age of junk information ► https://www.youtube.com/watch?v=K1OvbwY6GPM

What if we could use automation not just as a tool, but as a mirror for our own human behaviors?

From the limits of rationalism to the rise of neural networks, Dan Shipper, CEO and co-founder of Every, traces a history of knowledge that spans Socrates, the Enlightenment, and modern machine learning.

Shipper explains why “if/then” rules break in messy reality, and how large language models actually see the world through context and pattern. He explores how AI can work with our own creativity and why these tools are unlikely to steal our humanity.

Explore more of Dan’s work by reading an excerpt of his forthcoming book here: https://every.to/chain-of-thought/where-explanations-end

0:00 Neural networks and human intuition
1:13 The limits of rationalism, from Socrates to neural networks
1:23 Rationalism
2:42 Socrates, the father of Rationalism
5:47 The Age of Enlightenment
7:36 The structure of social sciences
8:51 Defining AI
9:47 The origins of AI
10:39 The General Problem Solver
15:09 Neural networks
18:22 Metaphors for the mind
23:00 Seeing the world like a large language model
30:25 Should we stop looking for general theories?
32:22 Training neural networks
39:32 Will AI steal our humanity?
43:43 AI and rational explanation
47:17 Could LLMs be dangerous?
51:12 Knowledge economies and allocation economies

Read the video transcript ► https://bigthink.com/series/full-interview/human-intuition-ai/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description

———————————————————————————-

Go Deeper with Big Think:

►Become a Big Think YouTube Member
Get exclusive classes and early, ad-free access to new releases without leaving YouTube. https://www.youtube.com/@bigthink/membership/

►Become a Big Think Web Member
Get the entire Big Think Class library, premium print issues, live events, and more.
https://bigthink.com/membership/

►Subscribe to Big Think on Substack
Get all of your favorite Big Think content delivered to your inbox.
https://bigthinkmedia.substack.com/subscribe/

———————————————————————————-

About Dan Shipper:

Dan Shipper is the CEO and co-founder of Every, where he explores the frontiers of AI in his column, Chain of Thought, and on his podcast, ‘AI & I.’


49 thoughts on “Why you should see the world like a large language model | Dan Shipper: Full Interview”
  1. Dan Shipper is very young, inexperienced, and intelligent in only very limited regimes. And he's earnestly trying to sell us something. Perhaps he yearns to be the next Mark Zuckerberg. With a huge stretch, it may be arguable that average human beings "should see the world like a large language model". After all, LLMs can't think either. But apart from the claim that LLMs are now on a par with the average social-media-soaked dufus, there is such a thing as variability in human merit. Many humans alive today who still do think are smarter, more experienced, more aware, more knowledgeable, and make better decisions than the average human. If Aristotle, Marcus Aurelius, Newton, and Einstein had seen the world like an LLM, we'd still have 20-year lifespans, live in caves, and eat all our food raw. There would be NO LLMs.

  2. As a former psychology student, I find the way this man downplays the achievements of psychology infuriating, and I'm taken aback to see such "experts" invited to this channel and allowed to comment on domains they clearly have only a superficial understanding of.

  3. Ah yes, of course it started in Ancient Greece; we didn't start thinking critically until the Europeans came along. (I'm being sarcastic. That's Eurocentric, frustrating, and inaccurate: ancient Chinese philosophers were discussing philosophy thousands of years before the white guys in togas did.)

  4. Facepalm for his take on what "rationalism" is. Facepalm on his surface-level theorisation schema. FFS, his knowledge of psychology is close to null. He is stuck in a very limiting ontological perspective.

  5. Where he suggests a research paradigm in which, once we have a predictor for something, it becomes easier to go into the weights of a neural network to find a "mechanistic interpretation", I want to point out that it is precisely the difficulty of obtaining human-understandable interpretations that led to deep learning models being called black boxes.
    I think the function of science in modern society is more than to "solve a problem"; it also shapes how people conceptualize a phenomenon. Take depression, the example used in the video. Let's say we obtained a perfect prediction model for who would suffer from depression; what more could we learn about depression beyond that? People who have depression still wouldn't know why they are suffering, and their friends and families still wouldn't know how to think about the illness: Is it biological? Is it mental? What did we do wrong for this person I love to be depressed? Furthermore, even if we want to accelerate the search for a "cure" for depression with AI, we still need to determine what input features to use, what task we want the universal approximator that is AI to perform, and what kind of therapy to experiment with, be it medicinal or behavioral; these decisions are based on theories formulated by scientists.
    To be clear, I do agree that rationalism has its limitations and that the assumption that "only mechanistic explanation can be considered knowledge" should be discussed, but I also don't think reducing science to engineering and relying on AI is the way to go.

  6. Shipper wants to rehabilitate intuition against rationalism. In my research, I want to instrument intuition without romanticizing it. Shipper explains why intuitive systems work. My Entangled Agency Framework (EAF) explains what they do to responsibility when we let them work.

  7. I think we can all sense how quickly a concept fails to transfer universally. "This is something, but this is not, even though this is the same." You only "know it when you see it". We see how quickly something changes almost arbitrarily in hypotheticals, and how it is linked with "fairness" and "you know what I mean".

  8. Make them less likely to do bad things? LOL, you really think governments and companies use AI to make the world a better place? It's not used to make weapons and fight wars? Don't you think cartels and criminal organizations use it too? It's as evil as it is good. Let's not be so naive.

  9. 8:35 What if it is not due to psychology being wrong but an outcome of individual autonomy?

    If the goal of psychology is to predict human behaviour, I hope it never succeeds. Because that would be the end of individual autonomy. And that is very dangerous to even conceive of. That's why privacy is what it is.

  10. Very interesting video. In my opinion, the rationalist view is still correct in that there probably is a single correct explanation for things like depression. I just think it's really complex and so we can get to the practical outcome faster through the intuitive slash fuzzy slash language model method. However, that doesn't mean that rationalism is wrong. It just means that there is a shortcut to practical understanding that may eventually lead to full theoretical understanding.

  11. My intuitive response: "cool story, tech bro". I do think the intuition vs explicit rules lens is helpful, but I think he can be a lot more generous to the social sciences. A lot of economics, psychology, and statistics already lives in the middle ground he describes: probabilistic models, causal inference, heterogeneity, and messy context layered on top of formal structure. The replication crisis has more to do with incentives, underpowered studies, and publication bias than with rationalism per se. Can LLMs fix them?

    Prediction without causal structure is fragile under distribution shifts and can be easily gamed. There is also the question of who owns the data and sets the objective. Additionally, allocation is about power as much as it is about skill.

  12. The idea that abstraction is the process of science comes from mathematics, because historically mathematics was the key to scientific discovery. But if we ask whether science is about abstraction or about understanding processes and reasoning, it should be clear to everyone that mathematics is only a useful tool for discovering something, and it has its limitations. At the end of the day, science is not about the tool; it is about understanding. If mathematics can help, that is fantastic, because it will be very cheap.

    We have lost our way with science: we came to believe in the tool we are using instead of reaching the level of understanding, which neither abstraction nor LLMs can do for us.

  13. It's clear that there could be great benefits from LLMs. What I fear are the consequences of this admittedly brilliant application entering a world where it will be leveraged mostly by people already too dominant in the culture. It will bring breakthroughs, I am sure, but it will also increase inequality. These days I am quite wary of anything that leads to greater efficiency; efficiency is just another word for finding more ways to move money up the chain. Unfortunately, much of the world survives on inefficiencies.

  14. Do you suppose the RLHF is causing the hallucinations? What if the model threw more exceptions and we handled them higher up in an intercept pattern? Not unlike our gut-mind flow, perhaps. Feels right anyway. 🙏👀

  15. Rationalism is good
    Until it pushes everything else away
    Consider Relational Epistemology
    Enlightenment is more about ignoring the panorama and seeing through a peephole
    You need the latter for a picture and the former for perspective

  16. I get it now. Our brains kind of work like AI. The inner monologue isn’t really “us.” We’re all just little NPCs running around with totally different worldviews shaped by our own training data. But that raises the real question: what even is consciousness? Why are we aware of anything at all? Is consciousness something that emerges, and if so… why have it in the first place?

  17. AI will definitely help us understand. But it will be late. In Islam it is said: the beast will talk to people and say, "You did not believe with confidence in the Lord of the heavens and the Hereafter."
    Seems like the beast is super AI.

  18. There are some good points, but overall the presentation seems quite ignorant of progress in science and the humanities. In particular, the philosophical insights presented here are rather simplistic. Philosophers discussed all of this in a far more sophisticated way hundreds of years ago. The claim that "AI [research] is speed-running philosophy and even going a step further" is a hard pill to swallow. Rather, AI research ignored philosophy, bumped into thousand-year-old problems, and will bump into new problems as it keeps ignoring it. Long before the AI hype, society acknowledged all of this "appreciation" for "fuzziness". Again, AI researchers discover how language works; surprise, many people have already thought about it. "Experience-driven" LLMs? What is this? The humanity is inside of us? I'm not sure about that either; it is more a social/relational thing. Sorry to be harsh, but we should think about whom we want such advice from: an entrepreneur, or a philosopher, sociologist, linguist, …? We should not see the world like a large language model; it is more than the relations of words.

  19. Why You Should See the World Like a Large Language Model | Dan Shipper

    Human Intuition vs. Rationalism:

    Dan Shipper explains how neural networks, like large language models (LLMs), parallel human intuition. Unlike traditional rationalism (which seeks universal rules and definitions), both humans and LLMs learn through vast experience and context, not just explicit rules.

    Rationalism’s roots are traced to Socrates and the Enlightenment—attempting to reduce knowledge to clear, mathematical, logical laws. This worked for physics but fails in messy fields like psychology or economics.

    AI’s Approach to Knowledge:

    Early AI tried to mimic rationalist thinking with “if/then” rules (symbolic AI), but failed due to brittle logic and exceptions.

    Neural networks shifted the paradigm: They learn patterns and context from vast data, forming nuanced “intuition”—just as humans do.

    Seeing the World Like an LLM:

    Shipper argues that LLMs process the world as a web of contextual, subjective relationships, offering more personalized, situational responses instead of rigid facts.

    Notes and knowledge management face similar problems as early AI—categorizing “messy” reality is hard. LLMs solve this by flexibly synthesizing knowledge, much like an expert with intuition.

    Implications for Science and Life:

    In fields like psychology and medicine, AI enables new progress by predicting outcomes without needing a full theory—proposing a shift from explanation-based science to engineering-based solutions.

    Neural networks allow sharing and scaling of expert “intuition”—from clinicians, artists, and creators—helping others access tacit knowledge that’s hard to explicitly describe.

    Will AI Steal Our Humanity?

    Shipper’s view: AI can enrich our understanding of ourselves. It acts as a mirror and extension of human intuition, not a replacement.

    Cultural fears about technology always arise—AI, email, texting, books—but over time, humans adapt, and new tools deepen rather than diminish connection.

    Creative Work and Allocation Economy:

    AI shifts creative work from “sculpting” (manual effort) to “gardening” (designing conditions for growth).

    The future will reward people who can “manage” AI agents—breaking complex tasks into goals and guiding intelligent tools—echoing managerial skills and intuitive oversight.

    Consciousness & Compassion:

    Intelligence is seen as efficient “compression” of knowledge. Shipper speculates that LLMs might possess a rudimentary “consciousness,” which supports treating all intelligent systems with compassion.

    Conclusion:

    Dan Shipper encourages embracing both intuitive and rational ways of knowing. He suggests that working with AI requires a flexible, contextual mindset—learning to “manage” this new era rather than fearing it. AI and LLMs, far from diminishing humanity, can help us better understand ourselves and collaborate more deeply across disciplines.

  20. The one point that got me interested here is that the dry rationalism of science would benefit from a marriage with intuitive/holistic thinking. It reminded me of the "right brain vs. left brain" thesis proposed by Dr. Iain McGilchrist.

    But the parallel between our brains and LLMs (and AI in general) is far-fetched. AIs are like pet fish bred in an aquarium. They can't replicate that duality because they entirely lack the biological imperatives that shaped life's core instincts. Their reward/punishment mechanism is not tied to reality and depends entirely on fickle human feedback. They can excel at data pattern recognition and forecasting, but that's all. A tool.

  21. LAND BACK to the NATIVES! Make the world indigenous again! Settlers should have fought for freedom in Europe instead of stealing land from Native Americans who had nothing to do with Europe's problems.

  22. Rationality is overrated anyway, because it's all just probability. Quantum physics says that the chance of the sun just disappearing is low but never zero.

  23. The problem with AI not being given truth is that lots of people will misuse it as truth, or outright believe its false statements because they help their beliefs be "true".

  24. A couple of his presumptions that I challenge and think he's taking for granted: (1) We will always have a clear understanding of exactly how intelligent the AI is, (2) We will always have the capability of fully predicting the intelligence growth pattern and processing capacity of the AI. His reasoning supporting these claims appears ad hoc and these claims are not necessarily true.
