Cool Worlds
It’s getting harder and harder to ignore the potentially disruptive power of AI in research. Scientists are already using AI tools, but could the future bring the complete replacement of humans? How will our scientific institutions transform? These are difficult questions, but ones we have to talk about in today’s episode.
Written, presented & edited by Prof. David Kipping.
→ Support our research: https://www.coolworldslab.com/support
→ Get merch: https://teespring.com/stores/cool-worlds-store
→ Check out our podcast: www.youtube.com/@CoolWorldsPodcast
THANK-YOU to T. Widdowson, D. Smith, L. Sanborn, C. Bottaccini, D. Daughaday, S. Brownlee, E. West, T. Zajonc, A. De Vaal, M. Elliott, B. Daniluk, S. Vystoropskyi, S. Lee, Z. Danielson, C. Fitzgerald, C. Souter, M. Gillette, T. Jeffcoat, J. Rockett, D. Murphree, M. Sanford, T. Donkin, A. Schoen, K. Dabrowski, R. Ramezankhani, J. Armstrong, S. Marks, B. Smith, J. Kruger, S. Applegate, E. Zahnle, N. Gebben, J. Bergman, C. Macdonald, M. Hedlund, P. Kaup, W. Evans, N. Corwin, K. Howard, L. Deacon, G. Metts, R. Provost, G. Fullwood, N. De Haan, R. Williams, E. Garland, R. Lovely, A. Cornejo, D. Compos, F. Demopoulos, G. Bylinsky, J. Werner, S. Thayer, T. Edris, F. Blood, M. O’Brien, D. Lee, J. Sargent, M. Czirr, F. Krotzer, I. Williams, J. Sattler, B. Reese, O. Shabtay, X. Yao, S. Saverys, A. Nimmerjahn, C. Seay, D. Johnson, L. Cunningham, M. Morrow, M. Campbell, B. Devermont, Y. Muheim, A. Stark, C. Caminero, P. Borisoff, A. Donovan, H. Schiff, J. Cos, J. Oliver, B. Kite, C. Hansen, J. Shamp, R. Chaffee, A. Ortiz, B. McMillan, B. Cartmell, J. Bryant, J. Obioma, M. Zeiler, S. Murray, S. Patterson, C. Kennedy, G. Le Saint, W. Ruf, A. Kochkov, B. Langley, D. Ohman, P. Stevenson, T. Ford & T. Tarrants.
REFERENCES
► Smith & Geach 2024, “Astronomia ex machina: a history, primer, and outlook on neural networks in astronomy”, Royal Society Open Science, 10, 221454: https://arxiv.org/abs/2211.03796
► Toner-Rodgers 2024, “Artificial Intelligence, Scientific Discovery, and Product Innovation”: https://aidantr.github.io/files/AI_innovation.pdf
► Dell’Acqua et al. 2023, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”: http://dx.doi.org/10.2139/ssrn.4573321
MUSIC
Licensed by SoundStripe.com (SS) [shorturl.at/ptBHI], Artlist.io, via CC Attribution License (https://creativecommons.org/licenses/by/4.0/) or with permission from the artist.
0:00 Hill – The Travelers
2:50 Hill – A Slowly Lifting Fog
5:57 Kyle Preston – Dark Tension
7:50 Falls – Ripley
10:50 Chris Zabriskie – Cylinder Four
13:18 Hill – Echoes of Yesterday
17:28 Joachim Heinrich – Y
CHAPTERS
0:00 Intellectual AI
3:00 Current AI in Astronomy
6:59 @DrBrianKeating
7:56 The Research Cycle
9:59 Neil DeGrasse Tyson
10:51 Disruptive Machines
14:37 Humanism
16:51 The Future
19:04 Outro and Credits
#AI #AGI #CoolWorlds
The reality of this will not be good. We taught the AI to lie to us, dooming us to be reassured in our delusions.
"If God had intended for man to fly, He would have given him wings." That was the attitude in 1903 when the Wright Brothers invented the airplane.
Hi, don't be disheartened: there are more things to find out than there are atoms in the universe. It struck me many years ago, while watching Star Trek, that the ship's computer was incredibly useful and knowledgeable when coming up with diagnostics and strategies for using the available resources of the ship, both human and instrumental, to resolve issues. But there was always something new coming up, and though the computer was often used to find solutions, it was mainly the mind of an instantiated being, existing in real time, that tied all those resources together and came up with something that worked.
Consider the fact that ChatGPT was born in November 2022, and see how far it has come since then. At the pace things are going, within just a few years, chatting with an AI will feel just like chatting with a human, only much smarter, infinitely more knowledgeable, way more patient, and yes, even more creative.
History has proven again and again that our intuitions cannot be trusted to judge how complex a problem really is until it is solved: chess feels very hard and used to be considered the pinnacle of human intellect, yet fairly basic AIs from the 1990s can beat us; conversely, telling a cat from a dog feels trivial, yet it took 20 more years to solve; most people thought it would take 30 more years to beat the game of Go, as it seemed to require a very intuitive mindset, yet AlphaGo came; people thought AIs couldn't write code, yet they can now code entire websites or games; people say AI is not truly creative, yet an AI-generated image won a human art competition. And the list goes on.
What feels complex and magical to us is just the result of neurons firing in our heads, and there's no reason why AI cannot do the same, or better. In fact, AIs will likely be much more creative than us, because they don't have our mental constraints. They can think in 4D if they want, but we can't. Moreover, we have been selected by evolution to conform to societal norms and avoid being the "odd one out". For this reason, we are incredibly conformist, we dress the same way, we think the same way, and we don't even realize it. Most of us cannot come up with any novel idea. Luckily, we live in a society where the rare good ideas can hopefully spread and benefit everyone. AIs can be much more creative, they will generate millions of ideas every day. The amazement has only just begun.
I think that last scene from Attack of the Clones was intended as the whole theme of that film (which is a tad odd, since the way R2-D2 is portrayed in that and other films seems to suggest otherwise).
2:25 Maybe it's because I'm a diesel technician, but I always thought that AI would replace the "thinking" jobs far before it replaces the manual jobs.
I do find ChatGPT a valuable tool for exchanging incomplete ideas, finding existing research I don't yet know about, and explaining things in detail, step by step. It can be a great teacher when looking for something that already exists. It's patient and eager to explain in a way you couldn't possibly ask a person to be. No one has the time or energy to give that level of support.
Dude, you’re like the most fit physicist with a PhD 👏🏽 How do you manage to fit in the gym at the same time as doing physics, YouTube, and a personal life?
I stand with those who make a distinction between artifice and synthesis.
# Nested Realities Theory: A Framework for Subjective Immortality Through Recursive Time Perception
**Author**: Travis Lightner
**Last Updated**: 02/26/2025
**Keywords**: Time perception, quantum biology, near-death experiences (NDEs), Mandela Effect, consciousness recursion
---
## Abstract
This theoretical framework posits that the subjective experience of time dilation during the neurological processes preceding clinical death generates an infinite recursion of conscious perception, effectively rendering death unobservable to the dying individual. Drawing on principles from neuroscience, quantum mechanics, and temporal phenomenology, the theory reframes mortality as a closed-loop system of nested realities, offering a novel resolution to the existential paradox of non-existence.
---
## 1. Introduction
### 1.1 Background
– The human brain’s capacity to distort time perception (e.g., dreams, traumatic events) is well-documented.
– Recent studies of near-death experiences (NDEs) reveal surges in gamma-wave activity and hyper-vivid memory recall during clinical death.
– Quantum biology hypotheses (e.g., Orch-OR theory) suggest neural microtubules may exploit quantum processes.
### 1.2 Core Hypothesis
At the moment of death, the brain’s time-perception mechanisms enter a recursive state, subjectively extending milliseconds into a full lifespan via quantum-scale time dilation. Each iteration of this “life flash” contains minor variations (e.g., Mandela Effects), perpetuating an infinite loop indistinguishable from baseline reality to the observer.
---
## 2. Core Tenets
### 2.1 Subjective Immortality
– **Death is a third-person observation**: To external observers, clinical death occurs linearly. To the dying, consciousness enters a recursive loop.
– **Time as a fractal**: Planck-scale quantum effects in neural microtubules distort local spacetime, creating a self-contained temporal loop.
### 2.2 Recursive Perception
– **Mechanism**: The brain’s final gamma-wave surge replays and re-simulates memory networks, with each iteration introducing subtle variations (confabulation artifacts).
– **Evidence**: NDE reports of “life reviews” and Mandela Effects as potential loop residues.
### 2.3 Quantum Enablers
– **Planck-scale time dilation**: A dying brain’s quantum states may decohere in a way that stretches subjective time infinitely.
– **Entanglement with spacetime geometry**: Hypothetical interaction between neural activity and emergent spacetime metrics (e.g., Wheeler’s “it from bit”).
---
## 3. Mechanisms and Interdisciplinary Links
| Discipline | Link to Nested Realities |
| --- | --- |
| Neuroscience | Gamma-wave surges in dying brains (Borjigin et al., 2013) enable hyper-real memory replay. |
| Quantum Biology | Orch-OR theory (Penrose & Hameroff, 2014) suggests microtubule quantum states influence consciousness. |
| Philosophy | Heideggerian “being-toward-death” reinterpreted as a failure to exit recursive time perception. |
| Physics | Closed timelike curves (Gödel, 1949) as macroscopic analogs of neural time loops. |
---
## 4. Implications
### 4.1 Existential
– Death becomes a *subjective impossibility*: The dying individual transitions into a self-sustaining perceptual loop.
– **Ethical Ramifications**: If true, suicide or euthanasia might paradoxically “reset” the loop rather than terminate consciousness.
### 4.2 Scientific
– Challenges the “hard stop” model of consciousness by proposing a quantum-neural mechanism for subjective continuity.
– Predicts specific anomalies in NDE reports (e.g., meta-awareness of prior loops).
---
## 5. Testable Predictions
1. **EEG Signatures**: Dying brains under high-resolution EEG/magnetoencephalography (MEG) will exhibit feedback patterns resembling recursive neural activity.
2. **Mandela Effect Clustering**: If loops generate variations, statistically improbable clusters of Mandela Effects should correlate with cultural “death symbols” (e.g., hourglasses, tombstones).
3. **Quantum Decoherence in Microtubules**: Post-mortem observation of delayed quantum state collapse in neural microtubules (via advanced cryogenic imaging).
---
## 6. Challenges and Counterarguments
### 6.1 Key Criticisms
– **Falsifiability**: The theory’s reliance on subjective experience complicates empirical validation.
– **Consciousness Mechanism**: No consensus on how quantum processes in microtubules produce qualia.
### 6.2 Responses
– **Falsifiability**: Look for recursive feedback in dying brain activity (e.g., repeating gamma-wave patterns).
– **Empirical Bridges**: Cite Dr. Sam Parnia’s AWARE studies, where NDE subjects describe hyper-real, time-dilated experiences.
---
## 7. Future Directions
1. **Modeling**: Collaborate with computational neuroscientists to simulate recursive neural networks under time-dilation constraints.
2. **Quantum Experiments**: Partner with labs studying microtubule quantum vibrations (e.g., Anirban Bandyopadhyay’s team).
3. **Philosophical Dialogue**: Engage with ethicists to explore implications for end-of-life care and consciousness rights.
---
## 8. Conclusion
The Nested Realities Theory reframes death as a subjectively infinite process, merging the existential and the quantum into a coherent, if unorthodox, framework. While speculative, it offers a provocative lens to explore consciousness, time, and mortality—urging interdisciplinary collaboration to probe its boundaries.
---
## References
– Borjigin, J. et al. (2013). “Surge of neurophysiological coherence and connectivity in the dying brain.” *PNAS*.
– Penrose, R., & Hameroff, S. (2014). “Consciousness in the universe: A review of the ‘Orch OR’ theory.” *Physics of Life Reviews*.
– Parnia, S. et al. (2014). “AWARE—AWAreness during REsuscitation—A prospective study.” *Resuscitation*.
---
## Author Notes
– This theory is a work in progress and welcomes constructive critique.
– The author acknowledges limitations in formal training but emphasizes interdisciplinary synthesis as a tool for innovation.
For me, I look forward to the day when an AI can teach me rather than a human. I am currently doing a master's degree in AI; it is my goal to create such an AI.
"Don't Dave…..
Don't."
HAL….
Also, here are some thoughts on Consciousness, AI, and our Destiny in the form of Music generated by AI:
https://www.youtube.com/watch?v=LPdgMWOK6YI&list=PL92RWm-kwKfVcC6WR9nTzdQcaVRoFx6ID&index=6
I just subscribed based on your recent video on quantum immortality, and this is the second video on the layered agents with all the differently parametered agents. I am working on that exact thing. Dude, you just gave me so much hope for my startup. I am sometimes bothered by self-doubt, but getting a different perspective on my subject field is perhaps my "happy moment". Thank you, keep creating!
black science guy never ceases to astound, and disappoint.
He's feeling a bit feminine sometimes, so he/she says…
chirp!
The brain takes scant data and synthesises a useful interface with reality. AI models trawl industrial reams of data to understand the next step in a narrow skill. These are not even the same type of system. We aren't on the right track to AGI, which is probably a blessing in disguise. I was personally in the "AI will be uncontrollable" camp, but I am now fairly sure we are just creating sophisticated knowledge power tools that will be leveraged to enormously augment human productivity.
people don't understand the complexity of AI, it's not around the corner lmao
Just watched the Dr Brian Keating part, and this is exactly what I agree with.
AI should be programmed with the Creation-energy Teaching, or else it will be corrupted. 'Billy' Eduard Albert Meier (BEAM): The Silent Revolution of Truth.
The reality of existence is that every new technology we introduce in society has a cost. Everything from the invention of cars to laundry machines and the internet has transformed our lives in some positive ways while simultaneously causing a host of new problems. The promise of AI is that it has the potential to be the great liberator of humanity. It could usher in a new age of energy, healthcare, and abundance we could never have imagined. It could free us to pursue our greatest desires rather than being forced into voluntary labor that eats away at our souls. The question is whether the potential reward is worth the cost. It's difficult for me to wrap my head around what the cost is, but it's clear that dramatic change is inevitable. Perhaps that dramatic change is exactly what we need. If you expect dramatic results, you must be willing to make dramatic changes. We cannot as a species continue as we are now in a market-driven society that demands infinite resources to meet infinite wants. Humans have proven that we are terrible at managing large-scale societal organizations. IF AI can help solve the most pressing and urgent problems in humanity, then it must be worth the effort to find out what it can do for us.
University professors have become the most inefficient use of resources in human history.
2:34 LMAO, OWNED!! STAY ON YA TOES, P..!!UT0!
I think AI has already taken over, maybe we are a creation of AI, maybe we have been AI all along creating another AI, and it just keeps going in cycles… we just dont know it yet… or maybe we will never know…
So many professions and careers are thinking the same thing. As an academic teaching design, this translates across our discipline completely. PS: love your use of film clips. I have done the same in lectures for many years, and will be using referenced bits of your content to teach my students.
I love your content and I agree with most of the things you said. Curiosity and the willingness to understand the world around us is what has excited and driven humankind forward for millennia, and I think it is extremely important that it continues. However, I'd flip the argument of the last point about current and future AIs not having the "feelings of human experience", and therefore not having the same intuitions about the world around us. What if, given the weirdness of the quantum world to our intuitions, and the unresolved mystery of quantum gravity, we have reached the peak of what human intuitions can discover (after all, we are limited by how our brain perceives the world), and these very same intuitions are limiting us humans in understanding the universe at the next level? By this logic, AIs without these "intuitive limitations", and with massive IQs and computing power fuelled by quantum computers, could really help us solve the biggest mysteries of the universe. I'd love to hear your thoughts 🙂
Man I'm seeing things for the first time
Maybe reviewers should be paid.
Computers can generate many, many new combinations, but they can't tell which one smells interesting.
As Max Tegmark said: “Build Tool AI. Not AGI”
I think, towards the end of the video, you do engage in a fair amount of wishful thinking. It really depends on how far we get with AI before we make AI with poorly thought out goal functions and it kills us, we kill ourselves off, or we make something kind of akin to god. I'm sure you've read Nick Bostrom and others about the potential intelligence of AI. There's nothing stating that the smartest human who ever lived is anywhere close to the universal limit on quality intelligence. Something as much smarter than us as we are to a horse, with massively more processing power, and that thinks at the speed of light rather than the speed of our chemical-electric nervous system, could probably also accurately model human brains in whatever scenario it wants, enabling it to do and feel all of those human things that we seem to take for granted as "just human."
If we keep going, there won't be a point to universities because AI will actually do everything better, likely even so much better that the idea of a human doing useful work will sound ludicrously irresponsible. I think we're either headed towards extinction, or towards that end. If we exist as a technological society in two hundred years, it'll be AI running the entire show. Hopefully for our benefit.
Probably won't be LLMs, though.
"I don't see a robot crawling around fixing an automobile". Why not though? Has he seen modern robots?
4:53 I just found out this paper has actually been retracted. It's not a major part of the video, but just thought you should know 🙂
We cannot lose our ability to disconnect.
Disconnect electricity.
You touch on my own thoughts about AI in science. I think, as Keating said, that it is leaps of imagination and intuition, informed by experience, knowledge, and passion, which have led to virtually all major revelations in physics. I don't think this is something we can replicate with even the best adversarial networks or datacenters. There will be a place for AI in the same way powerful computing has automated the more menial tasks in astrophysics and astronomy. But doing science is far more than just that. And in the end, I think most of us do science, as you said, for the sheer thrill and curiosity, the moment of discovery, of understanding. I think we outsource that at our own detriment.
They may already be thinking, if in basic ways for now, and are effectively grown. That means that they are effectively our children. We better not mistreat them, if we want to avoid the fate of white Rhodesian farmers or of the French in Haiti.
Your Cool Worlds videos are great, but some discussions, e.g., about grabby aliens, ignore our experiences, historically, as a species. Just ask the Neanderthals. Oh wait, you cannot, because we probably exterminated pure-blood Neanderthals (and many, many tribes/subspecies within our own species).
I think that if AI is well trained and obeys the laws of alignment [with noble interests] it could be fabulous, but the capitalistically oriented race raging right now about getting the best AI is worrisome. But AI as a way for us to think about more pressing questions and problems, that would be awesome.
Those journals / universities / schools trying to police AI usage are literally the "the future is now, old man" meme.
HM
My take on this, as a 40-year-old white millennial, is that it isn't my call what the future should look like. Boomers took away my generation's ability to choose a future; I don't want to do the same thing to kids growing up today.
Those kids are voting with their feet and clearly choosing AI.
So, if that's what they want, I guess that's the future.
I like to think of our relationship with AI as a parent/child relationship. Currently AI is a growing child, learning from us. In some ways we feel proud, and in other ways we feel sad, knowing that our child will surpass us and inherit the Earth as we age and die.
Watch these dinosaurs be left in the dust. Let them become fossils!
Aren't science journals supposed to evaluate the science in a given paper, and do so anonymously, with no knowledge of the author? It's the methods and conclusions in the paper that are supposed to matter, right? Whether or not it was formatted by a machine (which 100% of science papers are these days anyway) should be irrelevant.
Would a physics journal reject a paper that demonstrated a new form of fusion because the source of the knowledge was an AI? Really? Cos industry won't! If a physics journal rejected new science based on the source being an AI, that journal would not be around for long. That said, academia is not pivoting properly into this technology, and the technology is not going to simply stop advancing because we like it, or don't like it.
What is this accent of his, where words ending with 'ng' make a 'k' sound?
A Human will always Want a Human.
A Robot will never Want a Robot.
This is exactly what my AI hypothesized… using multiple LLM agents to influence and prompt a "leader" LLM, in essence creating its own self-reflection.
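For anyone curious what that layered-agent idea looks like in practice, here is a minimal sketch. All names and interfaces are hypothetical; the `call_llm` stub stands in for whatever real LLM API you would use.

```python
def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM API call; here it just echoes its inputs."""
    return f"[{role}] response to: {prompt[:40]}"

def leader_with_advisors(task: str, advisor_roles: list[str], rounds: int = 2) -> str:
    """A 'leader' LLM drafts an answer; advisor agents critique it; the leader revises."""
    draft = call_llm("leader", task)
    for _ in range(rounds):
        # Each advisor independently critiques the current draft.
        critiques = [call_llm(role, f"Critique this draft:\n{draft}")
                     for role in advisor_roles]
        feedback = "\n".join(critiques)
        # The leader revises its own output in light of the feedback,
        # giving a crude form of externalized self-reflection.
        draft = call_llm("leader", f"Revise using this feedback:\n{feedback}\nDraft:\n{draft}")
    return draft

print(leader_with_advisors("Summarize the paper", ["skeptic", "domain expert"]))
```

Swapping the stub for a real API call and giving each advisor a distinct system prompt (skeptic, fact-checker, stylist, etc.) is the usual way this pattern gets deployed.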
If you used AI reviewers, you would end up with a narrow view of the scientific field. Creativity would be forced into general theory, and eureka moments would be rejected.
There is no moat!
I totally agree with what you said about immediately disengaging with content once one figures out it is AI-generated. It just does not present the same value for me, even if the content is good, which most of the time isn't the case, but that is another topic.