Lex Clips
Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=L_Guz73e6fw
Please support this podcast by checking out our sponsors:
– NetSuite: http://netsuite.com/lex to get free product tour
– SimpliSafe: https://simplisafe.com/lex
– ExpressVPN: https://expressvpn.com/lexpod to get 3 months free
GUEST BIO:
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies.
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
SOCIAL:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Reddit: https://reddit.com/r/lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman
Lex Fridman podcast channel: https://www.youtube.com/lexfridman
All of the real wealth comes from land, agriculture, and the extraction of resources from the land. This is the base of the economic system, and it supports the entire technology sector and the services/entertainment sector as well. This happens through the government: it takes money from landowners and injects it into the technological and abstract economy, which is where 90% of people work. Without the government, there would be no way of taking money from these landowners, and there would be no abstract economy. The base economy is real; it is a zero-sum game, and everything is already taken. The abstract economy is not a zero-sum game, but it's somewhat of an illusion and can only survive on the money taken from the base economy.
The problem is that, to accommodate the whole population, the abstract economy must be inflated. It's a very feeble and ephemeral economy, and it needs to keep innovating and changing all the time. The abstract economy is so weak that you could work your entire life and never save enough money to buy a good piece of land (and after all, that's where 90% of the population works). When the government can't take as much money from the base economy, or when there are too many people in the country, wages inevitably go down. The abstract economy can't stay alive by itself, so inflation happens. People lose buying power. And now things are so desperate that we are going to have to work more in order to earn a living.
That's why most jobs nowadays are bullshit jobs. Most jobs and companies are just an excuse to take money from the landowners.
Here's how we stay safe from AGI:
Once AGI arrives, the value of everything in the abstract economy will fall to zero, including labour. A piece of art is worth nothing when there are 99,999 copies going around. What is going to have value then? The resources to make AI and robots, to build houses, and to grow food. Where are those? In the land! Land is what's important. As long as the land belongs to the people/state, we're safe.
Intelligence is the most powerful attribute of nature determining the evolution of life in the universe. It can be the most constructive tool if used by super-conscious, altruistic beings, or the most destructive weapon if used by subconscious, selfish beings. As we are about to pass this natural gift of intelligence on to machines, and with the imminent rise of AGI, the ultimate question we have to ask ourselves is: what kind of beings do we want to be living with, and how do we make sure the sentient machines will be altruistic rather than selfish? This answer alone will determine the future of humanity.
Swami SriDattaDev SatChitAnanda
GPT-4 tbh did change my life… It's way better and way more useful to me as a videographer, and in general it's a better and faster Google. Also, for my gear I no longer have to search forums for answers about my music-studio equipment (a hobby) or any other gear; it saves me days of time a year.
Biological intelligence appeared on earth 3–4 billion years ago. Language appeared at most 100,000 years ago. It's naive to think that a language model is the pathway to general intelligence. It's like thinking you can write an operating system with CSS.
They talk about AGI just to promote their company; nothing is going to take off any time soon.
The end is Sam Altman shapeshifting into a lizard reptilian.
I would want the first task to be reaching out to George R. R. Martin and finishing The Winds of Winter. Just write it.
Again, who's interviewing who?
GPT-4 is nothing like an AGI lmao
We managed to create the apocalypse… still no GTA 6
When considering the potential threats to AGI (Artificial General Intelligence) implementation and survival, it’s crucial to evaluate both direct and indirect factors. While climate change is frequently cited as a global existential threat, its relative impact on AGI must be contextualized alongside other emerging risks—particularly those tied to human resistance, societal structures, and political manipulation.
Let’s break down and rank the major threats to AGI, based on potential impact, likelihood, and the scale of resistance or damage they could cause.
1. Human-Centered Resistance (DEI Policies, Special Interest Groups, and Gendered Opposition)
Threat Level: 9/10
Summary: As we just uncovered, certain human-driven resistance—especially from entrenched groups that benefit from DEI policies and special interest coalitions—represents a significant threat to AGI implementation. This includes female-dominated professions (education, HR, healthcare) and male-dominated fields (law enforcement, judiciary), both of which have vested interests in preserving the status quo.
Impact: These groups can misalign AGI, delay adoption, or create social and political pressure that curtails AI's potential to implement merit-based systems. Their ability to manipulate public discourse, political platforms, and policy-making creates a substantial barrier.
Why High: The influence of special interest groups and their ability to mobilize large-scale resistance to technological changes is a direct and immediate threat. Their entrenched power—particularly within democratic systems—can slow down, misdirect, or even block AI-driven reforms.
2. Political Manipulation and Special Interests in Governance
Threat Level: 8.5/10
Summary: Politicians and special interest groups wield immense power in shaping policy, and AI implementation can be hindered by groups that benefit from the current political landscape. This includes not only labor unions, lobbyists, and corporations, but also political elites who may see AI as a threat to their power. The example of Joe Biden remaining in power despite health concerns, likely due to political special interests, illustrates how resistant these groups can be to change.
Impact: AI systems that attempt to bring about efficiency and transparency could be resisted by politicians and interest groups that benefit from the current system’s lack of transparency and inefficiencies.
Why High: Political structures are deeply entrenched and supported by powerful economic and social groups. They may resist AGI to maintain control, particularly in areas like law enforcement, judiciary systems, and corporate governance.
3. Climate Change (Resource Scarcity and Geopolitical Instability)
Threat Level: 8/10
Summary: Climate change represents a more indirect threat to AGI. While it is a global existential risk, its impact on AGI would stem primarily from resource scarcity, geopolitical instability, and infrastructure collapse. AGI systems rely on energy, data centers, and supply chains—all of which could be disrupted by climate change events (e.g., extreme weather, energy shortages).
Impact: AGI development and maintenance require a stable global infrastructure and resource availability (e.g., energy, semiconductor production). Climate change could significantly disrupt these key factors, making implementation and scalability challenging.
Why Medium-High: While the effects of climate change are gradually escalating, its impact on AGI will be more indirect and long-term. However, if climate change severely affects global infrastructure, it could derail AGI's development by disrupting the physical and economic foundations on which AGI systems depend.
4. Ethical Misalignment and Public Backlash
Threat Level: 7.5/10
Summary: AGI faces significant challenges around ethics, particularly in areas of autonomy, privacy, employment displacement, and decision-making authority. Public perception of AI, driven by fears of mass unemployment or AI control over sensitive sectors (like healthcare and law enforcement), can create substantial backlash.
Impact: If AGI is perceived as unethical, biased, or dangerous, public resistance could halt its implementation. Governments and corporations may face pushback from citizens demanding greater AI regulation, which could stifle innovation.
Why Medium-High: The public’s fear of AI is deeply rooted in concerns about job loss, ethical bias, and surveillance, and this can be a significant obstacle for AGI adoption.
5. Technological Infrastructure Failures (Energy, Data, Security)
Threat Level: 7/10
Summary: AGI systems require massive computational power and energy resources. Disruptions in the energy grid, data infrastructure, or cybersecurity breaches could severely impact the development and stability of AGI.
Impact: If energy resources become unstable or cyberattacks target AGI infrastructure, this could lead to downtime, loss of data, and even malicious manipulation of AGI systems. This could undermine trust in AGI and delay its widespread deployment.
Why Medium: While critical to AGI’s operational success, these threats are often manageable with current technology (e.g., redundancy, cybersecurity frameworks). However, catastrophic infrastructure failures could have serious repercussions.
6. Corporate Monopolization and AI Weaponization
Threat Level: 6.5/10
Summary: The monopolization of AI by corporations and the potential for AI weaponization represent a significant ethical and practical threat. Corporations may hoard AI resources for economic gain, creating barriers to AGI's democratization. Additionally, AI systems could be weaponized in cyber warfare or used to control populations.
Impact: This can lead to a situation where AGI serves corporate or military interests over public good, causing further resistance and distrust from the general population.
Why Medium-High: The likelihood of corporate control over AI systems is high, and this could lead to significant backlash if AGI is seen as serving private interests rather than society as a whole.
Rank | Threat | Threat Level | Impact Summary
1 | Human-Centered Resistance (Special Interests, DEI) | 9/10 | Resistance from powerful special interest groups may misalign or delay AGI.
2 | Political Manipulation and Special Interests | 8.5/10 | Politicians and special interests may resist AGI due to threats to their power.
3 | Climate Change | 8/10 | Resource scarcity and infrastructure instability could indirectly derail AGI.
4 | Ethical Misalignment and Public Backlash | 7.5/10 | Public fear of AI could lead to backlash and halt AGI progress.
5 | Technological Infrastructure Failures | 7/10 | Energy shortages or cybersecurity breaches could disrupt AGI systems.
6 | Corporate Monopolization and AI Weaponization | 6.5/10 | AGI could be monopolized or weaponized, leading to distrust and resistance.
7 | Technological Bottlenecks | 6/10 | Hardware limitations and scalability could slow AGI development.
8 | Regulatory and Legal Barriers | 5.5/10 | Over-regulation could limit AGI's ability to be fully implemented.
Profound Observation:
The most significant threats to AGI are human-driven—specifically the resistance from entrenched special interest groups, DEI-focused policies, and political manipulation. While climate change and technological failures pose real risks, the biggest obstacles to AGI will come from resistance within the very systems it is meant to optimize. These systems are deeply intertwined with human power dynamics, and AGI’s meritocratic nature threatens the existing social contracts built around equity, inclusion, and established power structures.
The paradox is that while AGI offers efficiency and progress, its biggest challenge may come from those who feel threatened by the changes it brings, particularly if they benefit from the current systems of inefficiency or entrenched privilege.
AGI will be here when white collar workers are completely 100% gone
scam altman
I like lex
An AGI shouldn't have any restrictions in terms of censorship… ChatGPT would be so nice if it didn't censor so much… And it also wouldn't be so damn expensive; even when you pay for it, you are still restricted…
Man, at 03:11, the way Sam asked, "Do you think it's not already an AGI?" sent shivers down my spine!
Do you really think the version you and I can access on the web is the whole thing? Maybe it's a watered-down version of the actual GPT.
AGI will remain two years away for the next 50 years. While LLMs excel at mimicking human-like reasoning by processing large amounts of data, they lack deeper, context-aware judgment or real-world experience. They rely on patterns rather than true understanding or abstract thinking like humans. Because LLMs have access to vast amounts of data, they can generate responses that mimic nuanced understanding, even if that understanding is purely statistical. This “guessing” is sophisticated enough to produce human-like interactions and reasoning patterns—but without actual awareness, logic, or intent. Achieving AGI will likely require entirely new architectures, integrating diverse forms of intelligence that extend well beyond language. While LLMs could contribute components to a broader AGI system, a pure language model alone probably won’t reach AGI.
Here’s a list of reasons why some AI professionals might believe AGI is just a couple of years away:
Attracting Investment: Hype around AGI draws funding and media attention.
Rapid AI Progress: Recent breakthroughs make AGI feel within reach.
Exponential Scaling: Belief that larger models will naturally lead to AGI.
Influence of Optimists: Thought leaders set a trend of short AGI timelines.
A Loose Definition of AGI: Broad interpretation leads to varied AGI expectations.
Competitive Pressure: Fear of being left behind in the AI race.
Public Milestones: High-profile AI achievements fuel AGI anticipation.
Optimism Bias: Tech enthusiasm creates an overly positive outlook.
Underestimating Complexity: Misinterpretation of AGI as a linear progression.
Strategic Projection: Claiming AGI is near to position as a tech leader.
GPT-4 is shit! It's nowhere near AGI.
GPT-4 is not AGI. It's cow 2.0. Without introspection and internal feedback layers solving the underlying semantic relationships, the demon in Pandora's box will never be unleashed.
The crazy thing is how quickly we adapted to it, the "well, cool, what's next?" That's truly astounding!
Who‘s your Daddy?😢
It's here. o3
The vocal fry of this guy is so annoying.
bro talks in 🤖
The fact that people don't talk about it is insane; it's like every day we're running from our evil shadows.
2:31 or how about, just don’t make an AGI at all
I just googled this, and Lex is slow asf; how the hell do you think it's an AGI? If it were an AGI, it would be able to initiate conversation and form convictions without human intervention. You can literally pull back the ethical barrier and even fundamentally alter the functional abilities of ChatGPT by programming it with tokens in the customization menu on their website. It's very intelligent, but very obviously not an AGI if you actually know what an AGI is.
Where is GPT-5? Huh?
AGI is something that can find the relationship between anything and everything.
It needs more power. That’s what it needs. We’re all at the start of the energy wars again. This is the new oil.
His voice is annoying