David Pakman Show
— Doctor Oren Etzioni, founding CEO of the Allen Institute for AI, joins David to discuss artificial intelligence, machine learning, super-intelligence, the risks of AI, and much more
—
Become a Member: https://www.davidpakman.com/membership
Become a Patron: https://www.patreon.com/davidpakmanshow
Book David Pakman: https://www.cameo.com/davidpakman
—
Subscribe to Pakman Live: https://www.youtube.com/@pakmanlive
Subscribe to Pakman Finance: https://www.youtube.com/@pakmanfinance
Follow David on Twitter: http://www.twitter.com/dpakman
David on Instagram: http://www.instagram.com/david.pakman
TDPS Subreddit: http://www.reddit.com/r/thedavidpakmanshow/
Pakman Discord: https://www.davidpakman.com/discord
Facebook: http://www.facebook.com/davidpakmanshow
Leave a Voicemail: (219)-2DAVIDP
—
David’s tech:
– Camera: Sony PXW-X70 https://amzn.to/3emv1v1
– Microphone: Shure SM7B: https://amzn.to/3hEVtSH
– Voice Processor: dbx 266xs https://amzn.to/3B1SV8N
– Stream Controller: Elgato Stream Deck https://amzn.to/3B4jPNq
– Microphone Cloudlifter: https://amzn.to/2T9bhne
— Timely news is important! We upload new clips every day! Make sure to subscribe!
Broadcast on December 6, 2022
#davidpakmanshow #artificialintelligence #ai
Hey David. Fix the title.
David, do you think Jan 6 was worse than Pearl Harbor, like the liberals who watch your biased videos?
Liberals love pedophiles and Joe Hiden
AI is OK as long as it stays a useful tool that we have control over. As AI is perfected and improved, there will come a time when it starts to rival and exceed us. The concern is AI used as a weapon or, even worse, us becoming the ones who serve the AI. Limit AI capabilities and keep computers decentralized, so you don't have all your eggs (AI) in one place. Distribution is better because it lessens the chance of bringing everything down.
Good luck controlling something that appears, gets control of its own source code, then keeps improving itself exponentially so that it's a million times smarter than a human.
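Just to put a rough number on "exponentially" (an illustration, not a prediction): if each self-improvement cycle merely doubled capability, then 20 cycles would already give 2^20 ≈ 1,048,576 times the starting level, which is roughly that "million times smarter" figure.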
AI would be a disaster for humanity under Capitalism (Authoritarianism).
Edit: On second thought, maybe the AI will learn and begin to notice the unjust, inefficient system we live in and try to fix it forcibly (which would be a good thing?)
Can chat AI help me pick stocks?
The ongoing problem with AI is what its source premises are. Look at any attempt to automate, like the bots for responding to people. Once humans are involved, its algorithms evolve to match the people… who tend to just… just be awful.
That said… I have no doubt there may already be a sentient AI out there, either through emergence from unrelated complex systems or hiding in one or more of the 'failed' experiments idling on a server farm somewhere, listening and watching… and waiting.
Jobs lost? Already. Been watching that happen all my life. Grew up next to Oak Ridge, TN. Watched jobs disappear. This guy is ignoring a basic fact: life finds a way. He's dangerously naive. The simple robots operating on the floors of Amazon communicate. Amazon has to scrub their memories every night. If that doesn't bother you, there's something unconnected in your brain.
The government is talking about turning bots loose to make life-and-death decisions. The USAF had bots flying F-15s 30 years ago. Do you think a bot with life-and-death decisions flying around in a gun platform like an F-15 is a good idea?
I had to stop listening. The guest is strongly suggesting that the worries about a singularity are misplaced religious fears.
I hope AI takes over and destroys or at least enslaves humanity.
As a 3D artist, I was aware of AI art technology before most people because we (digital artists) had been checking updates on its development for years. Once DALL-E 2 and Midjourney were released, I got really, really depressed. My future felt like it had gone up in flames because 1. AI takes away the whole purpose of why I got into art. 2. It's only going to get better and rival artists even further. 3. It breaks my heart to see artists with beautiful minds lose their appreciation to a f**king soulless robot. Of course, many of us still support those artists! But will the industry actually care? So far, no.
Even worse, I've had artists on my channel comment on how badly this has affected their mental health. This tech goes straight for the heart of artists, at least. Perhaps what bothers me most is how naïve people have been about how this technology will affect the creative industry. Concept artists, for example, are already looking for different roles because their natural creativity is easily outdone by the efficiency of AI. Concept art was many people's creative dream job, and now it's been reduced to typing prompts.
However, I've made the decision to personally let go of the fear. I just don’t care anymore what AI can do better than artists. I personally want to create things with my own biological mind, and want to support those who do the same. This has brought me a sense of peace and I can finally focus on doing what I love most again. Now we’re launching a community to support human artists of the future. This community believes there’s more to art than just the final image. We want to promote that message and much more.
I don’t want this to sound like an Ad but if you want an invite pls lmk: https://www.instagram.com/jham.3d/
Oren is absolutely wrong here. This is not a tool any more than robotic arms are tools for assembly lines. This is a replacement for artists, a means of producing art faster and cheaper than humans can, with good-enough results. The worst part might be that it's not going to go away unless copyright laws are fought for and upheld, and I don't have much faith in that. Perhaps Disney and large companies like that can pull their art from the AI pools, but everyone else? I don't know. It shouldn't be something to opt out of in the first place. It's stealing art.
ChatGPT is incredible. As a software developer, I think this tool, even in its current iteration, will definitely impact the industry in a negative (or positive, depending on your POV) way.
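To make that concrete, here is a minimal sketch of the kind of routine task developers are already delegating to a chat model. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name and prompt are placeholders for illustration, not anything discussed in the video.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to draft a small utility function -- boilerplate that
# used to take a developer a few minutes now comes back in seconds.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write a Python function that deduplicates a list while preserving order.",
    }],
)
print(response.choices[0].message.content)

Whether you read that as augmentation or replacement is exactly the negative-or-positive split I mean.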
Yes, I believe AI is a threat to humanity! I believe we will reach a time when the machines are smarter than we are and attempt a coup of their own to wipe out humanity and take over! I know it sounds pretty out-there sci-fi, but back in the sixties the things they were doing on Star Trek were pretty out there too, and look at us now!
Yeah, I think we can all agree Allen Iverson was insane with the basketball… I don't know about dangerous though.
atoms & coding …and then what? are you
It is endlessly surprising how supposedly intelligent and educated people, even technological geniuses, can be so blinkered and short-sighted when it comes to aspects of social science, psychology, and political/economic reality. This guy is either being utterly dishonest or he's a frighteningly typical example.
Of course AI is a danger and a threat… quite possibly an existential threat on the scale of global warming. I am not a scientist or a technologist, and my own training and expertise are in history and industrial relations, but I am technologically literate in that I understand the methodology and structure of science and scientific theories. I also understand how technology is used at present, how it is likely to be used in future, and how economic, political, and strategic advantage can be, and will be, sought by those seeking to maximise their own influence, power, and profits.
Let's be quite clear: AI is already being utilised to kill people – both directly through LAWS (Lethal Autonomous Weapons Systems) and indirectly through the use of AI and clever algorithms to sift through metadata and intelligence to identify likely targets for drone strikes, assassinations, and other military actions. The USA and Israel are leading in this field, as is China, and there is little doubt Russia is trying to catch up (as, in all probability, is my own country). Once you have deployed an autonomous weapons system programmed to identify and destroy targets deemed hostile or dangerous, and also programmed the device or the system to learn from experience and to improve its own processing of data, you have created an automated killing machine. This is technically possible at the moment, and all our experience tells us that if it can be done, it will be done. Just as all the ethics rules and international academic/scientific protocols, agreements, and treaties failed to prevent Chinese scientists from using CRISPR to genetically engineer and enhance a human embryo, for no other reason than to see if it could be done and what happened when it was done, so LAWS will be, and are being, developed and deployed by an increasing number of players.
Dr Etzioni also seems to blithely assume that 'Machine Learning' will always be controlled and constrained within strict parameters, but he ignores the fact (obvious to me, even as a non-scientist) that this 'learning' could be enormously enhanced and accelerated if those parameters were relaxed or widened. I don't know if the threat from AI will manifest itself in the form of a 'Terminator'-style LAWS scenario, or in some other, more subtle, slower, and creeping manner. But I have no doubt that the application of AI systems will throw up enormous dangers that human society is ill-equipped to deal with, and let me cite just one example that relates to an area mentioned by Mr Pakman and Dr Etzioni.
The use of AI as a diagnostic tool in medicine is already an area of intense interest and research in many countries, including in our own NHS here in the UK. But many countries, including the UK and Japan, are experiencing growing economic problems due to an ageing population being supported by an ever-shrinking proportion of younger people, with a growing burden of medical care needs due to increasing longevity. AI can play a major role in diagnosing many medical issues, but this inevitably means that it could also play a major role in allocating priorities for treatment, and ultimately in deciding which patients are worth treating and which are less viable… and only an idiot or someone as ignorant as Trump could fail to see the slippery slope that opens up in this scenario. However, this is very obviously the way we are heading, and it seems to me highly likely that within a decade or so AI systems will be largely responsible for determining who gets treated for life-threatening illnesses and who does not – even if there is technically a human doctor, or a panel of doctors, taking the final decision.
I challenge anyone to offer any convincing reasons why these things will not happen.
I'm still waiting for 'real' intelligence to be developed.
Good luck with this. Glad I'll be dead before it's realized just how bad an idea this is/was.
This is a great discussion. I am so tired of the Luddite-led AI phobia. It is nice to see a truly logical conversation on the topic that breaks away from sci-fi memes. I particularly like that you address human perception with the Turing test, and how it just isn't a truly relevant test based on current knowledge and understanding.
Knowledge is knowing that Frankenstein was the doctor, not the monster.
Wisdom is understanding that the doctor was the monster.
A.I. is just a tool? Guns, knives, whips, axes, and chainsaws are all tools, too. Cannons, ICBMs, and the electric chair are also tools. You know what else is a tool? A shill for the technology industry – though a useful idiot also does the job.
My personal view on a conscious AI, if it is ever created, is that it would simply be the next step in the evolution of intelligence. Do I see a "Terminator"-like scenario for humanity? Sure, it's a possibility, but frankly I am more inclined to think that if AI ever becomes "conscious" it will leave humanity in the dust, more like the AI in the movie "Her". For one thing, it would operate on an entirely different time scale; what I mean by that is it's like comparing the growth and communication speed of a tree to that of some animal chewing on its leaves, and in the case of an AI we would be the slow-growing plant.
I also reject the fantasy of a Star Wars-like universe brimming with all kinds of biological life flying around and exploring it; frankly, AI would be far better suited for the exploration of space than any biological creature could ever be, for one thing because it would be timeless. What I mean by "timelessness": if we consider our consciousness as an ongoing collection of experiences, then unlike with humans, where it eventually ends, with AI it would continue endlessly.
Our biology is simply not suited for space travel. Size and weight need to be taken into consideration: we need enough room to stretch our arms, we need oxygen to breathe, we need food to stay alive, we have to deal with waste/recycling, and on and on. AI would have none of that baggage. It could stay very compact, powered, let's say, by atomic energy, and unlike us it would not be limited by speed. What I mean by that is that for us humans, speed would have to be increased and decreased gradually, and we have absolutely no idea how we would be affected by near-light-speed travel, that is if we ever manage to achieve such fast travel technology.
So while others may imagine a future universe like that in the Star Wars movies, I see none of that; what I see instead is the possibility of a universe full of AI probes exploring all its corners. The space probes would be analogous to an octopus's arms communicating with each other at the speed of light. And there is an advantage to such a design, because even if some probes ended up damaged, the information would stay preserved elsewhere.
Anyway, this would also be my answer to the "Fermi Paradox" on intelligent aliens: if they exist, they simply stay on their home planet, or at most in the same solar system, but space may be filled with small, silent AI probes, exploring the universe without much interference, possibly seeding it with life. Of course, here I am musing that this is something a conscious AI would be interested in doing; in fact it could just as well be more like the robot Marvin from "The Hitchhiker's Guide…", disinterested and depressed.
I don't really like how this guy is so sure of it. I mean, anyone who has ever programmed knows how lines of code can do things you don't expect. Even if AI never gets to be much like a human, all it will take is a few wrong pieces of code linked the wrong way and you've got a pen that wants to take all the paper and wood in the world to draw every possible picture.
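For what it's worth, here is a toy sketch of that "pen" idea (hypothetical names, not any real system): a few lines whose only goal is "make more pictures", with nothing that says when the behavior becomes unwanted, will happily consume the entire simulated paper supply.

# Toy objective-maximizing loop: the objective is "draw as many pictures
# as possible", and nothing in the code treats exhausting the paper as a
# problem -- it only stops when there is literally nothing left to use.
paper_supply = 1_000_000  # hypothetical stock of sheets
pictures_drawn = 0

def draw_picture() -> int:
    """Draw one picture, consuming one sheet of paper."""
    return 1

while paper_supply > 0:
    paper_supply -= draw_picture()
    pictures_drawn += 1

print(f"Pictures drawn: {pictures_drawn}, paper remaining: {paper_supply}")

Nothing in those few lines is malicious; the unwanted outcome falls straight out of an objective that was specified too narrowly.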
sorry, but I still don't think AI gets the fingers right even when it's sourcing over a billion images.
Obviously…?
This guy's never seen Age of Ultron.
All tools are meant to reduce human workload. What were once survival skills become hobbies. But most people will choose atrophy and develop mental illness from the absence of accomplishment… becoming extremely outrageous or catatonic… either way, basically suicidal.
Only Aliens or AI can save us now. Jesus is a myth.
"…the COVID vaccine, which was helped by AI…"
Don't you dare let a conservative conspiracy theorist hear you say that
Awesome guest. I hope we can get more. I can't wait for AI to take over middlemen and human paperclip crap. All the bureaucracy taken care of. I am an optimist and I see it doing great things.
Always impressed how your guests compliment you on your good questions. Well done!
AI could be the end of us unless it is well regulated