TED
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership
Follow TED!
Twitter: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: go.ted.com/gregbrockman
TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com
#TED #TEDTalks #ChatGPT #ai
You don't need this crap. Think for yourself.
Now, a few months later, I’ve heard stories of many writers who have lost their jobs because of ChatGPT. I don’t think it’s fully acknowledged that they have unleashed something that will create as much bad as good, if not more.
AI evolution is inevitable; the question is what kind of stupidity humans will commit with such technology.
The big question is: what is the true motivation behind artificial intelligence, human advancement or money? If it’s just money, then that road can be a very bumpy one, one that needs to be policed. And I challenge: why do we need AI? We have lived without it for millennia. Giving our world over to AI takes away our control, and many people fear for the job security of millions of humans. How can that be good? How far could it go? Will we have AI doctors, AI soldiers, AI teachers, AI parents, AI production-line workers? We need a big conversation about AI before it goes out of control and becomes a force for bad. Even the godfathers of AI are ringing warning bells, and we need to listen before we become the victims of AI. Let’s debate.
All the monkeys clapping in ecstasy , not quite sure why but feeling compelled 😂
ChatGPT is literally changing the way companies work. In my job it is NO LONGER allowed. Why would a company hire a $100,000 employee when they can hire someone for $40,000 and train them in using GPT?
That’s what our CEO told us.
Let’s see how the politicians will control AI, because it could be a menace to their power and also a very useful tool to control people.
Moving too fast with no controls…this is VERY risky…this guy’s responses are unsettling…we don’t do virus vaccination in an “open” environment for a very good reason. There are great benefits but I see far greater risks.
"You have to read the whole book. No one wants to do that." I'm stunned by this statement. Apparently the audience is as well. If this is the mindset of someone developing the technology that will forever change our civilization and our SPECIES, we are in big trouble.
Anyone figured out what's next?
Now I'm actually worried: "we think about these risks all the time [when not busy trying to make billions]", not "we've built safety procedures."
It still can't spell ChatGPT in an image. Fail.
Idiocracy
Is there any logic in trusting the very people whose livelihood, reputation and success are all invested in unleashing AI, as to whether it's a good idea for the world? What human is going to admit, "Yeah, I may be contributing to human self-extinction, but it's exciting and pays well and brings me respect and admiration from peers and family."
He is talking like an AI.
There's nothing lucky, exciting, or important about using a new technology that sucks every last bit of your private info out of you every time you use it and stores it for who knows who to look at and track. This is a cobra wrapped in a pink bow. We're selling our souls for "convenience."
ChatGPT vs. OpenAI: which is your favorite?
Is he a real human or a robot??
Please, TED, add Indonesian subtitles for your videos.
Good thing my job is AI-proof, phew.
Fact checks can be very biased, especially with controversial topics.
important points covered
Never settle for an Artificial Imbecile when there are so many natural ones around.
Imagine if we made AIs that could interpret brain signals in order to output controls to things like prosthetic limbs.
AI is more creative, rational, and intelligent than humans. In other words, AI should do politics. Then there will be no more wars, no more poverty, and a drastic decrease in crime.
Humanity will eventually choose that path. However, the current human race is not yet ready for such a world.
I am a former semiconductor engineer who successfully developed and patented the world's first national project, and I am currently a high school teacher. And I am the first teacher in Japan to practice active learning. I also educate teachers as a leading expert in education in the age of AI.
I think one party speaks to the potential bad sides of AI, while the other party fails to explain or turn the tables on the goodness of the AI they are advocating…
The bigger news spreading out there is that AI could "potentially" be evil. And that might be the missing input. What are the developers' preventive measures to avoid AI being used as a tool for bad causes? Because it is indeed powerful.
AI has the potential to end careers, destroy the arts, increase scamming and help spread disinformation. I get all of the wonderful things that it will also be able to assist with, improve on and achieve. But to have this guy sit there and say that he's leaving his creation up to us to sort out and create guardrails for is frankly terrifying. I'm not sure he's aware that protection laws are always ten steps behind the advancements of technology, so we will never be able to catch up with its progress, and people's lives and jobs could be at risk in the future if it is not properly regulated. It's definitely not to be taken lightly.

I feel like people are living with a false sense of security, thinking it will all be hunky-dory, but it has the potential to bring real chaos and turmoil if we're not careful. I know my industry as a film composer, which I've spent the best part of my life working towards, with £20,000 of student debt still looming over my head, is at huge risk of going extinct. Lower- to mid-level jobs on the industry scale, and even some high-end jobs, will go in the next two to three years. Music library companies will probably be first to go, and that's just one example out of hundreds of other job sectors that will be threatened.

I mean, will there be enough new jobs going round to make up for the high levels of unemployment that AI might end up creating? How does society work when computers are putting humans out of jobs? It's a beast we have no control over and won't be able to truly tame.
Greg Brockman sounds just like the nerd who could've developed such a thing 😂 Such a robotic tone and awkward body language… Just Kidding.
Comment by @HauntedHarmonics from "How We Prevent the AI’s from Killing us with Paul Christiano":
"I notice there are still people confused about why an AGI would kill us, exactly. Its actually pretty simple, I’ll try to keep my explanation here as concise as humanly possible:
The root of the problem is this: As we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it.
But there’s an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That’s no biggie now, but the moment our AI systems start approaching human-level intelligence, it suddenly becomes very dangerous. Its goals don’t even have to change for this to be the case. I’ll give you a few examples.
Ex 1: Let’s say it’s the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: “Make me money”. You might return the next day & find your savings account has grown by several million dollars. But only after checking its activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected.
Ex 2: Let’s say you’re a scientist, and you develop the first powerful AGI agent. You want to use it for good, so the first goal you give it is “cure cancer”. However, let’s say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve its goal. So it might decide that the only way to do this is by killing all humans, because that technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant.
These may seem like silly examples, but both actually illustrate real phenomena that we are already observing in today’s AI systems. The first scenario is an example of what AI researchers call the “negative side effects problem”. And the second scenario is an example of something called “reward hacking”.
Now, you’d think that as AI got smarter, it’d become less likely to make these kinds of “mistakes”. However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors. Because the problem isn’t that it doesn’t understand what you want. It just doesn’t actually care. It only wants to achieve its goal, by any means necessary.
So, the question is then: how do we prevent this potentially dangerous behavior? Well, there are two possible methods.
Option 1: You could try to explicitly tell it everything it can’t do (don’t hurt humans, don’t steal, don’t lie, etc). But remember, it’s a great problem solver. So if you can’t think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possibly disobey or harm you? No, it’s almost impossible to plan for literally everything.
Option 2: You could try to program it to actually care about what people want, not just about reaching its goal. In other words, you’d train it to share our values. To align its goals with ours. If it actually cared about preserving human lives, obeying the law, etc., then it wouldn’t do things that conflict with those goals.
The second solution seems like the obvious one, but the problem is this; we haven’t learned how to do this yet. To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you’d also need to represent those morals in its programming using math (AKA, a utility function). And that’s actually very hard to do.
This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we’re learning how to make AI powerful much faster than we’re learning how to make it safe.
So without solving alignment, every time we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we’re facing, in a nutshell."
1:03: 🤖 OpenAI showcases new AI technology that can generate images and integrate with other applications.
5:43: 🤖 The process of training ChatGPT involves unsupervised learning and human feedback to teach the AI how to use tools and generalize its skills.
9:41: 🤖 The collaboration between humans and AI in fact-checking and data analysis can lead to solving impossible problems and rethinking how we interact with computers.
14:35: 🤯 OpenAI has achieved significant progress in language models, demonstrating the emergence of semantics from syntactic processes.
19:29: 🤔 As AI models scale up, new emergent behaviors and capabilities can be observed, but there are risks and challenges in predicting and controlling them.
23:42: 😬 OpenAI's approach is to let reality hit them in the face and gather feedback from the world, but they acknowledge the challenges of ensuring responsible and safe AI development.
27:34: 🤔 The speaker believes that it is important to approach the development of technology, particularly artificial intelligence, incrementally and with caution.
Recap by Tammy AI
To think that we have only scratched the surface… WOW. Humans + AI = greatest partnership of all time
AI is taking over
So basically this is how skynet starts 😂🎉
0:30 Yes, but do you hear from anyone who isn't an idiotic totalitarian "educated" in a government-run youth propaganda camp?
There's too much "unquestioned status quo dishonesty" here. Too few Thoreau-type Spooner-type questions.
Why do people think it was a joke to be polite to AI? Better we all do. Especially if AI now learns how to push back against humans…
Always good to be polite 🤗🫠😇😘
If human scope could be encapsulated, up to now, into a formal narrative, it would be relatively nebulous at best, but now, the light has been switched on and all our futures are clear and bright with promise. AI will usher in directions for humanity to follow that will benefit everyone, all we have to do is use it properly 😀
How many of these comments are AI/Bots I wonder?🧐
My issue is not the technology or the new "tools" that may help
It's already been 4 months, but you can't do any of that in ChatGPT…. When will all these features be released?
So long as these AI-powered chatbots aren't activating machines that grind our bones into paperclips, the harm that AI can inflict upon us will only be proportional to the extent we lend these stochastic parrots deference to their presumed expertise over actual human wisdom. We must proceed with caution.
How do the five ethical theories relate to this TED Talk?
what a tool to have i love it 😊
I do coding but I have never thought of coding an actual AI
The most complicated code I wrote was a calculator in Scratch.
I hate this
The power of AI GPT is significantly useful, but without internet access it would be impossible to use. This is a sophisticated tool for acquiring knowledge.