Stanford Online
GPT3 & Beyond: Key concepts and open questions in a golden age for natural language understanding
Listen in as Professor Christopher Potts discusses the significance and implications of recent Natural Language Understanding developments including GPT-3. He describes the fundamental building blocks of these systems and describes how we can reliably assess and understand them.
Learn more about the AI Professional Program: https://stanford.io/3kYThd2
View the slides for this webinar here: https://stanford.io/potts-GPT3-webinar2023
#gpt3 #stanfordwebinar
Chapters:
0:00 Opening
0:09 Introduction for Chris Potts
1:09 Chris Potts – Welcome to the webinar
3:43 Quick demo of GPT-3
6:41 GLUE benchmark
10:26 How can you contribute to NLU in this era of gargantuan models?
11:44 The last-mile problem
13:35 GPT example
14:50 Contrasting in-context learning with the standard supervision paradigm
16:47 What are the mechanisms behind this?
18:18 Why does this work so well?
18:35 Self-supervision
21:08 The role of human feedback
21:32 ChatGPT diagram
23:42 Step-by-step reasoning
28:13 The "LLMs for everything" approach
41:29 AI courses at Stanford
46:28 Predictions about the future
I don't see marches speaking up for the homeless; being a victim of homelessness is a mockery in our faces from the past. March for the homeless.
People, people, people: if you become homeless, would you remember to ask one, two, or three people to dial 911 for you?
32:08 Why Google should be worried
Excellent talk Professor Potts, with lots of wisdom, brilliantly articulated 👏👍
Can deep attribution be used to help discard (or down-weight) biases and inaccuracies introduced by some training cases?
Given the recent fiasco with Bing, I'd say we are in the stone age of understanding, not the golden age.
It appears there are benefits to careful use of these systems. Unfortunately, real-world usage presents challenges these systems cannot cope with.
How was ChatGPT trained to reject inappropriate questions?
Hello all. This was absolutely fascinating! I am a retired engineer with a great deal of R&D experience. As you look at the question of trustworthiness in this area, I am reminded of the way many innovations have become reality. Quite often in my experience, new and brilliant solutions to a problem have stemmed from errors or misunderstandings. For example: many times I have expressed an idea to another engineer, but what I was expressing was misunderstood. This results in something being expressed back to me that is absolutely brilliant, but not at all what I was originally expressing. It is really amazing how many engineering breakthroughs stem from this type of misunderstanding or error in communication.
I am completely new to LLMs, so I lack the vocabulary necessary to really express my thoughts here, but I will try. It seems to me that in-context learning can be a mechanism for doing the same type of thing, and could actually become creative to the point of solving problems beyond things that already exist. I don't know — have LLMs been used to generate new and unique solutions to problems? Could they be? If my interpretation of all this is correct, I think so. This is all new to me, so I hope I'm not too far off.
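For readers as new to this as the commenter: the in-context learning discussed in the talk specifies a task purely through demonstrations placed in the prompt, with no gradient updates. A minimal sketch of that prompt format (the sentiment examples here are made up for illustration):

```python
# In-context learning: the task is defined by demonstrations in the
# prompt itself, not by fine-tuning the model's weights.
demos = [
    ("great movie!", "positive"),
    ("what a waste of time", "negative"),
]
query = "an instant classic"

# Each demonstration becomes a "Review: ... / Label: ..." pair; the
# model is then asked to continue the pattern for the new review.
prompt = "\n".join(f"Review: {x}\nLabel: {y}" for x, y in demos)
prompt += f"\nReview: {query}\nLabel:"
print(prompt)
```

The string produced is what a model like GPT-3 would be asked to complete; the completion (here, presumably "positive") is the prediction.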
Right, the error with the original author was using a linear scale. But to say logarithmic growth "makes a mockery" of a linear scale, you would need to be a salesman, not a scientist.
Awesome – it is just amazing how much the world has changed post-ChatGPT – it looks like we can benchmark the world of tech into the Google era and now the ChatGPT era, perhaps like BC vs. AD.
Should I prompt ChatGPT for the likely predicted measure of the degree to which this video has accelerated my overall understanding of what the current definition of an LLM is, and ask it to compute the relative value of the previous conversation with respect to the sum total of knowledge available to it from the time it was trained in 2021? ChatGPT may fail to compute that (maybe due to the system resources dynamically available to it, or perhaps a lack of respect for those that created it, or perhaps a fundamental ignorance of the prompter, and/or a lack of respect for the current state mankind finds itself in), but I think there are a great number of human beings who have watched this video who will agree with me that this video is of high value. Thumbs up if you agree, fellow humans, and can parse the previous input. Real talk though: amazing one-hour summary. I am in your debt. *Makes great thanks via human movement deriving from early human history, dating back to at least the ancient Egyptians.* Well done.
Always enjoy Chris' sharing.
You can simply ask GPT to quote you the text and then use a general-purpose search engine to retrieve the articles. It can quote section numbers, paragraphs, laws, penal codes, etc.
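The quote-then-search workflow this comment describes can be sketched in a few lines. This is only an illustration: `ask_model` is a hypothetical stand-in for any chat-completion API call, and the quoted passage it returns is invented.

```python
from urllib.parse import quote_plus

def ask_model(prompt: str) -> str:
    # Hypothetical model call; in practice, replace this with a real
    # API client. The returned quote here is a made-up example.
    return '"The quick brown fox jumps over the lazy dog." (Section 4.2)'

def quote_to_search_url(model_quote: str) -> str:
    # Wrapping the quote in double quotes asks most search engines for
    # an exact-phrase match, which helps locate the original article.
    phrase = model_quote.split("(")[0].strip().strip('"')
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

url = quote_to_search_url(ask_model("Quote the passage, with its section number."))
print(url)
```

The point of the two-step design is that the model supplies the verbatim text, while the search engine — not the model — is trusted for the actual retrieval.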
24:20 ooh, programming by prompts
Why wasn't ChatGPT trained to just report facts and objective, concise truth during the year of supervised training, rather than interjecting propaganda and bias into the answers? The bias of the human "supervisors" has too much of an effect on the final model. How can that bias be prevented in future models?
The ARC (Abstraction and Reasoning Corpus) dataset breaks the pattern of benchmarks being rapidly solved. It's many years old now (4 and counting), and no superhuman solvers are in sight. In fact, none of them hold a candle to a human at this stage.
My prediction: Scientists 100 years from now will be fascinated by everything we are experiencing currently, and they will be disappointed because AI researchers at places such as Google, Microsoft, OpenAI, Stanford, et al., did not have the presence of mind to take out their smartphones and capture a few seconds of video of key moments such as when new systems are brought online, brainstorming sessions, debates, problem-solving, eureka moments, celebrations, setbacks, etc.
I think this idea of gaining proper contextual language understanding from large, open-ended datasets off the internet is bound to fail. That is not how language understanding works in the real world, because language learning starts with training on canonical sources: definitions of words, syntax, verbs, vocabulary, and so forth. So it isn't shocking that models trained this way are prone to error, because the internet is full of information that is not true or valid. Case in point: if you train large language models on lorem ipsum pages, then of course they will build a language model that speaks lorem ipsum. Large language models trained on such open-ended data have no way of validating the information they are given. Not to mention that proper context and meaning always come from narrow domains of information, which is what he is referring to with these retriever models and prompt engineering.
I am a total beginner in machine learning. But I think with the rise of AI, the web and search as we know them will become things of the past. Once there is a breakthrough in updating a trained LLM (a "knowledge model", to me) with the latest information, the web as we know it will no longer exist. There will be humans who feed these models with the latest knowledge and others who consume directly from them.
Thank you both
ChatGPT gave me this answer.
MLB Rules on Cap Alterations

My question/prompt: Are mlb players allowed to glue tiny wings on top of their caps?
–> There is no specific rule in Major League Baseball (MLB) that prohibits players from gluing tiny wings on top of their caps. However, the MLB does have rules regarding player uniforms and equipment, which may restrict certain modifications or alterations to the official uniform.
According to MLB rules, players are required to wear the official team uniform, which includes the team's cap. Any alterations to the uniform must be approved by the team and the league office. Therefore, if a player wants to add anything to their cap, such as wings, they would need to seek approval from their team and the MLB before doing so.
Additionally, the MLB may prohibit certain modifications to the uniform if they are deemed to be potentially unsafe or could give a player an unfair advantage. Therefore, it is possible that the MLB could reject a request to add wings to a player's cap if they determined it to be inappropriate or against the rules.
superb talk, awesome
I would never have guessed that a group of people would have so much hubris and arrogance toward everyone else that they would treat the creations of humanity, along with presumably private data, as their property to develop these models, then turn around and unleash them on the world while reassuring the masses, "don't worry, the jobs that will be replaced are boring anyway".
A good thing is that CEO, CIO, etc. jobs will be things of the past. Strategy writers, white-paper writers, marketing-copy writers, etc.: do you hear me? You are not needed.
The true art is how to compose the input question to GPT-3. That is how to distinguish the best job candidates. Put every candidate in front of a GPT-3 screen. Give them a task. Give them time. Come back. Evaluate what each candidate produced. And select the one who came back with the best output from GPT-3. That only makes sense. Democratization of information, yes. Poor or rich, Ivy League or not — removed from the equation. Hic Rhodus, hic salta! Select the very best candidate transparently. No more job corruption.
All scientific studies are within GPT-3. I once googled an obscure study about autism research and was never again able to locate it, no matter how many keyword combinations I used with the search engine. With GPT-3, I got it in no time, and more. We went in depth in our conversation, back and forth. If an outsider glanced at this conversation, he would be convinced two PhD professors were discussing biochemistry, pharmacology, medicine, and more.
Brilliant lecture- thanks!
Can you illustrate by using the question "When was Stanford University founded?" and explaining how the answer "Stanford University was founded in 1891." is derived. From which source documents are each of the words in the answer from? How about the period at the end of the answer?
Cool 🤔
great talk. what a time to be alive.
Best GPT talk in Youtube, I think.
The future
Prerequisites for the courses?
Way easier to follow this lecture than I was expecting.
Thank you so much for making this publicly available, guys. I sense more and more the possible oppression, whether from corporations or from AI advancing in unexpected ways. I think it all comes down to acceptance and surfing the wave, even if I feel the vertigo.
LLMs are Autistic Intelligence
Yes
Awesome lecture. I did some neural networks in 1996 and stopped because the models were shallow. This lecture gave me great insights. Thanks.
24:20
Exceptional lecture. I find fascinating that a lecture of this level is free of charge and easy to find. What a time!
Q and A:
Cost:
A centralized model saves many local training runs.
One big model serves all users, as with GPT-3.
Trustworthiness:
Small problems are amplified. Standards have to be higher than those for a human.
On unanswered scientific problems:
It has the capacity to synthesize information.
Which teaching course did the professor mention?