James Briggs
In this video, we’ll talk about how to build better prompts for OpenAI’s GPT-3, Cohere LLMs, and open-source LLMs (like those on Hugging Face). We’ll treat prompt engineering as a mix of engineering and artistry, using rules of thumb from OpenAI, Cohere, and others.
All of these models are large language models (LLMs) capable of doing a huge range of tasks. Prompt engineering is the key to applying these models to different use cases.
📌 Code Notebook:
https://github.com/pinecone-io/examples/blob/master/learn/generation/prompt-engineering.ipynb
🌲 Pinecone Gen AI Examples:
https://github.com/pinecone-io/examples/tree/master/learn/generation
🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
👾 Discord:
https://discord.gg/c5QtDB9RAP
00:00 What is Prompt Engineering?
02:15 Anatomy of a Prompt
07:03 Building prompts with OpenAI GPT-3
08:35 Generation / completion temperature
13:50 Few-shot training with examples
16:08 Adding external information
22:55 Max context window
27:18 Final thoughts on Gen AI and prompts
#artificialintelligence #openai #gpt3 #deeplearning #nlp
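The "anatomy of a prompt" covered in the video breaks a prompt into instructions, external (source) knowledge, few-shot examples, and the user's query. A minimal sketch of assembling those sections in Python — the function name and section labels here are illustrative, not taken from the notebook:

```python
# Sketch of the prompt anatomy: instructions, source knowledge,
# few-shot examples, then the user's query. Labels are illustrative.

def build_prompt(instructions: str, context: str,
                 examples: list[tuple[str, str]], query: str) -> str:
    """Assemble the prompt sections in order, separated by blank lines."""
    example_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return (
        f"{instructions}\n\n"
        f"Context: {context}\n\n"
        f"{example_block}\n\n"
        f"Q: {query}\nA:"
    )

prompt = build_prompt(
    instructions=("Answer the question using the context. "
                  "If the answer is not in the context, say \"I don't know\"."),
    context="Pinecone is a managed vector database.",
    examples=[("What is Pinecone?", "A managed vector database.")],
    query="Who maintains Pinecone?",
)
print(prompt)
```

The resulting string is what you would pass as the `prompt` to a completion endpoint; the trailing `A:` cues the model to continue with the answer.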
Wow, I'm researching this right now in my Master's degree. Nice video, James!
keep em coming.
Wonderful video as always. I have been using an extremely similar prompt but somehow get poorer results than yours. Sometimes my bot just refuses to admit it doesn't know, based on the context. 🤣 Prompt engineering sometimes really feels like you're not in control.
Source knowledge is exactly what I was looking for. Thank you ❤
I had a thought. Does it make sense to create a program that allows a user to enter just a few keywords, then have the AI ask a few more questions to determine user intent, and then have the AI output a high-quality prompt for conversational AI models? For people who just don't understand prompt engineering. Any thoughts?
I didn't realise Andrew Tate was into LLMs
Love your content mate 👌
Your videos help me, and I assume other users, bridge the gap in formal education💪
Keep up the good work
🤜🤛
Awesome video! Thank you for sharing about the importance of prompt engineering and how it impacts every level of development
Maybe next time give us the same on BLOOM (Hugging Face). I would rather work with and promote real open-source projects.
For some reason I thought OpenAI always puts a prompt limit of half the maximum tokens? So even if you're looking for a short answer, the max prompt can be 2000 tokens. Is that true?
Everything always comes together when I watch his videos. Big ups James.
Just want to say that you are truly awesome! I seriously hope you can continue doing these amazing videos for a long time into the future – they are super helpful! Keep rockin it up!
Your content is fire!! Thank you, James!
Hi mate, always love your content. Is there any way for us to generate the same output for every run (like a random seed) in GPT-3? Also, could you share your insights on how to fine-tune GPT-3 for any use case? Thank you in advance.
Andrew Tate broke out of jail and is teaching AI now
It's literally Minsky's predicted next step in programming. That's probably by design.
Thanks for all your videos! Had a quick question, if I wanted something like a contextualized NER, how would I go about that? Current NERs just print out all entities which is not always useful.
Example context: "John works at Google in California, and Sarah works at Apple in New York"
Question: Where is Sarah's organization and location?
Expected Answer: Apple, New York
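One way to frame that contextualized-NER question as a prompt (my own illustrative sketch, not something from the video) is to cast it as question answering over the context, so the model extracts only the entities the question asks about:

```python
# Illustrative sketch: contextualized NER framed as a QA-style prompt,
# so the model returns only the entities relevant to the question.

def ner_prompt(context: str, question: str) -> str:
    return (
        "Extract only the entities needed to answer the question "
        "from the context.\n\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = ner_prompt(
    "John works at Google in California, and Sarah works at Apple in New York",
    "Where is Sarah's organization and location?",
)
print(prompt)
```

Passing this prompt to a completion model should steer it toward "Apple, New York" rather than dumping every entity in the sentence, though results will vary by model and temperature.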
Loved the video. Crisp and clear! Really appreciate it. Thanks!
Truly incredible free education. You should make a course for Coursera. You'd be number 1 quick.
I would like to use OpenAI to learn a custom query language I have in my app; it's a language based on peg.js. Should prompt engineering work to return reliable queries?
Is there an explanation of how the model can make an educated guess about whether or not it should answer "I don't know"? I don't see this explained by the attention module. Maybe it is part of the reward function on top of it?
Hi! Thanks for all the valuable content you are sharing. Do you have a website?
Thank you. I've applied the examples with prompts in French and it worked fine too. Sometimes answers tend to be a bit shorter and colder.
Amazing video!
Great tutorial. I sometimes find myself involuntarily raising an eyebrow when people start throwing around the term "prompt engineering" because, as you said at the beginning, writing prompts often feels more like art than science. Clearly there is some technique involved because the "instruction, source info, input, output" pattern is one I've seen used in multiple different places.
Maybe this is a dumb question, but does prompt writing have to be more art than science? i.e. For a given task and model pair, is there such a thing as a mathematically optimal prompt? When people fill the "source info" part of their prompt patterns with examples as a means of sorta in situ fine-tuning the model, how many examples should be provided? Is there a point of diminishing returns where you're just needlessly filling up the context with tokens that are better spent on completion?
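On the point about diminishing returns when filling the context with examples: one practical approach (a sketch of my own, using a crude ~4-characters-per-token estimate rather than a real tokenizer) is to add few-shot examples greedily only while they fit a token budget, leaving the rest of the window for the completion:

```python
# Sketch: greedily add few-shot examples until a token budget is spent.
# Uses a rough ~4 chars/token estimate; a real tokenizer such as tiktoken
# would be more accurate.

def estimate_tokens(text: str) -> int:
    """Very rough token count: ~4 characters per token."""
    return max(1, len(text) // 4)

def fit_examples(examples: list[str], budget: int) -> list[str]:
    """Keep examples in order while their estimated cost fits the budget."""
    chosen, used = [], 0
    for ex in examples:
        cost = estimate_tokens(ex)
        if used + cost > budget:
            break
        chosen.append(ex)
        used += cost
    return chosen

examples = [
    "Q: 2+2?\nA: 4",
    "Q: capital of France?\nA: Paris",
    "Q: largest planet?\nA: Jupiter",
]
print(fit_examples(examples, budget=10))
```

Whether more examples help past a handful is exactly the empirical, task-dependent part that makes this feel more like art than science; budgeting like this at least keeps the trade-off explicit.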
Thank you…your videos are the first ones I watch for anything LLM related.👍
I'm very conscious that the EU/UK may try to restrict ChatGPT, but I don't see how they can outlaw VPNs in the near future.
Andrew Tate
very helpful! thank you
What skills should I learn first to start this?