James Briggs
OpenAI have introduced "Functions" for GPT-4 and GPT-3.5-turbo. The new feature allows these chatbot LLMs to read a list of function descriptions (the functions themselves can be written in Python or any other programming language) and use them as tools. This function calling feature is available only in the latest releases of GPT-4 and GPT-3.5-Turbo (0613).
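For reference, a minimal sketch of what the new API call looks like with the pre-v1 openai Python SDK used in the video; the get_weather function and its parameters are hypothetical placeholders, not the example from the notebook:

```python
import json
import openai  # pre-v1 SDK, as used at the time of the video

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical function description, in the JSON schema format the API expects
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. London"}
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model does not run the function itself; it returns the function
    # name plus the arguments as a JSON string for your code to execute.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    print(name, args)
```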
📌 Notebook Link:
https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/openai/gpt4/gpt-4-functions.ipynb
🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-signup/
👋🏼 NLP + LLM Consulting:
https://aurelio.ai
👾 Discord:
https://discord.gg/c5QtDB9RAP
Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/
00:00 New GPT-4 and ChatGPT Functions
01:13 Function Calling in OpenAI Docs
01:43 Implementation in Python
03:25 Creating a Function for GPT-4
04:37 Function Calling Instructions
06:11 Calling ChatCompletion with Functions
07:41 Using GPT-4 to Run Functions
09:25 Creating Image Generation Pipeline
11:02 Adding Image Generation to Function
13:43 Creating Product Pages with Images
15:16 Final Thoughts on OpenAI Function Calling
Has anyone tried the default values in the spec? I have a JSON specification for the function and I specify a default value for an argument. It seems that when the parameters are returned, sometimes the default values are included and sometimes not. Has anyone had any experience with this? @jamesBriggs
Thanks for yet another great video! Is this any better than asking via the system prompt to extract structured JSON from the user input, given an expected JSON structure?
Another reason to fiend for an openai key, heh. Cheers for the video and the hard work @ work.
Thank you! Are function descriptions taking up context space? In that case it may take more context space to describe multiple functions than it does in LangChain.
I actually tried to get GPT to do this in the old model, but it really didn't manage to consistently respond with valid command parameters as instructed.
Loved this! 🙌
I understand that this is a step towards fully autonomous agents, where we are striving for automated task execution. There will be a roadmap towards more extensive functionality in the direction of automation. But for now, what is the difference between this approach and asking for a structured output in JSON, parsing it, and passing it to your own function?
This new functionality is also super useful if you are working in a language not supported by Langchain.
Auto-GPT basically…
You have a gecko chatting to you in the background.
He was saying – "NO,… I won't shut up!"
This is gonna save me a lot of tokens compared to using zero-shot ReAct, I think.
Hi @James, I still don't understand what function capability is being shown here by GPT-4. I was under the assumption that it extracts parameters from the user's input, calls the function, and gives the output? But in the example you showed, it only gives us the parameters that are passed to the external function? Confused here!! So it doesn't have the capability to call the external function by itself? In the above example you asked GPT-4 to return 2 parameters; without using this special feature I can always prompt GPT-4 to return those 2 parameters, right? Then why functions?
OpenAI as a company is a joke. For months I've been trying to get access to GPT-4 via the API, STILL WAITING.
Yet I'm paying for GPT-4 every month… That's a disgrace.
Hi James! Thanks for the wonderful video. How do you think the emergence of this "function” will affect LangChain agents and other large language models like Gorilla?
Thanks a million times, James, for this fresh and great content.
I'm wondering if this is something we can do or not: if I created a plugin for my website, could I create a function that calls this plugin and returns data?
Is this possible?
What IDE is he using?
How about adding the "image_desc" output as the image's alt text in the HTML, so that it can show the description to the user on hover?
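A minimal sketch of that idea, assuming the generated description is held in a variable called image_desc (all names and paths here are placeholders); note that most browsers show the title attribute on hover, while alt is the fallback text:

```python
# Hypothetical values produced earlier in the pipeline
image_desc = "A red running shoe on a white background"  # placeholder description
image_url = "images/product.png"  # placeholder path

# title shows as a tooltip on hover in most browsers; alt is the fallback text
img_tag = f'<img src="{image_url}" alt="{image_desc}" title="{image_desc}">'
print(img_tag)
```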
Are you inside a sauna, my man? Also, killer vid as always!
Thanks for the video!
For even nicer/cleaner code, it's super simple to use the Python inspect library to create a base class that automatically determines parameter names and requirements, and lets you extend it such that you only need to "register" the parameter descriptions and types and it'll provide a translation to the JSON schema format that GPT requires. So once you have that base class, you can focus on defining your functions in code and GPT will just understand them automatically with minimal manual schema-writing. Super clean and useful!
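A rough sketch of the idea described above (not the commenter's actual code), assuming simple type annotations and a manually supplied dict of parameter descriptions:

```python
import inspect

# Map Python annotations to JSON schema types
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_openai_schema(func, param_descriptions=None):
    """Build an OpenAI function-calling schema from a Python function signature."""
    param_descriptions = param_descriptions or {}
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if name in param_descriptions:
            properties[name]["description"] = param_descriptions[name]
        # Parameters without a default value are marked as required
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

# Example usage with a hypothetical function
def get_stock_price(ticker: str, currency: str = "USD"):
    """Return the latest price for a stock ticker."""
    ...

schema = to_openai_schema(
    get_stock_price, {"ticker": "Stock ticker symbol, e.g. AAPL"}
)
```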
GPT is crap. It returns falsities, hallucinates, and invents… hyped to heck, piece of crap. Can't write a piece of faultless code to save its virtual life. It always makes mistakes. And even if it didn't, it doesn't save time, as you spend as much time prepping and instructing it as you would coding it yourself.
Damn, this is just a 100% copy-paste implementation from Auto-GPT, even the JSON schema. It seems OpenAI has run out of new ideas and can only copy from others… That's too bad :(🥲
Would you use this over Langchain Agents then?
Thank you for another great one! You mentioned it was super quick to put this together. Curious how long exactly it took and what you need to know to get there? Hoping to benchmark.
Noticed an interesting issue: it appears GPT-3.5 is much more likely to hallucinate a non-existent function, so that pushes one towards the more expensive GPT-4! Ho hum!
I wish you would have showcased function calling more; janky example, with half the vid about image gen.