GPT-3

Creating fine-tuned GPT-3 models via the OpenAI fine-tuning API

BuzzRobot

Join the Bugout Slack dev community to connect with fellow data scientists, ML practitioners, and engineers: https://join.slack.com/t/bugout-dev/shared_invite/zt-fhepyt87-5XcJLy0iu702SO_hMFKNhQ

The GPT-3 language models have a range of impressive capabilities; however, they also have a number of performance limitations.

In this talk, OpenAI researcher Todor Markov discusses OpenAI's new fine-tuning API and shows examples of how it can be used to create specialized models.
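As a rough illustration of the workflow discussed in the talk, here is a minimal sketch of fine-tuning a GPT-3 base model through the OpenAI API, assuming the openai Python package of that era (pre-1.0); the file name "train.jsonl" and the choice of the "curie" base model are placeholders, not details from the talk.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Training data is a JSONL file of prompt/completion pairs, e.g.:
#   {"prompt": "Sentiment: I loved it ->", "completion": " positive"}
training_file = openai.File.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job against a GPT-3 base model.
job = openai.FineTune.create(
    training_file=training_file["id"],
    model="curie",
)
print(job["id"], job["status"])

# Once the job finishes, the resulting model name (job["fine_tuned_model"])
# can be passed to the regular Completion endpoint:
# openai.Completion.create(
#     model=job["fine_tuned_model"],
#     prompt="Sentiment: The talk was great ->",
#     max_tokens=1,
# )
```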

Our speaker, Todor Markov, is a researcher on the Applied AI team at OpenAI. His current work focuses on improving the safety and monitoring systems for the OpenAI API.

00:09:38 — Introducing Todor Markov to the Bugout community
00:11:12 — From college to OpenAI
00:11:36 — GPT-3 & the OpenAI API
00:24:43 — Can GPT-3 make mathematically correct statements?
00:31:11 — OpenAI fine-tuning API
00:41:55 — Questions: sizes of GPT-3 models, size of datasets, technical background of fine-tuning, models for improving semantic search, GPT-3 transfer to other texts, chip models, A/B testing model quality, is it expensive, customers, limits of models, tips for people who want to work in AI, plans for meetups