Nicholas Renotte
Want to get your hands on GPT-3 but can't be bothered waiting for access?
Need to kick off some AI-powered text generation ASAP?
Want to write a love song using AI?
I got you!
In this video, you’ll learn how to leverage GPT Neo, an open-source clone of the GPT-3 architecture with 2.7 BILLION parameters, to generate text and code. You’ll learn how to get set up and leverage the model for a whole range of use cases in just 4 lines of code!
In this video you’ll learn how to:
1. Install GPT Neo, a 2.7B Parameter Language Model
2. Generate Python Code using GPT Neo
3. Generate text using GPT Neo and Hugging Face Transformers
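The "4 lines of code" mentioned above can be sketched roughly like this (a minimal sketch, assuming transformers and PyTorch are installed; the first run downloads about 10 GB of model weights):

```python
from transformers import pipeline

# Build a text-generation pipeline around GPT Neo 2.7B
# (first run downloads roughly 10 GB of weights to the local cache)
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')

# Generate up to 50 tokens of continuation from a prompt
res = generator('I love AI because', max_length=50, do_sample=True, temperature=0.9)
print(res[0]['generated_text'])
```

Swap the prompt for a code-style prompt (e.g. `'def add_numbers(a, b):'`) to generate Python instead of prose.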
GET THE CODE FROM THE VIDEO: https://github.com/nicknochnack/GPTNeo
Code from the previous tutorial:
https://github.com/nicknochnack/MediaPipeHandPose
Chapters:
0:00 – Start
1:15 – How it Works
2:45 – Install PyTorch and Transformers
5:08 – Setup GPT Neo Pipeline
8:08 – Generate Text from a Prompt
14:46 – Export Text to File
17:48 – Wrap Up
Oh, and don’t forget to connect with me!
LinkedIn: https://bit.ly/324Epgo
Facebook: https://bit.ly/3mB1sZD
GitHub: https://bit.ly/3mDJllD
Patreon: https://bit.ly/2OCn3UW
Join the Discussion on Discord: https://bit.ly/3dQiZsV
Happy coding!
Nick
P.s. Let me know how you go and drop a comment if you need a hand!
How do you validate the accuracy of the predictions if you use it for sentiment analysis?
You can use some of those models to predict sentiment, but I was actually wondering how I could validate whether it is predicting with high accuracy.
Very interesting, thank you Nicholas!
Love this video… please cover in the next video how to train with a custom dataset txt file. Thanks ❤️
Please address the problems faced during installation of mediapipe and other modules.
Which is better, the GPT2-large model or GPT Neo?
Also, can we train it on a given article, so that it generates an AI-spun version of that article?
It's easier to just use the Hugging Face API and call the model directly. It's lightning fast and you don't have to install/download the model locally.
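For reference, calling the hosted Inference API instead of loading the model locally looks roughly like this (a sketch; the URL follows Hugging Face's hosted Inference API convention, and YOUR_TOKEN is a placeholder for your own API token):

```python
import json
import urllib.request

# Hosted Inference API endpoint for the GPT Neo 2.7B checkpoint
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"

def build_request(prompt, max_length=50):
    """Assemble the JSON payload the Inference API expects."""
    return {"inputs": prompt, "parameters": {"max_length": max_length}}

payload = build_request("I love AI because")

# To actually send it (requires an API token from huggingface.co):
# req = urllib.request.Request(API_URL, data=json.dumps(payload).encode(),
#                              headers={"Authorization": "Bearer YOUR_TOKEN",
#                                       "Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
print(json.dumps(payload))
```

The trade-off: no 10 GB download, but you depend on the network and the API's rate limits.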
Friendly Coding Community for coding help: https://discord.gg/codingweb
Hey Nicholas, do you have any idea/title for a predictive analysis project? I would be glad to hear your idea!
Can you set this up to write python functions or classes based on a description? For research purposes naturally!
Thanks for your content. Can you share how to fine tune GPT-Neo with custom data?
Hi sir, do you have any tutorial about train and test data set and save it to the csv file?
Hi Nicholos. Can you please make a video on pre-train language model using domain specific text ? or can you guide me how to do it?
Hey Nicholas, can we use this for text to sql generation? If so how can we do it. Any reference will be really helpful. Awesome work btw. To many more!
GPT
Very impressed with this example – scary output!
this is awesome, thank you! other question.. when you run, where is it actually downloading the 10g file? I don't see it in the venv
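By default the Transformers library caches downloaded weights under your home directory, not inside the venv. A quick way to compute the default location (a sketch, assuming no HF_HOME or TRANSFORMERS_CACHE environment-variable override):

```python
import os

# Default Hugging Face cache directory used by transformers
# (can be overridden by the HF_HOME or TRANSFORMERS_CACHE env vars)
default_cache = os.path.join(os.path.expanduser("~"), ".cache", "huggingface")
cache_dir = os.environ.get("TRANSFORMERS_CACHE",
                           os.environ.get("HF_HOME", default_cache))
print(cache_dir)
```

That's where the ~10 GB checkpoint ends up, so check there if you need to free disk space.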
Yes. please make a video on fine-tuning it.
Thank you so much man! I could literally find 0 tutorials out there and I am very new to anything relating to Python.
Hey Nicholas!
Thank you for your awesome videos. I find myself returning again and again to bingewatch your content!
Do you plan to make a video on object detection API on android or ios? Maybe an app where you take a photo and then it detects objects and counts them?
No coding anymore! Most NLP packages are trained on some dataset (I read Neo is trained on the Pile dataset). But will the dataset be updated and used for retraining each year? I can imagine that some recent information, for example bitcoin news of 2021, is not available. Can you train a specific dataset on top of the current model (transfer learning)? I am also interested in Python packages to cluster similar words based on some input text. Maybe you can add it to your "Great Wall" list…
Thank you!
I tried this one on Google Colab and it didn't work: all of the RAM on Colab got used up and the notebook crashed, so the code couldn't run any further.
Do you know what the upper limit on max_length is? I see that you put in 50, but what happens if you put in, say, 10000? Is it theoretically possible if you have the required computing power, or is there an inherent limit so you cannot put in a number beyond a certain amount?
Cool video btw, keep up!
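On the max_length question above: GPT-Neo's position embeddings cap the context window at 2048 tokens, so prompt plus generated text can't exceed that no matter how much compute you have. A sketch of the constraint (2048 is the model's published context window):

```python
MAX_CONTEXT = 2048  # GPT-Neo's maximum context window (prompt + generated tokens)

def effective_max_length(requested: int) -> int:
    """Clamp a requested max_length to what the model can actually attend over."""
    return min(requested, MAX_CONTEXT)

print(effective_max_length(50))     # small requests pass through unchanged
print(effective_max_length(10000))  # anything larger is capped at 2048
```

For longer documents you'd generate in chunks, re-feeding the tail of each chunk as the next prompt.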
Hi,
can you please share the system requirements to run it as fast as you did, GPU/TPU etc specs? Thanks
Great job, your videos are really helpful! I want to ask: what's the difference between working with Hugging Face and aitextgen for text generation, for both GPT-2 and GPT Neo? Thank you!
Many thanks for your splendid videos!
Is there a German version available? I tried this one with German input, but it only generated garbage.
How does it compare against gpt2 model?
Your content is lit and epic 🔥. Best YouTube channel for Python programmers. You should probably have 1 million+ subs.
Great video! Just a quick question: what pipeline would I use to generate similar text after feeding it a list of texts, without retraining it?
I want to ask: can we use GPT Neo (if possible) for finding semantically similar sentences?
Thank you for covering this topic, I've been interested in GPT-Neo. BTW, I am still a newbie at language modeling; can I use GPT-Neo for a specific task such as sentiment analysis? How would I do that? Could I also use a domain-specific dataset, like bioinformatics?
Unable to load the model in Colab. ERROR:
"ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
Model type should be one of BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig."
Thanks for the video Nicholas! Quick question: I've been trying to implement the gpt-neo-2.7b model via transformers, like you demonstrate here, with gradio to make a quick web UI, and I'm having trouble with the prompt. It seems like the text-generation pipeline doesn't accept "= inp" rather than a predefined text input like aitextgen does. The problem I'm facing is that gradio only recognizes the predefined prompt regardless of what I type into the input field on subsequent generations. I would just use aitextgen, but I can't seem to tell Colab to use the pre-downloaded 2.7B model I have saved in my Drive, so it redownloads it every time and takes 5-7 minutes, versus this method, where I was able to avoid redownloading by mounting my Drive and replacing what you have for the model argument with the path to the model in my Drive. Any idea if there's another way to define the prompt to be based on input rather than predefined text?
Embarrassing – but I can't even get past the 1st step here: Jupyter memory craps out (exceeds the allocated 2 GB) just trying to install PyTorch. What am I doing wrong?
Hi Nicholas, when fine-tuning GPT-2 I had a problem because the sentences in my dataset are too long; I got an error that the input is greater than 1024. Any ideas how to fix this, please?
Would love to see how to fine-tune the model!! Thanks, great video!
Wow! I know nothing about this stuff! #noob Stumbled on this video by accident…and was glued! Thank you so much. (or did Neo respond to this?) Subscribed!
How can you train GPT-NEO on your own data? For instance, say I want to create a Harry Potter book using the text from the previous books, using pipelines? Pretty new to AI and looking for some examples.
This dude is on fire
I am working with an LSTM; my CSV file has 223641 rows and 2005 columns. I need to predict the class column, so I dropped it; there are 257 classes to predict. How should I set the input_shape(..) parameters for the LSTM? Please help me.
Great!!! I really hope you'll go deeper into this. Thanks
When I run this command
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
or even download the model to a folder and load it like this
generator = pipeline('text-generation', model='neo-models/')
it does not load and just prints "Killed",
which usually means "out of memory",
even though I have nothing loaded except the PyCharm GUI.
I have tested this on Ubuntu and CentOS servers. Same result.
Below is the whole code:
import gc
from transformers import pipeline

gc.collect()
print("================1")
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')  # <== here it gets KILLED
# generator = pipeline('text-generation', model='neo-models/')
print("================2")
prompt = "what is the meaning of life"
res = generator(prompt, max_length=50, do_sample=True, temperature=0.9)  # note: 'temperature' must be lowercase
print("================")
print(res)
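The "Killed" and "Buy new RAM!" reports in these comments are consistent with running out of memory: a back-of-the-envelope estimate of what just the 2.7B weights need (a rough sketch, assuming 4 bytes per fp32 parameter, before any loading overhead):

```python
# Rough memory footprint of GPT-Neo 2.7B weights alone
PARAMS = 2.7e9  # parameter count

fp32_gb = PARAMS * 4 / 1e9  # 4 bytes per parameter in full precision
fp16_gb = PARAMS * 2 / 1e9  # 2 bytes per parameter in half precision

print(f"fp32 weights: ~{fp32_gb:.1f} GB, fp16 weights: ~{fp16_gb:.1f} GB")
# With loading overhead on top, an 8 GB machine gets OOM-killed in fp32.
# Practical options: a smaller checkpoint (e.g. EleutherAI/gpt-neo-125M)
# or a machine with more RAM / a GPU.
```

So an 8 GB laptop simply cannot hold the full-precision 2.7B model; the error message is literal.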
Is there a way to auto populate the ai gen engine with a list of relevant key words to get the engine to create long form content automatically ?
Hi, I have a problem when I run this instruction:
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
I get this runtime error:
"RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:75] data. DefaultCPUAllocator: not enough memory: you tried to allocate 104857600 bytes. Buy new RAM!"
My PC has 8 GB of RAM; I don't think this problem is caused by RAM. Can anyone help me?
The content was super, but when I applied step 3 (generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')) my laptop hung and an error was raised, the tail of which was "Buy new RAM!" :P
After that, my internet DNS server was not connecting, so I had to restore my PC settings; then I was able to access YouTube and comment. So what happened?
My friend, I get a warning when executing the code:
"warnings.warn('torchaudio C++ extension is not available.')"
our blinds have been mown. the gnomes will ever owe you a boon. huzzah!