All About AI
GPT-3: The Ultimate Productivity Tool
Are you tired of spending hours trying to summarize long texts?
Do you struggle to write concise and engaging summaries for your blog posts?
In this video, we will explore the power of GPT-3, the ultimate productivity tool for summarizing long texts.
From its capabilities to its limitations, we will show you how GPT-3 can help you quickly and accurately summarize large texts, allowing you to focus on other important tasks.
Join us as we demonstrate how GPT-3 can revolutionize your summarization process and boost your productivity.
00:00 Intro GPT-3 Large Text Summary
00:21 Summarizing Long Texts with GPT-3
https://www.allabtai.com/how-to-summarize-a-large-text-with-gpt-3/
https://www.allabtai.com/
It's insane how good it is. I am doing the splitting manually to stay under the token limit on DaVinci 3.
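For anyone who wants to automate that split: here's a minimal sketch (my own, not the video's script) that cuts text into chunks under a rough token budget, assuming ~4 characters per token and blank lines between paragraphs. A single paragraph longer than the budget would still come through oversized.
def split_by_token_budget(text, max_tokens=3000, chars_per_token=4):
    # Approximate the token limit with a character budget;
    # real token counts vary, so leave headroom below the model limit.
    budget = max_tokens * chars_per_token
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        # Start a new chunk once adding this paragraph would exceed the budget.
        if current and len(current) + len(paragraph) > budget:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks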
would you share the script? thanks a lot!
wouldn't it be better to use fine-tuning?
Another good video with a great idea!
Great. I have a question about turning personal images into art using Midjourney. Why do I get a whole different person, not the person in the image?
great vid
Could you have it summarize the Bible?
If you split it, how could it understand the context of the article? Maybe a chunk could reference a person from the previous chunk who is not in the chunk you are summarizing.
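True, naive chunking can drop cross-references like that. One common workaround (not what the video shows, as far as I can tell) is to overlap the chunks so each one carries the tail of the previous one:
def split_with_overlap(words, chunk_size=650, overlap=50):
    # `words` is the text already split into a word list.
    # Each chunk repeats the last `overlap` words of the previous chunk,
    # so a reference near a boundary still has some surrounding context.
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
Another option is a rolling summary: prepend the previous chunk's summary to each new request, so the model sees the running context.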
Would be great if you provided the script. Thanks.
Will you share all the code? Or is it in your premium subs?
You got my subscribe! Nice video.
this was an engaging video! 😉
Stupid video. It's 2023 and you hide scripts. Bravo.
bro is doing everything in the world to gatekeep as much as he can. You're that one guy on the internet that everyone hates: "DM'd you the fix", "join my club/website to learn more". Like what?
import re
import time
import openai

openai.api_key = ""  # set your OpenAI API key here

def read_file(file_path):
    # Read the whole input file as UTF-8 text.
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()
    return text

def split_into_chunks(text, max_words=650):
    # Split on whitespace that follows '.' or '?', skipping abbreviations
    # like "e.g." or "Mr." via the negative lookbehinds.
    sentences = re.split(r'(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s', text)
    chunks = []
    current_chunk = []
    for sentence in sentences:
        words = sentence.split()
        # A single sentence longer than max_words gets split by force.
        while len(words) > max_words:
            current_chunk.extend(words[:max_words])
            chunks.append(" ".join(current_chunk))
            current_chunk = []
            words = words[max_words:]
        if len(current_chunk) + len(words) <= max_words:
            current_chunk.extend(words)
        else:
            chunks.append(" ".join(current_chunk))
            current_chunk = words
    if current_chunk:
        chunks.append(" ".join(current_chunk))
    return chunks

def send_to_gpt_api(chunk):
    # Summarize one chunk, retrying a few times on rate-limit errors.
    max_retries = 5
    retry_delay = 10  # seconds
    for attempt in range(1, max_retries + 1):
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": "Summarize as bullet points, starting each with a dash: " + chunk}],
                temperature=0,
                max_tokens=2000
            )
            ret = response["choices"][0]["message"]["content"]
            print(ret)
            return ret
        except openai.error.RateLimitError as e:
            if attempt < max_retries:
                print(f"Attempt {attempt}: rate limited. Waiting {retry_delay} seconds…")
                time.sleep(retry_delay)
            else:
                print(f"Attempt {attempt}: rate limited. Maximum number of retries exceeded.")
                raise e

def main():
    input_file = "input.txt"
    output_file = "output.txt"
    text = read_file(input_file)
    chunks = split_into_chunks(text)
    shortened_chunks = []
    for chunk in chunks:
        shortened_chunks.append(send_to_gpt_api(chunk))
    shortened_text = " ".join(shortened_chunks)
    with open(output_file, 'w', encoding='utf-8') as file:
        file.write(shortened_text)

if __name__ == "__main__":
    main()
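To try the script above: set openai.api_key, put your source text in input.txt, and run the file; the bullet-point summaries are written to output.txt. Note that it uses the old ChatCompletion interface, so it needs a pre-1.0 version of the openai package (for example pip install "openai<1.0").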
Hello! I just discovered your channel and am hoping the scripts have been made public? I would like to use this process in my academic work to help summarize textbook chapters.
How do you write such a script?