9 thoughts on “GPT3.5 vs GPT4 – When to use GPT4?”
can it use data from 2023?
Is that guy AI generated lol
thanks for the update! Been using 3.5 since launch, but was never sure if it was necessary to upgrade. cheers!
in short for most people 3.5 is good enough
if we post a URL to ChatGPT-4 and get it to read news, does it work? or will they act as if the read base o
bro said a whole bunch of nothing
We use both of them in my business. I can confidently say that GPT-3.5 is sufficient for 95% of cases.
One thing we do is work through the basic logic of our code using GPT 3.5 16k, and then when we are ready for the final output, we switch to GPT-4 32K.
When the final prompt is too complex, that one single prompt can cost up to 4 or 5 USD, but it is definitely worth it if you know how to guide it correctly.
It's incredibly useful for coding in low-level or mid-level languages; we use it for C, C++, and driver-related tasks, even assembly. However, it can be expensive, which is why we initially guide it as 3.5: that way we do all the "configuration" and arguing at a cheaper token rate, because yeah, sometimes you have to argue with it lmao, especially during debugging or pentesting tasks; it becomes extremely cautious and "ethical". So it's easier and cheaper to argue with it as 3.5; then once you convince it to help, you switch to 4-32K and it proceeds, because it already uses the previous tokens as memory.
Well, I rambled too much, no-one asked lol, but yeah, he is totally right.
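The workflow described above boils down to keeping one shared message history: iterate cheaply with GPT-3.5, then hand the same history to GPT-4 for the final pass. Here's a rough Python sketch of that idea; the model names and the payload shape follow the OpenAI chat-completions convention, but the helper function and placeholder messages are just illustrative assumptions, not the commenter's actual code.

```python
# Sketch of the "draft cheap, finalize expensive" workflow: one running
# message history, shaped with a cheap model, then sent to the pricey one.
CHEAP_MODEL = "gpt-3.5-turbo-16k"   # used while shaping the logic / arguing
FINAL_MODEL = "gpt-4-32k"           # used once, for the polished output

def make_request(model, history, new_user_msg):
    """Append the user's message and build a chat-completion payload.

    The key point from the comment: `history` is shared, so the expensive
    model sees all the "configuration" and arguing done at the cheap rate.
    """
    history.append({"role": "user", "content": new_user_msg})
    return {"model": model, "messages": list(history)}

history = [{"role": "system", "content": "You are a C driver-code assistant."}]

# Cheap phase: send this payload with your SDK, then record the reply.
draft = make_request(CHEAP_MODEL, history, "Outline the IOCTL dispatch logic.")
history.append({"role": "assistant", "content": "<draft outline here>"})

# Expensive phase: same history, stronger model, one final prompt.
final = make_request(FINAL_MODEL, history, "Now write the full implementation.")
# `final["messages"]` carries the entire cheap-phase conversation as context.
```

Note that nothing is "remembered" server-side between calls in this sketch: the continuity comes entirely from resending the accumulated `messages` list, which is why the prior 3.5 turns still count against the 32K context (and the bill) on the final GPT-4 call.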
didn't answer the question wtf
There are huge differences that he never covered.