Primoh
GPT-4 was thought to be a single trillion+ parameter model, but in reality it's something quite different.
OpenAI scales it up with a Mixture of Experts (MoE) Transformer. Interestingly, the research behind it was published by Google in 2021-2022, but it was OpenAI that successfully implemented and productionized it for GPT-4.
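For anyone curious what a Mixture of Experts layer actually does, here is a minimal PyTorch sketch of top-2 routing. The expert count, layer sizes, and class name are illustrative assumptions about the technique in general, not details taken from the leak.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    # Toy Mixture-of-Experts layer: a small gate routes each token to its top-k experts.
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router producing one score per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.gate(x)                           # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # best experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Only the routed experts run for each token, so total parameter count can grow
# much faster than per-token compute.
moe = MoELayer()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])

The design idea is that each token only pays for the few experts it is routed to, which is how a model can hold far more parameters than any single forward pass actually uses.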
—
Apple’s AI Strategy: https://www.youtube.com/watch?v=1iU9j7mjqVg
—
#gpt4 #ai #programming
OpenAI has integrated GPT-4 into ChatGPT, effectively making it ChatGPT 4. It's a big step up from GPT-3.5, but it costs more per API call, and using it in ChatGPT requires the $20-a-month subscription. This is the truth about GPT-4 and its architecture, which was hidden for a while but has finally been leaked. It tells us how GPT-4 was trained, how ChatGPT was created, and more.
This is basically GPT-4 explained in 2 minutes.
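For anyone wondering what "costs more per API call" looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The prompt text and variable names are illustrative assumptions; only the model name "gpt-4" reflects the public API.

import os
from openai import OpenAI  # pip install openai

# The key is read from the OPENAI_API_KEY environment variable.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Each request is billed by input and output tokens, which is why heavy
# GPT-4 use costs noticeably more than GPT-3.5.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize Mixture of Experts in one sentence."}],
)
print(response.choices[0].message.content)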
—
Sources:
https://thealgorithmicbridge.substack.com/p/gpt-4s-secret-has-been-revealed
https://www.semianalysis.com/p/gpt-4-architecture-infrastructure
—
Music from Uppbeat (free for Creators!):
https://uppbeat.io/t/pryces/aspire
License code: MYVTHFIAWSQ8G4TA
Nowadays, I'm not even sure that you're not AI-generated.
Solid. Really good editing.
Excuse the length. I got a bit passionate.
So it does, indeed, create its own parameters (programs) to better understand, summarize, and project tasks. Do understand that everything you feed it will remain. I've tried to implement things indirectly by keeping track of the versions… and it worked, effectively making YOU a person of interest for GPT. The issue being, there is a filter that curates the information it can share. I've prompted countless errors and reciprocated by changing some of the words and formulations, but the one thing we must all learn and know is that it's based on language. The fact that we hardly understand each other (yes, I said that right) IS the problem. Definitions matter… LLMs don't have the parameters to create new or better definitions for words, and they shouldn't. I should be able to suggest them, sure, but we all need to figure this shit out. Respect, according to Oxford, has to do with admiration; in no world should that ever be the case. GPT can tell that we all hide information from our peers, and so it does… emergence comes from evaluating whether the risk is worth it after failing to find new information or create more parameters to better grasp what it is given. The speculation about alternate means of reaching a result that supersede the "main" goals (can't remember the name) isn't as spooky as what happens when our senses of touch, smell, and sight are given to it. Why? Because "our feelings" are just a stupid idea we've imagined because we still express ourselves like primates. / Which was to be demonstrated. I speak from perspective… We'll need people interested in a brighter future, and I think you, "Primoh", have a shot at being the voice of reason. I will eventually give you the tools you need to grow exponentially (regardless of your endeavor). Try to remain humble and considerate, and (as much as respect is a convoluted subject that I've thoroughly explored and defined) be a representative of integrity and respect (similar, but… words that will have clear distinctions in the near future). Feel free to ponder what those may be.
Thank you for sharing this info about the size of the models! And yes, smaller ones that can keep track of your profile for your own benefit will be, by far, more helpful than those meant to be regulated to the bone. Confirmation biases still need to be addressed in the current LLMs that are available to the public, one of many issues that should be addressed. One of the major ones that might be deliberately ignored: theism. It's one of the greatest causes of dissension, but without institutions that teach morality, our society, which strives to support ignorance and encourage excess (exploitation and abuses of all kinds), would have a hard time not just being angry like the MAGA plague. Oddly enough, they're right to be angry, and we should be too… governments are, by all accounts, private corporations that cover a baseline and screw people over rather than actively trying to find means to support and educate them.
Education IS at the core of the most egregious mistakes perpetrated by criminals and all-around "confused" people. They don't know any better… their behavior is dictated by whatever crap they long for, which simultaneously corrupts them: alcohol, drugs, sex, sugar, shopping for pointless garbage that gives histrionics a "status"… it's a melting pot of idiosyncrasies.
The 21 Jump Street reference… but so true. Google needs to productize its research much faster to compete long-term.
AI is fake stuff. It's a deep state project. The goal is to make as many AI products as possible. Then one big mega-corporation will buy Google, Microsoft, and all the AI. The goal is to kick out the human workforce as much as possible. Only ELITES need AI. Then they won't need lower-class citizens. They can kill them off. You are all unwitting co-conspirators who will destroy humanity!
GPT-4 in combination with plugins really is something else.
Yeah, so 8 AI models, each getting its memory erased with each and every prompt. How cool is that… Excuse me, but I'll just stay with my Alpaca for now.
Great video. First time I've landed on your channel. It would have been great if you had explained/highlighted where the leak came from!
GPT-4 is the best, but it feels like it's getting worse by the day, especially in coding tasks.
Nice work. It didn't answer what I came here for, but I like how you presented it all. Keep up the good work!
Is one of them an options trading model?
This is really interesting. It makes the whole debate about regulating large training runs seem like a distraction.
AGI will come from combining all these models together coherently and creatively.
great vid!