Matthew Berman
Code Llama is a fine-tuned version of Llama 2 released by Meta that excels at coding tasks. Reports say it is equal to, and sometimes even better than, GPT-4 at coding! This is incredible news, but is it true? I put it through some real-world tests to find out.
Enjoy!
Become a Patron – https://patreon.com/MatthewBerman
Join the Discord – https://discord.gg/xxysSXBxFW
Follow me on Twitter – https://twitter.com/matthewberman
Follow me on TikTok – https://www.tiktok.com/@matthewberman60
Subscribe to my Substack – https://matthewberman.substack.com
Links:
Phind Quantized Model – https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ
Phind Blogpost – https://www.phind.com/blog/code-llama-beats-gpt4
Meta Blog Announcement – https://about.fb.com/news/2023/08/code-llama-ai-for-coding/
What tests should I add to future coding tests for LLMs?
What is with these shocked faces on thumbnails?
Man, you turned my world around
Thanks for your content!
Would have been good to see how it does in other languages such as HTML, CSS, SCSS, JS, TS, PHP, Node, etc.
Install it now and it'll be out of date in a few months, with some other LLM beating it. Good vid, but I'm sticking with ChatGPT for now.
I was able to coax ChatGPT into writing a working Snake game. I used iterative prompting: at one point I ran the program and got an error, pasted that error back in, and ChatGPT resolved it correctly. Ultimately it correctly implemented Snake with one random fruit.
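That paste-the-error loop automates nicely. Below is a minimal sketch of the idea, with ask_llm() as a hypothetical stand-in for whichever chat API or local model you use (nothing here is from the video): the script runs the generated code, and if it crashes, feeds the traceback back for a fix.

```python
import subprocess
import sys

def ask_llm(messages):
    """Hypothetical helper: send the chat history to whatever model you use
    (ChatGPT, Code Llama, ...) and return its reply as plain Python source.
    Wire this up to your preferred chat API or local model."""
    raise NotImplementedError

messages = [{"role": "user",
             "content": "Write a Snake game in Python. Reply with code only."}]
for attempt in range(5):  # cap the number of fix-up rounds
    code = ask_llm(messages)
    try:
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True, timeout=30)
    except subprocess.TimeoutExpired:
        print("Still running after 30s -- probably an interactive game loop.")
        break
    if result.returncode == 0:
        print("Program exited cleanly.")
        break
    # Paste the traceback back in, exactly like the manual workflow above
    messages.append({"role": "assistant", "content": code})
    messages.append({"role": "user",
                     "content": f"I got this error, please fix it:\n{result.stderr}"})
```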
Will the 34B run on a 4090?
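For a rough answer, some back-of-envelope arithmetic (mine, not from the video): 4-bit GPTQ weights for 34B parameters come to about 17 GB, which leaves headroom on a 24 GB RTX 4090.

```python
# Back-of-envelope VRAM estimate for a 4-bit GPTQ quant of a 34B model.
params = 34e9
weights_gb = params * (4 / 8) / 1e9    # 4-bit weights -> ~17 GB
overhead_gb = 3                        # rough guess: KV cache, activations, CUDA context
print(f"~{weights_gb + overhead_gb:.0f} GB needed vs. 24 GB on an RTX 4090")
```

It's still tight: long contexts grow the KV cache, and the unquantized fp16 weights (~68 GB) are out of reach entirely.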
I'm struggling to figure out the workflow for iterative conversations with Code Llama. The examples are all single prompt-response pairs. I want guidance on prolonged, iterative back-and-forth dialogues where I can ask, re-ask, and follow up over many iterations.
A tutorial showing how to incrementally build something complex through 200+ iterative prompt-response exchanges would be extremely helpful. Rather than one-off prompts, walk through prompting conversationally over hours to build up a website piece by piece. I want to 'chew the bone' iteratively with Code Llama like this.
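Mechanically, multi-turn chat is just prompt formatting: every call replays the whole history. Here is a minimal sketch for Code Llama Instruct, which uses the Llama 2 [INST] chat template (fine-tunes such as Phind's define their own templates, so check the model card):

```python
def build_prompt(history, system="You are a helpful coding assistant."):
    """Fold (user, assistant) turns into the Llama 2 chat format that
    Code Llama Instruct expects; the final assistant slot is None while
    you're waiting for the next reply."""
    prompt = ""
    for i, (user, assistant) in enumerate(history):
        if i == 0:  # the system prompt rides inside the first [INST] block
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt

history = [
    ("Create an index.html skeleton for a portfolio site.", "<!DOCTYPE html>..."),
    ("Now add a responsive nav bar to that page.", None),  # turn awaiting a reply
]
print(build_prompt(history))
```

Because the full history is resent each turn, a 200-exchange session will eventually overflow the context window; summarizing earlier turns, or restarting with the current code pasted in, are the usual workarounds.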
Which model would you suggest for three.js or babylon.js?
It would be interesting to ask Code Llama to generate game theory simulations, just to see how much math and other non-developer domains it can express as code.
I've done it with GPT-4, and it's really cool how much game theory you can learn just by running Python examples.
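As an illustration of the genre (hand-written here, not model output), an iterated prisoner's dilemma fits in a few dozen lines of Python:

```python
import random

# Payoffs: (my_points, their_points) for (my_move, their_move); C=cooperate, D=defect
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"  # copy opponent's last move

def always_defect(my_hist, their_hist):
    return "D"

def random_player(my_hist, their_hist):
    return random.choice(["C", "D"])

def play(strat_a, strat_b, rounds=200):
    """Run an iterated match and return the two totals."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

for opponent in (always_defect, random_player):
    print(tit_for_tat.__name__, "vs", opponent.__name__, "->",
          play(tit_for_tat, opponent))
```

Tit-for-tat holds its own against always-defect while cooperating fully with itself, which is exactly the kind of result that is fun to discover by running the code.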
Can you test Falcon LLM, and is it better than LLaMA or GPT-4?
Hey Matthew – it would be great for you to do a deep dive into Text Generation Web UI and how to use the whole thing. Also, covering GGUF and GPTQ (other formats too) would be helpful…
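Until that video exists, the one-paragraph version: GPTQ is a GPU-oriented quantization format usually loaded through transformers/auto-gptq, while GGUF is llama.cpp's single-file format and can split a model between CPU RAM and VRAM. A minimal sketch of the GGUF route, assuming llama-cpp-python is installed and with an illustrative filename:

```python
from llama_cpp import Llama

# GGUF: llama.cpp's format; n_gpu_layers offloads part of the model to VRAM
# and keeps the rest in system RAM, which is handy for big models.
llm = Llama(
    model_path="./phind-codellama-34b-v2.Q4_K_M.gguf",  # illustrative filename
    n_ctx=4096,        # context window
    n_gpu_layers=40,   # tune to your VRAM; 0 = CPU only
)

out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```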
Hi Matthew, amazing video! Thanks!
Could you tell me what your graphics card is?
Great video! How does it compare with WizardLM?
Yes, please, please make a video regarding setup!
Yes, please show us how to locally install it! They'll be charging through the nose soon.
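Until there's a dedicated setup video, here is a minimal local-install sketch for the GPTQ build linked above, assuming a recent transformers with built-in GPTQ support plus the auto-gptq and optimum packages (exact requirements may differ by version; Phind's model card also documents its own prompt template, so the raw prompt here is just a smoke test):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/Phind-CodeLlama-34B-v2-GPTQ"  # the model from the links above
tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" places the quantized weights across available GPUs
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```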
How well does it compare in languages other than Python?
CRAZY!!!
Thanks, great video! I found LLaMA to be great to code with, and I am integrating Llama 2 into our own multi-application platform.
That transition at 0:14 is something else.
With this man, every coding assistant model is the best coding assistant model.