Machine Learning Street Talk
This week Dr. Tim Scarfe, Dr. Keith Duggar, Yannic Kilcher and Connor Leahy cover a broad range of topics, ranging from academia, GPT-3 and whether prompt engineering could be the next in-demand skill, markets and economics including trading and whether you can predict the stock market, AI alignment, utilitarian philosophy, randomness and intelligence and even whether the universe is infinite!
00:00:00 Show Introduction
00:12:49 Academia and doing a Ph.D
00:15:49 From academia to Wall Street
00:17:08 Quants — smoke and mirrors? Tail Risk
00:19:46 Previous results don't indicate future success in markets
00:23:23 Making money from social media signals?
00:24:41 Predicting the stock market
00:27:20 Things which are and are not predictable
00:31:40 Tim postscript comment on predicting markets
00:32:37 Connor take on markets
00:35:16 As markets become more efficient...
00:36:38 Snake oil in ML
00:39:20 GPT-3, we have changed our minds
00:52:34 Prompt engineering a new form of software development?
01:06:07 GPT-3 and prompt engineering
01:12:33 Emergent intelligence with increasingly weird abstractions
01:27:29 Wireheading and the economy
01:28:54 Free markets, dragon story and price vs value
01:33:59 Utilitarian philosophy and what does good look like?
01:41:39 Randomness and intelligence
01:44:55 Different schools of thought in ML
01:46:09 Is the universe infinite?
Thanks a lot to Connor Leahy for being a guest on today’s show. https://twitter.com/NPCollapse — you can join his EleutherAI community discord here: https://discord.com/invite/vtRgjbM
Podcast version: https://anchor.fm/machinelearningstreettalk/episodes/029-GPT-3–Prompt-Engineering–Trading–AI-Alignment–Intelligence-em6pjh
#machinelearning
First! 💪
Second
yeah another one 🙂 love it.
geohot has created a nice library, tinygrad. He is trying some amazing stuff: building an AI accelerator for AMD graphics cards <3 I would love to see this happen. Screw the Nvidia monopoly 😀
THIRD AGAIN… 😬 …
"Prompt engineer" reminds me kinda of priests who question an oracle… 😀
James Simons and his team made good money with machine learning in the market. Here is an interview: https://www.youtube.com/watch?v=QNznD9hMEh0
Sorry for spamming your comment section, but there is also the possibility of predicting the market with a swarm. https://unanimous.ai https://www.football-data.co.uk/wisdom_of_crowd_bets and read James Surowiecki's book "The Wisdom of Crowds"
Please make a video on trading and I believe Keith is good at that topic
40:22 Tim and Gwern are in an all-out word fight
What a time to be alive 😂😂
I enjoy your podcast piece by piece. 1:03:30 There is already a chatbot that uses GPT-3: http://kuki.ai/
Here's a list of cool GPT-3 examples: https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results. I particularly like this one: https://twitter.com/kleptid/status/1285269255907356673. The weird language is because the model is used in AI Dungeon.
I love these videos thanks for sharing your knowledge and understanding!!!!!
A GPT-3-powered IDE plugin that catches bugs on the fly as you code, warns about potential security issues, suggests better implementations of your functions, and refactors your code would be very valuable
Software engineers won't become prompt engineers, but many graduates of English & Literature would
Another really interesting episode. I've also thought about how some of the technology we will get from an AI science agent will not be explainable to us because of our limited intelligence (mental computational power).
Great talk as always!
GPT-3 is so much smarter than GPT-2. What GPT-4 will be, just may be the kind of intelligence everyone will agree is AGI. So when we talk about intelligence being a black box, and how some black box intelligences are superior to others, why is it difficult to believe that the next GPT — or the one after — will be a better black box than any human?
Such a great and interesting video!
There is a small error in the results about the universe's flatness. Possibly it is not an error, but rather an indication that the universe is not flat, just so large that we see it as almost flat.
1:30:29 “commons” – exactly the opposite. The tragedy of the commons occurs only with property that is not privately owned.
Note that in the database example, the bolded lines are all text given to GPT-3, not GPT-3's output. This is few-shot, not zero-shot.
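To make the few-shot vs. zero-shot distinction concrete, here is a minimal, hypothetical sketch (the task, questions, and SQL are invented for illustration) of how such a prompt is assembled: every worked example is text we supply, and the model only generates the completion after the final marker.

```python
# Hypothetical sketch: assembling a few-shot prompt for a text-to-SQL task.
# Every example pair below is supplied by us (the "bolded lines"); the model
# would only generate text after the final "SQL:" marker.

def build_few_shot_prompt(examples, query):
    """Concatenate worked examples, then the new query, into one prompt."""
    parts = [f"Question: {q}\nSQL: {sql}" for q, sql in examples]
    parts.append(f"Question: {query}\nSQL:")
    return "\n\n".join(parts)

examples = [
    ("How many users are there?", "SELECT COUNT(*) FROM users;"),
    ("List all user names.", "SELECT name FROM users;"),
]

prompt = build_few_shot_prompt(examples, "How many orders were placed?")
print(prompt)
```

With zero examples in the list, the same function would produce a zero-shot prompt; the difference is entirely in what text we feed the model.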
GPT-3 can do all sorts of crazy things, but it's not robust, and thus it fails all the time too. I strongly recommend against wedding yourself to any one prompt, as there is definitely cherry-picking and lots of these prompts have simpler explanations if you allow for chance, if taken in isolation. It's only when taken as a point in a wider set of examples that one can be confident there's really something there, IMO.
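One common way to hedge against this kind of unreliability and cherry-picking is to sample several completions for the same prompt and take a majority vote, rather than trusting any single output. Here is a sketch of that idea, with a stand-in noisy "model" instead of a real API call:

```python
import random
from collections import Counter

def majority_vote(sample_completion, prompt, n=5, seed=0):
    """Sample n completions for one prompt and return the most common answer."""
    rng = random.Random(seed)  # fixed seed so the demo is reproducible
    answers = [sample_completion(prompt, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for a real model call: a noisy oracle that answers "4" to
# "2+2=" about 80% of the time and "5" otherwise.
def noisy_model(prompt, rng):
    return "4" if rng.random() < 0.8 else "5"

print(majority_vote(noisy_model, "2+2="))
```

The vote doesn't make an unreliable model reliable, but it turns "one cherry-picked sample" into a crude estimate of what the model does on average for that prompt.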
I consider there to be two primary sorts of implications of GPT-3.
One is about the short-term uses for this sort of technology. Here I do see Keith's point; he's right that it's not robust enough for the general use case. However, there are plenty of cases where that is fine, for example: replacing small ML teams building specialized products for, e.g., sentiment analysis; offering autocomplete functionality in chat support/code/data-mangling contexts; and triaging and classifying customer support emails. The idea of a programmable voice assistant, say if GPT-Android generated logic in some simple, safe language, is also pretty neat. You know, places where approximate intelligence smoothes out the process, even admitting only one nine of reliability.
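For the sentiment-analysis use case, a hedged sketch of what a small team might try in place of a trained classifier (the reviews and labels here are invented): a few labeled examples in the prompt, with the model left to complete the final label.

```python
# Hypothetical sketch: few-shot sentiment classification via a language model.
# The example reviews and labels are invented; in practice the completion
# after the last "Sentiment:" would come from a call to the model.

FEW_SHOT_EXAMPLES = [
    ("The product arrived broken and support ignored me.", "negative"),
    ("Works exactly as described, very happy with it.", "positive"),
    ("It does the job, nothing special.", "neutral"),
]

def sentiment_prompt(text):
    """Build a classification prompt; the model completes the final label."""
    lines = [f"Review: {r}\nSentiment: {label}" for r, label in FEW_SHOT_EXAMPLES]
    lines.append(f"Review: {text}\nSentiment:")
    return "\n\n".join(lines)

p = sentiment_prompt("Shipping was fast but the manual is confusing.")
print(p)
```

Accuracy will be well below a purpose-built model on hard inputs, but for a product that only needs "approximate intelligence", this replaces a whole training pipeline with a string.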
The other is the long-term implication of GPT-3, which I'd roughly summarize as ‘holy fuck, this was meant to be impossible.’ This one doesn't really care about reliability or whatever. The fact that GPT-3 shows that language models can do reasoning-like tasks over general text just by scaling up implies such a degree of generality in the backpropagation learning algorithm of neural networks that it's not clear which facets of human cognition it cannot (at some larger scale) subsume. Maybe it's still not all, but maybe it is, and it's terrifying that I can't rule the latter out.
Love music