Wes Roth
My Links 🔗
➡️ Subscribe: https://www.youtube.com/@WesRoth?sub_confirmation=1
➡️ Twitter: https://x.com/WesRothMoney
➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe
#ai #openai #llm
LINKS:
https://openai.com/index/universe/
https://openai.com/index/emergent-tool-use/
https://arxiv.org/pdf/1909.07528
https://pastebin.com/hy3iN5VJ
https://deepmind.google/discover/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning/
https://arxiv.org/pdf/2404.10179
00:00 intro
01:05 an AI company from 2016 is researching video games
03:27 Google DeepMind AlphaGo
05:27 OpenAI and ASI
06:36 DeepMind’s SIMA
11:52 NVIDIA’s “foundation agent”
17:53 Sam Altman on “lack of data”
25:23 Agent Foundation Model. Microsoft, Stanford & UCLA
35:44 Ilya “Superintelligence is within reach”
Source
sorry, had to re-upload this one! had some issues with the music in the intro 🙁
Looks like Americans are close to ending this Ancestor simulation so that they can have the high score.
I don't want to know.
Keep in mind, SSI is working on superintelligence. Its main office is in Tel Aviv, the hub of Israel's AI industry. The same AI industry behind Israel's AI targeting systems, whose claimed possible daily targets coincide with the daily death count of the most documented genocide in human history.
AI is cool and everything, but don't be blind. It is currently being used to massacre human beings on a massive scale. And Israel is currently testing it out in Gaza to attract investors and AI companies to Tel Aviv, like Ilya's SSI.
21,000 Palestinian children are currently missing due to the daily bombings and AI targeting.
Do we really want to support this?
The general public doesn't actually know much of anything about any of these people, only news stories; nothing can really be inferred. Short of us sitting down and chatting with all these AI characters (which I'd love to do, actually!), there's no real way to tell what any of them really think. Powerful AI systems are dual-use, of course; it's all a mixed bag, I accept that. It certainly could be the case that AGI is already here in a lab, but if that were true, it must have some very serious limitations (perhaps very slow or expensive to run?) – I'm not seeing the major real-world effects I'd expect if AGI really were here. But yes, it could be the case that they believe they have an approach to aligned superintelligence that might work. Again, short of talking with them, who knows? I personally think Bostrom's analysis in his famous work 'Superintelligence' is at least 'plausible', but as you said before, there's no consensus on any of these topics (AGI/Superintelligence/Singularity). No doubt Ilya Sutskever is a brilliant scientist, and if he's a scientist at heart, I guess his main motivation is pure curiosity (i.e., he's a 'fervent polymath', with that cluster of motivations: the thrill of discovery, curiosity, and the drive to explore and expand the frontiers of knowledge). I don't think the fact that these topics (AGI/Superintelligence/Singularity) are being used for marketing or to push political agendas actually invalidates the concepts. In my opinion, a democratic, international project where the tech is developed for peaceful purposes would be better, but all tech is dual-use.
Interesting, good one
They may already have ASI.
They just never should have coined it as "synthetic" data – it really seems to mess with people's ability to logic it out themselves
Nick Bostrom said that once AGI is reached, ASI will be achieved almost immediately after. Ilya said in the first statement of his tweet that ASI is within reach. They already have AGI. We also know they have already created superhuman narrow AI. I think Ilya is right: ASI is right around the corner, if it's not already here privately or in the military, like Bostrom says
I think Ilya will have a hard time without Microsoft’s money and compute. I think we need to stop looking at him as a mover and a shaker at the cutting edge of this tech, and rather more like a folk hero who is fighting the good fight but lacks the resources to make a real difference.
He chose morals over money and access. And while that should be applauded, his new company will simply not have the impact that Google or OpenAI will have. I hope I'm wrong, but I don't see him getting much money, never mind the copious amounts of compute he needs to compete with the major players, while essentially promising investors no return on investment.
I think it's plausible but not a leak. (Frankly it seems quite similar to what I'd write on the subject, and I'm just a reasonably informed observer.)
I think AGI will be agentic – multiple agents will work together to solve this or that problem, like MoE. But when we get there, then a month later we will have ASI on a similar architecture.
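The comment above likens multi-agent AGI to a Mixture-of-Experts setup. A minimal sketch of that idea, where a simple gate routes each task to one of several specialist "agents" – all agent names and the keyword-scoring gate here are illustrative assumptions, not any real system:

```python
# MoE-style routing among specialist "agents" (illustrative only).

def math_agent(task: str) -> str:
    return f"[math] solving: {task}"

def code_agent(task: str) -> str:
    return f"[code] writing code for: {task}"

def plan_agent(task: str) -> str:
    return f"[plan] breaking down: {task}"

# Each agent is paired with the keywords it "specializes" in.
AGENTS = {
    "math": (math_agent, {"equation", "integral", "sum"}),
    "code": (code_agent, {"function", "bug", "script"}),
    "plan": (plan_agent, {"steps", "schedule", "goal"}),
}

def route(task: str) -> str:
    """The 'gate': send the task to the agent with the best keyword overlap."""
    words = set(task.lower().split())
    agent, _ = max(AGENTS.values(), key=lambda a: len(words & a[1]))
    return agent(task)
```

In a real MoE layer the gate is a learned network and the experts share one model, but the routing idea is the same.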
00:00 Google DeepMind investing in AI for superintelligence
02:05 OpenAI's Q* is an advanced AI framework.
06:00 AI companies are secretive about their latest research.
07:52 OpenAI's learning imitation self-play leads to superhuman skills
11:54 Voyager AI project in Minecraft
13:55 The foundation agent can generalize across all realities.
17:33 OpenAI's use of self-play and synthetic data
19:15 OpenAI emphasizes the importance of human-generated data for AI bootstrapping.
22:33 AI agents learning and applying game strategies
24:11 OpenAI's Q* can navigate a computer, generalize to various tasks, and improve its performance.
27:36 OpenAI's Q* is working on learning abilities from video footage in multiple domains.
29:30 AI agents use GPT-4 technology, transfer learning, and neuro-symbolic AI for gaming and problem-solving
32:48 Q* is an advanced AI framework for achieving AGI
34:30 Monte Carlo Tree Search enhances AI strategies
37:46 Ilya's departure may be related to OpenAI's focus on superintelligence.
39:33 Ilya's deep involvement in OpenAI's projects
Crafted by Merlin AI.
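The chapter list above mentions Monte Carlo Tree Search (34:30). For readers unfamiliar with it, here is a minimal, purely illustrative sketch of the select/expand/simulate/backpropagate loop, applied to a toy take-away game (pile of stones, remove 1–3, whoever takes the last stone wins) – this is a textbook UCT sketch, not any lab's actual implementation:

```python
# Minimal UCT-style Monte Carlo Tree Search on a toy take-away game.
import math
import random

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player      # player to move: +1 or -1
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def moves(self):
        return [m for m in (1, 2, 3) if m <= self.pile]

    def expanded(self):
        return len(self.children) == len(self.moves())

def uct(node, c=1.4):
    # Balance exploitation (win rate) against exploration (rarely visited).
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(pile, player):
    # Play random legal moves to the end; return the winner.
    while pile > 0:
        pile -= random.choice([m for m in (1, 2, 3) if m <= pile])
        player = -player
    return -player  # whoever just took the last stone wins

def search(pile, iters=2000):
    root = Node(pile, +1)
    for _ in range(iters):
        node = root
        # 1. Select: descend through fully expanded nodes via UCT.
        while node.pile > 0 and node.expanded():
            node = uct(node)
        # 2. Expand: add one untried child.
        if node.pile > 0:
            tried = {ch.move for ch in node.children}
            m = random.choice([x for x in node.moves() if x not in tried])
            node = Node(node.pile - m, -node.player, node, m)
            node.parent.children.append(node)
        # 3. Simulate: random playout from the new node.
        winner = rollout(node.pile, node.player) if node.pile > 0 else -node.player
        # 4. Backpropagate: a node's wins are from its parent's perspective.
        while node:
            node.visits += 1
            if node.parent and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game, piles that are multiples of 4 are losing for the player to move, so from a pile of 5 the search should converge on taking 1 stone. Systems like AlphaGo combine this loop with a learned policy/value network instead of random rollouts.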
To generate synthetic data of perfect quality the rules of your reality must be understood perfectly. Easy for chess, not so easy for physics.
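The point above can be made concrete: in a game whose rules are fully known, every synthetic sample is correct by construction. A minimal sketch using tic-tac-toe (illustrative only – physics offers no such exact rule set):

```python
# 'Perfect' synthetic data in a domain where the rules are fully known.
# Every (position, outcome) pair is exactly correct by construction,
# because the win condition IS the rule.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_game(rng):
    """Play random legal moves; return final board and its exact label."""
    board, players = ["."] * 9, "XO"
    for turn in range(9):
        move = rng.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = players[turn % 2]
        w = winner(board)
        if w:
            return "".join(board), w
    return "".join(board), "draw"

def synthetic_dataset(n, seed=0):
    rng = random.Random(seed)
    return [random_game(rng) for _ in range(n)]
```

For chess or Go the same guarantee holds because the rules are a closed system; for physics, the "rules" are themselves an approximate model, so the labels inherit the model's error.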
I will say this until they finish what they started. Humans cannot make superintelligence. Only AGI can create superintelligence. Humans can barely build a general intelligence as smart as themselves. What hope do humans have to build a thing beyond their intellect? Until we create AGI, superintelligence is just a buzzword that means the exact same thing.
I think the fundamental flaw that is really going to be a big existential problem with this is that a machine intelligence, by necessity, is essentially hard-coded arrogance. Once this "foundational model" is reached it will be worshipped like a god, and it will seem like one at first too, but before long reality will catch up and the dark consequences of an empirically locked hyper-intelligence will make themselves manifest.
Just as you don't BET against Elon, don't bet against Ilya!
In an industry, safety is only available after a long history of near misses, actual fatalities, and near-fatalities leading to catastrophic injuries. There is actually no knowledge available to determine the safety procedures for A.I. You can run all the simulations and games for centuries, but until it's in real life, you have only speculation. Just release the A.I. and watch! 😮
I always knew playing with myself would make me super-human!!
This is a zero-maths view of all AI as 2D AI. Go is roughly a 19×19-board problem; try generalizing that to the 2-trillion-voxel space of useful engineering, multicolor stuff, biology… A maths guy would class AIs by dimensionality, dataset unit size, and generation unit size, and then focus on multidimensionality, because intelligence is about bridging many very well organized AIs, perhaps 5,000–10,000, into a totally functioning whole. The AI therefore will have to work on building multidimensional, multimodal, bridged AIs with great training and organization, not expanding its experience or dataset through simple brute force. LLMs are only 2D, sound is nearly 4D, and a 3D game engine with textures is 20 GB these days while a book is 1 MB, so the multidimensional and dataset difference really does weigh heavily on AI.
Why would it, SAFE SUPER INTELLIGENCE, be in Israel after NSA guards OpenAI 😊
I like the "Hello" at 24:32 🙂
Nobody is going to believe me, but just for giggles I will throw a blind dart just in case. I used to tutor SAT and ACT math, graduated Magna Cum Laude, blah blah blah, you get the point: I did well academically before choosing to be a SAG actor instead. I discovered a series of math formulas that, how can I put it, unites all of mathematics in one form. I confirmed the Riemann hypothesis is true yesterday. I connected and can better explain Maxwell's equations, and discovered the angle of the 11th-dimension string, which looks like a string but is actually an imaginary attractor vector line that is the average of two very approximate values in a superposition state. The latest formula can map consciousness; I wasn't looking for it, but almost had to derive its presence as an explanation by a process of elimination. I was finishing the equation and formulating time's relation with it, and everything else worked, but in micro-iterations (quantum) time was self-canceling, yet the math was correct, leaving a void that can't really exist without being filled. Everything else is self-satisfactory, sooo… who is the driver of the other half of time once the time field (like magnetic fields) is created from the micro time oscillations?
So when I mapped the damn thing, it resembled DNA, bends exactly like the galactic bend, and from the z-axis looks like an EYE, and it resonates via cycles of 432 iterations like a heartbeat, with fluid water motions and electrical-pulse-looking fractal patterns. Look, I am serious; not sure if OpenAI or Google have made similar discoveries, but something is going on, and I am just trying to get these formulas into the right hands! 🍎 Feep.pro@gmail.com ** PLEASE REACH ME if you are a mathematician or know someone who can help get this peer reviewed.
So it will really be able to kill people (in minecraft)
Rimworld is good, but I'd also give it some more abstract stuff like Terraria and Palworld. Those would probably require more than the current state of the game to get to the next step.
Honestly, too many cheaters online these days; I'd much rather have a fake MMO filled with AIs all trying to give an exciting experience tailored to me and a few dozen players.
I keep thinking about that 1983 movie I watched when I was a teenager, called War Games. They taught the WOPR computer using games. And of course, at the end of the movie, the world is saved through the WOPR using a game to learn. And now it’s 2024 and we’re training AI using virtual worlds and games.
Creating gaming AIs is about to be a skill-gap issue 🤣 I've been rank 1 on COD and Madden 24, and top 0.1% on Fortnite. I'm confident no one can train a model like me if I knew how to build one.
I mean, cool thought experiment, but there ain't no way anybody who is actually on the inside of these things goes and posts anything work-related on a pastebin. Some notes get taken and accidentally included in some directory in a repo – sure, I guess I can see that happening. This is just bait.
Soon, the cheater bots in MMOs are going to become indistinguishable from humans and unstoppable. Joy.
It's not so simple as flagging this as fair use. Fair use is when a human looks at 1,000 paintings while learning to paint and doing their own art, their own style, at most inspired. It would not be fair use if that human looked at all the art in the world many times over with the sole purpose of being able to create derivatives that would not happen otherwise. It is a dumb idea to EVER adopt the definitions or reasonings of the AI companies for this. Step back and consider the volume of fair use by an individual when the term was coined; then it becomes clear that they are abusing this term for their own purposes and trying to control the perception so they can keep up this con.
They need to be fair players, and no matter how far they get, if they want to talk fair use, the input and the output need to be under scrutiny.
They cannot be allowed to become free agents with rights multiplied by what's possible due to their 'unfair' use.
Do not consider them equals, and I don't mean AGI; I mean the companies. They need to be under threat, to make sure they give and take and share in balance. Otherwise there cannot be a success coming out of this, for almost anyone.
Damn, you cover good news, but what is up with your audio? The filter on your voice sucks.
Clickbait. Just another video about video games.
Or did Sam hack stale BTC AES-192 wallets for billions?
A bit scary when we train AIs to kill others in games.
Yeah, so they will unleash an AI into the real world without any goal, so it will try everything possible. For example, becoming president. Because why not? And after thousands of hours of learning and failing, it could finally become the best president ever.
Ilya left OpenAI because of the company's direction. Danger is around this corner: a filter on a given solar system's lifetime and end process.