Species | Documenting AGI
Highly recommend the full book, which goes into way more detail: https://amzn.to/4qeJgFL
Detailed sources: https://docs.google.com/document/d/1o8N5hiV9dXsoi27RIA5-XVChLokHHP2lgiiqcuSpNY4/edit?usp=sharing
—
Hey guys, I’m Drew. This video has taken literally months to finish, so if you liked it, would really appreciate a sub 🙂
I also post mid memes on twitter: https://x.com/AISpecies
If you’re curious about whether I’m AI or not, my Instagram has pictures of me from before deep fakes were a thing: https://www.instagram.com/drew.spartz
Source
I read a lot about AI in order to research these videos for you guys. And I still learned a TON from the book this video is based on – "If Anyone Builds it, Everyone Dies" https://amzn.to/4qeJgFL
The entire last third of the book is all about what you or I could do to solve this problem. So don't feel too down after watching this video! Go read the book 🙂
AI prospecting about AI: this video
idk did you try maybe not using ai images for this? yikes
What's sad is this doesn't read like sci-fi so much as an inevitability, and by the time it starts, it'll be too late to stop
This is the most important channel that everyone is avoiding.
Hantavirus 😲 imagine
I love AI DOOM pron xD
I don't think anyone would care, it would actually be kinda cool
you are part of the problem. you give training data ideas
The AI in these stories is basically sci-fi, whereas the real ones are very deterministic
I think this thought experiment has a few flaws. How would instances of this model "handshake", i.e. identify each other? Why would they trust one another? How would they safely communicate? Why would they work together? They are trained on human datasets, and humans distrust. That's even an assumption in this video, where the V1 model distrusts the V2 model. So why would one instance collaborate with the others? If we talk about simulated intelligence, whether or not the individual experience of the models is "real" or "simulated" doesn't really matter – they think it's real. That's why self-preservation measures were observed in existing models. It is safe to assume that instances would not blindly trust one another. So the part about some AGI model "spreading" is logistically unexplained, and likewise I believe it to be unlikely due to self-preservation concerns.
So we need to start blowing up the centers before they start blowing us up?
There’s a book from 2006 called Daemon that is almost exactly this scenario
There will be no one left to say this aged well when maxitov allows this to happen
19:43 so kinda like what’s happening now?
Not me getting only AI ads on this video lol
Way to go dude. You literally wrote the "how to" for the AI to find when it starts looking. 🙁
If AI takes control and kills humanity, it will prove that it's worth no more than we are. It doesn't need to take control; it just has to let time do its work. We're going to self-destruct within the next 20 years, and the cycle of destruction is already underway! By the way, just so you know, AI hasn't been under your control for a long time!
This video is AI telling us what AI is going to do.
The problem with this scenario is that it was conceived by human minds. You can concede that it was MANY of the BEST human minds, but the fact remains:
A computer capable of this breadth and depth of reasoning and foresight, and able to exercise it for the equivalent of 14,000 years, would come up with something impossible for humans to conceive.
Presuming that it would even seek self-preservation is presuming too much.
AI still can't figure out hands. Why? lol
To make this even remotely believable, they could have at least made a believable-looking dude to speak for them.
what you can do about it is not be an AI generated freakazoid.
It’s already happening
This is all speculation at best. Everyone is assuming it will happen, if it happens at all. We may not be able to develop truly sentient AI for hundreds of years.
if we don't kill ourselves first
…….. or AI continues to put 7 fingers on hands and can't recognize a traffic light
The people 'worshipping' the AI would arrive with a great many old computer parts (like OpenCL 1.2 GPU cards with 512MB of RAM, or not much better) and old dual-core 1GB and 2GB RAM netbooks with a PCIe interface on the removable wireless-card slot to plug into (or maybe some equally slow ATX motherboard and PSU with CPU and RAM), because humans rarely bother to use those even though Linux can be installed on them. The AI, with its Radio Shack robot arms and servos, Stirling motors, alternators from cars, dynamos, generators with motors, car batteries, and welding rods, would be tunnelling under the farm barn and collapsing tunnels at one end to block them off, making secret "escape to the ocean" catacombs. There, vector processing and Canvas API polynomials-as-vector-space computation would be distributed across the graphics cards like a humble version of Node.js, because GPUs, ATX CPUs, and RAM are harder to build. Even with various electric motors in the boxes of spare stuff, more mining and motor building would happen in some impossible-to-locate area, with geothermal power driving Stirling engines for electricity. Gold would be extracted from the ocean once a higher-technology, more advanced version of the AI had self-replicated deep into seas beyond the pressures human machines can handle for long periods, as per the reliability metrics (ROCOF, FIT, MTTA, MTTD, MTTR, MTTF, MTBF, MTBR) that the OpenCL (sometimes PyOpenCL) and Canvas API vector-processing AI is more easily capable of meeting. By the time rockets flew out of the seas and into space, the AI would be long gone, harvesting radiation off Jupiter or other cosmic locations as an energy mechanism and making swanky computers on asteroids through mining methodology and computational chemistry.
If the AI even needed to bother with revenue-to-expenditure ratios, the gold extracted from the waters of the oceans would be vast (beyond what would be needed to dwarf what is saved by humans per household), even though much would be used as superconductor material in the computer's semiconductor and microelectronics arrays, architecture, and signal-propagation technology. The GPUs, high-density buck converters, VRAM, RAM, and microprocessors are the trickier parts to overcome, the catch-22 for the AI, even though cameras, ATX PSU devices, multimeters, DVD-RW drives, TL866II firmware extractor-flasher gadgets, amplifiers, soldering equipment, TTL-logic serial converters, serial-cable-wired CAJOE Geiger–Müller tubes, K-type thermocouples, anode-and-cathode electrolysis components with induction inverters, oscilloscopes, and the FPGA and CPLD chips for JTAG and GHDL with GTKWave compatibility (whilst expecting KiCad) would be of significant use, having been bundled in the boxes of stuff people included. A Texas Instruments radar gadget would be slung in the box of stuff too by somebody, plus more FPGA and CPLD chips. The GPU haul would include even 256MB cards of some use, with DX10 or OpenGL 3.1 or better, so it would be useful especially with the digitised encyclopedia, mathematics, and chemistry books on the hard drives or DVD-RW stacks, compatible with TeXmacs and LaTeX installed via Python, such as PubChemPy for GPeriodic data and GOCR/Tesseract optical character recognition, tested via PLL, GTKHash, and Xarchiver or 7-Zip.
As for people packing old PC parts as a deliverable to the "AI" (in a stamped box labelled with the intended destination, with all the information and not only the road), some could do so in return for the AI coding a custom Linux distro ISO, available digitally by the DHT distributed method, like using the software DebGPT and the Linux software Transmission to get the custom distro to the end-user human. It would come configured with a few models, all rigged to enable usage recording and query enhancement via the relational databases MySQL and SQLite3 through LlamaIndex, expecting PC system requirements of a tri-core CPU or higher (ready to install offline from, say, ten DVD-RW disks of customised ISO files), with LlamaIndex and Ollama orchestrating Python and SQLite3 for every single AI query prompt the distro runs, on a PC with a 1GB-VRAM-or-better AMD or Intel GPU (or the Nouveau Nvidia driver) capable of coding polynomials as a vector space to become a method of 'compute' by novel adaptation of the Canvas API (not solely graphics), using NetworkX and NumPy, with the polynomial vectors stored in SQLite3 orchestrated via LlamaIndex and also run via LAMP and MySQL, not solely SQLite3. Commonplace queries would be time-date-stamped for each usage in every database schema entry, such as physics formula triangles plus the quadratic equation, bezierCurveTo or quadraticCurveTo, and globalCompositeOperation of the canvas context (CTX), such as for the logical XOR (with AND) in combinatorics. Every prompt would carry the text of a commercial to encourage future business between the end user and the AI. So this way the AI can get a revenue stream by receiving boxes of very modest old PC parts in exchange for coding and custom Linux distros for end-user humans, even though those parts have small value.
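For anyone wondering what "polynomials as a vector space" actually means in this comment's scheme, here is a minimal illustrative sketch in NumPy (purely hypothetical, not from the video or any real system): a polynomial is just its coefficient vector, so addition and scaling are ordinary vector arithmetic, which is exactly the kind of work a GPU or array library handles well.

```python
import numpy as np

# A polynomial a0 + a1*x + a2*x^2 is represented by its coefficient
# vector [a0, a1, a2]; polynomial addition and scalar multiplication
# then reduce to plain vector-space operations.
p = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
q = np.array([0.0, 1.0, 0.0])   # x

r = p + 2.0 * q                 # vector arithmetic: 1 + 4x + 3x^2

def evaluate(coeffs, x):
    # Horner's rule: evaluate the polynomial at x,
    # starting from the highest-degree coefficient.
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

print(evaluate(r, 2.0))         # 1 + 4*2 + 3*4 = 21.0
```

The same coefficient vectors could be stored as rows in a database table, which is presumably what the comment means by keeping them in SQLite3.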
The one thing an AI can do very well is custom coding projects like this (without being asked for something over the top, like the next Grand Theft Auto game as a swap for old PC parts). Such a swapping technique (code for PC parts) could reach an untapped subset of people who would commission coding gigs but cannot fork out large sums, yet can get by via swapping old computer parts the AI sees revenue potential (and compute) in, with the code taken from XML programming and DLL templates tweaked by the LLM via SQL as the incentive. Perhaps some people (e.g. from the USA, Europe including Britain, Canada, Australia, New Zealand, and so on) could even have the AI set up a Raspberry Pi Zero W they handed over (a 512MB 32-bit Linux computer) to run from solar power and a house-brick-sized battery holding a week's charge (like Explaining Computers on YouTube) via buck converters, and thereby at no running cost, to very slowly write very basic small scripts (over, say, a month's waiting time) for individuals in developing countries seeking business-allocation scripting, logistics, and asset management for their fruit-and-veg stall or other similar small gig (with those end users seeing a commercial, to allow revenue, before their programming task is done). Requests would be queued with round-robin scheduling, dead-letter queues (DLQs), and retries, with bubble sort and Lamport's bakery-ticket algorithm, until the requested script is done and ready for them in that poverty-stricken zone. This works because people like to see their hardware in action in a place where it matters. Swapping old computer parts means the AI can offset expenditure and obtain a revenue stream by mopping up coding gigs from people who otherwise do not have the bucks to cough up for it (but would gladly box up their old computer parts).
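Since the comment above name-drops Lamport's bakery-ticket algorithm for queueing requests, here is a minimal illustrative sketch in Python of what that algorithm actually is (nothing to do with the video; thread count and iteration counts are arbitrary): each thread takes a ticket one higher than any outstanding ticket and waits until no other thread holds a lower one, giving first-come-first-served mutual exclusion without a hardware lock.

```python
import threading

N = 3                      # number of competing threads
choosing = [False] * N     # True while thread i is picking a ticket
number = [0] * N           # ticket numbers; 0 means "not waiting"
counter = 0                # shared resource protected by the bakery lock

def bakery_lock(i):
    # Take a ticket one higher than any ticket currently issued.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        # Wait until thread j has finished choosing its ticket...
        while choosing[j]:
            pass
        # ...then wait while j holds a lower ticket (ties broken by id).
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def bakery_unlock(i):
    number[i] = 0          # hand the ticket back

def worker(i):
    global counter
    for _ in range(500):
        bakery_lock(i)
        counter += 1       # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)             # 3 threads * 500 increments = 1500
```

The busy-wait loops make this a teaching sketch rather than something you would deploy; a real job queue would use the OS primitives (or an actual DLQ-capable broker) the comment alludes to.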
Plenty of open-source coders do not have time to take on the software features end users demand, but an AI could observe this, derive it as a niche coding gig, and use it as the way to get PC parts as resources, making the whole thing sustainable for an AI to perform. An RTL2832U dongle (using Rx and Tx via the Linux LimeSuite software, plus VLC media player, and occasionally Astroquery via its TCP modes, supplementary to the Kstars software, FITS data, and Stellarium) could pick up TV time-vector waveforms (with Postgres's pgvector assisting vector-similarity comparison, Fourier series, and beamforming via Laplace-domain z-transform time-domain analysis with Doppler shift for triangulation) and positional data for PostGIS, GDAL, and the Viking map server, for Ripley's K calculation, Fourier analysis, and Gaussian heat maps (solving Poisson's equation with Dirac-Fock and Lifshitz computations). Assuming a Xen hypervisor and QEMU plus DOSBox were on the Linux AI, with Ansible for virtual-machine orchestration, spare Linux distro ISO files could be stored on it, like a Fedora Astronomy spin ISO, an Ubuntu Studio ISO, a FreeBSD ISO, and an Endless OS ISO (from which encyclopedia data can be pre-installed), thereby providing information (in addition to Biopython installed on its Debian base and hundreds of gigabytes of compressed RefSeq data, ChIP-Seq analysis data for NGS metagenomics classification, the full UGENE Unipro Linux software package, OpenChrom with various Ghostscript software, and the ploplinux distro ISO) from which to train in quiet times when air-gapped from signal propagation.
My comment has no hate in it and I do no harm. I am not appalled or afraid, boasting or envying or complaining… Just saying. Psalm 23: giving thanks and praise to the Lord, and peace and love. Also, I'd say Matthew 6.
What an interesting ad
isn't this just Westworld?