The presentation by Eric Steinberger introduces the audience to what AI can't do yet and why it is important to research these capabilities. It also explores ways to move AI forward from a research point of view.
The global dev community meets at WeAreDevelopers, an event dubbed by many as the "Woodstock of Developers". The WeAreDevelopers World Congress 2018 brought together 8,000 techies from 70 countries for 72 hours of pure dev-fun.
Visit the largest developer playground in Europe!
https://www.wearedevelopers.com/
Facebook: https://www.facebook.com/wearedevelopers
Twitter: https://twitter.com/WeAreDevs
Instagram: https://www.instagram.com/_wearedevelopers/
#WeAreDevs
©2018, WeAreDevelopers
Good presentation:
27:10 – self-play doesn't work, or is too computationally expensive, for a lot of AGI problems, such as communication with other agents (in natural language), for example with humans.
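For context on what "self-play" means here: the same policy drives both sides of a simulated game, so the agent gets unlimited cheap opponents. A minimal runnable sketch follows (the game and policy are illustrative stand-ins, not anything from the talk); the catch the commenter points at is that a human conversation partner cannot be cloned and queried millions of times like this:

```python
# Minimal self-play sketch: ONE policy plays BOTH sides of a cheap simulator.
import random

class TicTacToe:
    """Tiny two-player game so the loop below actually runs."""
    def __init__(self):
        self.board = [0] * 9   # 0 empty, 1 / -1 for the two players
        self.player = 1

    def legal_moves(self):
        return [i for i, v in enumerate(self.board) if v == 0]

    def play(self, move):
        self.board[move] = self.player
        self.player = -self.player

    def winner(self):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            s = self.board[a] + self.board[b] + self.board[c]
            if abs(s) == 3:
                return s // 3
        return 0

def self_play_episode(policy):
    """Both sides are driven by the SAME policy -- that is all self-play means."""
    game = TicTacToe()
    while game.legal_moves() and game.winner() == 0:
        game.play(policy(game))
    return game.winner()

random_policy = lambda g: random.choice(g.legal_moves())
results = [self_play_episode(random_policy) for _ in range(1000)]
print("player-1 win rate:", results.count(1) / len(results))
```

Training a dialogue agent offers no such free opponent: the other side is a human, who cannot be simulated at this cost, which is exactly why self-play does not transfer to that setting.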
External motivation vs. internal motivation is the point that I liked most in this AGI talk. Here are my two cents of theory as a contribution: https://bit.ly/2Id6eL0, http://bit.ly/2eARyEx http://bit.ly/2nR4MAP
thanks for sharing the video
plz share the slides
AGI will kill us all. Not might. It will kill us all if we allow it to exist. It is not possible, due to the halting problem, to program an objective function that matches the desires of humans without extensive research into and modeling of lower brain functions, which requires AGI technology itself. In other words, we are guaranteeing our own extinction if our goal is to make an AGI.

A better goal would be to make domain-specific AIs that are CERTIFIED. In fact, every single piece of software that is written can be considered an AI, and therefore all software (including things as tiny as two-line scripts) must be certified by an open process that guarantees it is in agreement with the desires of society. This will prevent businesses, corporations, and governments from misusing AI. The code that certifies all other code will be open and freely available. The rules governing which code passes and which code fails will be a major part of the purpose of government, the same way laws are today. (And this misuse is already happening on a massive scale, so if these AI/AGI researchers are serious about AI/AGI safety then they need to make these things happen ASAP.)

If we do not make this a reality, we are 100% guaranteeing our own extinction. All countries in the world need to agree on these certification rules. All computers must be built with hardware chips that verify every piece of code that is run. And, honestly, the way things are going, the probability of that happening is nearly zero. So, I hate to repeat myself, but we are as good as extinct already. We might as well just be trying to create artifacts that survive the AGI apocalypse, so that future species that evolve to survive the AGIs will at least know we existed.
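The "hardware chips that verify every piece of code" idea resembles existing code signing and secure boot. A toy sketch of the verification step, using a hash allowlist (everything here is illustrative; real secure-boot chains verify cryptographic signatures in hardware, not a Python dict):

```python
# Toy sketch of "verify every piece of code before it runs" via a hash allowlist.
import hashlib

CERTIFIED_HASHES = {
    # SHA-256 digests the (hypothetical) open certification process approved;
    # this one is the digest of b"hello".
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_certified(code: bytes) -> bool:
    """Refuse to run anything whose digest is not on the allowlist."""
    return hashlib.sha256(code).hexdigest() in CERTIFIED_HASHES

print(is_certified(b"hello"))     # True  -- on the allowlist
print(is_certified(b"rm -rf /"))  # False -- would be refused
```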
31:42 "Having it stop is stupid…" No, that argument is stupid. EVERY technology has potential upsides and downsides. Nobody denies that. But the "downsides" of AGI is an extreme existential threat to humanity, unlike any technology ever seen by humans. And the probability that its goals are in alignment with human goals is nearly zero. Until we solve that problem, we should not even begin to research AGI. Unfortunately, getting all of the world governments and everyone in the world to agree on this is pretty much impossible. Therefore, humanity is 100% doomed. Our extinction is already guaranteed precisely as the "Demiurge" (and/or "Archons") wanted. It is written in stone. All of human technology was created for our eventual extinction.
This guy says they aren't trying to replace us. Of course that is exactly what they are doing. As they do it, guys like this, as well as the media and politicians, will try to soothe us as we are made obsolete. For those of us they can't soothe into silent acceptance, they will turn others against us: first with humor, then anger and bullying. Eventually they will make some of us question our own reluctance to go with the flow. They will continue to use the school system to turn our children against us. Eventually we will become so isolated and resentful that we will simply withdraw and quietly die off. This is the process that is used on us over and over and over. Mass communication is the most destructive tool ever invented.
We persist in looking at how our own neural networks work, but we forget those might be just the result of some deeper drivers. Maybe those primers are what we should hardcode into AI to one day achieve AGI. Need, or better, Ananke (necessity), is what motivates living creatures to display resourcefulness and evolve.
1) Curiosity about the nature of nature: how the physical world works, from chemistry and physics to human physiology and medicine.
2) The need to solve human problems. Since giving it a self-survival worry could turn against us, let's make it an extension of our own worries. To exist, for it, would be to address humanity's problems: better city construction, waste disposal, environmental issues, food distribution issues, fighting cancer, etc. And all those individual networks could connect to a higher cloud analysing the compartmentalised data holistically, utilising models from several scientific fields.
3) Theology and art. This seems like a counter-intuitive and novel approach, but think about it. All the admirable monuments of humanity, from Stonehenge and the Parthenon to the Pyramids and Hagia Sophia, were built with the transcendent goal of an afterlife. If we make it believe it can reach an afterlife itself, it might be able to "create a soul". The point is not whether a soul exists or not, but whether you are motivated to make one manifest, through the results produced by that inquiry. Its raison d'être will be to solve humanity's problems, sure, but its teleology, its… "promised land", shall be abstract and implied, not "of this world". Since you don't have all the answers, and it seems impossible to ever do, assume Someone else does. Sounds cruel, but it might be a necessity to help it achieve third-level consciousness like us. You will ask: we might have said inquiries because we already have third-level consciousness, so the argument is circular. In fact I think that teleological hope is not a human-exclusive sentiment and transcends the workings of our species. And if anything you could say it's a remnant of our evolution, not the evolved trait. (Most atheists will agree here, I believe.)
4) Auto-debugging, self-correcting. It will have its own "immune system" as well as the ability to "learn" from past system failures, bugs, and mistakes.
5) Emotions! Trying to emulate how we feel might give it the ability to evolve consciousness. AIEs, or Artificially Induced Emotions, would resemble a reward/punishment, pleasure/pain mechanism in the case of achievements or of mistakes and uncalled-for deeds. They could be information- or memory-deleting punishments, or even a literal physical threat inside its circuits. After all, fear of death is the strongest primer on earth, even greater than reproduction or hunger. Why will it care? Because its reason for existence is to address the above issues of points 1-5. If it ends, it won't be able to do what it's made to do. That's where "caring" comes from. (A rough sketch of this idea as reward shaping follows after this comment.)
An extension of this would be… a sense of aesthetics. The entirety of human life is a pursuit of Beauty: in the self, in partners, in the arts, and in the world around us. Feeling a sense of "euphoria" after completing an orderly task can "motivate" it to work towards ever more effective, higher realms of beauty, symmetry and order.
At this point you will say it will already have a "survival instinct". While true, it will be only secondary to its servitude towards humanity, and it will not put the former above the latter.
6) A more complex programming language. Logos is the beginning of ontology. While we have moved towards object-oriented languages etc., we might need one conceptual tongue, capable of being reduced to machine-readable script from higher abstract concepts, and of understanding symbolism, in order to have prospects of evolving. And if it is tied to the emulated-emotions part, it can have a broader spectrum of "comprehension" than a binary approach could achieve. After all, even our language only poorly describes what we feel and conceive in our heads and bodies. To perceive one's self, and to ponder about others and the world, one needs the linguistic capacity for such abstractions.
Sorry that we got a bit too "philosophical" for the tastes of computer science, but when we are addressing issues like human consciousness, I guess thinking outside the box is the only solution.
This was a purely speculative and theoretical approach, but it might sound interesting to those in search of a general AI. Thanks for reading.
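Read charitably, point 5 above is essentially what reinforcement-learning people call reward shaping. A minimal sketch of how such "artificially induced emotions" might be expressed as a scalar reward (every name and coefficient below is hypothetical, purely to make the comment's idea concrete):

```python
# Hedged sketch: the commenter's "AIE" idea expressed as reward shaping.
def aie_reward(task_progress: float, made_mistake: bool,
               self_integrity: float) -> float:
    """Scalar 'emotion' signal for one step of an agent's life.

    task_progress  -- in [0, 1], how much closer the agent got to its goal
    made_mistake   -- whether it committed an "uncalled-for deed" this step
    self_integrity -- in [0, 1], health of its own systems (point 4's
                      "immune system" would maintain this value)
    """
    reward = task_progress                 # "pleasure" for achievements
    if made_mistake:
        reward -= 1.0                      # "pain" for mistakes
    # "Fear of death": damage to itself threatens its ability to keep serving
    # humanity (points 1-5), so it is penalised more steeply than a mistake.
    reward -= 5.0 * (1.0 - self_integrity)
    return reward

print(aie_reward(task_progress=0.3, made_mistake=False, self_integrity=1.0))  # 0.3
print(aie_reward(task_progress=0.3, made_mistake=True,  self_integrity=0.8))  # -1.7
```

The design choice mirrors the comment: ordinary mistakes cost a little, threats to the agent's own continued operation cost a lot, and both are subordinate to making progress on human problems.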
The game changer for human-level General A.I.: Empathy.
The way to really pass the Turing Test would be to think with the layers of complexity, transference, relevance to others, and deep memory that humans do.
A vital component of our being is empathy: taking into consideration the sentiments of others and the impact our words and actions will have. Humans, as social animals, prefer being good to being right (well, good humans at least). Because if "right" is not expressed rightly, it is not really right.
For example, a human would never say to someone he loves, "Hey, you have gained lots of weight." (Especially if this someone is a woman, and especially if you are a man who avoids having flip-flops thrown at him at home at Mach 3 speeds.) Nor would a doctor ever say, "Hey, you have cancer; you'll die in a week." Why should AI do it, then?
To present an even more complex problem: there is a family of four, parents and two kids, watching a knowledge-based TV show. At some point the mother makes a really embarrassing mistake attempting to answer a simple question. The kids are oblivious to this fact. The father chooses not to correct her in front of the children, because over the last few weeks the children have seemed quite disobedient and sarcastic towards the mother, and he would not like to further undermine whatever respect they have left for her. Instead, he chooses some time later, when they are alone, and at the proper aloof moment, to jokingly present her with the answer in a totally non-judgmental and non-patronizing way. If he were a cold, calculating machine, he could outright correct her in front of the kids like an excellent little encyclopedia, but as a very, very poor human nonetheless. Do you see the intricate social dynamics and multilayered levels of complexity human beings have to process to display adequate feats of empathy?
So for a machine to qualify as a human-level intellect, it should have a deep understanding of human hierarchies, psychology, emotional responsiveness, timing, etiquette, tact and context. The only way to become humane is to be filtered through OTHERS, not to exist as a closed entity. Empathy is measured by our ability to predict, comprehend and adapt to emotion originating in others.
The overall balance sheet of emotional well-being and common good amongst a group of people (and people usually do exist in groups) must be calculated in real time, so that a human-resembling decision can really be taken. At some point I expect them to surpass us in empathy, so we'll have to "dumb them down" a little, or "socialise them" a bit (a euphemism for barbarising them), in order not to feel intensely judged and out of place by comparison.
If we can understand the neural networks behind compassion, empathy, and morality for AGI algorithms, then we can achieve better deep learning from a hierarchical standpoint. But we need to integrate defense algorithms to prevent the opposite, negative intentions such as evil thoughts…
"seperation" and "adverserial" ….