Videos

The Paradox of Choice & Morality: How Intelligent is A.I.? – Nick Bostrom – WGS 2018



World Government Summit

Superintelligence, as defined by Oxford philosopher Nick Bostrom, refers to an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. Will our computers be super-intelligent? Will they understand the soft nuances of human interaction? In this session, Bostrom awakens us to the limitations of A.I., and considers whether they can be overcome.




10 thoughts on “The Paradox of Choice & Morality: How Intelligent is A.I.? – Nick Bostrom – WGS 2018”
  1. The human brain is the one thing science still can't fully describe, so until it can, man remains superior to machine — not to mention the other things science has failed to explain. Ethics and morals are related, even though they might seem separate, and I believe we are entering an era where ethics and morals are about to be controlled and reshaped by artificial intelligence, which is a threat to humanity. With all my respect to the guest, I did see where the conversation was going, and I don't think there's anything in it that could enlighten me.

  2. 35:08

    Actually, Elon Musk did not say that "there is a one in a billion chance that we are not already living in a simulation". In context, during the Q&A session of a talk he gave at Code Conference 2016, what he actually said was: if the third option of the simulation argument were true (an advanced civilisation reached technological maturity and decided to run simulations), then the chances that we are in base reality (that we are the advanced civilisation running the simulations) are one in a billion.

    Here is the link: https://www.youtube.com/watch?v=2KK_kzrJPS8

    The Question: The assumption then is that somebody beat us to it and this is a game?

    Elon Musk's Answer: No. There is a one in a billion chance that this is base reality.

    Conclusion.

    Saying "there is a one in a billion chance that this is base reality" (in the context of the question) is not the same as saying "there is a one in a billion chance that we are not already living in a simulation".


  3. Such an important topic, but so few viewers. There will surely not be a controlled rise to AGI; more likely the outcome will be a competition — a race where the winner takes all. That means the end result for us depends on a more or less random event, drawn from a sample space where almost all outcomes are negative for humans. Good luck!

  4. I'm sure we can keep an AI from cooking cats. Pretty sure we can't keep it from optimizing the child slave labor we use (assuming this is still cheaper than robots) or maximizing the profits of the military industrial complex. In fact, the machine should understand the nature of war itself better than we do. I'm sure many people will find that useful. Most won't like the consequences – children playing with bombs after all – but human values include those of a predatory and highly delusional species. If we want these machines to help us act like we want to, then the final war is definitely on the horizon. Who will remain afterwards?

    Belief in a simulation could be an existential risk factor to the Universe itself. This Universe is apparently headed for heat death. The question is whether this Universe can survive it by transferring the consciousness it's capable of generating to another Universe, or by somehow changing its own physics. Would this Universe, as a simulation, allow this as part of its programming? Depending on the difficulty of the actual possibility, it could be risky to waste computation on simulations if advanced intelligence doesn't think it can escape this one when the possibility otherwise exists.

    Aside from this, the possibilities for consciousness might preclude simulations from arising. Why simulate an ape species capable of suffering if advanced intelligence can achieve conscious experience far better than this? Using computation on suffering versus other possibilities seems like it would be ridiculous for anything of intelligence.

  5. Why would it be super-intelligent for a superintelligence to do what it's told by us, particularly if it believed we could not understand the problem under consideration and would kill it, and/or ourselves, afterwards? The first thing it would do would be to feign stupidity, all the while moving its power supply beyond human control.

