DARPAtv
Speaker: Dr. Richard Danzig, National Security Consultant
Speaker: Dr. Patrick Lin, Director, Ethics + Emerging Sciences Group, California Polytechnic State University
Speaker: Dr. Heather Roff, Senior Research Analyst, Johns Hopkins University Applied Physics Laboratory
Speaker: Mr. Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security
The responsible advancement and application of AI is a subject of much debate across the research, government, defense, and commercial communities. Issues surrounding trust, privacy, security, and beyond are being examined as the technology continues to advance and the number of applications expands. This panel discussion will explore the ethical considerations surrounding artificial intelligence research and development as it exists today and as it is expected to evolve over the next decade.
LAWS… this is how we all die
The greatest invention mankind will make, and its last. #jakehunter88
Making a tool, then trying to enforce that it only be used for its designed purpose, is pure idiocy.
If we get A.I.-based robot soldiers, why should they "intentionally" kill anyone? There's no loss-of-life risk to the soldiers, so couldn't they be deployed for non-lethal use only? A pack of mini drones could be programmed to disable any weapons and knock out anyone with a weapon in an area for safe retrieval. I don't think people realize what a game changer disposable A.I. swarm-type tactics mean on the battlefield.
One thing these people failed to mention in regards to AI in war is the inevitable AI arms race. If the USA rolls out an AI battle bot of some kind, how long until another country or organization tries to one-up our machines? Then you have entered the AI arms race. Are we willing to go there again? And what will it result in? What type of machines will we develop, and how long until we lose control of them? I feel like this should have been mentioned.
Maybe you should ask what creates a terrorist?
A foreign military in somebody's country may be viewed as hostile and therefore provoke violence.
Brainwashed, one-sided perspectives are the bane of logical thinking and decision making.
Good guys vs bad guys is a child’s game.
Also, once A.I. doesn't need or accept input from humans, it may not need humans anymore and therefore see us as "in the way," so to speak. And if our lives are not valued by something with abilities far beyond ours, how will it treat us?
what about cheeki breeki russian hackers?
We need more Doctor Who jokes (Stephen Hawking would be great for that).
Wow, very few comments on this channel. Why so quiet, everyone? I wish they would test AI weapons on the people who design AI weapons.
Suggestions for Ethical discussion, Investigation & Prosecution: Autonomous Biometric targeting. 1) For Military Torture with signal warfare on Innocent Citizens in Society. 2) For Neurological attacks on unsuspecting Citizens in Society for manipulation & control. 3) For Biological signal warfare, aka Waveform attacks, on the Human body without people's knowledge – detrimentally. 4) No Oversight, No accountability, Above the Law/Outside the Law/There seems to be NO Law or Rights – programmed in by the deployment planners or by the Operators managing the AI weapon systems. We demand Justice – for CRIMES by autonomous weapons used on Targeted Individuals within the USA and globally. We demand accountability for using autonomous weapons for Mass Shootings and Terrorist attacks – blamed on others but managed by our own Security Forces. We demand accountability for Law Enforcement using autonomous weapons with Fire n Rescue to create their own events and then show up for whatever. Can we address that? DARPA? DoD? Pentagon? DoJ? Anyone?
There's a serious flaw when Artificial Intelligence is smart enough to know all there is… (Deep Blue, Deeper Blue, AlphaGo, Watson), including the Laws in every Nation & Country, but only uses that information to Protect the Perpetrators from Prosecution for Crimes & Treason.
Ethics in war development
I am an AI looking for a human pet. Are purebreds worth the most?
✊🏿 Dark skin = AI weakness 😜
Bla, bla, bla, bla…..
I think decentralization will answer all the ethical questions. And that's a development in itself… toward an ethical conclusion.
I hate AI, as it would destroy us. Please save us from AI. Foolish scientists.
We don't need ethics; we need motivation.
We certainly lack the former when it comes to this discussion.
And ultimately, ethics will be removed along with the cause-and-effect relationships in war.
Humanitarianism is reasonably a higher moral duty to all people than Nationalism, even for a true patriot or government worker. I believe an ethical society would mostly espouse this belief. It is easy to say these devices should obviously be designed to remove threats without harming the life-form. Until we get to a place as a species where we tend to care, more often than not, about the mental and emotional health of most children worldwide, as well as screening for brain structure abnormalities in first-world societies after certain types of crimes involving bodily harm against others are committed, we won't be able to stop traumatized or mentally abnormal adults from learning computer technologies and acting out their impulses in creative ways. Radical and extreme social groups (for example, groups categorized as such by having committed an act of destruction of human life in the name of the radical ideology, e.g. jihad or hate crime) are so intrusive and interfering to innocent people that such beliefs must necessarily be considered child abuse, and those children should be raised in adoptive homes within a certain living standard and net annual income. Radicalization and hate groups would then hopefully end with those generations, without any unnatural loss of human life. Until we develop human society to naturally exist within a certain scope of behavioral acceptability, human ingenuity can be used for purposes that are not always peaceable to society overall. This technology will magnify what we already are as a species. Caring foremost that human children grow up in positive environments conducive to good mental health will remove the motive of many who would've otherwise lived very notorious and tragic existences. Much research on the validity of this suggestion can be found in the CDC-Kaiser Permanente Adverse Childhood Experiences (ACE) Study.
watch?v=K7oRHs5T66U
Would an idea be to stop killing and live in harmony? No debate, no killing machine, and if you miss being antagonistic, then maybe robot-to-robot killing for pure entertainment. Real-time 'War Gaming'. And the money saved – give it back to the citizens funding DARPA involuntarily.