Videos

How computers learn to recognize objects instantly | Joseph Redmon



TED

Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. How? Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.
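
For readers who want to try the system themselves, here is a minimal sketch (not code from the talk) of running a pretrained YOLOv3 model through OpenCV's DNN module in Python. The file names yolov3.cfg, yolov3.weights, coco.names and dog.jpg are placeholders for the config, weights, class list and test image you supply yourself (the first three are available from the YOLO project page, https://pjreddie.com/darknet/yolo/).

# Minimal sketch: single-pass YOLOv3 detection with OpenCV's DNN module.
# yolov3.cfg / yolov3.weights / coco.names / dog.jpg are placeholder file names.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")

image = cv2.imread("dog.jpg")
height, width = image.shape[:2]

# "You Only Look Once": the whole image goes through the network in one forward pass.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            # Box coordinates are normalized; scale them back to pixel units.
            cx, cy, bw, bh = detection[:4] * np.array([width, height, width, height])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression drops overlapping duplicate boxes for the same object.
for i in np.array(cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)).flatten():
    x, y, bw, bh = boxes[i]
    print(f"{classes[class_ids[i]]} ({confidences[i]:.2f}) at x={x}, y={y}, w={bw}, h={bh}")

The point Redmon stresses in the talk is that, unlike sliding-window or region-proposal pipelines, the network looks at the full image exactly once, which is what makes it fast enough for live video.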

Check out more TED talks: http://www.ted.com

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.

Follow TED on Twitter: http://www.twitter.com/TEDTalks
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: https://www.youtube.com/TED

41 thoughts on “How computers learn to recognize objects instantly | Joseph Redmon”
  1. Of course this is really amazing, but I do find it funny that at certain points the system thought a parrot was a pizza, a stop sign was a frisbee, and a tripod was skis :p

  2. A stop sign is a frisbee. A parrot is a person. It detects a backpack where there is no backpack. Pretty good, but I wonder how current state-of-the-art systems perform.

  3. I like how all these maniacs use misleading examples like "self-driving cars" instead of just stating the truth: we need this kind of technology for instant face recognition for mass surveillance. Good job, "Washington University graduates" from all over the world!

  4. Hi, can you send me the code for Matlab? I am really impressed and want to know more about this.
    If you could kindly send me the Matlab code, it would be highly appreciated. Thanks again.
    Best regards,
    Gul Rukh Khan

  6. I'm using YOLOv3 in a security camera application. YOLO is perfect for quickly sorting out, for example, cars and people on slower hardware. Then you can run other, more specific algorithms on those sorted objects, e.g. car -> license plate, person -> face (see the sketch after the last comment below). Yes, it makes mistakes, but this kind of computer vision is nothing simple. This video doesn't answer many questions; better to start from their paper: https://pjreddie.com/media/files/papers/YOLOv3.pdf

  7. Can someone help? When I train YOLO for custom detection on my own dataset, it keeps giving the error "STB Reason : Could not fopen" for the image path stored on the training line being executed.

  8. This still doesn't solve the unsupervised learning problem. If you had solved unsupervised learning, you wouldn't need to pretrain on any objects at all; the program could figure out on its own how to classify objects, with nothing to build on except the learning algorithm itself.

    For example, what if I wanted the network to recognize sounds or words at the same time as it detects people and animals? What if I wanted the program to treat a sound it hears as directly related to a portion of the screen, and to treat that portion as a separate object, repeating the process over and over with different versions of that part of the screen until it can tell background from actual object? The program would have to find some parts of the image to be more "object-like" than others, with each version of this audio-based guess narrowing down what the object of interest is and discarding what is background or belongs to another object.

    Say the program just cuts out a section of the image based on proximity and angle, learns whatever pattern is in it, and decides it's a dog, even though it might also contain part of a table or a bed; over many repetitions it could separate the real dog from the other objects. In other words, what if you started by training on junk with jumbled keywords and let the network, over time, learn to detect real objects and attach real names to them by guessing? That is how you could make unsupervised learning work. Think of how a child might believe "dog" is spelled "dol" and only over time learn that it's spelled "dog".

    If a system like YOLO incorporated something like this, it would have a real advantage: real-time tracking of what is at first detected as junk and classified as jumble, with no pretrained data at all, because the system trains (or teaches) itself over time. It would be like a human who gets everything wrong at first, past the baby stage, and then becomes more and more capable as it learns. I hope some really smart person out there wants to build an unsupervised version of YOLO that starts out as a complete idiot of a program and over time becomes very accurate at predicting which object is which, and what its name is.
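
One of the comments above describes a two-stage cascade: YOLO does the fast, coarse pass (cars, people), and a slower, more specific detector runs only inside those boxes. As a rough illustration of that idea (not the commenter's actual code), here is a Python sketch that takes YOLO detections as (label, box) pairs, such as those produced by the sketch earlier in this post, and runs OpenCV's bundled Haar face cascade only inside the "person" boxes:

# Rough sketch of a two-stage pipeline: YOLO boxes in, face boxes out.
# yolo_detections is assumed to be a list of (label, (x, y, w, h)) pairs.
import cv2

def faces_inside_people(image, yolo_detections):
    """Run a face detector only inside regions that YOLO labelled 'person'."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = []
    for label, (x, y, w, h) in yolo_detections:
        if label != "person":
            continue  # cars, dogs, etc. skip the face pass entirely
        crop = image[max(y, 0):y + h, max(x, 0):x + w]
        if crop.size == 0:
            continue
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
            # Translate the face box back into full-image coordinates.
            faces.append((x + fx, y + fy, fw, fh))
    return faces

Running the cheaper second stage only inside YOLO's boxes is what makes this practical on slow hardware: most of each frame never touches the face detector at all.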

Comments are closed.
