
Deep Learning to Solve Challenging Problems (Google I/O'19)



TensorFlow

This talk will highlight some of Google Brain’s research and computer systems with an eye toward how they can be used to solve challenging problems, and will relate them to the National Academy of Engineering’s Grand Engineering Challenges for the 21st Century, including the use of machine learning for healthcare, robotics, and engineering the tools of scientific discovery. Jeff Dean will also cover how machine learning is transforming many aspects of our computing hardware and software systems.

Watch more #io19 here: Inspiration at Google I/O 2019 Playlist → https://goo.gle/2LkBwCF
TensorFlow at Google I/O 2019 Playlist → http://bit.ly/2GW7ZJM
Google I/O 2019 All Sessions Playlist → https://goo.gle/io19allsessions
Learn more on the I/O Website → https://google.com/io

Subscribe to the TensorFlow Channel → https://bit.ly/TensorFlow1
Get started at → https://www.tensorflow.org/

Speaker: Jeff Dean




29 thoughts on “Deep Learning to Solve Challenging Problems (Google I/O'19)”
  1. Please explain how developing artificial intelligence solutions for subsurface data analysis in oil and gas exploration and production is 'socially beneficial'.

  2. Excellent talk. 

    This is a great example of how a true expert talks about innovation and deep learning in simple but accurate words, without overhyping or bombarding the audience with buzzwords.

  3. Regarding automobiles, we built the auto interface for humans. We leveraged our built-in sensors (eyes and ears) and designed a bipartite system: vehicle and road. But why are we now trying to shoehorn AI into that human-centric system? If we were designing a system from scratch for AI and machines, would we build it the same way? Would it not make sense to build telemetry into the road, making the road more intelligent and letting it direct vehicles more directly? Do we need vehicles that can go where there are no roads? This would reduce the cost of complex and hackable vehicle-based systems.

  4. Regarding AutoML, over time there would seem to be an ever-increasing corpus of models. Humans, being limited creatures that tend to have the same problems, might not actually need a ‘fresh’ model trained every time they perceive a problem that needs solving; that problem has probably already been solved. Rather, it might be faster (and much less energy intensive) to simply archive these models with a set of useful metadata so that a Google-style search can find the model that solves the problem (see the sketch after this comment thread). Metadata selection and assignment to individual models could be automated after the models are designed by AutoML; the metadata can be considered the ‘label’ for the model. This metadata could also be used to ‘explain’ to a user ‘why’ the machine selected a particular model/algorithm. In addition, the machine would be able to engage the user in a ‘conversation’ as it ‘asks for metadata’. The user would perceive this discourse as questions about their dataset or problem, while the machine builds an information tree to sift and sort through its vast library of models. This also addresses the human problem where the user often starts by choosing the wrong approach, or just as often uses the ‘cooked spaghetti’ approach to model selection: throw them all against the wall of the problem and see what sticks.

  5. I was all happy until the very end, when I heard them talk about bias. I'm sorry, but too many legitimate channels have been brought down for supposed "bias". Until you can show with 99.9999999999999% certainty that a computer is unbiased, please just stick to image recognition, because, Google, so far you have not been good regarding bias.

  6. With reference to scientific learning: when you have a lot of data, but no data at the particular point in parameter hyperspace that you are interested in, what do you do? Extrapolating the model will result in bias and loss of accuracy. Experiments on real-world systems seem unavoidable, and each experimental data point is often very expensive. The interaction between machine learning modeling and the planning and execution of experiments seems to be a new and very interesting research area.

  7. Great talk. One thing I hear being said too much, though, is that humans don't get to pool their experiences whereas robots do. I'm sure the efficiency and integrity of robots sharing knowledge is much higher than with humans, but shared knowledge amongst humans is the basis of civilization. There would be no Google if every person ever born had to learn from scratch. Rant over.

  8. I train my models on GpuClub com and don't worry about maintaining these huge machines. No investment is the best investment…

  9. machine learning is taking machine learning experts' jobs. 🙂
    Superb, thank you for uploading.
    Thank you very much for the talk. The idea of ML automation sounds great.

  10. Very informative and insightful talk. Thank you Google for sharing it with us.
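
The model-archive idea in comment 4 can be made concrete with a small sketch. Everything below is a hypothetical illustration under assumed names (ModelRecord, ModelRegistry, the tag scheme); it is not an existing TensorFlow or Google API. The point is simply that trained models are stored with metadata ‘labels’ and a search ranks archived models by tag overlap, so an existing model can be reused instead of training a fresh one.

```python
# Hypothetical sketch (not a real library): archive trained models with
# metadata tags and look them up instead of training a new model each time.
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    """A trained model archived together with descriptive metadata."""
    name: str
    path: str                                # where the saved model lives
    tags: set = field(default_factory=set)   # metadata "labels", e.g. {"vision"}


class ModelRegistry:
    """A searchable archive of previously trained models."""

    def __init__(self):
        self._records = []

    def add(self, record: ModelRecord) -> None:
        self._records.append(record)

    def search(self, query_tags) -> list:
        """Return archived models ranked by how many query tags they match."""
        query = set(query_tags)
        scored = [(len(query & r.tags), r) for r in self._records]
        return [r for score, r in sorted(scored, key=lambda s: -s[0]) if score > 0]


if __name__ == "__main__":
    registry = ModelRegistry()
    registry.add(ModelRecord("retina-screening", "models/retina",
                             {"vision", "medical", "classification"}))
    registry.add(ModelRecord("street-scene-parsing", "models/street",
                             {"vision", "segmentation"}))

    # Describe the new problem as metadata and reuse the best match.
    for record in registry.search({"vision", "medical"}):
        print(record.name, "->", record.path)
```

As the comment suggests, the metadata assignment itself could be automated after AutoML produces each model, and the ranked matches double as a rough ‘explanation’ of why a particular archived model was suggested for the user's problem.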

Comments are closed.
