
Understanding local minima with a Minecraft Example (not AI playing MineCraft)



Jeff Heaton

Local minima and maxima are important topics when training neural networks and other machine learning algorithms. This video uses Minecraft to demonstrate the quest for better local minima/maxima. For more videos, tutorials and books visit http://www.heatonresearch.com
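The idea the video demonstrates can be sketched outside Minecraft: plain gradient descent on a one-dimensional function with two minima settles in whichever basin it starts in. A minimal sketch (the function, starting point, and learning rate are my own illustration, not from the video):

```python
# Hypothetical illustration: gradient descent on a function with a
# shallow local minimum (near x ~= 1.13) and a deeper global minimum
# (near x ~= -1.30). Starting on the right, descent finds only the
# shallow one -- it never "sees" the better basin across the ridge.

def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

x = 2.0                     # start on the right-hand slope
for _ in range(1000):
    x -= 0.01 * grad(x)     # fixed learning rate of 0.01

# x now sits near the shallow local minimum, x ~= 1.13
```

Started from x = 2.0, the descent converges to the shallow minimum and never reaches the deeper one near x ≈ -1.30, the "mountain over there" it cannot see.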


25 thoughts on “Understanding local minima with a Minecraft Example (not AI playing MineCraft)”
  1. That's a good explanation. My simulation is at a local maximum too atm.
    I have done previous simulations that were more successful. I guess I have to start it over.

  2. Good idea and execution.

    Just a detail – the Minecraft world/map isn't infinite (after all, what could be infinite on a finite machine, right?), just huge (4,722,366,482,869,645 km² – source: minecraft.gamepedia.com/The_Overworld).

  3. 6:32 "But if we look over there, there's a mountain"
    Hahahahahaha I laughed so hard at that XD Not sure if it's just me but the way you said that, I found it hilarious.

    That was an excellent explanation and very clever yet clear/simple use of minecraft to explain the concept. Awesome video!

  4. My best experience working on algorithms that do not get trapped in local minima was building optimized phylogeny trees that minimize the weights on their branches. The tricky thing is that you cannot visualize a local minimum as a well on a 3D chart, because we work on graphs and not on real variables.

  5. I know virtually nothing of the language and concepts used in this field, so this may be a really stupid question. But would it be possible to build a machine learning algorithm with some kind of inter-generational parameter tracking for situations where the A.I. gets stuck at a local minimum for long periods, and to change the method accordingly, like increasing the random variation and mutation between generations to allow for a speedier discovery of a more efficient solution?

    Again, this may be a stupid question, or something that's common knowledge, but it's a thought that came to mind as I was watching this. Generally speaking you would want the program to stick with what works, but when it has not progressed or gotten any more efficient for many generations, you might want to force it to search more broadly for different solutions.
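The idea in this comment is a real technique (sometimes called hypermutation, or stagnation-triggered mutation, in evolutionary computation). A toy hill-climbing sketch, where the stagnation counter and the way the mutation width grows are entirely my own illustration:

```python
import random

# Hypothetical sketch: track how many generations have passed without
# improvement, and widen the random mutations the longer we stagnate,
# so a search stuck at a local optimum eventually jumps out of it.
def evolve(fitness, genome, generations=200, base_rate=0.05):
    best, best_fit = genome[:], fitness(genome)
    stagnant = 0
    for _ in range(generations):
        rate = base_rate * (1 + stagnant)        # more stagnation -> bigger jumps
        child = [g + random.gauss(0, rate) for g in best]
        if fitness(child) > best_fit:            # keep only improvements
            best, best_fit = child, fitness(child)
            stagnant = 0                         # progress: reset the counter
        else:
            stagnant += 1
    return best, best_fit

# Example: maximize -(x - 3)^2, whose single peak is at x = 3
best, fit = evolve(lambda g: -(g[0] - 3) ** 2, [0.0])
```

This is a single-parent hill climber rather than a full genetic algorithm, but it shows the core of the suggestion: the search behaves conservatively while it is improving and becomes more exploratory the longer it is stuck.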

  6. Give your AI some ray-tracing so that it can 'see' things ahead of it. When it reaches the little molehill it will then begin to scan around it for a better hill. Then when a ray hits that hill, it will instantly become dissatisfied with its current situation and will be VERY compelled to get to that mountain.

  7. to everyone who disliked, i thought it was a video about ai playing minecraft too, but if you watch the whole video at 1.25x speed, you won't be disappointed!

  8. What the heck??? I remember seeing this video years and years ago; I never commented on it or liked it or anything.
    How in the world did I find this same video yet again?
    I only watched it ONCE, then never went back to it, until I came across it just now on my own.

  9. The backpropagation everyone knows and loves is just gradient descent; I hate the mysticism that surrounds it. If you were to take each individual gradient to calculate the error with respect to each weight, a lot of repeated work would be done. Backpropagation is the formula that factors out the repeated work: it turns out you need only a single forward and backward pass to get the gradients for all your weights.
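The "factored-out repeated work" this comment describes can be made concrete on a tiny two-layer network: the error term `delta2` is computed once and then reused for every weight gradient. The network, its shapes, and the squared-error loss are my own illustration:

```python
import numpy as np

# Hypothetical two-layer network: 3 inputs -> 4 tanh hidden units -> 1 output.
np.random.seed(0)
x  = np.random.randn(3)          # input
W1 = np.random.randn(4, 3)       # hidden-layer weights
W2 = np.random.randn(1, 4)       # output-layer weights
y  = np.array([1.0])             # target

# Forward pass
h    = np.tanh(W1 @ x)
out  = W2 @ h
loss = 0.5 * np.sum((out - y) ** 2)

# Backward pass: delta2 = dL/d(out) is computed ONCE and shared.
delta2  = out - y
grad_W2 = np.outer(delta2, h)                 # reuses delta2
delta1  = (W2.T @ delta2) * (1 - h ** 2)      # reuses delta2 again
grad_W1 = np.outer(delta1, x)
```

Computing each weight's gradient independently would re-derive the same `delta2` (and, for the hidden layer, the same `delta1`) over and over; backpropagation is exactly this factoring.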

Comments are closed.
