deeplizard
Welcome back to this series on reinforcement learning! In this episode, we’ll get started building our deep Q-network so it can perform in the cart and pole environment. We’ll be making use of everything we’ve learned about deep Q-networks so far, including experience replay, fixed Q-targets, and epsilon-greedy strategies, to develop our code.
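As a reference point for the episode, here is a minimal sketch of the epsilon-greedy strategy with an exponentially decaying exploration rate, one of the components mentioned above. The class name and parameters follow common DQN tutorial conventions and are assumptions here, not necessarily the exact code from the video:

```python
import math

class EpsilonGreedyStrategy:
    """Exponentially decaying exploration rate for epsilon-greedy action selection."""
    def __init__(self, start, end, decay):
        self.start = start  # initial exploration rate, e.g. 1.0
        self.end = end      # minimum exploration rate, e.g. 0.01
        self.decay = decay  # decay rate per time step

    def get_exploration_rate(self, current_step):
        # Smoothly decays from `start` toward `end` as current_step grows
        return self.end + (self.start - self.end) * \
            math.exp(-1.0 * current_step * self.decay)
```

With `EpsilonGreedyStrategy(1.0, 0.01, 0.001)`, the rate starts at 1.0 (pure exploration) and decays smoothly toward 0.01, so the agent gradually shifts from exploring to exploiting.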
💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥
👋 Hey, we’re Chris and Mandy, the creators of deeplizard!
👀 CHECK OUT OUR VLOG:
🔗 https://www.youtube.com/channel/UC9cBIteC3u7Ee6bzeOcl_Og
👉 Check out the blog post and other resources for this video:
🔗 https://deeplizard.com/learn/video/PyQNfsGUnQA
💻 DOWNLOAD ACCESS TO CODE FILES
🤖 Available for members of the deeplizard hivemind:
🔗 https://www.patreon.com/posts/27743395
🧠 Support collective intelligence, join the deeplizard hivemind:
🔗 https://deeplizard.com/hivemind
🤜 Support collective intelligence, create a quiz question for this video:
🔗 https://deeplizard.com/create-quiz-question
🚀 Boost collective intelligence by sharing this video on social media!
❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
Prash
👀 Follow deeplizard:
Our vlog: https://www.youtube.com/channel/UC9cBIteC3u7Ee6bzeOcl_Og
Twitter: https://twitter.com/deeplizard
Facebook: https://www.facebook.com/Deeplizard-145413762948316
Patreon: https://www.patreon.com/deeplizard
YouTube: https://www.youtube.com/deeplizard
Instagram: https://www.instagram.com/deeplizard/
🎓 Deep Learning with deeplizard:
Fundamental Concepts – https://deeplizard.com/learn/video/gZmobeGL0Yg
Beginner Code – https://deeplizard.com/learn/video/RznKVRTFkBY
Advanced Code – https://deeplizard.com/learn/video/v5cngxo4mIg
Advanced Deep RL – https://deeplizard.com/learn/video/nyjbcRQ-uQ8
🎓 Other Courses:
Data Science – https://deeplizard.com/learn/video/d11chG7Z-xk
Trading – https://deeplizard.com/learn/video/ZpfCK_uHL9Y
🛒 Check out products deeplizard recommends on Amazon:
🔗 https://www.amazon.com/shop/deeplizard
📕 Get a FREE 30-day Audible trial and 2 FREE audio books using deeplizard’s link:
🔗 https://amzn.to/2yoqWRn
🎵 deeplizard uses music by Kevin MacLeod
🔗 https://www.youtube.com/channel/UCSZXFhRIx6b0dFX3xS8L1yQ
🔗 http://incompetech.com/
❤️ Please use the knowledge gained from deeplizard content for good, not evil.
Glad to see a new episode of this DQN series
😍😍😍😍
Awesome videos guys, but I think you need to redo your branding and design.
Is the exploration and exploitation switch supposed to be a non-smooth transition?
My brain is just a DQN trained to write code to make DQNs that play cart pole.
Great video. It really helps me implement DQN. I am very excited for the next video
yay my favorite playlist <3
Thank you for the content! I have a question. I'm trying to build my own deep learning framework, and as far as I know, the fast.ai deep learning course has a Part 2 where they build everything from scratch. Will Part 2 of the fast.ai course help me in creating my framework?
When is the next video coming? I am excited for it.
Hello, and thank you very much for that very interesting video. I have a question about the init function in the DQN class. I don't see the point of taking the color into account (with this factor of 3): can't we use grayscale versions of the screenshots? Thank you in advance for your feedback!
I don’t have much experience in RL.
I thought this video would require some prior knowledge, and I don’t remember much from before, but I’m happy to try and see how far I can get.
What I learned:
1. The DQN structure is quite simple, especially after just finishing the CNN series.
2. Memory has a capacity. If you keep pushing experiences into memory, it may overwrite the old ones.
Question:
1. Why always track the push count, if the old experiences will be replaced anyway? Maybe a later video will tell us.
Thank you for your great work. I personally think RL will be more important than the other areas.
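On the push-count question above: in a typical DQN replay memory (this sketch is modeled on common implementations, not necessarily the video's exact code), the counter is what makes the circular overwrite work — `push_count % capacity` gives the index of the oldest slot to replace:

```python
import random

class ReplayMemory:
    """Fixed-capacity buffer; once full, new experiences overwrite the oldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.push_count = 0  # total number of pushes ever made

    def push(self, experience):
        if len(self.memory) < self.capacity:
            self.memory.append(experience)
        else:
            # push_count % capacity cycles through slots 0..capacity-1,
            # always landing on the oldest experience
            self.memory[self.push_count % self.capacity] = experience
        self.push_count += 1

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)
```

For example, with capacity 3, pushing "a", "b", "c", "d" leaves the buffer holding ["d", "b", "c"]: the fourth push wrapped around and replaced the oldest entry.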
Again thanks for the excellent video series. Will you be able to add Deep Q Learning with Keras in either this RL series or in the Keras tutorial series ?
Should strategy be self.strategy?
Why don't you use grayscale images with 1 channel? Why don't you use convolutional layers in your network?
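Regarding the grayscale questions above: collapsing RGB to 1 channel is a common preprocessing step that would reduce the input size by a factor of 3. A conversion typically uses the standard ITU-R BT.601 luminance weights; this helper is an illustration, not the course's code:

```python
def rgb_to_grayscale(pixel):
    """Convert one (R, G, B) pixel to a single luminance value,
    using the ITU-R BT.601 weights common in image processing."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def frame_to_grayscale(frame):
    """frame: height x width list of (R, G, B) tuples -> height x width floats."""
    return [[rgb_to_grayscale(p) for p in row] for row in frame]
```

Whether grayscale (or convolutional layers) helps depends on the environment; for cart and pole the relevant information is positional, so color likely carries little signal.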
10:33:
Shouldn't it be "…that we don't have 50 experiences yet" instead of 20, since 20 are already in our replay memory (memory [])?
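The check the comment above refers to is presumably something like the following (a sketch under the assumption that the replay memory exposes how many experiences it currently holds): sampling a training batch is only possible once the memory contains at least `batch_size` experiences.

```python
def can_provide_sample(memory, batch_size):
    """True only once the replay memory holds at least batch_size
    experiences, so that a full training batch can be sampled."""
    return len(memory) >= batch_size
```

So with 20 experiences stored and a batch size of 50, no sample can be provided yet.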
This is totally nuts! Loved it! 😂😂😂
Ugh, dumb question: under "import libraries", the very first line of code does not work for me; the Python console says invalid syntax. In my code the % sign does not highlight. If I comment out this line, the rest of the library imports and the display setup work.
Why is is_ipython False all the time? I can't plot the graph. Please help.
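On the two comments above: lines starting with % (such as %matplotlib inline) are IPython "magic" commands, which are a SyntaxError in a plain Python script — they only work in Jupyter/IPython. Likewise, is_ipython is legitimately False outside a notebook. One stdlib-only way to sketch the detection (an illustration, not necessarily how the video's code does it):

```python
import sys

def running_in_ipython():
    # IPython registers itself in sys.modules when it hosts the session;
    # in a plain `python script.py` run it is absent, so this is False there.
    return 'IPython' in sys.modules
```

The practical fix for the SyntaxError is to delete or comment out the %-prefixed line when running as a plain script, and do any plotting setup with regular matplotlib calls instead.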
15:04 I can't find the video mentioned, so I will just ask:
with torch.no_grad() just disables the learning of the NN, right?
Greetings from Germany.
Keep it UP
In the Agent class select_action method, I am pretty sure it should be self.strategy instead of just strategy (14:12).
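The two comments pointing out the strategy vs. self.strategy issue are likely right: inside a method, an instance attribute must be accessed through self, since a bare `strategy` would be an undefined name there. A minimal sketch of a corrected select_action (the Agent's shape here is an assumption based on the episode's description, and the exploit branch is simplified — a real DQN would take the argmax of the policy network's Q-values under torch.no_grad()):

```python
import random

class Agent:
    def __init__(self, strategy, num_actions):
        self.current_step = 0
        self.strategy = strategy      # e.g. an epsilon-greedy strategy object
        self.num_actions = num_actions

    def select_action(self, state, policy_net):
        # Must be self.strategy, not strategy: the attribute lives on the
        # instance, set in __init__ above.
        rate = self.strategy.get_exploration_rate(self.current_step)
        self.current_step += 1
        if rate > random.random():
            return random.randrange(self.num_actions)  # explore: random action
        return policy_net(state)                       # exploit: greedy action
```

With an exploration rate of 0 the agent always exploits; as the rate approaches 1 it picks random actions almost every step.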