
AI Agent Changes 0.01-0.03 – Python plays GTA p.16



sentdex

Support via Patreon: https://www.patreon.com/sentdex
Support via Stream: https://twitch.streamlabs.com/sentdex

General AI information: https://psyber.io/

24/7 (ish) live stream: https://www.twitch.tv/sentdex

Text tutorials and sample code: https://pythonprogramming.net/game-frames-open-cv-python-plays-gta-v/

Project Github: https://github.com/sentdex/pygta5

https://twitter.com/sentdex
https://www.facebook.com/pythonprogramming.net/
https://plus.google.com/+sentdex


31 thoughts on “AI Agent Changes 0.01-0.03 – Python plays GTA p.16”
  1. Finally got a chance to try your code. I noticed that using the Adam optimizer instead of momentum makes the training quite a bit faster. In my tests, Adam reached better accuracy about 9 times faster. Also, did you normalize the images? E.g. subtract the mean image and divide each pixel value by the standard deviation?
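
    A minimal sketch of that normalization with NumPy; the file name and array shape are hypothetical stand-ins for however the captured frames are stored:

    ```python
    import numpy as np

    # Hypothetical file of captured frames, shape (N, height, width, channels)
    frames = np.load('training_frames.npy').astype(np.float32)

    # Per-channel mean/std computed over the whole training set
    mean = frames.mean(axis=(0, 1, 2))
    std = frames.std(axis=(0, 1, 2)) + 1e-7   # avoid division by zero

    frames = (frames - mean) / std
    ```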

  2. How about giving one of the models some audio input? Comparing GTA with sound and without, it is more difficult even for humans to drive without audio (just test it if you don't trust me). The AI could maybe "hear" its crashes and reverse.

    Sorry if this was asked before, and for my bad English. Keep up the great work!

  3. Take your Titan X apart and clean the thermal paste off the heatsink. Put some new stuff on and seal it back up. That worked when I was getting the same problem with my 390X.

  4. Dude, I think I have an idea for driving stability… are you familiar with how the maths for an IMU works? I just thought about it while watching your videos… it doesn't really improve your line detection, but it will give you a more stable driving experience.

  5. I agree with Harrison here https://www.youtube.com/watch?v=aYMUYkk92NY&t=20m20 on cloud computing. Besides the GPU difference with the K80 (in the Amazon / AWS or Google / GCP cloud), there is also a performance cost from the virtual machines that run in HVM mode. I am pretty sure they're using KVM or QEMU, and even with PCI & GPU passthrough it feels like you're all set, but they're far slower than building your own deep learning box.

    Also, from a cost perspective, you'd rather stay away from the cloud as the bills keep adding up. For $1000–3000 you can build a good to great deep learning box. That's what I'm doing right now.

  6. Sadly, your AI has progressed too far in its evolution, and is now escaping out into the internet to take over the world. My friend, we must make the gnarliest virus of all time to take him down.

  7. You should add the performance as a weight (if you already do, make it a bit stronger) and make a program that allows anyone to play GTA and, at the press of a macro, switch into self-driving-car mode.
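
    A rough sketch of the macro toggle, assuming the third-party keyboard package and a hypothetical agent_step() function standing in for the model's drive loop:

    ```python
    import time
    import keyboard  # third-party package (pip install keyboard); an assumption, not part of the project

    self_driving = False

    def agent_step():
        """Hypothetical placeholder: grab a frame, run the model, press keys."""
        pass

    while True:
        if keyboard.is_pressed('f9'):        # F9 is an arbitrary macro choice
            self_driving = not self_driving
            print('self-driving mode:', self_driving)
            time.sleep(0.5)                  # crude debounce: one press, one toggle
        if self_driving:
            agent_step()
        time.sleep(0.01)
    ```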

  8. @sentdex Try giving it a destination to go to, with points for speed and for actually arriving. Spawn it at different spots to let it train in different areas. Let the script run and use the most successful models.
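
    One hypothetical way to phrase that as a reward function; the signals (speed, arrived, crashed, distance) and all coefficients are placeholders, not anything from the project:

    ```python
    def reward(speed, arrived, crashed, dist_to_goal, prev_dist_to_goal):
        """Hypothetical shaping: pay for speed and for progress toward the
        destination, pay a lot for arriving, penalize crashes."""
        r = 0.01 * speed
        r += 0.1 * (prev_dist_to_goal - dist_to_goal)   # positive when getting closer
        if arrived:
            r += 100.0
        if crashed:
            r -= 50.0
        return r
    ```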

  9. How about you translate the incentive to press a key, subtract the opposite, and make it an analogue input – for example, if the car wants to turn left 35%, right 10%, forward 40%, backward 5%, and no key 10%, it should have the accelerator at 88.8% and left at 77.7% – making GTA think you are using a controller. (By 77.7% left, I mean the left analogue stick pushed 77.7% to the left. This would give your program a whole new dimension, and make it very rarely not move.)
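
    A sketch of that key-probability-to-analogue mapping, assuming the model outputs probabilities for left/right/forward/backward and that vJoy is driven through the pyvjoy package; the normalization and axis choices below are arbitrary, not the project's method:

    ```python
    import pyvjoy  # assumes the vJoy driver plus the pyvjoy package are installed

    VJOY_MAX = 0x8000                 # vJoy axes accept values 0x0000-0x8000
    joystick = pyvjoy.VJoyDevice(1)   # vJoy device id 1

    def probs_to_axes(p_left, p_right, p_forward, p_backward, eps=1e-6):
        """Subtract the opposing intent and normalize to [-1, 1]."""
        steer = (p_left - p_right) / (p_left + p_right + eps)
        throttle = (p_forward - p_backward) / (p_forward + p_backward + eps)
        return steer, throttle

    def send_to_vjoy(steer, throttle):
        # X axis = steering, Y axis = throttle; map [-1, 1] onto 0x0000-0x8000
        joystick.set_axis(pyvjoy.HID_USAGE_X, int((steer + 1) / 2 * VJOY_MAX))
        joystick.set_axis(pyvjoy.HID_USAGE_Y, int((throttle + 1) / 2 * VJOY_MAX))

    # With the probabilities from the comment: left 0.35, right 0.10, forward 0.40, backward 0.05
    steer, throttle = probs_to_axes(0.35, 0.10, 0.40, 0.05)
    send_to_vjoy(steer, throttle)
    ```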

  10. I just noticed it and thought I'd ask: why does the AI have the option of pressing just A or D (left/right) on their own? Under no real circumstances is any driver going to purely turn the steering left or right without throttling.

  11. You could try using resolution that scales logarithmically across the screen (like how humans see detail only in front and less in the periphery); that might make it better able to see things in the distance.
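
    A quick way to try that foveated idea with OpenCV's log-polar remap (cv2.logPolar in OpenCV 3.x; newer versions expose the same thing through cv2.warpPolar); the file name, centre, and scale values here are arbitrary:

    ```python
    import cv2

    frame = cv2.imread('frame.png')   # any captured game frame
    h, w = frame.shape[:2]

    # Remap so detail is densest at the centre and falls off logarithmically
    # toward the edges, roughly like human foveal vision.
    center = (w // 2, h // 2)
    M = 40   # magnitude scale to tune
    foveated = cv2.logPolar(frame, center, M,
                            cv2.INTER_LINEAR + cv2.WARP_FILL_OUTLIERS)

    cv2.imwrite('frame_logpolar.png', foveated)
    ```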

  12. I have a question for you, my friend. I love to go for a law-abiding drive, so I need speed control. But as we balance the training data, we lose track of how fast we are going. The only way I can think of is to get the speed from the gauge displayed on the frames. I guess if we want AlexNet to detect the speed too, we need a huge amount of training data, I'm talking millions of samples, so that it learns to reduce speed (especially on turns) by scanning the part of the frame that shows the speed gauge. Or is there a way to detect the speed with image processing and feed it to the CNN along with the input frames, so we wouldn't need such a huge training set? I mean, is it possible to feed a deep model multiple kinds of input that aren't necessarily the same sort, like images and scalar values (speeds)? Or do I have to train two different models, in which case I guess the algorithm won't learn why we reduce speed on turns but keep going forward without reduction, because the image data is separated.
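
    Mixed inputs like that are possible in most frameworks. A minimal sketch using the Keras functional API (the series itself uses TFLearn, so this is only to illustrate the idea): one branch convolves the frame, a second takes the scalar speed, and the two are concatenated before the output layer. Shapes and layer sizes are illustrative.

    ```python
    from tensorflow.keras import layers, Model

    n_outputs = 3               # however many key combinations the model predicts
    frame_shape = (60, 80, 1)   # illustrative; use whatever size the frames are

    # Image branch: a deliberately tiny stand-in for the series' AlexNet
    img_in = layers.Input(shape=frame_shape, name='frame')
    x = layers.Conv2D(32, 5, activation='relu')(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.Flatten()(x)

    # Scalar branch: the current speed, however it was obtained
    speed_in = layers.Input(shape=(1,), name='speed')

    # Merge both inputs before the classification head
    merged = layers.concatenate([x, speed_in])
    h = layers.Dense(128, activation='relu')(merged)
    out = layers.Dense(n_outputs, activation='softmax')(h)

    model = Model(inputs=[img_in, speed_in], outputs=out)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    # model.fit([frames, speeds], targets, ...)
    ```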

  13. Why not retrain the net off raw inputs read with zeth's inputs library? https://raw.githubusercontent.com/zeth/inputs/master/examples/jstest.py (here's jstest.py, example code that reads a gamepad's buttons/sticks and prints detailed info). Zeth's library reads gamepad inputs raw from the device. You would need nodes for the left analogue X axis (left is negative, right is positive; the Y axis is unused in GTA), the accelerator (cross/B) and the brake (square/A). Simulate them with vJoy as you've tested. Such a method only needs 6 nodes, not 8, and would let your net learn the clear difference between steering and go/stop/reverse. Axes need looser weights (a larger range); button presses need heavier ones (a button being on or off is a binary choice).
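
    For reference, reading those raw values with zeth's inputs package looks roughly like this; the ABS_*/BTN_* event codes vary by controller, so treat the ones below as assumptions:

    ```python
    from inputs import get_gamepad   # pip install inputs (zeth's library)

    # Event codes below are typical for an Xbox-style pad, not guaranteed.
    state = {'steer': 0, 'accelerate': 0, 'brake': 0}

    while True:
        for event in get_gamepad():              # blocks until events arrive
            if event.code == 'ABS_X':            # left stick x: negative = left
                state['steer'] = event.state
            elif event.code == 'BTN_SOUTH':      # cross / A as accelerator
                state['accelerate'] = event.state
            elif event.code == 'BTN_WEST':       # square / X as brake
                state['brake'] = event.state
        print(state)
    ```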

