sentdex
Support via Patreon: https://www.patreon.com/sentdex
Support via Stream: https://twitch.streamlabs.com/sentdex
General AI information: https://psyber.io/
24/7 (ish) live stream: https://www.twitch.tv/sentdex
Text tutorials and sample code: https://pythonprogramming.net/game-frames-open-cv-python-plays-gta-v/
Project Github: https://github.com/sentdex/pygta5
https://twitter.com/sentdex
https://www.facebook.com/pythonprogramming.net/
https://plus.google.com/+sentdex
Finally got a chance to try your code. I noticed that using the Adam optimizer instead of momentum makes the training quite a bit faster; in my tests, Adam reached better accuracy about 9 times faster. Also, did you normalize the images? E.g. subtract the mean image and divide each pixel value by the standard deviation?
How about giving one of the models some audio input? Comparing GTA with sound and without, it is more difficult even for humans to drive without audio (just test it if you don't trust me). The AI could maybe "hear" its crashes and reverse.
Sorry if this was asked before, and for my bad English. Keep up the great work!
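Roughly what I mean, as a minimal sketch (the tiny stand-in network and the learning rate are placeholders, not the project's real AlexNet):

```python
import numpy as np
import tflearn
from tflearn.layers.core import input_data, fully_connected
from tflearn.layers.estimator import regression

net = input_data(shape=[None, 60, 80, 1])
net = fully_connected(net, 3, activation='softmax')
net = regression(net, optimizer='adam',            # was 'momentum'
                 learning_rate=1e-3,
                 loss='categorical_crossentropy')
model = tflearn.DNN(net)

def normalize(frames):
    """Subtract the mean image and divide each pixel by its std."""
    frames = np.asarray(frames, dtype=np.float32)
    mean_image = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-8                # avoid divide-by-zero
    return (frames - mean_image) / std
```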
Hey, man! You can simply double your dataset just by flipping it across the vertical axis (mirror each frame and swap the left/right labels).
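A rough sketch of that flip, assuming grayscale frames and one-hot [A, W, D] labels like the project collects:

```python
import numpy as np

def mirror_sample(frame, label):
    """Mirror a frame across the vertical axis and swap left/right labels."""
    flipped = np.fliplr(frame)     # mirror left <-> right
    a, w, d = label                # one-hot [A, W, D] assumed
    return flipped, [d, w, a]      # a left turn becomes a right turn
```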
How can I install all the dependency packages in one go, for Python 2.6, 2.7, 3.4, and 3.5.2, from the Windows CLI?
Cordially, Genti
remove the map from the game.
Take your titan X apart and clean the thermal paste from the heatsink. Put some new stuff on and seal it back up. Worked when I was getting the same problem with my 390X.
dude, I think I have an idea for driving stability… are you familiar with how the maths for an IMU works? I just thought about it watching your videos… it doesn't really improve your line detection, but it would give you a more stable driving experience
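If it helps, here is a minimal sketch of the complementary-filter idea behind IMU maths, applied to smoothing the steering output (the alpha value is an untuned assumption):

```python
class SteeringFilter:
    """Blend the model's raw steering with the previous smoothed value
    so single-frame spikes don't jerk the wheel (complementary filter)."""

    def __init__(self, alpha=0.85):
        self.alpha = alpha   # weight on the previous smoothed estimate
        self.value = 0.0

    def update(self, raw_steer):
        self.value = self.alpha * self.value + (1.0 - self.alpha) * raw_steer
        return self.value
```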
I agree with Harrison here https://www.youtube.com/watch?v=aYMUYkk92NY&t=20m20 on cloud computing. Besides the GPU difference with the K80 (in the Amazon/AWS or Google/GCP cloud), there is also a performance cost with the virtual machines that run in HVM mode. I am pretty sure they're using KVM or QEMU, and with PCI & GPU passthrough you get the feeling that you're all set, but they're far slower than building your own deep-learning box.
Also, from a cost perspective, you'd rather just stay away from the cloud, as the bills keep adding up. For $1000–3000 you can build a good-to-great deep-learning box. That's what I'm doing right now.
Sadly, your AI has progressed too far in its evolution, and is now escaping out into the internet to take over the world. My friend, we must make the gnarliest virus of all time to take him down.
You need to take the ramp buggy out of its learning and exploring algorithm; it's changing the rules. He's going to get confused.
You should add the performance taken as a weight (if you already have it, make it a bit stronger), and make a program that allows anyone to play GTA and, at the press of a macro, go into self-driving-car mode.
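A minimal sketch of such a macro toggle, assuming a key_check() helper like the one in the project's getkeys.py that returns the currently pressed keys:

```python
import time
from getkeys import key_check   # project helper (assumed available)

autopilot = False
while True:
    keys = key_check()
    if 'T' in keys:                  # macro key: flip modes
        autopilot = not autopilot
        print('autopilot on' if autopilot else 'autopilot off')
        time.sleep(0.5)              # crude debounce: one press = one toggle
    if autopilot:
        pass                         # model predicts and presses keys here
    # otherwise the human keeps driving; we only watch for the macro key
```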
What mod/hack (or whatever it is) do you use to spawn vehicles, change wanted level, etc?
@sentdex Try giving it a destination to go to, with points for speed and for actually arriving. Spawn it at different spots to let it train in different areas. Let the script run and keep the most successful models.
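A rough sketch of that scoring; the weights and the arrival radius are assumptions:

```python
def score_episode(avg_speed, final_pos, destination, arrival_radius=10.0):
    """Points for speed, plus a big bonus for actually arriving."""
    dx = final_pos[0] - destination[0]
    dy = final_pos[1] - destination[1]
    arrived = (dx * dx + dy * dy) ** 0.5 <= arrival_radius
    return avg_speed + (1000.0 if arrived else 0.0)
```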
I am using a Mac, though the joystick input is only available on Windows. (Try some form of joystick input if possible, especially considering throttle and steering.)
What are the alternatives using Python? Please guide. These are the ones I have found on PyPI (a pygame sketch follows the list):
1. https://pypi.python.org/pypi/pyobjc-framework-GameController/3.2.1
2. https://pypi.python.org/pypi/inputs
3. https://www.pygame.org/docs/ref/joystick.html
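A rough sketch using option 3 (pygame), which is cross-platform and so should work on a Mac; the axis numbering (0/1 for the left stick) varies by controller and is an assumption:

```python
import time
import pygame

pygame.init()
pygame.joystick.init()
js = pygame.joystick.Joystick(0)     # first connected controller
js.init()

while True:
    pygame.event.pump()              # let pygame refresh device state
    steer = js.get_axis(0)           # left stick X: -1.0 .. 1.0
    throttle = js.get_axis(1)        # left stick Y: -1.0 .. 1.0
    print('steer %.2f  throttle %.2f' % (steer, throttle))
    time.sleep(0.05)
```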
18:15 I got so confused when the Skype sound came up but I had no message 😀
Try a GTX 1080 Ti instead of the Titan Xp. You should get the STRIX model too, so it runs cool and silent.
Base the main loop or goal on finding the fractal mandala hidden in binary. The name is Cary. I know what you seek, young Jedi.
How about you translate the incentive to press a key, subtract the opposite, and make it an analogue input? For example, if the car wants to turn left 35%, right 10%, forward 40%, backward 5%, and no key 10%, it should have the accelerator at 88.8% and left at 77.7% (each direction's share of its opposing pair: 40/(40+5) ≈ 88.8%, 35/(35+10) ≈ 77.7%), making GTA think you are using a controller. (By 77.7% left, I mean the left analogue stick 77.7% of the way to the left.) This would give your program a whole new dimension, and make it very rarely not move.
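Something like this for the mapping (a rough sketch; feeding the result into a virtual controller, e.g. via vJoy, is left out):

```python
def to_analogue(p_left, p_right, p_forward, p_backward, eps=1e-8):
    """Each direction's share of its opposing pair becomes an axis value."""
    throttle = p_forward / (p_forward + p_backward + eps)  # 0.40/0.45 ~ 0.888
    steer_left = p_left / (p_left + p_right + eps)         # 0.35/0.45 ~ 0.777
    return steer_left, throttle

print(to_analogue(0.35, 0.10, 0.40, 0.05))
```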
Or, don't pay for overpriced GPUs; go try AMD, with better compute performance per dollar.
I just noticed it and thought I'd ask: why does the AI have the option of pressing only A or D (left/right)? Under no real circumstances does any driver purely turn the steering to the left or right without throttling.
Hi, your tutorial is very cool. Could you teach Android development with Python? I need it very much, and I'm only working with Python. Thank you very much.
lmao, why not a self-playing game in Star Wars Battlefront 2?
you could try using the resolution in a logarithmic proportion across the screen (like humans, who see details only in front and less in the periphery); that might make it better able to see stuff in the distance
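A rough sketch of that foveated idea with OpenCV's log-polar remap; the centre point and output size are assumptions to tune:

```python
import cv2
import numpy as np

frame = np.zeros((270, 480), dtype=np.uint8)   # stand-in for a game frame
h, w = frame.shape[:2]
center = (w // 2, h // 3)                      # roughly where the road vanishes
max_radius = max(h, w) / 2.0

# Log-polar remap: pixels near `center` get magnified, the periphery shrinks.
foveated = cv2.warpPolar(frame, (w, h), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
```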
Does it keyboard cat?
I just started a pledge on Patreon. This is amazing stuff. Please continue researching this. It's so awesome.
I just recently bought a GTX 750…
Huge thanks for showing me that dead graphics card not doing machine learning anymore…
Yes, the oven tip is legit; I've done this on motherboards before, though I don't remember the temps/times.
Why are you not using pretrained nets instead of training from scratch? ResNet-152 has 152 layers, and its pretrained weights are available.
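E.g. a rough transfer-learning sketch, using Keras here rather than the project's TFLearn (ResNet50 because that's the variant bundled with Keras; the 3-class left/straight/right head is my assumption):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights='imagenet', include_top=False,
                input_shape=(224, 224, 3), pooling='avg')
base.trainable = False                      # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.Dense(3, activation='softmax'),  # A / W / D
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```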
I have a question for you, my friend. I love to go for a law-abiding drive, so I need speed control. But as we balance the training data, we lose track of how fast we're going. The only way I can think of is to get the speed information from the gauge displayed on the frames. I guess if we want AlexNet to be able to detect the speed too, we'd need huge training data, I'm talking about millions of samples, so that it learns when to reduce speed, especially on turns, by scanning the part of the frame that displays the speed gauge.
Or is there a way to detect the speed using image processing and feed it to the CNN along with the input frames, so that we won't need huge training data? I mean, is it possible to input multiple kinds of data, not necessarily of the same sort, like images and scalar values (speeds), into one deep model? Or do I have to train two different models? In that case, I guess the algorithm won't learn why we reduce speed on turns but keep going forward without reduction elsewhere, because the image data are separated.
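Something like this is what I'm imagining for the multi-input case (a rough sketch with made-up sizes, using the Keras functional API rather than the project's TFLearn; reading the speed off the gauge, by cropping plus OCR or template matching, is assumed to happen elsewhere):

```python
from tensorflow.keras import layers, models

img_in = layers.Input(shape=(60, 80, 1), name='frame')
x = layers.Conv2D(16, 3, activation='relu')(img_in)
x = layers.GlobalAveragePooling2D()(x)

speed_in = layers.Input(shape=(1,), name='speed')    # scalar from the gauge

merged = layers.Concatenate()([x, speed_in])         # frames + scalar, one model
out = layers.Dense(3, activation='softmax')(merged)  # A / W / D

model = models.Model([img_in, speed_in], out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```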
even titans fall
Why not retrain the net off raw inputs read from Zeth's inputs? https://raw.githubusercontent.com/zeth/inputs/master/examples/jstest.py (Here's jstest.py: example code to read a gamepad's buttons/sticks and print detailed info.) inputs will read gamepad input raw from the device. You would need nodes for the left analogue X axis (left is negative, right is positive; the Y axis is unused in GTA), the accelerator (Cross/B), and the brake (Square/A). Simulate them with vJoy as you've tested. Such a method only needs 6 nodes, not 8, and would let your net understand the clear difference between steering and go/stop/reverse. Axes need looser weights (a larger range); button presses need heavier ones (a button being on or off is a binary choice).
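A rough sketch of reading that raw state with Zeth's inputs package; the event codes ('ABS_X', 'BTN_SOUTH', 'BTN_WEST') follow the evdev naming the library uses, and the button mapping is an assumption for your pad:

```python
from inputs import get_gamepad

state = {'steer': 0, 'accel': 0, 'brake': 0}

while True:
    for event in get_gamepad():          # blocks until events arrive
        if event.code == 'ABS_X':        # left stick X: negative = left
            state['steer'] = event.state
        elif event.code == 'BTN_SOUTH':  # Cross: accelerator
            state['accel'] = event.state
        elif event.code == 'BTN_WEST':   # Square: brake
            state['brake'] = event.state
    print(state)
```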