For those who want to get right to the good stuff, the installation instructions are below. This is a modification of Google DeepMind’s code: instead of training a computer to play classic Atari games, you train it to play Super Mario Bros.
1. On a linux-like system, execute the following steps:
sudo apt-get install git
git clone https://github.com/ehrenbrav/DeepQNetwork.git
cd DeepQNetwork
sudo ./install_dependencies.sh
This will grab the necessary dependencies, along with the emulator to run Super Mario Bros. Note that if you want to run training and testing on your CUDA-equipped GPU, you’ll need to install the appropriate CUDA toolkit. If you’ve done this and the script doesn’t automatically install cutorch and cunn (it looks for the presence of the Nvidia CUDA compiler, nvcc, which might not be installed on Debian systems, for example), uncomment the lines at the end of the script and try again.
2. Obtain the ROM for the original Super Mario Bros. from somewhere on the internet, place it in the DeepQNetwork/roms directory, and call it “smb.zip”.
3. Run the following from DeepQNetwork/ to start the training (assuming you have a CUDA-friendly GPU – if not, run “train_cpu.sh” instead):
./train_gpu.sh smb
Watch Mario bounce around at random at first, and slowly start to master the level! This will run for a very long time – I suggest you let it get through at least 4 million training steps in order to really see some improvement. Progress is logged in the logs directory in case you want to compare results while tweaking the parameters. Once you run out of patience, hit Control-C to terminate it. The neural network is saved in the dqn/ directory with a *.t7 filename. Move this somewhere safe if you want to keep it, because it is overwritten each time you train.
If you want to watch the computer play using the network you trained, execute the following (again, use the “_cpu” script if you don’t have a compatible GPU):
./test_gpu.sh smb <full-path-to-the-saved-file>
To help get you started, I posted my network trained through about 4 million steps here (over 1GB). If you want to use it just to see how things work, or as a jumping-off point for your own training so you don’t need to start from scratch, uncomment the saved_network line in the train script and specify the complete path to this file.
Background
Earlier this year I happened upon Seth Bling’s fantastic video about training a computer to play Super Mario Bros. Seth used a genetic algorithm approach, described in this paper, which yielded impressive performance. It was fun to get this working and see Mario improve over the span of a couple of hours to the point where he was just tooling the level! The approach Seth used, however, was almost 15 years old and I started wondering what a more modern approach would look like…
Around the same time, I happened upon this video about the Google DeepMind project to train a computer to play a set of classic Atari games (see especially around 10:00). Now this was something! DeepMind used a process that they called a Deep Q Network to master a host of different Atari games. What was interesting about this approach was (i) they only used the pixels and the score as inputs to the network and (ii) they used the same algorithm and parameters on each game. This gets closer to the mythical Universal Algorithm: one single process that can be successfully applied to numerous tasks, almost like how the human mind can be trained to master varied challenges, using only our five senses as inputs. I read DeepMind’s paper in Nature, grabbed their code, and got it running on my machine.
What I especially liked about Google’s reinforcement learning approach is that the machine really doesn’t need to know anything about the task it’s learning – it could be flipping burgers or driving cars or flying rockets. All that really matters are the pixels it sees, the actions it can take at any moment in time, and the rewards/penalties it receives as a consequence of taking those actions. Seth’s code tells the machine: (i) which pixels in the game were nothing, (ii) which were enemies, and (iii) which parts Mario could stand on. He did this by reading off this information from the game’s memory. The number he programmed Mario to optimize was simply how far (and how fast) Mario moved to the right.
What was most exciting to me was unifying these two approaches: why not use a Deep Q Network to learn to play Super Mario Bros. – a more complex game than the arcade-style Atari titles? Seth called his approach MarI/O, so you might call this version Deep-Q MarI/O.
How It Works
The Google Atari project, like many other machine learning efforts in academia, used something called the Arcade Learning Environment (ALE) to train and test their models. This is an emulator that lets you programmatically run and control Atari games from the comfort of your own computer (or the cloud). So my first task was to create the same sort of thing for Super Mario Bros.
I started with the open source FCEUX emulator for the Nintendo Entertainment System, and ported the ALE code to use FCEUX instead. The code for that is here (though I didn’t implement parts of the ALE interface that I didn’t need). Please reuse this for your own machine learning experiments!
I then modified DeepMind’s code to run Super Mario Bros using this emulator. The first results were…rather disappointing. Super Mario Bros. is just way more complex than most Atari games, and often the rewards for an action come quite a bit of time after the action actually happens, which is a notorious problem for reinforcement learning.
Little by little, over the course of several months and probably a couple solid weeks of machine time, I modified the parameters to get something that worked. But even then, Mario would often just kind of stand around without moving.
To correct this, I decided to give him a reward for moving to the right and a penalty for moving to the left. I didn’t originally want to do this since it seems to detract from the purity of using just the score to calculate rewards, but I justified it by thinking about how humans actually play the game… When you first pick up Super Mario Bros., you basically just try to get as far as possible without dying. Score seemed to be almost an afterthought; something you care about only after mastering the game. So it definitely is not quite as pure as DeepMind’s approach, because it does get into some aspects unique to the game, but I think it better matches how a human would play. Once I made this change, things started to improve.
The second issue I noticed was that Mario seemed to have a blithe disregard for his own safety – he would happily run right into Goombas again and again…and again…and again. So I added a penalty for dying, also justifying this by saying that this is how a person would actually play.
With these modifications, I spent extensive time tuning the parameters to improve performance. I noticed a few things. First, there would often be an initial peak after a relatively small number of steps, and then scores would decline steadily with further training. I think this might be related to my increasing the size of one of the higher layers of the neural network (which handle the more abstract behaviors): the downside to this additional expressive power is overfitting, which could have contributed to that early spike and the steady decline with more training…
The second issue I noticed was that there seemed to be little connection between the network’s confidence in its actions and its actual score. I came across another recent paper on something called Double Q Learning, also courtesy of DeepMind, which substantially improved Google’s original results. Double Q Learning counters the tendency for Q networks to become overconfident in their predictions. I changed Google’s original Deep Q Network to a Double Deep Q Network, and that helped substantially.
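To make the change concrete, here is a rough sketch of the difference in how the learning target is computed. It’s written in Python/PyTorch for readability rather than in the repo’s Lua/Torch, and names like online_net and target_net are my own placeholders:

import torch

def dqn_target(reward, next_state, target_net, gamma=0.99):
    # Standard DQN: the target network both picks and evaluates the best next action,
    # which tends to overestimate values (terminal-state handling omitted for brevity).
    return reward + gamma * target_net(next_state).max(dim=1).values

def double_dqn_target(reward, next_state, online_net, target_net, gamma=0.99):
    # Double DQN: the online network picks the action, the target network evaluates it.
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)
    return reward + gamma * target_net(next_state).gather(1, best_action).squeeze(1)

The only difference is who chooses the next action; decoupling the choice from the evaluation is what damps the overconfidence.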
Finally, the biggest improvement of all came when I was just more patient. Even running on a powerful machine with an Nvidia 980 GPU, the emulator could only go so fast. As a consequence, one million training steps took about an entire day, with quite a bit of variance in the scores along the way.
Changes to Google’s Code
So here’s a summary of my changes from Google’s Atari project (a rough sketch of the resulting reward calculation follows the list):
- Changed the Deep Q Network into a Double Deep Q Network.
- Ported the Arcade Learning Environment to the Nintendo Entertainment System using the FCEUX emulator.
- Granted rewards equal to points obtained per step, plus a reward for moving to the right minus a penalty for moving to the left.
- Implemented a penalty for dying.
- Clipped the rewards to +/- 10,000 per step.
- Scaled the rewards to [-1,1].
- Doubled the action repeat amount (the number of frames an action is repeated) to 8. I did this because the pace of Super Mario Bros. can be quite a bit slower than many Atari games. Without this, Mario seems a bit hyperactive and in the beginning just jumps around like crazy without going anywhere.
- Increased the size of the third convolutional layer from 64 to 128. This was an attempt to deal with the increased complexity of Super Mario Bros. but definitely slows things down and I’m sure risks overfitting.
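As promised above, here is roughly how those reward rules combine for a single step. This is an illustrative Python sketch, not the actual Lua code; the variable names and the size of the death penalty are placeholders of my own:

def shape_reward(delta_score, delta_x, died,
                 move_bonus=1.0, death_penalty=5000.0, clip=10000.0):
    # Points earned this step, plus a bonus for moving right
    # (a negative delta_x penalizes moving left).
    r = delta_score + move_bonus * delta_x
    if died:
        r -= death_penalty              # penalty for dying
    r = max(-clip, min(clip, r))        # clip to +/- 10,000 per step
    return r / clip                     # scale into [-1, 1]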
How Deep Q Learning Works
Since I was starting from scratch, it took me quite a bit of time to understand reinforcement learning. I thought I’d try to explain it here in layman’s terms, since this sort of explanation would have really helped me when I embarked on this project.
The idea behind reinforcement learning is you allow the machine (or “agent” in the machine learning lingo) to experiment with playing different moves, while the environment provides rewards and penalties. The agent doesn’t know anything about the underlying task and starts out simply doing random stuff and observing the rewards and punishments it receives. Given enough time, so long as the rewards aren’t random, the agent will develop a strategy (called a “policy”) for maximizing its score. This is “unsupervised” learning because the human doesn’t provide any guidance at all beyond setting up the environment – the machine must figure things out on its own.
Our goal is a magical black box that, when you input the current state of the game, provides an estimate of the value of each possible move (up, down, right, left, jump, fire, and combinations of these). It then plays the best one. Games like Super Mario Bros. and many Atari games require you to look at more than a single frame to figure out what is going on – you need to be able to see the immediate history. Imagine looking at a single frame of Pong – you can see the ball and the paddles, but you have no idea which direction the ball is traveling! The same is true for Super Mario Bros., so you need to input the past couple of frames into our black box to allow it to provide a meaningful recommendation. Thus in our case, a “state” actually comprises the frame that is currently being shown on the screen plus three frames from the recent past – that is the input to the black box.
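To make the idea of a “state” concrete, here is a minimal Python/numpy sketch of stacking the most recent frames – illustrative only, not the preprocessing code in the repo:

from collections import deque
import numpy as np

HIST_LEN = 4        # the current frame plus three recent ones
frames = deque(maxlen=HIST_LEN)

def observe(frame):
    # frame: a preprocessed 84x84 grayscale image; returns the stacked state.
    frames.append(frame)
    while len(frames) < HIST_LEN:       # pad with copies at the start of an episode
        frames.append(frame)
    return np.stack(frames, axis=0)     # shape (4, 84, 84) -- the black box's input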
In mathematical terms, our black box is a function Q(s, a). Yes, a really, really big nasty complicated function, but a function nonetheless. It takes this input (the state s) and spits out an output (an estimate of the value of each possible move, each of these called an “action” a). This is a perfect job for a neural network, which at its heart is just a fancy way of approximating an arbitrary function.
In the past, reinforcement learning used a big matrix called a transition table instead of a neural network, which kept track of every possible transition for every possible state of the game. This works great for tasks like figuring out your way through a maze, but fails dramatically with more complex challenges. Given that the input to our model is four grayscale 84×84 images, there are a truly gigantic number of different states. There are 15 possible moves (including combinations of moves) in Super Mario Bros., meaning that each of these states would need to keep track of the transitions to all other possible states, which rapidly becomes utterly intractable. What we need is a way to generalize the states – a Goomba on the right of the screen is the same thing as a Goomba on the left, only in a different position and can be represented far more efficiently than specifying every single pixel on the screen.
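Just how gigantic? Each of the 84 × 84 × 4 input pixels can take one of 256 intensity values, so there are 256^(84·84·4) possible inputs – roughly 10^67,970, as the quick calculation below shows (this matches the correction discussed in the comments):

import math

pixels = 84 * 84 * 4                    # four stacked 84x84 grayscale frames
log10_states = pixels * math.log10(256) # log10 of 256 ** pixels
print(round(log10_states))              # ~67970: about 10^67,970 possible states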
Over the past few years, one of the hottest topics in computer science has been a type of neural network called a “deep convolutional network”. This type of network was inspired by the study of the animal visual cortex – our minds have a fantastic ability to generalize what our eyes see and recognize shapes and colors regardless of where they appear in our field of view. Convolutional neural networks work well for visual problems because they are robust against translation. In other words, they’ll recognize the same shape no matter where it appears on the screen.
The networks are called “deep” because they incorporate many layers of interconnected nodes. The lowest layers learn to recognize very low-level features of an image such as edges, and each successive layer learns more abstract things like shapes and, eventually, Goombas. Thus the higher up you go in the network, the greater the level of abstraction that layer learns to recognize.
Each layer consists of a bunch of individual “neurons” connected to adjacent layers, and each neural connection has a numeric weight associated with it – these weights are really the heart of the network and ultimately control its output. If enough of a given neuron’s inputs fire and their sum, multiplied by the respective weights, is high enough, that neuron itself fires and the neurons connected to it from above receive its signal. Together, this set of weights that defines our neural network is called theta (θ).
For this project, I used the following neural network, adapted from Google’s code (a rough code sketch follows the list):
- Input: Eight 84×84 grayscale frames;
- First Layer (convolutional): 32 8×8 kernels with a stride of 4 pixels, followed by a rectifier non-linearity;
- Second Layer (convolutional): 64 4×4 kernels with a stride of 2 pixels, followed by a rectifier non-linearity;
- Third Layer (convolutional): 128 3×3 kernels with a stride of 1 pixel, followed by a rectifier non-linearity;
- Fourth Layer (fully-connected): 512 rectifier units;
- Output: Values for the 15 possible moves.
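Here is what that architecture looks like as a rough PyTorch sketch. The real code is Lua/Torch, so this is my own approximation; I’ve left the number of stacked input frames as a parameter, since earlier in the post the state is described as four recent frames while the list above says eight:

import torch.nn as nn

def build_q_network(hist_len=4, n_actions=15):
    # Layer sizes mirror the list above; an 84x84 input shrinks to 20x20, 9x9, then 7x7.
    return nn.Sequential(
        nn.Conv2d(hist_len, 32, kernel_size=8, stride=4),  # first convolutional layer
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2),        # second convolutional layer
        nn.ReLU(),
        nn.Conv2d(64, 128, kernel_size=3, stride=1),       # third (enlarged) convolutional layer
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(128 * 7 * 7, 512),                       # fully-connected rectifier layer
        nn.ReLU(),
        nn.Linear(512, n_actions),                         # one value per possible move
    )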
Armed with this neural network to approximate the function I described above using weights θ (written Q(s, a; θ)), we can start attacking the problem. In classic “supervised” learning tasks, the human trains the machine by showing it examples, each with its own label. This is how, for example, the ImageNet competition for classifying images works: the learning algorithms are given a large set of images, each categorized by humans into one of 200 categories (baby bed, dragonfly, spatula, etc.). The machines are trained on these and then set loose on pictures whose categories are hidden to see how well they do. But the issue with supervised learning is that you need someone to actually do all the labeling. In our case, you would need to label the best move in countless different scenarios within the game, which isn’t going to fly… Thus we need to let the machine figure it out on its own.
So how would a human approach learning Super Mario Bros.? Well, you’d start by trying out a bunch of buttons. Once you figure out some moves in the game that kind of work, you start to play these. The risk here is that you learn a particular way of playing, and then you get stuck in a rut until you can convince yourself to start experimenting again. In computer science terms, this is known as the Explore/Exploit Dilemma: how much of your time should be spent exploring for new strategies versus exploiting those that you already know work. If you spend all of your time exploring, you don’t rack up the points. If you only exploit, you’re likely to get caught in a rut (in the language of computer science, a local optimum).
In Deep Q Learning, this is captured in a parameter called epsilon (ε): it’s simply the chance that, instead of playing the move recommended by the neural network, you play a random move instead. When the game starts, this is set to 100%. As time goes on and you accumulate experience, this number should slowly ramp down. How fast it ramps down is a key parameter in Deep Q Learning. Try tweaking this parameter and see what difference it makes.
Thus when it comes time for Mario to pick a move, he inputs the current state at time t (called s_t) into the Q-function and then selects the action at time t (a_t) that yields the highest value. In mathematical terms, the action Mario selects is:
a_t = argmax_a Q(s_t, a; θ)
There is also the probability ε that instead of playing this action, he’ll select a random action instead.
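In code, the whole selection step is only a few lines. The sketch below is plain Python for illustration; the default annealing values mirror the ep=1, ep_end=0.01, ep_endt=1000000 options visible in the train script, but the function names are my own:

import random

def epsilon(step, ep_start=1.0, ep_end=0.01, ep_endt=1_000_000):
    # Linearly anneal the exploration rate from ep_start down to ep_end over ep_endt steps.
    frac = min(step, ep_endt) / ep_endt
    return ep_start + frac * (ep_end - ep_start)

def pick_action(q_values, step, n_actions=15):
    # q_values: the network's value estimate for each of the 15 possible moves in state s_t.
    if random.random() < epsilon(step):
        return random.randrange(n_actions)                    # explore: random move
    return max(range(n_actions), key=lambda a: q_values[a])   # exploit: argmax_a Q(s_t, a; theta)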
So how does the learning actually work? The Q function starts out with randomly assigned weights, so somehow these weights need to be modified to allow Mario to learn. Each time a move is played, the computer stores several things: the state of the game before the move (s_t), the action played (a_t), the state of the game after the move (s_t+1), and the reward earned (r). Together these form an experience, and the collection of stored experiences is called the replay memory. As Mario plays, every few moves he picks a handful of these experiences at random from the replay memory and uses them to improve the accuracy of his network (this was one of Google’s innovations).
This is the central algorithm of Q-learning: given a state at time t (called s_t – and remember, a state includes the past few frames of gameplay), the value of each possible move is equal to the reward (r) we expect from that move plus the discounted value of the best possible move in the resulting state (s_t+1). The reason the value of the future state is discounted by a quantity called gamma (γ) is that future rewards should not be valued as highly as immediate rewards. If we set gamma to one, the machine treats a reward far in the future exactly like a reward right now. Setting gamma low turns your computer into a pleasure-seeking hedonist, maxing out immediate rewards without regard to potentially bigger rewards in the future. In mathematical terms, the value of making a move a at state s_t is expressed as:
r + γ max_a Q(s_t+1, a; θ)
This gives us a way of estimating the value of making a move a at state s_t. At each learning step, the machine compares this estimate with the rewards actually observed through experience. The weights of the neural network are then tweaked to bring the two into closer alignment using an algorithm called stochastic gradient descent.
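Putting the pieces together, a single learning step looks roughly like this. It’s a Python/PyTorch sketch of the idea rather than the repo’s Lua code; here the replay memory is just a list of (state, action, reward, next_state, done) tuples, and the optimizer’s learning rate is the parameter discussed next:

import random
import torch
import torch.nn.functional as F

def learning_step(online_net, target_net, optimizer, replay_memory,
                  batch_size=32, gamma=0.99):
    # Sample a random minibatch of stored experiences.
    batch = random.sample(replay_memory, batch_size)
    s, a, r, s_next, done = (
        torch.stack([torch.as_tensor(exp[i]) for exp in batch]) for i in range(5)
    )

    # Current estimate: Q(s_t, a_t; theta).
    q_sa = online_net(s.float()).gather(1, a.long().view(-1, 1)).squeeze(1)

    # Target: r + gamma * max_a Q(s_t+1, a; theta), with no future value after a terminal state.
    with torch.no_grad():
        q_next = target_net(s_next.float()).max(dim=1).values
        target = r.float() + gamma * q_next * (1.0 - done.float())

    # Nudge the weights to bring estimate and target closer (stochastic gradient descent).
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

For the Double-DQN variant, the target line is swapped for the double_dqn_target idea sketched earlier.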
The rate at which the network changes its weights is called the learning rate. At first, a high learning rate sounds fantastic for those of us who are impatient – faster learning is better, right? The problem is that the network could bounce around between different strategies without ever really settling down, kind of like a dieter chasing the latest fad without ever sticking with a regimen long enough to see results.
As you iterate through enough experiences, hopefully the network learns how to play. By observing enough of these state-action-result memories, we hope that the machine will start to link these together into a general strategy that can handle both short- and long-term goals. That’s the goal at least!
What all this stuff means in practice is that you have a large number of levers to play with. That’s both the fascinating and infuriating thing about machine learning, since you can endlessly tweak these parameters in hopes of getting better results. But each run of the network takes a long time – there’s just so much number crunching to do, even with the rate of play sped up as fast as possible. GPUs help a lot since they can do tons of linear algebra calculations in parallel, but even so there’s only so fast you can go with commonly available hardware. With this project, I tried running it in the cloud using AWS but, even with one of the more powerful instances available, my home machine was faster. And since you pay by the hour with AWS, it quickly became obvious that it would be cheaper just to buy more powerful gear for my home computer. If you are Google or Microsoft and have access to near-limitless processing power, I’m sure you can do better. But for the average amateur hacker, you just have to be patient!
Conclusion
So those are the basics of Q learning. Please tweak the code and parameters to see if you can improve on my results – I would be very interested to hear!
Could you set up a Docker image repository? That would be cool. I generally do not use apt-get-based Linux distros.
Looking at the shell script I don’t think you should have trouble adapting it to whatever package manager you’ve got.
Yeah – I’ve never used docker but have heard about it…if you do so, please let me know so I can try it out!
You could make it yourself.
A Dockerfile that pulls the repo and executes the command – that’s it.
You are awesome !
Super Mario Awesome. This is great!
4 * 84 * 84 * 256 –> I think you meant 256^84^84^4? Because each state is a unique combination of all the pixels?
Holy crap – I think you’re right…thanks for correcting my math – I’m fixing it in the post. Wow that’s a big number…
Awesome ❗ I added it to https://www.reddit.com/r/WatchMachinesLearn/
so maybe the way to improve it is to develop a superfast emulator first 🙂
That would be helpful. I stupidly compiled the emulator with the -O0 compiler option at first. When I switched to -O2, it went about 30% faster… But even so, I’d love to figure out ways of speeding up training…
Amazing article! One small nitpick would be that the article you’d linked to talking about Google’s X labs machine learning was using unsupervised learning (they didn’t give the machine labeled data), not supervised learning.
The article was easy to read and follow – looking forward to more!
Thanks for pointing this out – I was actually thinking about Google’s ImageNet Competition entry, rather than the cat-finding exercise. I fixed this in the post…
r + γmaxaS(st+1, a; θ)
Is this correct?
I think the expression should be
r + gamma max Q(st+1, a; theta)
Hey, Can you make a ./install_dependencies for OSX?
Would love to! Now if only I had the time…
In the meantime, you could just run a Debian virtual machine and get it installed on that?
So I am getting this when i try to run it.
root@debian:/home/jacobrblocker/DeepQNetwork# ./train_gpu.sh smh
root@debian:/home/jacobrblocker/DeepQNetwork# -framework neswrap -game_path /home/jacobrblocker/DeepQNetwork/roms/ -name DQN3_0_1_smh_FULL_Y -env smh -env_params useRGB=true -agent NeuralQLearner -agent_params lr=0.00025,ep=1,ep_end=0.01,ep_endt=1000000,discount=0.99,hist_len=4,learn_start=50000,replay_memory=1000000,update_freq=4,n_replay=4,network="convnet_nes",preproc="net_downsample_2x_full_y",state_dim=7056,minibatch_size=32,ncols=1,bufferSize=1024,valid_size=1000,target_q=30000,clip_delta=1,min_reward=-10000,max_reward=10000,rescale_r=1,nonEventProb=nil -steps 5000000 -eval_freq 50000 -eval_steps 10000 -prog_freq 50000 -save_freq 100000 -actrep 8 -gpu 0 -random_starts 0 -pool_frms type="max",size=1 -seed 1 -threads 8 -verbose 3 -gameOverPenalty 1
./train_gpu.sh: line 79: ../torch/bin/qlua: No such file or directory
Thanks in advance.
torch seems like a dependency. Did you install the dependencies?
Great article! Works on Ubuntu 16.04 and GTX980.
How do you make use of the TheBrainOfMario.t7 that you have posted with 4 million pre-learned steps?
If I uncomment the -network option and add the full path to the TheBrainofMario.t7 in the train_gpu.sh , Mario always starts from the beginning with step 0.
How do you configure the trainer to start up from the last saved step?
Thanks
It would be cool if there was a way to allow a controller input to override what the AI is doing at any point in time.
Yeah – actually that should be possible…you could just have an override switch that would take input from the controller instead of the neural network. It would still use the results to learn from…in fact, that would be a pretty cool way to teach it. It would of course be a sort of “supervised” learning rather than wholly unsupervised, but it would probably get results way faster with a helping hand from a human…
very interesting stuff. do you think it would be possible to run the q learning on a real nes with robotic gamepad-input and screen capture?
Definitely, but a real pain. The beauty of grabbing the raw pixels directly from the screen is that you don’t need to worry about all the real-world problems of picture quality and noise. I’d love to set up this sort of system, though, for classic non-console games like a pinball machine – you’d hook up some actuators to the paddles, get a good video camera to capture the gameplay and set up essentially the same process. Would be a royal pain to get it working though 😉
first of all, amazing and valuable work! I’ve added this to my personal learning ai selection.
now… please bear with me, barny (giggles).
I went through a very similar process to create the basiux (after watching Seth’s Mar I/O and all)… still I can’t come to agree with all this excitation over tweaking DM’s algo for mario. to me (a highly machine learning uninstructed guy) that’s just toying with a black box. and even if I knew better about algorithms, I don’t think I’d want to go that way…
being old **might** mean less time spent developing a technology in the general sum, but it doesn’t make it less meaningful, necessarily.
in practice, you’ve got a complex computer vision AI and kind of the same NEAT variation, aided by a few very similar parameters (essentially the same key ones, moving right and avoiding monsters, as I can see it right now), to make it effectively play Mario a bit.
I find it way more interesting and insightful to tweak Seth’s small, general algorithm. not to mention it can probably lead to greater overall humanity learning about super ai… 🙂
wouldn’t you agree?
Thank you for the great post.
I am getting to 50,000 steps, then it stops and says epsilon.
Is there anything I can do to correct it?
Hmmmm…can you give me a few more details or better yet the console output when it stops?
The number of steps is configured using the “steps” variable in the train_ script…maybe you have it set to 50,000?
Could you be running out of memory?
Sure, I should have enough memory. I also have an Nvidia 1080 graphics card and an i5 quad-core processor.
this is the output:
Steps: 40000 Time: 374.50384998322
Recorded experiences: 40000
Recorded experiences: 41000
Recorded experiences: 42000
Recorded experiences: 43000
Recorded experiences: 44000
Recorded experiences: 45000
Recorded experiences: 46000
Recorded experiences: 47000
Recorded experiences: 48000
Recorded experiences: 49000
Steps: 50000 Time: 375.38538217545
Steps: 50000
Epsilon: 1
This is the output from my script:
# LEARNING OPTIONS
lr=0.00025 # .00025 for Atari.
learn_start=50000 # Only start learning after this many steps. Should be bigger than bufferSize. Was set to 50k for Atari.
replay_memory=1000000 # Set small to speed up debugging. 1M is the Atari setting… Big memory object!
n_replay=4 # Minibatches to learn from each learning step.
nonEventProb=nil # Probability of selecting a non-reward-bearing experience.
clip_delta=1 # Limit the delta to +/- 1.
I think I see so I should raise the learn_start?
Thank you very much for the response.
So it looks like it’s crashing as soon as the learning actually starts. The first “learn_start” steps are solely to record a batch of examples for the network to learn from. Once it has these, it starts using them to train the network.
So a few questions:
1. I take it you ran the install_dependencies.sh script? Did you get any error messages from this?
2. I take it you’re running train_gpu.sh and that you have the right CUDA stuff installed? Does the train_cpu.sh script work? If so, that would help pinpoint the issue as having to do with your GPU configuration.
3. What are you seeing in the emulator? You should see Mario running around randomly. What happens when it crashes?
4. You can try to run Google’s original code for Atari:
https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner
Let me know how these things work and we can go from there…
thanks for the response:
1. I ran the install_dependencies.sh. First on Ubuntu 17 then I downgraded to 16 because I got an error. When I ran it on 16(which I am currently using) it worked successfully without any errors.
2. When I run that the train_gpu I get an error, it does allow me to run the train_cpu, that’s what I have been using.
3. I do see Mario jumping around and stuff (which is amazing!!) but once it gets to the 50,000 the CLI says Epsilon and mario freezes.
I am running this through a VM wondering if I should download a different distro?
Maybe try on a different distro? You can try using a Debian distro, which is what I tested it on.
What is the error you get with train_gpu?
Hey! How did you manage to open the window where Mario is jumping? I can train on the GPU, but I can’t see how it goes. Any advice, guys?? 😀
Thanks for sharing.
I got an Error after running sudo ./install_dependencies.sh command. the message is:
“Error: Could not find header file for GD
No file gd.h in /usr/local/include
No file gd.h in /usr/include
You may have to install GD in your system and/or pass GD_DIR or GD_INCLUDE to the luarocks command.
Example: luarocks install luagd GD_DIR=/usr/local
Error. Exiting.”
What should I do to fix it? Thanks very much.
Hi guys! I’m super excited about this project!! I was trying to run all the files in the shell of a Linux terminal. Everything works until the “./test_pu.sh smb” step. The file does compile and run, but as soon as I launch the program, the Super Mario window opens and closes immediately.
Any advice to solve this minor problem will be more than helpful!
Thanks guys !! 😉
Okay, I installed all the dependencies on a literal fresh install of Linux Mint (debian, like Ubuntu)
But I got these errors when I tried to run it.
qlua: ./initenv.lua:58: module ‘cutorch’ not found:
no field package.preload[‘cutorch’]
no file ‘./cutorch.lua’
no file ‘/home/aurora/DeepQNetwork/torch/share/luajit-2.0.4/cutorch.lua’
no file ‘/usr/local/share/lua/5.1/cutorch.lua’
no file ‘/usr/local/share/lua/5.1/cutorch/init.lua’
no file ‘/home/aurora/DeepQNetwork/torch/share/lua/5.1/cutorch.lua’
no file ‘/home/aurora/DeepQNetwork/torch/share/lua/5.1/cutorch/init.lua’
no file ‘./cutorch.so’
no file ‘/usr/local/lib/lua/5.1/cutorch.so’
no file ‘/home/aurora/DeepQNetwork/torch/lib/lua/5.1/cutorch.so’
no file ‘/usr/local/lib/lua/5.1/loadall.so’
stack traceback:
[C]: at 0x7fd73750cf50
[C]: in function ‘require’
./initenv.lua:58: in function ‘torchSetup’
./initenv.lua:112: in function ‘setup’
train_agent.lua:53: in main chunk
Everything said “success” when I installed the dependencies, so I don’t understand what I’m doing wrong.
Did you find a solution for this?
same here…
ok. solved it by uncommenting the last two lines related to cutorch in the install_dependencies.sh file (as described in the blog)
Okay, I fixed it when I had it train CPU. A temp fix for now.
But is there an easy way to have manual override?
I want to test this out on other NES games, and I need to get past the menu screen.
Hey ehrenbrav,
Thanks for writing this, it was very useful for me to induct myself in DQN & Mario.
I’m at the moment stuck at Mario Dying again and again and again and again state.
I have a theoretical question! I’m using GYM-AI version with KERAS to build DQN & Train
https://github.com/ppaquette/gym-super-mario
When I look at different Q-values for All states (for single episode where he dies straight in first goomba)
All Q-Values for All the States are the same i.e., agent perceives no difference between any of the states (first or death) – which seems a bit off.
However, Q-Values are increasing with more training & episodes.
Do you have a suggestion where I might be going wrong?
Hi Ehrenbrav,
Thanks for this. When I tried to train using train_cpu, after 50,000 steps it started taking a lot of time – for instance, it took me around 2 hours to get to 70,000 steps. Is this the normal training speed??
PS:I’m still a beginner!
Hi Ehrenbrav,
I’m trying to modify this to play Dr. Mario, but we keep getting stuck at the start screen. How do we tell the program to press start again to get to the actual game?
Thanks!
It’s been ages since I looked at this so I’m afraid I don’t have an answer off the top of my head.
However, the answer you’re looking for is most likely in one of these two places (or both, depending on what you’re trying to do):
https://github.com/ehrenbrav/neswrap
https://github.com/ehrenbrav/FCEUX_Learning_Environment
You might want to just do some searching through the codebase for these for “start” or “reset” to see how those buttons are activated by the DeepQNetwork. Best of luck,
E
Is it possible to send over an uncompiled libneswrap.so?
In the git repo it’s in torch/lib/lua/5.1/libneswrap.so
How do I change the screen size?
Hi Ehrenbrav,
Everything is working great except that for some reason your pre-made mario brain won’t work on my system. When I download and run it, I get an error message, “Error: /home/…/torch/File.lua:343: unknown Torch class ”
Everything runs fine when I start from scratch… any idea why your version causes an error?
I am not sure, but don’t I need a .params.t7 file as well as the main t7 file? Is there a way I can download that as well?
Hey,
can you please provide a short pseudocode how to update Q(s,a) ?
Get me an error when install the dependecies, just at the end (tested in some pc with same result):
neswrap installation completed
Installing Lua-GD …
Cloning into ‘lua-gd’…
remote: Enumerating objects: 1418, done.
remote: Total 1418 (delta 0), reused 0 (delta 0), pack-reused 1418
Receiving objects: 100% (1418/1418), 1.34 MiB | 1.37 MiB/s, done.
Resolving deltas: 100% (1052/1052), done.
Warning: variable CFLAGS was not passed in build_variables
gcc -o gd.lo -c `gdlib-config --features |sed -e "s/GD_/-DGD_/g"` -O3 -Wall -fPIC -fomit-frame-pointer `gdlib-config --cflags` `pkg-config lua5.1 --cflags` -DVERSION=\"2.0.33r3\" luagd.c
/bin/sh: 1: gdlib-config: not found
/bin/sh: 1: gdlib-config: not found
luagd.c:2171:33: error: ‘LgdImageCreateFromPng’ undeclared here (not in a function); did you mean ‘gdImageCreateFromPng’?
{ “createFromPng”, LgdImageCreateFromPng },
^~~~~~~~~~~~~~~~~~~~~
gdImageCreateFromPng
luagd.c:2172:33: error: ‘LgdImageCreateFromPngPtr’ undeclared here (not in a function); did you mean ‘gdImageCreateFromPngPtr’?
{ “createFromPngStr”, LgdImageCreateFromPngPtr },
^~~~~~~~~~~~~~~~~~~~~~~~
gdImageCreateFromPngPtr
Makefile:104: recipe for target ‘gd.lo’ failed
make: *** [gd.lo] Error 1
Error: Build error: Failed building.
Error. Exiting.
I get the same error – have you been able to find out what’s wrong?
Hi guys –
I haven’t touched the code in a very long time, so I’m sure it’s gotten a bit stale… Versions change and dependencies change, so I think that is what’s happening.
On a linux-like environment, try ‘sudo apt-get install libgd-dev’ and see if that works. I think the issue is that gdlib-config is missing, and the above command might work…
Hey, I checked and I get the same error even after installing libgd-dev.
I’m trying this as a project for school, but I keep getting the same error as above.
Keep up the good work.