How to get ML Agents to run in Unity? - unity3d

I've tried going through the documentation but the steps mentioned there aren't quite clear enough. Is there a good step-by-step video tutorial that would be helpful? The ones I saw on YouTube are pretty old and don't work with the latest updates on GitHub: https://github.com/Unity-Technologies/ml-agents

This will help you set up ML-Agents version 14.
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md
I am currently using this version.
I suggest you create a new python environment for only this purpose.
Don't use pip install mlagents in your python environment terminal; the PyPI package was not updated to 14 when I installed it. Use the link above for the complete guide, but here's the important stuff.
Download the zip file of ml-agents-master, extract it, and open the extracted folder.
Then open cmd at that location and activate your python environment. You should install the packages from the cloned repo rather than from PyPI; to do this, install ml-agents and ml-agents-envs separately. From the repo's root directory, run:
cd ml-agents-envs
pip3 install -e ./
cd ..
cd ml-agents
pip3 install -e ./
It is very important that both packages are installed from the same ml-agents folder; this will not work if the versions are not compatible. If installed from the same folder, both packages will have the same version, i.e. 14 in this case.
These two packages let you use the predefined PPO and SAC algorithms.
I suppose you have installed Unity 2018 or 2019. Open it and go to File → Open Project.
In the open dialog box, select the Project folder inside the ml-agents-master folder that you downloaded.
Sorry that most of the things are named "project", but don't be confused: earlier, Project was a folder inside ml-agents-master, but after opening it you will see a Project toolbar. Navigate to Assets\ML-Agents\Examples\3DBall\Scenes and double-click 3DBall.
This will open a scene, as you can see here. You can also see TFModels and Scripts; they are the predefined neural network and the code, respectively.
Select Agent in the Hierarchy toolbar on the left side; on the right side the Inspector view will open, and inside Behavior Parameters you can see Model with an input box. Keep the Agent selected, otherwise this Inspector view will disappear. (Selecting the Agent in the Hierarchy changes only that one instance of 3DBall. It is better to go to Prefabs and double-click 3DBall: this opens a single 3DBall whose settings apply to all instances, so the Hierarchy will show only one 3DBall. Select its Agent; the rest is the same, but now the changes affect every copy. Prefabs control all the copies, which helps to train multiple agents at the same time.)
Now go to the TFModels folder; you will see a 3DBall file that looks like a neural network. Drag it onto the Model field in that Agent's Behavior Parameters.
After following all these steps, click the Play option at the top. The predefined model will start playing, and you will see that it can balance the ball pretty well.
Now that you have seen how the trained model works and want to train again using the predefined PPO and SAC, follow this:
Go to ml-agents-master\config, where you will find the file trainer_config.yaml. Open cmd, activate your environment, and enter:
mlagents-learn trainer_config.yaml --run-id=firstRun --train
When the message "Start training by pressing the Play button in the Unity Editor" is displayed on the screen, you can press the ▶️ button in Unity to start training in the Editor. You can press Ctrl+C to stop the training; your trained model will be at models/<run-identifier>/<behavior_name>.nn, where behavior_name is the Behavior Name of the agents corresponding to the model.
Move your model file into Project/Assets/ML-Agents/Examples/3DBall/TFModels/.
Open the Unity Editor, and select the 3DBall scene as described above.
Select the 3DBall prefab Agent object.
Drag the <behavior_name>.nn file from the Project window of the Editor to the Model placeholder in the Ball3DAgent inspector window.
Press the ▶️ button at the top of the Editor.
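For reference, trainer_config.yaml (the file passed to mlagents-learn above) is plain YAML: a default section of hyperparameters plus per-behavior overrides. The excerpt below only illustrates its shape; the values are made up, so use the file shipped in ml-agents-master\config unchanged.

```yaml
default:            # applies to every behavior unless overridden below
  trainer: ppo
  batch_size: 1024
  buffer_size: 10240
  max_steps: 5.0e5

3DBall:             # overrides for the behavior named 3DBall
  normalize: true
  batch_size: 64
  buffer_size: 12000
```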
Now, for your own RL algorithm in python:
See this jupyter notebook; it shows how to activate the Unity gym wrapper, get observations and rewards, and reset the environment. For this you can also create a Unity executable; it's just creating an exe file with some settings, which you will find here.
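The loop the notebook demonstrates follows the standard gym reset/step pattern. The sketch below uses a tiny stand-in environment (StubEnv is made up for illustration) so the loop itself is runnable without Unity; with ML-Agents you would get the env from the gym wrapper around your Unity executable instead.

```python
import random

class StubEnv:
    """Minimal stand-in for a gym-wrapped Unity environment."""
    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        # start a new episode and return the initial observation
        self.t = 0
        return [0.0, 0.0]

    def step(self, action):
        # advance one step; return (observation, reward, done, info)
        self.t += 1
        obs = [float(self.t), float(action)]
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.episode_len
        return obs, reward, done, {}

env = StubEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])   # your policy would pick the action here
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The same reset/step/done skeleton is all your own algorithm needs to interact with the environment.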
Hope this works without any issues. And Good Luck with this.

Related

Azure Pipelines: building a C++ project with outside "Include Directories"

I tried searching with as many different terms as I could and couldn't find exactly what I'm looking for.
I have a C++ Project developed in Visual Studio 2019 and I am trying to build and deploy it in Azure Pipelines. It uses Boost and OpenCV. I skipped trying to include these in Azure Artifacts because of a rabbit hole with Azure CLI errors that took me almost half a day.
So it seems that there is a task to publish pipeline artifacts in the .yml file. How do I do this when my project needs to reference a certain directory, instead of one specific file or .dll? Here are images for how this is configured in Visual Studio:
include directory for boost image
include directory settings for opencv image
Edit: Still trying, see my comment. Thinking about switching over to CircleCI.
I found out what to do. Hopefully no one else wastes as much time as I did.
The key was MSBuild. One needs to first find out the values of $(IncludePath) and $(LibraryPath) by doing the following first in Visual Studio:
Right-click on your project, choose "Properties"
Go to the Build Events tab, and click "Pre-Build Event"
Click on and expand the Command Line row, and click "Edit"
Now click the button that says "Macros>>"
You will see a bunch of different variables and their values. Find the values for LibraryPath and IncludePath, and copy and paste them into a text file.
Now, assuming you already set up a local agent, follow these steps:
Put the text file in the root folder of where your agent is installed. For me, this was "C:\agents"
Have the first line be "LibraryPath=value" and the other line be "IncludePath=value". Use double slashes for the directory paths.
Rename the file to .env. If the agent is currently running, restart it so it can read in the environment variables it will use during your build.
In the MSBuild task of your pipeline, specify arguments. For my case, it was simply this: /p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)" /p:LibraryPath="$(LibraryPath)"
Run the pipeline. You can check your completed build on the local machine. For me, the path it kept going to was "C:\agents\_work\2\s".
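Put together, the .env file and the MSBuild arguments might look roughly like this (MSBuild@1 and its msbuildArguments input are the standard Azure Pipelines MSBuild task; the paths are the ones used above, and the elided values depend on your machine):

```yaml
# C:\agents\.env -- one variable per line, doubled backslashes:
#   IncludePath=C:\\Program Files\\boost_1_77_0;...
#   LibraryPath=...

# azure-pipelines.yml:
steps:
- task: MSBuild@1
  inputs:
    solution: '**/*.sln'
    msbuildArguments: >-
      /p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)"
      /p:LibraryPath="$(LibraryPath)"
```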

Workspace with multiple Dart / Flutter projects, how to tell VSCode to run a specific one

I have a workspace with 7 different Dart / Flutter projects. Currently, to choose which project to run, I select a file from that project and then press F5. This approach seems quite error prone, as I sometimes have the wrong file open and it loads the wrong project. Other times it takes a few seconds to find a file and open it to run.
It would be nice if there was a selector for me to choose which project or a way to select a default project to run regardless of which file I have open. Is this possible at all?
You can control this by creating a Launch Configuration (see https://code.visualstudio.com/Docs/editor/debugging#_launch-configurations) file (launch.json). You can create this by clicking the Cog icon on the Debug side bar. It'll be created at .vscode/launch.json.
You can set the cwd or program fields in the config to relative paths from the folder you've opened to control what launches. program lets you specify a specific script, whereas cwd lets you specify a project root (where the Dart plugin will try to guess the best entry point, like bin/main.dart for Dart, or lib/main.dart for Flutter).
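A minimal launch.json sketch along those lines (the folder and configuration names are made up; "type": "dart" is used by the Dart plugin for both Dart and Flutter projects):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "App A (Flutter)",
      "type": "dart",
      "request": "launch",
      "cwd": "app_a"
    },
    {
      "name": "Tool B (Dart)",
      "type": "dart",
      "request": "launch",
      "program": "tool_b/bin/main.dart"
    }
  ]
}
```

With two or more configurations defined, the debug side bar shows a dropdown to pick which one F5 runs, regardless of the open file.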
Another option is to use VS Code's "Multi-Root Workspaces", where you'll be able to select which workspace folder to debug from the debug side bar, however this generally results in saving a .code-workspace file that some users (including myself) find an annoyance.

Where is my .rpi-sdimg (Yocto Project - Toaster)?

I am currently trying to build a simple image for my raspberry pi 2 using toaster, the web gui of yocto project.
Everything works fine and the build succeeds.
Every tutorial I've found tells me to use dd on the file "core-image-weston-raspberrypi2.rpi-sdimg".
Sadly, there is no such file on my PC.
Has anyone had the same problem and managed to fix it?
Please find attached the Configuration and BitBake Variables:
Can you check what's the value of the IMAGE_FSTYPES variable in your build? Click the image recipe name in one of your finished builds, then the "configuration" link on the left hand side, and select the "BitBake variables" tab. Once there, search for IMAGE_FSTYPES.
If the value of IMAGE_FSTYPES does not include "rpi-sdimg", you will need to add it. If you are using Toaster from the Yocto Project master branch (it looks like you are from your screenshots), you can do that from Toaster. Go to the "BitBake variables" page in the project configuration, click the "change" icon next to the value of IMAGE_FSTYPES, then type "rpi-sdimg" at the end of the variable value. After that you will need to rebuild the core-image-weston-raspberrypi2 image.
In theory, this should generate the core-image-weston-raspberrypi2.rpi-sdimg file you need (unless something in the meta-raspberrypi layer is dictating otherwise).
If you are not using the master branch, Toaster will have a bug that prevents you from adding custom values to IMAGE_FSTYPES, but you can still do so by editing the configuration files.
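If you do edit the configuration files by hand, the change is a one-line append to IMAGE_FSTYPES in your build's conf/local.conf (the += spelling is one common way to write it):

```
# conf/local.conf
IMAGE_FSTYPES += "rpi-sdimg"
```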

PyCharm - automatically set environment variables

I'm using virtualenv, virtualenvwrapper and PyCharm.
I have a postactivate script that runs an "export" command to apply the environment variables needed for each project, so when I run "workon X", the variables are ready for me.
However, when working with PyCharm I can't seem to get it to use those variables by running the postactivate file (in the "before launch" setting). I have to manually enter each environment variable in the Run/Debug configuration window.
Is there any way to automatically set environment variables within PyCharm? Or do I have to do this manually for every new project and variable change?
I was looking for a way to do this today and stumbled across another variation of the same question (linked below), and left my solution there, although it seems to be useful for this question as well. They handle loading the environment variables in the code itself.
Given that this is mainly a problem while in development, I prefer this approach:
Open a terminal
Assuming virtualenvwrapper is being used, activate the virtualenv of the project which will cause the hooks to run and set the environment variables (assuming you're setting them in, say, the postactivate hook)
Launch PyCharm from this command line.
PyCharm will then have access to the environment variables, likely because the PyCharm process is a child of that shell.
https://stackoverflow.com/a/30374246/4924748
I have the same problem.
Maintaining environment variables through the UI is a tedious job.
It seems PyCharm only loads env variables through bash_profile once, when it starts up.
After that, any export, or trying to run a before-launch job that changes bash_profile, is useless.
I wonder when the PyCharm team will improve this.
In my case, my workaround for a remote interpreter works better than for a local one, since I can modify /etc/environment and reboot the VM.
For a local interpreter, the best solutions I can offer are these:
1. Create a template Run/Debug config and clone it
If your env variables are stable, this is a simple way to create different configs with the same env variables without re-typing them.
Create the template config and enter the env variables you need.
Clone it.
See picture.
2. Change your script
Maybe add some code using os.environ[key] = value in your main script.
But I don't want to do this; it changes my product code and might be committed accidentally.
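As a sketch of that in-code approach: a small helper (the name load_env_file and the KEY=value file format are my own, not a PyCharm feature) that reads a file into os.environ before the rest of the script runs:

```python
import os

def load_env_file(path):
    """Read KEY=value lines from a file into os.environ.

    Illustrative helper, not a PyCharm feature: blank lines and
    '#' comments are skipped; the value keeps everything after
    the first '='.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# e.g. at the top of your main script:
# load_env_file(os.path.expanduser("~/.env_local"))
```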
Hope someone can give a better answer; I've spent too much time on this issue...
Another hacky solution, but a straightforward one that suffices for my purposes. Note that while this is particular to Ubuntu (and presumably Mint) Linux, there might be something of use for Mac as well.
What I do is add a line to the launch script (pycharm.sh) that sources the needed environment variables (in my case I was running into problems w/ cx_Oracle in Pycharm that weren't otherwise affecting scripts run at command line). If you keep environment variables in a file called, for example, .env_local that's in your home directory, you can add the following line to pycharm.sh:
. $HOME/.env_local
Two important things to note here, with respect to why I specifically use '.' (rather than 'source') and why I use '$HOME' rather than '~', which in bash are effectively interchangeable: 1) I noticed that pycharm.sh uses the #!/bin/sh shebang, and in Ubuntu, sh now points to dash (rather than bash). 2) dash, as it turns out, doesn't have the source builtin, nor will ~ resolve to your home dir.
I also realize that every time I upgrade PyCharm, I'll have to modify the pycharm.sh file, so this isn't ideal. Still beats having to manage the run configurations! Hope it helps.
OK, I found a better workaround!
1. Install fabric in your virtualenv
Go to the terminal and:
1. workon <your virtualenv name>
2. pip install fabric
2. Add fabric.py
Add a python file named "fabric.py" under your project root, paste the code below, and change the path variables to your own.
from fabric.api import *
import os

path_to_your_export_script = '/Users/freddyTan/workspace/test.sh'
# here is where you put your virtualenvwrapper environment export script
# could be .bash_profile or .bashrc, depending on how you set up virtualenvwrapper
path_to_your_bash_file = '/Users/freddyTan/.bash_profile'


def run_python(py_path, virtualenv_path):
    # get virtualenv folder, parent of bin
    virtualenv_path = os.path.dirname(virtualenv_path)
    # get virtualenv name
    virtualenv_name = os.path.basename(virtualenv_path)
    with hide('running'), settings(warn_only=True):
        with prefix('source %s' % path_to_your_export_script):
            with prefix('source %s' % path_to_your_bash_file):
                with prefix('workon %s' % virtualenv_name):
                    local('python %s' % py_path)
3. Add an external tool
Go to
Preferences → External Tools → click the Add button
and fill in the following info:
Name: whatever
Group: whatever
Program: <path to your virtualenv, by default under '$HOME/.virtualenvs'>/bin/fab
Parameter: run_python:py_path=$FilePath$,virtualenv_path=$PyInterpreterDirectory$
Working directory: $ProjectFileDir$
screenshot
Voilà, run it
Go to your main.py, right-click, find the external tool's name (e.g. "whatever"), and click it.
You can also add a shortcut for this external tool.
screenshot
Drawbacks
This only works on python 2.x, because fabric doesn't support python 3.

Do I have to build my LabVIEW instrument driver under Program Files?

I'm trying to build a LabVIEW plug and play instrument driver project for a device we sell. I followed the instructions to create a project, and it created the project in with the LabVIEW program:
C:\Program Files\National Instruments\LabVIEW 2011\instr.lib
I suppose I could connect that folder to source control and just do all the work there, but it feels weird to be working under Program Files. When I tried to move the project folder out into my regular workspace folder, it broke all the subpalette files (*.mnu). I could recreate them, but I'm afraid they wouldn't work for our customers when they install the driver from the LabVIEW web site.
Is it possible to move a driver project around, or does it have to stay in the default location? If one of our customers has installed LabVIEW in a different location (say on drive D:) will the driver menus not work for them?
I'm not in favour of user.lib for SCC'd items; using several LabVIEW versions at a time is a big problem.
Here is my routine:
Create the instrument library and save all code in a folder whose name starts with an underscore ('_'), e.g. '_foo'.
Create an .mnu file (Mylib.mnu) in the parent folder of '_foo' and add the icons you need.
With the OpenG package builder I create an installer routine that places the .mnu file and the folder in instr.lib.
After a restart of LabVIEW, the instrument driver shows up in the instruments palette.
If you keep the code in the same relative position to the .mnu file, there is no problem with missing VIs.
Ton
Instrument drivers are always located in the 'instr.lib' folder in the current LabVIEW version folder. There is an environment path set up in LabVIEW for this instrument driver folder, so it will always point to the correct drive for the installation of LabVIEW used.
You should keep the folder in the location used by the wizard to ensure that when distributed to your customers the sub palette menus point to the correct location and all the VIs link correctly.
I use source control for user.lib which is in a similar location and have no problems.