I'm trying to use the forest-cli schema:update command, but when I do, I keep getting the error:
× We are not able to detect a Forest CLI project file architecture at this path: /PATH/TO/REPO/ROOT.: Error: No "routes" directory.
There is a routes directory, but it is under src/ below the repo root. I have tried running forest schema:update from inside there, but I get the exact same error. The command only has options for a config file and an output directory.
Googling has turned up nothing, and there's no obvious hint in Forest Admin's documentation. Thanks in advance for any assistance!
According to the forest-cli code available here, the forest schema:update command requires the package.json file to be in the folder from which you run the command, so it can check that the agent version you are running is compatible with schema:update.
You can also use the -c/--config option to point to another location for your config/database.js, and the -o/--outputDirectory option to write the result to a new location.
In your case, I would say that forest schema:update -c src/config/database.config.js -o tmp should allow you to generate the files in the tmp directory (be aware that this directory must not already exist).
This command should be run where your package.json is located.
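As a rough sketch, assuming your database config really does live at src/config/database.config.js (adjust the path to match your project), the sequence would be:
cd /PATH/TO/REPO/ROOT   # the folder that contains package.json
forest schema:update -c src/config/database.config.js -o tmp   # generated files end up in ./tmp, which must not exist yet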
However, I don't think you will be able to export the files directly to the right location when using a custom folder structure.
I have a project in which I use multiple Python virtual environments: for each directory, I use a different virtual environment. Is it possible to configure this so that I don't have to change the environment manually each time I need to execute files in another directory?
Just to be clearer, my workspace looks like this:
dir1
    file1.py
    file2.py
dir2
    file3.py
    file4.py
I would like to link dir1 with virtual environment venv1 and dir2 with venv2. This way, whenever I run file1.py or file2.py, it would automatically use venv1, and whenever I run file3.py or file4.py, it would use venv2.
I'm checking this link, and my first thought is to configure it with a debug launch file via the 'python' argument. The problem with this is that I would have to create multiple launch configurations and execute each Python file in debug mode.
Is there any other way? Like using workspace settings (a JSON file), but for each subdirectory I have? Or maybe using the workspace settings with a custom variable that changes based on the directory from which I execute the Python file?
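To make the idea concrete, what I imagine (and I'm not sure this is the intended way) is opening dir1 and dir2 as separate folders of the workspace and giving each one its own .vscode/settings.json, for example something like this for dir1, where the interpreter path is just my guess at where venv1 could live:
{
    "python.defaultInterpreterPath": "${workspaceFolder}/venv1/bin/python"
}
and the equivalent for dir2 pointing at venv2.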
I'm automating my build process, but I wasn't able to change the model_target_rtw folder to something different.
I'm not talking about CodeGenFolder, but about the folder that's created inside it during compilation.
I'm currently working around this by renaming the folder after compilation, but it would be great to remove that step.
The folder you are referring to is the RTW (Real Time Workshop) BuildDirectory.
You can get the value of BuildDirectory by running the command:
RTW.getBuildDir('MyModel')
See:
https://se.mathworks.com/matlabcentral/answers/274082-how-can-i-change-the-build-folder-of-a-model
Also look at this question:
Save generated code in a special folder in "rtwbuild"
If you run this command in MATLAB:
set_param(0, 'CodeGenFolder', 'C:\MyBuildDir')
and then run the RTW.getBuildDir command again, you will see that the BuildDirectory has changed.
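As a rough sketch of how this could go into an automated build script (the model name and folder below are placeholders, and note that this moves the parent code generation folder rather than renaming the model_target_rtw folder itself):
set_param(0, 'CodeGenFolder', 'C:\MyBuildDir');   % point code generation at the desired folder
info = RTW.getBuildDir('MyModel');                % query the resulting build directory
disp(info.BuildDirectory);                        % e.g. C:\MyBuildDir\MyModel_grt_rtw
slbuild('MyModel');                               % generate code into that location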
I had a problem come up when I was forced to change my project directory name.
First, virtualenvwrapper didn't see my projects, so I changed the WORKON_HOME environment variable to point to the new project directory. I could then activate my envs. But now, whenever my project does anything, it thinks it's in the old directory, not the new one. I can't figure out how to change this. I've looked in the reference material and looked for the place that actually points to where the projects are, but I had no luck with either. Please help.
It sounds like you want to point an already-created virtual environment at the directory that contains your project. One way that I am familiar with to do this, based on the virtualenvwrapper documentation, is the following.
Activate your desired virtual env
$ workon myvirtualenv
Change your directory to your desired project directory
$ cd my/project/dir
Set your virtualenv project to the current directory
$ setvirtualenvproject
The default is to use the current directory. The full syntax is:
$ setvirtualenvproject [virtualenv_path project_path]
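If I remember correctly, once the association is made, activating the environment will also drop you into the project directory, and cdproject will take you there from anywhere while the env is active:
$ workon myvirtualenv    # activates the env and changes to the associated project directory
$ cdproject              # jumps back to the project directory at any time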
I hope this helps!
I am trying to run my MapReduce job by building a jar from Eclipse, but while trying to execute the job, I am getting a "Not a valid Jar" error.
I have tried to follow the link Not a valid Jar, but that didn't help.
Can anyone please give me instructions on how to build the jar from Eclipse so that it runs on Hadoop?
I am aware of the process of building a jar file from Eclipse; however, I am not sure whether I have to take any special care when building it so that it runs on Hadoop.
When you submit the command, make certain the command line includes the following:
When you indicate the jar, make certain you are pointing to it properly. It may be easiest to be certain by using the absolute path: navigate to the directory containing the jar and run the 'readlink -f' command on it to get the absolute path. So for you, not just hist.jar, but maybe /home/akash_user/jars/hist.jar or wherever it is on your system. If you are using Eclipse, it may be saving the jar somewhere unexpected, so make sure that is not the problem. The jar cannot be run from HDFS storage; it must be run from local storage.
When you name your main class, in your example Histogram, you must use the fully qualified name of the class, including its package. So, usually, if the program/project is named Histogram, and there are a HistogramDriver, HistogramMapper, and HistogramReducer, and your main() is in HistogramDriver, you need to type Histogram.HistogramDriver to get the program running. (Unless you made your jar runnable, which requires extra setup up front, such as a manifest file that specifies the main class.)
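Putting those points together, the submission could look like the line below (the input and output paths are only placeholders for your own HDFS paths):
hadoop jar /home/akash_user/jars/hist.jar Histogram.HistogramDriver /user/akash_user/input /user/akash_user/output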
Make sure that the jar you are submitting (hist.jar) is in the current directory from where you are submitting the 'hadoop jar' command.
If the issue still persists, please mention the Java, Hadoop, and Linux versions you are using.
You should not keep the jar file in HDFS when executing the MapReduce job; make sure the jar is available on a local path. The input path and output directory, however, should be paths in HDFS.
I have just run Doxygen from the command line and am unsure where it put the output...
It doesn't show up in the directory I ran it from.
Is there an easy way to find it?
From the Doxygen manual:
The default output directory is the directory in which doxygen is started. The root directory to which the output is written can be changed using the OUTPUT_DIRECTORY. The format specific directory within the output directory can be selected using the HTML_OUTPUT, RTF_OUTPUT, LATEX_OUTPUT, XML_OUTPUT, and MAN_OUTPUT tags of the configuration file. If the output directory does not exist, doxygen will try to create it for you (but it will not try to create a whole path recursively, like mkdir -p does).
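For example, a minimal Doxyfile snippet along these lines (the directory name docs is only an illustration) would put the HTML output under docs/html relative to where you run doxygen:
OUTPUT_DIRECTORY = docs
HTML_OUTPUT      = html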
If you are having problems getting it to do what you want, use doxywizard; it makes writing the configuration file much easier.