Where to keep the resources.xml file for Rundeck CLI

I am using the Rundeck CLI. Where should I keep the resources.xml file so that it is available for a given project? Is there a specific location the file needs to be in, or can it be in any folder, with one just pointing to its location?

No specific place is needed for the resources.xml/resources.yaml files (also, you can use any name for that file, for example: project_test.xml). You can use any location reachable by the rundeck user (RPM/DEB-based installations) or by the user that launches Rundeck (WAR-based installation). A good place to store these files is the /var/lib/rundeck/ path.
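For reference, a minimal sketch of a resources.xml in Rundeck's resource-XML format; the node name, hostname, and username below are placeholder values:

<?xml version="1.0" encoding="UTF-8"?>
<project>
  <node name="node01"
        description="Example node"
        hostname="192.168.1.10"
        username="rundeck"
        osFamily="unix"
        tags="web"/>
</project>

You then point the project's File node source at whatever path you chose, and Rundeck will read the nodes from there.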

Related

How to run `forest schema:update` outside project directory?

I'm trying to use the forest-cli schema:update command, but when I do, I keep getting the error:
× We are not able to detect a Forest CLI project file architecture at this path: /PATH/TO/REPO/ROOT.: Error: No "routes" directory.
There is a routes directory, but within src/ below the repo root. I have tried running forest schema:update from inside there, but I get the exact same error. The command only has options for a config file and an output directory.
Googling has turned up nothing, and there's no obvious hint in forestadmin's documentation. Thanks in advance for any assistance!
According to the forest-cli code available here, the forest schema:update command requires the package.json file to be directly accessible (i.e. present in the folder where you run the command), so it can check that the version of the agent you are running is indeed compatible with schema:update.
You can also use the -c/--config option to point to a config/database.js in another location, and the -o/--outputDirectory option to write the result to a new location.
In your case, I would say that forest schema:update -c src/config/database.config.js -o tmp should allow you to generate the files in the tmp directory (be aware that this directory must not already exist).
This command should be run where your package.json is located.
However, I don't think you will be able to export files directly to the right location when using a custom folder structure.
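Putting that together, a sketch of the full invocation, with the repo path and config filename taken from the question (adjust to your setup):

cd /PATH/TO/REPO/ROOT    # the directory containing package.json
forest schema:update -c src/config/database.config.js -o tmp    # tmp must not already exist

You would then move the generated files from tmp/ to their final locations in your custom structure by hand.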

Extract ZIP file in specific folder on server using terminal?

I have a problem with my VPS server, more precisely with extracting a .zip file (5 GB) through the file manager. I have limited support because it is a self-managed VPS. I need a terminal command to extract the file "5.zip" to a specific directory on my server, for example the folder served at http:xxxxxxx.com/funny. Can someone help me with this? Thanks.
You provided a URL, not a path to a directory. To extract a zip file into a particular directory, you can use:
cd /directory/funny
unzip /path/to/5.zip
Replace /directory/funny with the real directory that your web server serves.
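Alternatively, unzip's -d option extracts into a target directory without changing into it first:

unzip /path/to/5.zip -d /directory/funny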

Deploying config files to PLC

Is it possible to include arbitrary files (in this case a .csv) from a TwinCAT project directly in the Boot directory of a PLC?
By using PATH_BOOTPATH in the file open/read FBs, it is possible to load files from this directory in a convenient manner regardless of whether a CE or Windows deployment is used. However, deploying files to this location seems to be the sticking point.
I know that a copy of the project code is included within the CurrentConfig<Project>.tpzip file, but this file is not easily accessible from code, nor easily updated.
I've found the 'Additional Files' section within the system configuration, but it makes little sense.
Adding a file from inside the project as a 'Relative' path doesn't seem to do anything.
Adding a file from inside the project as an external path includes the file (via symbolic links?) in the 'CurrentConfig.tszip' file, which has the same issues as the .tpzip.
Adding an external file as an external path again includes the file inside the .tszip.
I'm willing to accept that this might not be possible, but it just feels odd that the PATH_BOOTPRJ and PATH_BOOTPATH roots are there and not accessing useful paths.
Deployment
To quote Beckhoff:
Deployment is used to set up commands that are to be executed during the installation and startup of an application.
The event types essentially determine at what stage of the deployment process the command is performed; the command can either copy a file or execute a script/program.
I haven't performed extensive testing, but between absolute/relative pathing and script execution, this should solve nearly all issues with deploying configuration files.

Elasticsearch Script File in Local Mode

Using ElasticSearch, one can place scripts of various languages in ElasticSearch's /config/scripts directory, and they will be automatically loaded for use in Update requests and other types of operations. In my production environment, I was able to accomplish this and run a successful Update using the script.
So far, however, I've been unsuccessful in getting this feature to work when running a node in local mode for integration tests. I assumed that, since one can configure the ElasticSearch node with an elasticsearch.yml on the classpath, one should also be able to add a scripts directory and place the desired script there, causing it to be loaded into the local node. That doesn't seem to be the case: when I try to execute an Update utilizing that script, it cannot be found.
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Unable to find on disk script scripts.my_script
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:269)
at org.elasticsearch.script.ScriptService.executable(ScriptService.java:417)
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:194)
... 6 more
Does anyone know the proper way to do automatic script loading into a local ElasticSearch node for testing?
I am using the basic ElasticSearch client included in "org.elasticsearch:elasticsearch:1.5.2".
After perusing the source code, I discovered that the reason my script was not being picked up by Elasticsearch's directory watcher was that it was watching user.dir, the default configuration directory. The scripts/ subdirectory would have had to be under there for the node to pick up my script and load it into the ScriptService for use during updates.
The configuration directory can be overridden in your elasticsearch.yml with the key path.conf. Setting that to somewhere in your project lets you load scripts during testing and keep them under version control as well. Make sure there is a scripts/ directory under that directory; that is where your scripts will be loaded from.
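As a sketch, assuming your test configuration lives under src/test/resources/es (a hypothetical path) and your script is a Groovy file named my_script.groovy:

# elasticsearch.yml (loaded from the classpath)
path.conf: src/test/resources/es

src/test/resources/es/
    scripts/
        my_script.groovy

The script should then typically be addressable by its base name (my_script) in update requests.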

Configuration and content management with automated deployment tools for ZF based app

I am trying to automate deployments of a particular project and am a bit lost as to how to handle the config file as well as user assets.
(The application is based on Zend Framework, btw.)
Main application folder is structured as follows:
./app
./config.ini <----- config file
./modules
./controllers
./models
./views
./libs
./public
That config file is where all the configs are stored.
The 'app' folder contains a whole bunch of PHP code, and 'public' contains JavaScript, HTML/CSS and the like (basically, everything web-accessible).
If I follow Capistrano's model, where each package is expanded into its own folder that is then symlinked to, how do I handle that config.ini file?
What about all the user content that is uploaded into ./public folder?
Thanks!
The Capistrano approach to this is to have a structure like this on your remote server:
releases/
20100901172311/
20101001101232/
[...]
current/ (symlink to current release)
shared/
In the shared directory you place your config file and any user-generated content (e.g. shared/files). Then on each deployment, once you've checked out the code, you automatically create symlinks from the checkout into the relevant shared directories. E.g.:
releases/20101001101232/public/files -> shared/files
releases/20101001101232/application/configs/config.ini -> shared/config.ini
That way, when a user uploads a file to public/files, it is actually stored in shared/files.
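As a concrete sketch of the symlinking step, using the paths from the example above (the /var/www prefix and the release timestamp are illustrative assumptions):

# link shared config and uploads into the freshly checked-out release
ln -nfs /var/www/shared/config.ini /var/www/releases/20101001101232/application/configs/config.ini
ln -nfs /var/www/shared/files /var/www/releases/20101001101232/public/files
# point 'current' at the new release
ln -nfs /var/www/releases/20101001101232 /var/www/current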