Upload with Capistrano to localhost

I want Capistrano to upload a component to one of the servers in my cluster. The upload is done via scp. The upload command looks like this:
upload("...", "...", :via => :scp, :recursive => true)
Uploading to any other host works fine.
When I try to upload to the same server on which Capistrano itself is running, I get the following error:
*** upload via scp failed on [...]: SCP did not finish successfully () (SCP did not finish successfully ())
Relevant info:
Capistrano v2.9.0
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]

I'm fairly sure this is because Capistrano expects the source file to be on the computer where you ran the cap deploy command, and the destination to be on the computer you're deploying to (and perhaps also for the two to be different machines).
If you're trying to get a file from a remote computer onto the computer where you're running cap deploy, then I think you need to use download instead of upload.
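For reference, download in Capistrano v2 takes the same options as upload, just with the remote source first and the local destination second; a sketch with hypothetical paths:
download("/remote/path/component", "/local/path/component", :via => :scp, :recursive => true)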

Don't know if you're still looking for the answer, but here's how I solved it:
In the localhost.rb file for my project I added two lines underneath the repo definition:
set :repository, "..."
set :copy_dir, "/tmp/temp/"
set :copy_remote_dir, "/tmp/"

Related

Wildfly 26.1.3 Domain Batch JBeret WFLYCTL0030: No resource definition is registered for address

I am trying to run JBeret Batch on a WildFly domain cluster locally, but I keep getting the error
"WFLYCTL0030: No resource definition is registered for address [ ("deployment" => "ExampleJob.war"), ("subsystem" => "batch-jberet") ]"
To host the cluster I tried to use the default configuration. I am using WildFly 26.1.3.
To run my setup I am using these commands on my Windows machine to start the master and the slave:
.\bin\domain.bat --host-config=host-master.xml -Djboss.domain.base.dir=domain1
and
.\bin\domain.bat --host-config=host-slave.xml -Djboss.domain.base.dir=host1 -Djboss.domain.master.address=127.0.0.1 -Djboss.management.http.port=9991
After that I deploy a batch application, try to run it in the admin panel, and get the error.
I also tried to use the CLI, which didn't change anything.
I also tried to run it without the base.dir config, so that both hosts use the same folder, but that did not change anything either.
I also tried different JDKs (I am now using JDK 11).
To rule out my own application I tried this job from JBeret directly. I also tried other example jobs like csv2json and keep getting the same error.
In standalone mode the example jobs work.
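From jboss-cli I can at least confirm whether the subsystem is registered in the profile my server group uses, via the standard read-resource operation (the profile name full here is just an example):
.\bin\jboss-cli.bat --connect
/profile=full/subsystem=batch-jberet:read-resource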

Deploy and run a Go API server on Ubuntu/Centos

I just finished my first backend in Go using the Iris framework, and now I need to put it into production so I can use it in the Slack app I built.
To test the code locally I just run go run main.go and use ngrok to test against the Slack API; it's working and it's finished.
I have a droplet with Ubuntu 16.04.3 and another one with CentOS 7... I was searching for something like pm2 for Go, running the server and pointing nginx at that port, but I read that with Go it's different and I have to use something like this: https://fabianlee.org/2017/05/21/golang-running-a-go-binary-as-a-systemd-service-on-ubuntu-16-04/
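The heart of that setup is a small systemd unit file along these lines, plus user and firewall configuration around it (the paths and names here are made up by me):
[Unit]
Description=Go API server
After=network.target

[Service]
User=deploy
ExecStart=/home/deploy/backend-server
Restart=on-failure

[Install]
WantedBy=multi-user.target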
But that's a very long configuration for a simple server, and my questions are:
Is this the usual way to configure APIs with Go?
Apart from DigitalOcean, do you recommend a different service to run my API?
This is really my first time with Go and I just want to learn more; I am a backend developer with Laravel and NodeJS.
You can use pm2 if you want. When you build a Go project it creates a binary executable, let's say backend-server, which you can run from the terminal to start the app like this:
$ ./backend-server
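(If you don't have the binary yet, a standard build produces it; the output name is whatever you pass to -o:)
$ go build -o backend-server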
If it's not executable or you get a permission denied error, add the execute permission to it:
$ chmod +x backend-server
Your binary should be ready to run. I like to do it with a JSON config file (process.json) so that I can pass extra env variables as well and don't have to type a lot in the terminal.
My process.json looks something like this:
{
  "apps": [{
    "name": "backend-app",
    "script": "./backend-server",
    "env": {
      "DB_USER": "db_user",
      "PORT": 8080
    }
  }]
}
Finally you can start the app using pm2 like this:
$ pm2 start process.json
More details about the JSON config can be found in the official docs.
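If you also want the app to come back after a reboot, pm2 has standard subcommands for that: pm2 startup prints a command that registers the init hook for your OS, and pm2 save stores the current process list so it is restored at boot.
$ pm2 startup
$ pm2 save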
I think most people use Supervisor for this purpose, including me.
To make it very easy for you, just take a look at my Golang project, isaac-racing-server, and use it as a template for yours by replacing isaac-racing-server with the name of your app. (The Supervisor files are in a subdirectory.)
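For reference, a minimal Supervisor program entry might look like this; the program name and paths are assumptions:
[program:backend-app]
command=/home/deploy/backend-server
directory=/home/deploy
autostart=true
autorestart=true
stdout_logfile=/var/log/backend-app.out.log
stderr_logfile=/var/log/backend-app.err.log
Supervisor then keeps the process up and restarts it if it crashes.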

Puppet copy Issue

I am very new to Puppet, and I have configured a puppetmaster and an agent.
I am trying to simply copy a file from my local CentOS 7 machine (the master server) to a specific path on an agent, replacing an existing file with the new one.
This is how I try to do this:
file { '/tmp/test.txt':
  ensure => present,
  source => "/tmp/test.txt",
}
However, I am getting the error seen below:
Could not evaluate:
Could not retrieve information from environment production source(s) file:/tmp/test.txt
I have gone through a number of links but can't work out where the exact issue is.
Any suggestions?
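One thing worth checking: with a plain absolute path, source points at a file on the agent, not on the master. Serving a file from the master normally goes through a puppet:/// URI into a module. A sketch, where the module name mymodule is an assumption:
file { '/tmp/test.txt':
  ensure => file,
  source => 'puppet:///modules/mymodule/test.txt',
}
The file itself would then live under mymodule/files/test.txt on the master.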

capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard git deploy implementation in Capistrano v3 deploys the same repository to all roles. I have a more complex app with several types of servers, and each type has its own code base in its own repository. My database server, for example, does not need any code deployed at all.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example, in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature from Cap2 back into Cap3.
You can of course take the .rake files out of the Gem, and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to stop a server from receiving the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For Example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only; note the :composer namespace,
# which the 'composer:update' hook below refers to
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/

Capistrano in Open Source Projects and different environments

I'm considering using Capistrano to deploy my Rails app on my server. Currently I'm using a script which does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question is: how do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb, for several reasons:
It contains sensitive information about my server. I don't want to publish that :)
It contains the config for MY server. For other people who deploy this open source project to their own server, the configuration may differ. So it's pretty pointless to publish my configuration, because it's useless for other people.
Second problem/question: How do I manage different environments?
Background: On my server I provide two different environments for my application: the stable system, using the current stable release branch and located under www.domain.com, and an integration environment for the development team under dev.domain.com, running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle them in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else), but Capistrano cap tasks won't load that file. You can, however, just load it from config/deploy.rb as well; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password.
require 'bundler/capistrano'

APP_CONFIG = YAML.load_file("config/app_config.yml")

set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
If you're using the APP_CONFIG for your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory to each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the target name, e.g.:
$ cap preproduction deploy  # deploys to the 'preproduction' environment
$ cap production deploy     # deploys to the 'production' environment
$ cap deploy                # deploys to whatever you defined as the default
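Each stage file then pins down whatever differs per environment, typically the branch and the servers. A sketch of the two files for the setup in the question (branch and host names are assumptions based on it):
# config/deploy/production.rb -- the stable site
set :branch, "stable"   # whatever your stable release branch is called
server "www.domain.com", :web, :app, :db, :primary => true

# config/deploy/preproduction.rb -- the integration site
set :branch, "master"
server "dev.domain.com", :web, :app, :db, :primary => true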