Deployment with only SSH key and Dockerfile

Excuse my DevOps naiveté, but I assume all you need to deploy to a machine is a proper SSH key, a port to expose, the machine's IP address, a login, and the code to deploy.
So are there any simple solutions that deploy code to a remote server with the only inputs being an SSH key, a Dockerfile, and the code itself? I'm thinking it could be set up in a deterministic (almost functional) manner, where the inputs are the server's IP address and a login, and the output is a running server.
I've tried setting up Dokku on DigitalOcean (https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-dokku-application), and that requires a DNS record and Git. I don't need those as dependencies.
Thanks

If I understand your question correctly, you don't need anything more than scp, ssh, and a couple of shell scripts.
Let's say you want to deploy your code from serverA to serverB.
On serverB, create a directory with your Dockerfile. Also, create a shell script, let's call it build_image.sh, that runs your docker build command using sudo.
Also, on serverB, create a shell script that builds your code from source (if necessary).
Finally, on serverB, create a shell script that calls your code build script and your docker build script, and at the end runs your new Docker image. Let's call this script do_it_all.sh.
Make sure that you chmod 755 all shell scripts.
Now, on serverA, you have a directory with the source code. scp that directory to serverB into the directory with the Dockerfile.
Next, from serverA use ssh to call do_it_all.sh on serverB. This will build your code, build your image and deploy a container without the need for extra software, packages, git, DNS records, etc.
You can even automate this process using cron or something else to have nightly deployments, if you wish, or deployments under other conditions.
Example scripts/commands:
On serverB:
build_image.sh:
#!/bin/bash
sudo docker build -t my_image .
build_code.sh (optional, adjust to your code):
#!/bin/bash
cd /path/to/my/code
./configure
make
do_it_all.sh:
#!/bin/bash
cd /path/to/my/dockerfile
sudo docker stop my_container #stop the old container
sudo docker rm my_container #remove the old container
sudo docker rmi my_image #remove the old image
./build_code.sh #comment out if not needed
./build_image.sh
sudo docker run -d --name my_container my_image
On serverA:
scp -r /path/to/my/code serverB:/path/to/my/dockerfile
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'
That should be it. Adjust for your system.
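For the nightly deployments mentioned above, a single crontab entry on serverA can drive the whole thing. This is just a sketch, assuming the same paths as in the examples:
# serverA crontab: copy the code and redeploy every night at 02:00 (paths assumed)
0 2 * * * scp -r /path/to/my/code serverB:/path/to/my/dockerfile && ssh serverB '/path/to/my/dockerfile/do_it_all.sh'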
To deploy to a brand new system, just write a script on serverA that uses ssh to create the necessary directories on serverB (e.g. ssh serverB 'mkdir -p /path/to/my/dockerfile'). Next, copy your Dockerfile, your build scripts, and your code from serverA to serverB using scp. Then run do_it_all.sh on serverB from serverA using ssh.
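A minimal bootstrap script on serverA could look like this (a sketch; all paths are assumed):
#!/bin/bash
# bootstrap.sh - first-time deployment to a fresh serverB (paths assumed)
ssh serverB 'mkdir -p /path/to/my/dockerfile'
scp Dockerfile build_image.sh build_code.sh do_it_all.sh serverB:/path/to/my/dockerfile/
scp -r /path/to/my/code serverB:/path/to/my/dockerfile/
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'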

Related

Share SSH keys with VS Code Devcontainer running with Docker's WSL2 backend

I'm reading these docs on sharing SSH keys with a dev container, but I can't get it to work.
My setup is as follows:
Windows 10 with Docker Desktop 4.2.0 using the WSL2 backend
A WSL2 distro running Ubuntu 20.04
In WSL2, I have ssh-agent running and aware of my key:
λ ssh-add -l
4096 SHA256:wDqVYQshQBCG/Sri/bsgjEaUFboQDUO/9FJqhFMncdk /home/taschan/.ssh/id_rsa (RSA)
The docs say
the extension will automatically forward your local SSH agent if one is running
But if I do ssh-add -l in the devcontainer, it responds with Could not open a connection to your authentication agent.; and of course starting one (with eval "$(ssh-agent -s)") only starts one that doesn't know of my private key.
What am I missing?
I had basically the same issue. I was running Windows 11 with WSL2, and my VSCode Devcontainer wouldn't show any ssh keys (running ssh-add -l inside the container showed an empty list), despite having Git configured on my host machine with working ssh keys.
For me, there were 3 separate instances of ssh-agent on my machine:
WSL2
Git Bash
Windows host 🠆 This is the one VSCode is forwarding to the devcontainer
My existing ssh keys were set up inside Git Bash (as per Github's instructions) so running ssh-add -l only ever showed my ssh keys from inside a Git Bash terminal, nowhere else.
However, as explained in the previous answer, digging through the Devcontainer startup logs shows that VSCode forwards only the host machine's ssh-agent; it doesn't look at the WSL2 or Git Bash ones.
Solution: I suggest following the Microsoft docs page linked below. You need to enable an "Optional Feature" in Windows, then run a few commands in PowerShell (as admin) to activate the ssh-agent service. With this set up, the ssh-agent/ssh-add commands will also work from a regular CMD terminal.
You can use this together with the usual ssh-keygen commands etc. to generate and add new keys on the host (I just ssh-add'ed the same keys originally generated by Git Bash). The added keys should immediately be detected by ssh-add -l inside the container.
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement
I tried many things, but they did not work. Finally, after the devcontainer is created, I note down the container name and copy the private and public keys (id_rsa and id_rsa.pub) into the container using the docker cp command.
syntax:
docker cp <sourcefile> container_id:/dir
Copy both private and public key:
docker cp /root/.ssh/id_ed25519 eloquent_ritchie:/root/.ssh/
docker cp /root/.ssh/id_ed25519.pub eloquent_ritchie:/root/.ssh/
Change the permissions of the private key so that you can do Git operations:
docker exec eloquent_ritchie chmod 600 /root/.ssh/id_ed25519
eloquent_ritchie is a sample container name; your container name will differ, so use your own.
Then I was able to do Git operations inside the devcontainer.
If you rebuild your container, you need to copy the files into the devcontainer again.
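To avoid repeating these steps by hand after every rebuild, you could wrap them in a small helper script (a sketch; the container name and key paths are assumptions):
#!/bin/bash
# copy an SSH key pair into a running devcontainer and fix permissions (names assumed)
CONTAINER=eloquent_ritchie
docker cp /root/.ssh/id_ed25519 "$CONTAINER":/root/.ssh/
docker cp /root/.ssh/id_ed25519.pub "$CONTAINER":/root/.ssh/
docker exec "$CONTAINER" chmod 600 /root/.ssh/id_ed25519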
I also had quite a lot of trouble getting this to work. The following steps might help with troubleshooting:
Check that ssh-agent is running on your host and the key is added
Run ssh-add -l on Windows and expect to see your key listed
Check that VSCode forwards the socket
Search for ssh-agent in the startup log. I had the message:
ssh-agent: SSH_AUTH_SOCK in container (/tmp/vscode-ssh-auth-a56c4b60c939c778f2998dee2a6bbe12285db2ad.sock) forwarded to local host (\\.\pipe\openssh-ssh-agent).
So it seems that VSCode is directly forwarding the Windows SSH agent here (and not an SSH agent running in your WSL).
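Once forwarding is working, you can verify it from a terminal inside the container:
echo $SSH_AUTH_SOCK   # should point at a /tmp/vscode-ssh-auth-*.sock socket
ssh-add -l            # should now list the keys loaded into the Windows agent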

How to deploy docker-compose solution automatically from github to vps over ssh?

What I want to do:
Deploy docker-compose solution from Github to my virtual private server which has docker and docker-compose installed.
I saw that there are GitHub Actions that allow me to copy files over SSH after a push to master, but I don't know how to run docker-compose up on my server after the source has been copied.
On my VPS I have Ubuntu 18.04 installed.
I believe GitHub Actions also allow you to run arbitrary commands on remote servers via ssh (there are a few such actions in their library).
Assuming you copy your docker-compose.yml to /home/user/app/docker-compose.yml, you could run a command like so:
ssh user@yourserver.example.com "cd /home/user/app/ && docker-compose up -d"
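Putting it together, the deploy step run from the workflow (or any CI shell) might look roughly like this, assuming an SSH key for the VPS is available to the runner and the compose file sits in the checkout:
#!/bin/bash
# deploy.sh - copy the compose file and restart the stack on the VPS (host and path assumed)
scp docker-compose.yml user@yourserver.example.com:/home/user/app/
ssh user@yourserver.example.com "cd /home/user/app/ && docker-compose pull && docker-compose up -d"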

Cron in postgresql:alpine docker container

I am using the "plain" postgresql:alpine docker image, but have to schedule a database backup daily. I think this is a pretty common task.
I created a script called backup, stored it in the container in /etc/periodic/15min, and made it executable:
bash-4.4# ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 95 Mar 2 15:44 backup
I tried executing it manually, and that works fine.
My problem is getting crond to run automatically.
If I run docker exec my-postgresql-container crond, the daemon is started and cron works, but I would like to embed this in my Dockerfile:
FROM postgres:alpine
# my backup script, MUST NOT have .sh extension
COPY backup.sh /etc/periodic/15min/backup
RUN chmod a+x /etc/periodic/15min/backup
RUN crond # <- doesn't work
I have no idea how to rewrite or override the commands in the official image. For update reasons I would also like to stay on these images, if possible.
Note: this option is for when you would like to run multiple services in the same container.
Install Supervisord, which will allow you to run both crond and postgresql. The Dockerfile will be as follows:
FROM postgres:alpine
RUN apk add --no-cache supervisor
RUN mkdir /etc/supervisor.d
COPY postgres_cron.ini /etc/supervisor.d/postgres_cron.ini
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
And postgres_cron.ini will be as follows:
[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=true ; (start in foreground if true;default false)
[program:postgres]
command=/usr/local/bin/docker-entrypoint.sh postgres
autostart=true
autorestart=true
[program:cron]
command=/usr/sbin/crond -f
autostart=true
autorestart=true
Then you can start the Docker build process and run a container from your new image. Feel free to modify the Dockerfile or postgres_cron.ini as needed.
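For example, to build and run it (the image name and password are just placeholders):
docker build -t postgres-with-cron .
docker run -d --name my-postgres -e POSTGRES_PASSWORD=mysecret postgres-with-cron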
I had the exact same problem a few months ago. The key aspect is that a container can have only one main process, defined by the ENTRYPOINT and/or CMD in your Dockerfile.
You cannot just swap out postgres for crond, otherwise your database isn't running. It is generally recommended to separate areas of concern by using one service per container.
With that in mind, either use a separate container which runs nothing but crond, so that Docker can track its lifecycle and restart it when/if it fails, the machine restarts, etc.
Or run the jobs via cron on your host using docker exec.
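For the docker exec route, a host crontab entry could look like this (a sketch; container name, database and backup path are assumptions):
# nightly dump at 03:00 via the host's crontab (names and paths assumed)
0 3 * * * docker exec my-postgresql-container pg_dump -U postgres postgres > /backups/db_$(date +\%F).sql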
The third, and in my opinion best (but also most advanced), solution is pg_cron. It is a Postgres extension and therefore runs the jobs in the same database container. Your challenge would be to adapt its configuration and installation.
The easy part should be the postgresql.conf:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
Next, you need to add the pg_cron extension to your image by adjusting the Dockerfile, which you can derive from the official alpine postgres image. The installation of it is described here.

Run parallel simulations in remote unix cluster from matlab

I want to run simulations in parallel on a remote cluster, calling them from MATLAB.
I managed to run them on my local Ubuntu machine using:
unix('parallel -j4 flow > /dev/null :::: Pool.txt');
But when I want to run them on a remote cluster, I really did not manage to make the parallel command work.
The first problem was to avoid entering the password.
For that I used sshpass like this:
unix('sshpass -p password ssh user@cluster.example.com')
That gets me into the server, but it does not continue to the next command line.
I tried so many commands that I do not want to reference them here.
But basically, can someone who understands GNU parallel well tell me how I can connect to a cluster and run the simulations there? Is it better just to make a script on the server and run it from MATLAB?
Any expert advice is highly appreciated.
Your problem is not with GNU Parallel but with configuring ssh. First, you must get ssh set up, then the rest is easy.
So, on your local Ubuntu machine, you need to create your keys:
ssh-keygen -t rsa -b 2048
That will make some files in $HOME/.ssh. You now need to copy the public part of those keys to each and every node of the remote cluster where you want to run your parallel jobs:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub CLUSTERUSERNAME@NODE-0
...
ssh-copy-id -i $HOME/.ssh/id_rsa.pub CLUSTERUSERNAME@NODE-15
e.g.
ssh-copy-id -i $HOME/.ssh/id_rsa.pub fred@192.168.0.100
Now, test that you can ssh into each node:
ssh fred@node2
Then, on your local Ubuntu box, set up your ssh config file, which will be $HOME/.ssh/config:
Host node0
Hostname 192.168.0.100
User fred
...
...
Host node15
Hostname 192.168.0.115
User fred
Now you can just use:
ssh node0
and it will know that means fred@192.168.0.100
Now GNU Parallel will work with:
parallel -S node0,node1,node2
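With that in place, the local example from the question can be pointed at the cluster. A sketch, assuming 4 jobs per node and that the flow binary and its input files exist on the nodes:
# run on the remote nodes instead of locally, 4 jobs per node (node names from ~/.ssh/config)
parallel -S 4/node0,4/node1,4/node2 flow > /dev/null :::: Pool.txt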

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server utilising docker-compose, as per here.
Am I right in thinking that (for the simplest approach) these are the steps I might follow:
Install Docker on the remote system.
Install docker-compose on the remote system.
On the laptop (with the app source code), run ./mvnw package -Pprod docker:build to produce a Docker image of the application.
Copy the image produced by this to the remote server, like this.
Install this image on the remote system.
On the laptop, copy the relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install Docker and docker-compose, install Java and set JAVA_HOME.
There are two approaches:
1. Create the Docker image locally and push it to Docker Hub, if you have a Docker Hub account.
2. Create the Docker image on the server itself.
The second approach would be better to reduce confusion.
Clone your repo to the server, then:
cd <APPLICATION_FOLDER>
Run:
./mvnw package -Pprod docker:build -DskipTests
List the images created:
docker images
You can drop -DskipTests if you are writing test code.
Then run:
docker-compose -f src/main/docker/app.yml up -d
List the running containers:
docker ps -a
View the container logs:
docker logs <CONTAINER_ID>
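Putting the second approach together as a single sequence on the server (the repository URL and placeholder names are assumptions):
git clone <YOUR_REPO_URL>
cd <APPLICATION_FOLDER>
./mvnw package -Pprod docker:build -DskipTests
docker-compose -f src/main/docker/app.yml up -d
docker ps -a
docker logs <CONTAINER_ID>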