How to create a user on a DigitalOcean droplet and grant SSH permissions using Terraform?

I am using Terraform to provision a server on DigitalOcean, and I am using a "remote-exec" provisioner block to run some Linux commands on the server. The issue is that I need to run some Docker commands as a new user, so I must create that user and assign all the required permissions, including SSH access, but I can't find my way around it.
I added the block below to the configuration file and got errors; the user does not seem to be able to perform SSH activities, and my guess is that it is a permissions problem.
provisioner "remote-exec" {
inline = [
"apt-get update",
"sudo apt install docker.io -y",
"sudo snap install docker",
"useradd -m chainsafe",
"sudo usermod -aG ssh chainsafe",
"apt-get update",
"systemctl enable forest.service",
"docker run -p 1234:1234 --rm --detach ghcr.io/chainsafe/forest:latest --encrypt-keystore false --auto-download-snapshot --chain calibnet",
"systemctl start forest.service",
]
connection {
type = var.type
user = var.user
private_key = file("~/sammy")
host = self.ipv4_address
agent = var.agent
}
}
I was expecting the user to be created so that the connection block could use the new user to run the docker command. Instead, Terraform gets stuck trying to perform the process and the SSH handshake fails, hence the errors.
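One approach that may work here (a minimal, untested sketch): create the user and install an SSH public key for it inside the same remote-exec that already connects as root, then let a second provisioner connect as that user. The docker group and the reuse of root's authorized_keys are assumptions on my part, not taken from the original configuration:

inline = [
  "useradd -m -s /bin/bash chainsafe",
  "usermod -aG sudo,docker chainsafe",                   # docker group exists once docker.io is installed
  "mkdir -p /home/chainsafe/.ssh",
  "cp /root/.ssh/authorized_keys /home/chainsafe/.ssh/", # assumes the same key should also log in as chainsafe
  "chown -R chainsafe:chainsafe /home/chainsafe/.ssh",
  "chmod 700 /home/chainsafe/.ssh",
  "chmod 600 /home/chainsafe/.ssh/authorized_keys",
]

On a stock Ubuntu droplet, sshd does not restrict logins to an "ssh" group (unless AllowGroups is configured), so the usermod -aG ssh step is usually unnecessary; membership in the docker group is what lets the new user run docker commands.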

Related

Got permission denied while trying to connect to the Docker daemon socket while executing docker stop

I have 3 containers running on Docker, and I need to stop all of them using the following:
sudo docker stop $(docker ps -q)
When I run the command I get this message:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/json: dial unix /var/run/docker.sock: connect: permission denied
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
I did some searching, and the cases where that message shows up do not apply to my environment. I'm using Ubuntu 16.04 LTS with Docker version 17.09.0-ce, build afdb6d4.
What does this message mean?
sudo usermod -a -G docker $USER
Reboot, then run:
docker container run hello-world
It worked for me on Ubuntu 18.2.
If you are getting "permission denied", that probably means you haven't added your user to the group that is allowed to operate Docker. To fix that, go to your terminal and type:
sudo usermod -aG docker <name-of-user-to-grant-permission>
The 'docker' argument is the group created when Docker is installed, and you can check that by typing:
getent group | grep docker
And the second argument is the user you are adding to the group. You can check the list of users by typing:
getent passwd
For more information about the usermod command, see man usermod.
UPDATED:
I installed Docker again and just remembered that after you apply this command you need to restart your machine.
It seems your user cannot run the docker command, so you need to run the part in parentheses via sudo as well:
sudo docker stop $(sudo docker ps -q)
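One addition of my own: instead of rebooting, starting a fresh login session is usually enough for the new group membership to take effect. For example:

newgrp docker   # open a shell with the docker group active
docker ps       # should now work without sudo

Logging out and back in achieves the same thing, since group membership is evaluated at login.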

Unable to login as Jenkins user

I'm trying to set up Jenkins for one of my projects but get a "host key verification failed" error.
Now I'm trying to set up an SSH key for my jenkins user but have issues logging in as the jenkins user.
sudo su -s /bin/bash jenkins
When I run the above command it takes me to
bash-4.1$
instead of a prompt for the jenkins user.
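For what it's worth, the plain bash-4.1$ prompt does not by itself mean the switch failed: the jenkins account simply has no profile that sets a fancy PS1. A quick sketch for verifying the user and creating the key (the Git host below is a placeholder, not from the question):

sudo su -s /bin/bash jenkins
whoami                      # should print "jenkins"
ssh-keygen -t rsa -b 4096   # key is created under the jenkins home directory
ssh -T git@example.com      # first connection records the host key, addressing "host key verification failed"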

postgresql: pg_ctl status shows no server running when the server is running as a windows service

I have PostgreSQL 9.4 (not installed via an installer, rather self-configured), which is also registered as a Windows service. Now I am trying to check the status of the server using pg_ctl.exe status -D data_dir_path, but it only shows the status when I start the console as admin.
My final goal is to be able to shutdown/ start the database server without admin rights. Is it possible to configure PostgreSQL so that I can start/stop the servers locally without admin rights?
As far as I read in the PostgreSQL documentation, the service can be registered to a user using the [-U username] [-P password] arguments, but I am not sure whether this is the database user or the local Windows user. I tried registering the service using the following command, but it does not install the service, and I do not see any logs either. The command follows:
pg_ctl.exe register -N service_name -U database_user -P database_user_password -D data_dir_path -S auto -o "-p port"
Thanks in advance
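For reference, the -U and -P options of pg_ctl register refer to the Windows account the service runs as, not a database role. A sketch with placeholder names, assuming a local (non-domain) account; the leading .\ is the usual Windows notation for a local account:

pg_ctl.exe register -N postgresql-9.4 -U .\local_windows_user -P windows_password -D "C:\pgsql\data" -S auto

The account also needs read/write access to the data directory, or the service will fail to start.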

Deployment with only SSH Key and dockerfile

Excuse my dev-ops naiveté, but I assume all you need to deploy to a machine is a proper SSH key, a port to expose, the machine's IP address, a login, and the code to deploy.
So are there any simple solutions that deploy code to a remote server with the only input being an SSH key, a Dockerfile and the code itself? I'm thinking it could be set up in a deterministic (almost functional) manner where the input is server IP address, login, and the output is a running server.
I've tried setting up Dokku on digital ocean (https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-dokku-application) and that requires a DNS record, and git. I don't need those as dependencies.
Thanks
If I understand your question correctly, you don't need anything more than scp, ssh, and a couple of shell scripts.
Let's say you want to deploy your code from serverA to serverB.
On serverB, create a directory with your Dockerfile. Also, create a shell script, let's call it build_image.sh, that runs your docker build command using sudo.
Also, on serverB, create a shell script that builds your code from source (if necessary).
Finally, on serverB, create a shell script that calls your code build script, your docker build script and at the end runs your new docker image. Let call this script do_it_all.sh.
Make sure that you chmod 755 all shell scripts.
Now, on serverA, you have a directory with the source code. scp that directory to serverB into the directory with the Dockerfile.
Next, from serverA use ssh to call do_it_all.sh on serverB. This will build your code, build your image and deploy a container without the need for extra software, packages, git, DNS records, etc.
You can even automate this process using cron or something else to have nightly deployments, if you wish, or deployments under other conditions.
Example scripts/commands:
On serverB:
build_image.sh:
#!/bin/bash
sudo docker build -t my_image .   # the trailing dot is the build context, required by docker build
build_code.sh (optional, adjust to your code):
#!/bin/bash
cd /path/to/my/code
./configure
make
do_it_all.sh:
#!/bin/bash
cd /path/to/my/dockerfile
sudo docker stop my_container #stop the old container
sudo docker rm my_container #remove the old container
sudo docker rmi my_image #remove the old image
./build_code.sh #comment out if not needed
./build_image.sh
sudo docker run -d --name my_container my_image
On serverA:
scp -r /path/to/my/code serverB:/path/to/my/dockerfile
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'
That should be it. Adjust for your system.
To deploy to a brand new system, just write a script on serverA that uses ssh to create the necessary directories on serverB (ssh serverB 'mkdir -p /path/to/dockerfile'). Next, copy your Dockerfile, your build scripts, and your code from serverA to serverB using scp. Then run do_it_all.sh on serverB from serverA using ssh.
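For the nightly deployments mentioned above, a single crontab entry on serverA is enough. A sketch using the example paths (this assumes the cron user has a passwordless SSH key for serverB, since cron cannot answer prompts):

# crontab -e on serverA: deploy every night at 02:00
0 2 * * * scp -r /path/to/my/code serverB:/path/to/my/dockerfile && ssh serverB '/path/to/my/dockerfile/do_it_all.sh'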

Capistrano is hanging when prompting for SUDO password to an Ubuntu box

I have a capistrano deployment recipe I've been using for some time to deploy my web app and then restart apache/nginx using the sudo command. Recently cap deploy is hanging when I try to execute these sudo commands. I see the output:
"[sudo] password for "
With my server name and the remote login, but this is not a real password prompt: the cap shell just hangs waiting for more output and does not let me type my password to complete the remote sudo command.
Is there a way to fix this or a decent workaround? I would rather not remove the sudo password prompt for my remote user's web-restart commands.
This seems to happen when connecting to CentOS machines as well. Add the following line to your Capistrano deploy file:
default_run_options[:pty] = true
Also make sure to use the sudo helper instead of executing sudo in your run commands directly. For example:
# not
run "sudo chown root:root /etc/my.cnf"
# but
sudo "chown root:root /etc/my.cnf"
The other advice may be sound, but I found that once I updated to Capistrano 2.5.3 the problem went away. I need to make sure I stop running the default versions of tools that came with my OS.
# prevent sudo prompting for password
set :sudo_prompt, ""
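Putting the pieces from these answers together, a Capistrano 2 deploy file might look like the sketch below; the task name and the restart command are placeholders, not from the question:

# config/deploy.rb
default_run_options[:pty] = true   # allocate a pty so password prompts reach you
set :sudo_prompt, ""               # or silence the sudo prompt entirely

task :restart_web, :roles => :app do
  # use the sudo helper rather than run "sudo ..."
  sudo "/etc/init.d/nginx restart"
end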