How to run an Ansible playbook against a group of servers - deployment

I have a master playbook with a bunch of roles in it:
---
- hosts: target.machine.com
  roles:
    - role: software-install
      become: yes
      become_user: myself
      tags: sw_setup
    - role: another-software-install
      become: yes
      become_user: notmyself
      tags: another_installation
In my hosts file for Ansible, I have
[myservers]
server-one.com
server-two.com
I would like to run the software-install role on the group of servers under the [myservers] group. Sorry, I am new to this; any help will be appreciated. Thanks.
Edit:
I also tried running the following:
ansible-playbook -s masterPlaybook.yml -K -l myservers --tags another_installation
but it fails with an error stating invalid host pattern: server-one.

It turns out I was going in the right direction: -l myservers is correct. The "invalid host pattern" error appeared because the hosts under myservers also need to be matched by the hosts field in my masterPlaybook.yml. This link helped: SO link.
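For example, a minimal sketch of the adjusted masterPlaybook.yml (only the hosts line changes from the playbook above):
---
- hosts: myservers
  roles:
    - role: software-install
      become: yes
      become_user: myself
      tags: sw_setup
Then limit the run to the group and tag:
ansible-playbook masterPlaybook.yml -K -l myservers --tags sw_setup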


requested access to the resource is denied [duplicate]

I am using Laravel 4.2 with Docker. I set it up locally and it worked without any problem, but when I try to set it up online using the same procedure, I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Is it something related to creating a repository at https://cloud.docker.com/, or do I need to docker login on the command line?
After days of study I am still not able to figure out what the fix could be in this case, and what the right steps are.
I have the complete code and can paste it here if certain parts need checking.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'.
I struggled with this. I realized the image no longer exists at https://hub.docker.com.
Just make sure to write the image name correctly!
In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct image name is:
FROM ubuntu:16.04
The message usually comes when you use the wrong image name. Check whether the image exists on the Docker registry with the correct tag.
That advice helped me; in my case:
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry, so I had to create a secret as shown here, and then reference the image pull secret in the pod spec, as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
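For reference, the pull secret referenced above can be created with something along these lines (all values are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>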
November 2020 and later
If this error is new and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
from CircleCI and other similar tools that pull from Docker Hub, or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image.
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue:
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
It turned out the image had been moved to a different name, so I suggest re-checking the image name on Docker Hub.
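At the time of writing, Microsoft's SQL Server image is hosted on Microsoft's own registry rather than Docker Hub, so a pull along these lines should work (the tag is an assumption):
docker pull mcr.microsoft.com/mssql/server:2019-latest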
I solved this by adding the language name in front of the tag in the base image:
FROM python:3.7-alpine
I had the same error message, but for a totally different reason.
Being new to Docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But the run command wants the id of an image, not a container.
To start an existing container, Docker wants
docker start -i <crypticalId>
In my case I was using a custom image and the Docker baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
because the custom image was only present locally after I'd explicitly built it in the Minikube Docker environment.
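For context, a minimal sketch of building an image directly into Minikube's Docker daemon so that IfNotPresent can find it locally (the image name is a placeholder):
# point the shell's docker client at Minikube's internal Docker daemon
eval $(minikube docker-env)
# build the custom image inside Minikube so the kubelet finds it locally
docker build -t my-custom-image:dev .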
I had this because I inadvertently removed the AS tag from my first image:
For example:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full registry path to the image in the docker-compose YAML,
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might possibly indicate something else.
In my case I had defined another Docker image elsewhere, from which the current one inherited its settings (in docker-compose.yml):
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall the previously built image was gone, so docker-compose up could not find it. I had to rebuild it first:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case you might have defined your own php image.
If the repository is private you have to be authorized to download from it. You have two options: run the docker login command, or place the config file generated at login (~/.docker/config.json) on the machine.
If you have more than one stage in the Docker build process, read this solution:
This error message is completely misleading.
If you have a multi-stage Dockerfile and want to copy data from the first to the second stage, you must label the first stage (e.g. build) and reference it by that label:
# stage 1
FROM <image> AS build
...
# stage 2
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data, so you'll have to reauthenticate with your registry provider. With AWS for example:
aws ecr get-login --region us-west-2 --no-include-email
Then copy and paste the resulting "docker login ..." command to authenticate Docker.
Source: Amazon ECR Registries
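Note that aws ecr get-login was removed in AWS CLI v2. The v2 equivalent (a sketch; the account ID and region are placeholders) is:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com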
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to accept a separate agreement of terms on their site, as is the case with Oracle's container registry. It allows you to docker login fine, but pulling the image still won't work until you go to their site and agree to their terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect; on Docker Hub, the image name is mongo.
If you don't have an image with that name locally, Docker will try to pull it from Docker Hub, but there is no such image there.
Or simply try "docker login".
If you are using multiple Dockerfiles, don't forget to run the build for all of them. That was my case.
I had to run docker pull first, then run docker-compose up again, and then it worked:
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running "docker pull scrapinghub/splash" multiple times in PowerShell solved the issue for me.
If it was caused by AWS EC2 and ECR due to a naming issue (happens with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the image URI of the image, available in the ECR row itself as "Copy URI":
docker pull Image_URI
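For example (the account ID, region, and repository are hypothetical):
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest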
I saw this message and thought something was wrong with my Docker authentication. However, I realized that Docker Hub only allows one private repository on the free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
I got the same problem, but nothing worked. Then I understood that I needed to run the .sh (.ps1) build script first, before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
I had to first run docker-build.sh on a Unix (Mac) machine, or docker-build.ps1 on Windows:
sh docker-build.sh
In my case it builds an image.
Only then, after the image has been built, can I run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
# resolve the directory this script lives in
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f $DIR/Dockerfile -t $ARCH/$APP:$VERSION .
I had misspelled nginx as nignx in my Dockerfile.
In my case the solution was to re-create the Dockerfile through Visual Studio, and everything worked perfectly.
I hit the same issue.
I solved it by logging in:
docker login -u your_user_name
I was then prompted for my Docker Hub password, and the rest of the commands worked perfectly after a successful login.
Someone might come across the same error for different reasons than those already presented, so let me share:
I got the same error when using Docker multi-stage builds (multiple FROM <> AS <>).
I had forgotten to remove one COPY --from=<> <> line; after removing that COPY it worked fine.
Exceeded Docker Hub's limit on free repos:
Despite first executing
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the web GUI, under Settings > Visibility Settings, I noticed:
Using 2 of 1 private repositories.
This told me that I had exceeded the limit on Docker Hub's free account. However, removing a previous image didn't clear the error...
The fix:
Indeed, the error message in my case was a red herring; it's not related to authentication issues.
Deleting just the images exceeding the allowed limit did NOT clear the error, however.
To get past the error you need to delete ALL the images in your FREE Docker Hub account, then run a new build pushing the image to your account.
Your pull command will now succeed.

How to connect Postgres to Ansible?

Objective:
My objective is to connect to Postgres 9.3 using Ansible 2.8.3 and perform Postgres operations with Ansible.
I have created a YAML file that installs Postgres; this file also creates a database.
I tried resolving the error below by changing the contents of the sudoers file, but I damaged the file, forcing me to reinstall Ubuntu and Ansible.
Ansible Code:
- hosts: localhost
  become: yes
  gather_facts: no
  tasks:
    - name: ensure apt cache is up to date
      apt: update_cache=yes
    - name: ensure packages are installed
      apt: name={{ item }}
      with_items:
        - postgresql
        - libpq-dev
        - python-psycopg2

- hosts: localhost
  become: yes
  become_user: emgda
  gather_facts: no
  vars:
    dbname: myapp
    dbuser: emgda
    dbpassword: Entrib!23
  tasks:
    - name: ensure database is created
      postgresql_db: name={{ dbname }}
    - name: ensure user has access to database
      postgresql_user: db={{ dbname }} name={{ dbuser }} password={{ dbpassword }} priv=ALL
    - name: ensure user does not have unnecessary privilege
      postgresql_user: name={{ dbuser }} role_attr_flags=NOSUPERUSER,NOCREATEDB
    - name: ensure no other user can access the database
      postgresql_privs: db={{ dbname }} role=PUBLIC type=database priv=ALL state=absent
...
After running this file I get the error below:
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
NOTE: Can anyone kindly help me resolve this issue? I am new to Ansible. I am following this link to practice running Ansible scripts.
You've set become: yes in your playbook, so Ansible is trying to switch to the root user. Judging by the error message, sudo: a password is required, you didn't pass the --ask-become-pass option when running the playbook and didn't set up passwordless sudo for your Ansible user.
So you need to run your playbook with the --ask-become-pass option, or set up the ability to use sudo without a password for the user Ansible connects as.
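For example (the playbook filename is a placeholder):
ansible-playbook your-playbook.yml --ask-become-pass
Ansible will then prompt for the sudo password before running the plays.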
Escalation works fine in the first play:
- hosts: localhost
  become: yes
The default become_user is root. This means that the user who is running the playbook (see ansible_user) is able to escalate privilege, i.e. sudo su -.
The second play escalates to user emgda, meaning the user who is running the playbook must escalate privilege with sudo su emgda:
- hosts: localhost
  become: yes
  become_user: emgda
This requires a password, which is missing, resulting in the error:
sudo: a password is required
The solutions are:
1) provide the password on the command line with --ask-become-pass, or
2) provide the password in the variable ansible_become_password, or
3) configure sudoers to escalate the privilege without a password:
<user-running-playbook> ALL=(ALL) NOPASSWD: ALL
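For option 2, a minimal sketch of setting the variable per host in the inventory (the value is a placeholder and should normally be vaulted):
localhost ansible_connection=local ansible_become_password=YourSudoPassword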

Kubernetes Container Command

I'm working with Neo4j in Kubernetes.
For a showcase, I want to fill the Neo4j in the pods with initial data, which I can do with a Cypher file I have in the /bin folder, using cypher-shell.
So basically I start the container and run cat bin/initialData.cypher | bin/cypher-shell.
I've validated that this works by running it inside the pod via kubectl exec -it <pod> -- /bin/bash.
However, no matter how I try to map this to spec.containers.command, it fails.
Currently my best guess is
spec:
  containers:
    - command:
        - /bin/bash
        - -c
        - |
          cd bin
          ls
          cat initialData.cypher | cypher-shell
which does not work. It prints the ls output correctly but then throws a connection refused, and I have no idea where it is coming from.
edit: Updated
edit: Updated
Your spec is along the right lines, but the syntax is wrong. Try it like this:
spec:
  containers:
    - command: ["/bin/bash"]
      args: ["-c", "cat import/initialData.cypher | bin/cypher-shell"]
Update:
In your neo4j.conf you have to uncomment the lines related to using the neo4j-shell:
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces).
dbms.shell.host=127.0.0.1
# The port the shell will listen on, default is 1337.
dbms.shell.port=1337
Exec seems like the better way to handle this but you wouldn’t usually use both command and args. In this case, probably just put the whole thing in command.
I've found out what my problem was.
I did not realize that command is not tied to the container's initialisation lifecycle, meaning the command was executed before Neo4j had started inside the container.
Basically, using command was the wrong approach for me.
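For completeness, one common alternative (not from the original answer, just a sketch) is to leave the image's entrypoint alone and seed the data from a postStart lifecycle hook that retries until Neo4j accepts connections:
spec:
  containers:
    - name: neo4j
      image: neo4j   # image/tag assumed; use whatever the pod already runs
      lifecycle:
        postStart:
          exec:
            command:
              - /bin/bash
              - -c
              - |
                # retry until the database accepts connections, then seed it
                # (add credentials flags if authentication is enabled)
                until echo 'RETURN 1;' | bin/cypher-shell; do sleep 5; done
                cat bin/initialData.cypher | bin/cypher-shell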

How to execute a specific task as an unprivileged user from an Ansible role when connecting as root

I tried to use the privilege-escalation feature of Ansible to run a specific task within a role as an unprivileged user, but the task is still executed by the root user with which I run the playbook that calls the role.
My problem relates to creating a new DATABASE on DB2 for LUW, after installing and configuring the DB2 product as the root user in the same role. I have a shell script which creates a new DATABASE, but it should be run as db2inst1 (not root).
I tried become, become_user and become_method as suggested in the official Ansible docs and some threads here on Stack Overflow.
Here is an extract of my Ansible role:
- name: Execution of the creation script
  become: yes
  become_method: su
  become_user: db2inst1
  shell: /home/CreateDb.sh TESTDB
OR:
- name: Creation of a test DB
  script: CreateDb.sh TESTDB
  become: yes
  become_method: su
  become_user: db2inst1
I have also added this line to my ansible.cfg:
allow_world_readable_tmpfiles=True
I have also upgraded the Ansible package from 2.0.1 to 2.1, but this has no effect and the task still runs as root.
I run my playbook as follows:
ansible-playbook playbooks/db2-test.yml -u root -k
I don't know what I am missing; please help me.
Thanks in advance!
It is important to distinguish between the user Ansible connects to the target machine as, and the user the task runs as (becomes). Both of the examples you pasted (the script module and the shell module) look roughly correct. What indications do you see that those tasks are still running as root? I would add -vvvv to your ansible-playbook run to see what Ansible is doing in greater detail, including user information.
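One quick way to check (a sketch, not part of the original answer): add a temporary task that records the effective user:
- name: Check which user the task actually runs as
  become: yes
  become_method: su
  become_user: db2inst1
  command: whoami
  register: effective_user

- name: Print the effective user
  debug:
    var: effective_user.stdout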

Ansible synchronize asking for a password

I am using Ansible (1.9.2) to deploy some files to a Red Hat 6.4 server.
The playbook looks something like this:
- name: deploy files
  hosts: web
  tasks:
    - name: sync files
      sudo: no
      synchronize:
        src={{ local_path }}
        dest={{ dest_path }}
And to kick this off I run something like the following
ansible-playbook -i myinventory myplaybook.yml -u DOMAIN\\user --ask-pass
When I start the play I enter my password at the prompt, and facts are then obtained successfully; however, as soon as the synchronize task is reached, another prompt asks for my password again, like the following:
DOMAIN\user#hostname's password:
If I enter my password again the deploy completes correctly.
My questions are
How can I fix or work around this, so that I do not have to enter my password for every use of the synchronize module?
Is this currently expected behaviour for the synchronize module? Or is this a bug in Ansible?
I cannot use ssh keys due to environment restrictions.
I do not want to use the copy module for scalability reasons.
Things I have tried
I have seen a number of other questions on this subject, but I have not been able to use any of them to fix my issue or to understand whether this is expected behavior:
Ansible synchronize prompts passphrase even if already entered at the beginning
Ansible prompts password when using synchronize
https://github.com/ansible/ansible/issues/5934
https://github.com/ansible/ansible/issues/7071
The Ansible docs are generally excellent, but I have not been able to find anything about this in the official docs.
I have tried specifying the user and password in the inventory file and not using the --ask-pass and -u parameters. While I then do not have to enter the password to collect facts, the synchronize module still requests my password.
I have tried setting --ask-sudo-pass as well, but it did not help.
I have been using a CentOS 7 control box, but I have also tried an Ubuntu 14.04 box.
Can anyone help?
Why not use an inventory like the one below, encrypted with Vault (ansible-playbook --ask-vault-pass ...)?
[targets]
other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan ansible_ssh_pass=foobar
other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan ansible_ssh_pass=foobar123
synchronize will ask you for a password if your Ansible server credentials differ from those of your target host. I've tried many proposed workarounds, but none of them worked...
Eventually I had to go back to the file module, using --sftp-extra-args to achieve what I needed. It did the trick.
To pass a password to the synchronize module you can use the --password-file option, like so:
tasks:
  - name: test_rsync
    synchronize:
      mode: pull
      src: rsync://user@host/your/remote/path
      dest: /your/local/path/
      rsync_opts:
        - "--password-file=/path/to/password_file"
I used the shell module for that:
- name: test_rsync
  shell: rsync -a --delete --rsh='/usr/bin/sshpass -p "{{ pass }}" ssh -o StrictHostKeyChecking=no -l $RemoteUser' {{ local_path }} $RemoteUser@{{ inventory_hostname }}:/{{ dest_path }}
  become: false
  delegate_to: localhost # if needed
The password is encrypted with Ansible Vault and saved under vars/main.yml.
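For reference, a sketch of the Vault workflow assumed here (the file path is taken from the answer above; the playbook name is a placeholder):
ansible-vault encrypt vars/main.yml
ansible-playbook your-playbook.yml --ask-vault-pass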