I wanted to give deploying Django projects with Ansible a shot, but I'm stuck on what seems to be a pretty basic issue.
I've created a basic playbook to deploy my Postgres server.
---
- hosts: default
  remote_user: myusername
  become: yes
  become_method: sudo
  become_user: postgres
  vars:
    - include: vars/databases.yml
  tasks:
    - name: Ensure Postgres server is running
      service: name=postgresql state=started enabled=yes
    - name: Create postgres database
      postgresql_db:
        name: '{{ db_name }}'
        state: present
        encoding: 'UTF-8'
I run the playbook and I get this error
fatal: [default]: FAILED! => {"failed": true, "msg": "'db_name' is undefined"}
To keep all of my passwords and the like out of version control, I've created a vars directory. It sits in my project structure like this, with all my Ansible YAML files in deploy and all my vars files in the vars subdirectory:
..
├── deploy
│   └── vars
..
├── myproject
├── manage.py
└── utils
# vars/databases.yml
db_name: <database name>
What's going on here?
Update: Added contents of vars/databases.yml as requested.
The variable db_name must be assigned a value before you try to create the database. The error states this plainly: 'db_name' is undefined, i.e. the variable was never assigned.
See the sample code below for an example:
vars:
  dbname: myapp
  dbuser: django
  dbpassword: mysupersecretpassword

tasks:
  - name: ensure database is created
    postgresql_db: name={{dbname}}
Try replacing vars with vars_files like this:
vars_files:
  - vars/databases.yml
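A minimal sketch of how the corrected play could look, assuming the playbook is run from the deploy directory so the relative path resolves (the tasks stay exactly as you had them):

---
- hosts: default
  remote_user: myusername
  become: yes
  become_method: sudo
  become_user: postgres
  # vars_files loads the whole YAML file, so db_name becomes available to the tasks
  vars_files:
    - vars/databases.yml
  tasks:
    - name: Create postgres database
      postgresql_db:
        name: '{{ db_name }}'
        state: present
        encoding: 'UTF-8'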
How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way it does when run locally on my machine?
My Docker-compose based setup uses a secrets entry to expose an API key to a backend component like this (simplified for example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places any file listed in the secrets section under /run/secrets inside the container, which is why the target location is hard-coded to /run/secrets.
I would like to deploy my docker-compose setup on Google Cloud Build using this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have added this secret to Secret Manager and tried to copy it to a local file so docker-compose can find it, like this:
steps:
  - name: gcr.io/cloud-builders/gcloud
    # copy to /workspace/secrets so docker-compose can find it
    entrypoint: 'bash'
    args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
    secretEnv: ['API_KEY']
  # running docker-compose
  - name: 'docker/compose:1.29.2'
    args: ['up', '-d']
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
availableSecrets:
  secretManager:
    - versionName: projects/ID/secrets/API_KEY/versions/1
      env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built:

ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to the docker-compose level like it is when I run it on my local filesystem?
If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is not needed because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax as described in Use secrets from Secret Manager so that it echoes the actual secret to the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should echo the contents of the file during the build, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
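As a rough sketch, assuming the compose file sits at the root of /workspace (so its relative paths resolve against /workspace during the build), the original secrets entry should then pick up the file written in the earlier step without further changes:

# docker-compose.yml (unchanged from the question); ./secrets/api_key.json
# now resolves to /workspace/secrets/api_key.json inside Cloud Build
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json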
I'm trying to reload the test database with data using a dump.
The idea is to prefill postgres:14.1 with the dump before running the tests.
So far I have the following .gitlab-ci.yml, but the DB can't find the dump file.
image: "custom_image:latest"
services:
- "postgres:14.1"
variables:
RAILS_ENV: test
POSTGRES_DB: test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
PGPASSWORD: postgres
POSTGRES_HOST_AUTH_METHOD: trust
DATABASE_URL: "postgresql://postgres:postgres#postgres:5432/test"
pg_restore:
stage: build
image: postgres:14.1
script:
- pg_restore --version
- pg_restore --no-privileges --no-owner --dbname=postgresql://postgres:postgres#0.0.0.0:5432/test db/test.dump
artifacts:
paths:
- ./db
test:
stage: test
dependencies:
- pg_restore
script:
- bundle exec rake db:migrate
- bundle exec rake test
--dbname=postgresql://postgres:postgres@0.0.0.0:5432/test

When you use services:, the database is not on localhost, so 0.0.0.0 is not the correct host here. Instead, the host will be the service hostname postgres. The DATABASE_URL value is what you want to use instead: postgresql://postgres:postgres@postgres:5432/test
You can also define an explicit hostname alias:
services:
  - name: "postgres:14.1"
    alias: mydatabasehostname
Additionally, services are ephemeral and do not carry state between jobs, so your test: job won't see any changes made to the database in any other job. You must set up the DB in every job that needs it, for example by restoring the dump inside the test job itself, as sketched below.
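A minimal sketch of that approach, assuming db/test.dump is present in the job's workspace and the pg_restore client is available in custom_image (both of which are assumptions here):

test:
  stage: test
  script:
    # "postgres" resolves to the service container; DATABASE_URL already points at it
    - pg_restore --no-privileges --no-owner --dbname="$DATABASE_URL" db/test.dump
    - bundle exec rake db:migrate
    - bundle exec rake test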
I'm trying to set up a CI/CD pipeline in GitHub Actions for my Elixir project.
I can fetch dependencies, compile them, check formatting, credo... But when the tests start, I'm not able to reach the PostgreSQL service declared in the YAML.
How can I link both containers? (Elixir and PostgreSQL)
According to the logs shown on GitHub Actions, both containers are on the same Docker network, so they should be reachable from each other using their network aliases. However, when I try to connect to the postgres one, it says NXDOMAIN. Also the ping doesn't work, as expected.
The content of my workflow:
name: Elixir CI

on: push

jobs:
  build:
    runs-on: ubuntu-18.04
    container:
      image: elixir:1.9.1
    services:
      postgres:
        image: postgres
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: my_app
          POSTGRES_PASSWORD: my_app
          POSTGRES_DB: my_app_test
    steps:
      - uses: actions/checkout@v1
      - name: Install Dependencies
        env:
          MIX_ENV: test
        run: |
          cp config/test.secret.ci.exs config/test.secret.exs
          mix local.rebar --force
          mix local.hex --force
          apt-get update -qqq && apt-get install make gcc -y -qqq
          mix deps.get
      - name: Compile
        env:
          MIX_ENV: test
        run: mix compile --warnings-as-errors
      - name: Run formatter
        env:
          MIX_ENV: test
        run: mix format --check-formatted
      - name: Run Credo
        env:
          MIX_ENV: test
        run: mix credo
      - name: Run Tests
        env:
          MIX_ENV: test
        run: mix test
Also, in Elixir I have set up the test config to connect to postgres:5432, but it says the host does not exist.
According to some tutorials and examples I found on the Internet, this configuration looks valid, but nothing I did made it work.
You need to pass the name of the service ("postgres") to the application as POSTGRES_HOST and set the port with POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} (spaces matter).
GitHub Actions dynamically maps the host and port for you.
I wrote a blog post on the subject a couple of days ago.
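As a minimal sketch, those values could be exposed to the test step like this (POSTGRES_HOST and POSTGRES_PORT are assumed variable names; your Elixir config, presumably config/test.secret.exs, has to read them):

- name: Run Tests
  env:
    MIX_ENV: test
    POSTGRES_HOST: postgres
    POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
  run: mix test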
I'm writing an Ansible-playbook to insert a list of secret object into Kubernetes.
I'm using k8s_raw syntax and I want to import this list from a group_vars file.
I can't find the right syntax to import the list of secrets into my data field.
playbook.yml
- hosts: localhost
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
            SKRT: "c2trcnIK"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
  vars_files:
    - "varfile.yml"
varfile.yml
secrets:
  TAMAGOTCHI_CODE: "MTIzNAo="
  FRIDGE_PIN: "MTIzNAo="
First, what does it actually say when you attempt the above? It would help to have the result of your attempts.
Just guessing, but try moving vars_files to before the place where you use the variables. Also, be sure that your indentation is exactly right when you do:
- hosts: localhost
  vars_files:
    - /varfile.yml
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
Reference
Side note: I would debug this immediately without attempting the full task. Remove your main task and, after adding vars_files, print the secrets directly with the debug module. This lets you fine-tune the syntax until it is right without having to run and wait for the more complex play that follows. Reference.
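For instance, a stripped-down play along these lines (just a sketch, reusing your existing varfile.yml) would confirm that the variable actually loads:

- hosts: localhost
  vars_files:
    - varfile.yml
  tasks:
    - name: Print the imported secrets to verify vars_files works
      debug:
        var: secrets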
To import this list from a group_vars file
Put localhost into a group, for example a group called test:
> cat hosts
test:
  hosts:
    localhost:
Put the varfile.yml into the group_vars/test directory
$ tree group_vars
group_vars/
└── test
    └── varfile.yml
Then running the playbook below
$ cat test.yml
- hosts: test
  tasks:
    - debug:
        var: secrets.TAMAGOTCHI_CODE

$ ansible-playbook -i hosts test.yml
gives:
PLAY [test] ***********************************
TASK [debug] **********************************
ok: [localhost] => {
"secrets.TAMAGOTCHI_CODE": "MTIzNAo="
}
PLAY RECAP *************************************
localhost: ok=1 changed=0 unreachable=0 failed=0
The problem was the SKRT: "c2trcnIK" field just under the "{{ secrets }}" line. I deleted it and now it works! Thank you all.
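If you ever need to keep an extra literal key such as SKRT alongside the imported dictionary, one option (just a sketch, not the only way) is Jinja2's combine filter, so data stays a single dictionary:

data: "{{ secrets | combine({'SKRT': 'c2trcnIK'}) }}"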
I have a testApp.war which I'd like to deploy on Tomcat through Docker (Docker is on 10.0.2.157). My testApp works properly only with a Postgres DB and the specified user testUser and password testUserPasswd. I built the following structure:
.
├── db
│   ├── Dockerfile
│   ├── pg_hba.conf
│   └── postgresql.conf
├── docker-compose.yml
└── web
    ├── context.xml
    ├── Dockerfile
    ├── software
    │   └── testApp.war
    └── tomcat-users.xml
The content of all these files is attached below. I start my containers with the command:
docker-compose up -d
However, when I go to the Tomcat manager in a web browser (http://10.0.2.157:8282/manager/html) and try to start my testApp I get:
HTTP Status 404 – Not Found
Type Status Report
Message /testApp/
Description The origin server did not find a current representation
for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.20
What am I doing wrong? Could you help me with this?
db/Dockerfile
FROM postgres:9.5
MAINTAINER riwaniak
ENV POSTGRES_USER testUser
ENV POSTGRES_PASSWORD testUserPasswd
ENV POSTGRES_DB testUser
ADD pg_hba.conf /etc/postgresql/9.5/main/
ADD postgresql.conf /etc/postgresql/9.5/main/
db/pg_hba.conf
local all all trust
host all all 127.0.0.1/32 md5
host all all 0.0.0.0/0 md5
host all
db/postgresql.conf
listen_addresses='*'
web/context.xml
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
web/Dockerfile
FROM tomcat:8.5.20-jre8
MAINTAINER riwaniak
COPY ./software /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
web/tomcat-users.xml
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="tomcat"/>
<role rolename="admin-gui"/>
<role rolename="manager-gui"/>
<user username="tomcat" password="tomcat" roles="tomcat,admin-gui,manager-gui"/>
</tomcat-users>
and finally docker-compose.yml
version: '2'
services:
  testApp:
    build: ./web
    volumes:
      - /path/to/tomcat/folder/web/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/HelpdeskApp/META-INF/context.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/host-manager/META-INF/context.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/manager/META-INF/context.xml
    ports:
      - "8282:8080"
    links:
      - testAppdb
    networks:
      - testAppnet
  testAppdb:
    build: ./db
    ports:
      - "5555:5432"
    volumes:
      - /srv/docker/postgresql:/var/lib/postgresql
      - /path/to/tomcat/folder/db/postgresql.conf:/etc/postgresql/9.5/main/postgresql.conf
      - /path/to/tomcat/folder/db/pg_hba.conf:/etc/postgresql/9.5/main/pg_hba.conf
    command: postgres -c config_file=/etc/postgresql/9.5/main/postgresql.conf
    networks:
      - testAppnet
networks:
  testAppnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
OK, I got the solution!
Thanks @Tarun Lalwani for the support and suggestions.
I had the wrong application.yml configuration in the Tomcat container. Docker maps the IP addresses of my containers, but I shouldn't have used the bare address "10.0.2.157"; I should have used the container name instead. So in my example I had something like this:
(...)
environments:
  development:
    dataSource:
      dbCreate: update
      url: jdbc:postgresql://10.0.2.157:5432/helpdesk_dev
(...)
However, the right solution was to use the name of the Postgres container (testAppdb), so the correct configuration is:
(...)
environments:
  development:
    dataSource:
      dbCreate: update
      url: jdbc:postgresql://testAppdb:5432/test_dev
(...)