Suppose I have my docker networks defined in a variable such as:
docker_networks:
  - name: default
    driver: bridge
  - name: proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.101.0/24'
How would I go about running this with a loop to create these docker networks?
I tried the following; however, the ipam_config parameter causes it to fail if no subnet is defined:
- name: Create networks
  docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config:
      - subnet: '{{ item.ipam_options.subnet | default(omit) }}'
  loop: '{{ docker_networks }}'
If you modify your docker_networks variable so that the value of the ipam_options key is a list of dictionaries:
docker_networks:
  - name: proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.101.0/24'
  - name: no_subnet
    driver: bridge
Then you can rewrite your task like this:
- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config: "{{ item.ipam_options | default(omit) }}"
  loop: '{{ docker_networks }}'
(I would also just rename the ipam_options key to ipam_config, so
that it matches the parameter name.)
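If you would rather keep ipam_options as a plain dictionary, you can instead build the list inside the expression and omit the whole parameter when the key is absent. This works because omit only takes effect at the top level of a module parameter; with default(omit) nested inside a list entry, the placeholder ends up as the subnet value, which is what made the original task fail. A sketch (untested):
- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config: "{{ [item.ipam_options] if item.ipam_options is defined else omit }}"
  loop: '{{ docker_networks }}'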
We are trying to run a chaos experiment and running into this error:
ubuntu@ip-x-x-x-x:~$ kubectl logs -f pod/chaos-testing-hn8c5 -n <ns>
[2022-12-08 16:05:22 DEBUG] [cli:70] ###############################################################################
[2022-12-08 16:05:22 DEBUG] [cli:71] Running command 'run'
[2022-12-08 16:05:22 DEBUG] [cli:75] Using settings file '/root/.chaostoolkit/settings.yaml'
Usage: chaos run [OPTIONS] SOURCE
Try 'chaos run --help' for help.
Error: no such option: --var-file /tmp/token.env
Here is the spec file:
spec:
  serviceAccountName: {{ .Values.serviceAccount.name }}
  restartPolicy: Never
  initContainers:
    - name: {{ .Values.initContainer.name }}
      image: "{{ .Values.initContainer.image.name }}:{{ .Values.initContainer.image.tag }}"
      imagePullPolicy: {{ .Values.initContainer.image.pullPolicy }}
      command: ["sh", "-c", "curl -X POST https://<url> -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>' | jq -r --arg prefix 'ACCESS_TOKEN=' '$prefix + (.access_token)' > /tmp/token.env;"]
      volumeMounts:
        - name: token-path
          mountPath: /tmp
        - name: config
          mountPath: /experiment
          readOnly: true
  containers:
    - name: {{ .Values.image.name }}
      securityContext:
        privileged: true
        capabilities:
          add: ["SYS_ADMIN"]
        allowPrivilegeEscalation: true
      image: {{ .Values.image.repository }}
      args:
        - --verbose
        - run
        - --var-file /tmp/token.env
        - /experiment/terminate-all-pods.yaml
      env:
        - name: CHAOSTOOLKIT_IN_POD
          value: "true"
      volumeMounts:
        - name: token-path
          mountPath: /tmp
        - name: config
          mountPath: /experiment
          readOnly: true
      resources:
        limits:
          cpu: 20m
          memory: 64Mi
        requests:
          cpu: 20m
          memory: 64Mi
  volumes:
    - name: token-path
      emptyDir: {}
    - name: config
      configMap:
        name: {{ .Values.experiments.name }}
We have also tried using --var "KEY=VALUE", which failed with the same error.
Any help with this is appreciated; we have hit a wall at this point.
The Docker image being used is: https://hub.docker.com/r/chaostoolkit/chaostoolkit/tags
The Kubernetes manifest is slightly incorrect.
The environment variable injection worked when passing it like this:
args:
  - --verbose
  - run
  - --var-file
  - /tmp/token.env
  - /experiment/terminate-all-pods.yaml
The option flag and its value need to be two separate list elements: each element of args is passed to the container as a single argument, so "--var-file /tmp/token.env" in one element is parsed as one (unknown) option name.
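Since the chaos CLI is Click-based, the single-token --option=value form should also be accepted, so this variant ought to work as well (untested sketch):
args:
  - --verbose
  - run
  - --var-file=/tmp/token.env
  - /experiment/terminate-all-pods.yaml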
I provisioned alertmanager using Helm (and ArgoCD).
I need to insert the smtp_auth_password value, but not as plain text:
smtp_auth_username: 'apikey'
smtp_auth_password: $API_KEY
How can I achieve this? I have heard about "external secrets", but is there an easier way?
Solution
If you use prometheus-community/prometheus, which includes this alertmanager chart as a dependency, then you can do the following:
Create a secret in the same namespace where your alertmanager pod is running:
k create secret generic alertmanager-secrets \
--from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \
--from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...."
Mount that secret via extraSecretMounts:
alertmanager:
  enabled: true
  service:
    annotations:
      prometheus.io/scrape: "true"
  # contains secret values for opsgenie and slack receivers
  extraSecretMounts:
    - name: secret-files
      mountPath: /etc/secrets
      subPath: ""
      secretName: alertmanager-secrets
      readOnly: true
Use them in your receivers:
receivers:
  - name: slack-channel
    slack_configs:
      - channel: '#client-ccf-ccl-alarms'
        api_url_file: /etc/secrets/slack-api-url  # <------------------- THIS
        title: '{{ template "default.title" . }}'
        text: '{{ template "default.description" . }}'
        pretext: '{{ template "slack.pretext" . }}'
        color: '{{ template "slack.color" . }}'
        footer: '{{ template "slack.footer" . }}'
        send_resolved: true
        actions:
          - type: button
            text: "Query :mag:"
            url: '{{ template "alert_query_url" . }}'
          - type: button
            text: "Silence :no_bell:"
            url: '{{ template "alert_silencer_url" . }}'
          - type: button
            text: "Karma UI :mag:"
            url: '{{ template "alert_karma_url" . }}'
          - type: button
            text: "Runbook :green_book:"
            url: '{{ template "alert_runbook_url" . }}'
          - type: button
            text: "Grafana :chart_with_upwards_trend:"
            url: '{{ template "alert_grafana_url" . }}'
          - type: button
            text: "KB :mag:"
            url: '{{ template "alert_kb_url" . }}'
  - name: opsgenie
    opsgenie_configs:
      - send_resolved: true
        api_key_file: /etc/secrets/opsgenie-api-key  # <------------------- THIS
        message: '{{ template "default.title" . }}'
        description: '{{ template "default.description" . }}'
        source: '{{ template "opsgenie.default.source" . }}'
        priority: '{{ template "opsgenie.default.priority" . }}'
        tags: '{{ template "opsgenie.default.tags" . }}'
If you want to use the email functionality of email_config, simply use the same approach with:
[ auth_password_file: <string> | default = global.smtp_auth_password_file ]
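A minimal sketch of an email receiver, assuming an smtp-password key was added to the secret mounted above (the addresses and smarthost are placeholders):
receivers:
  - name: email
    email_configs:
      - to: 'oncall@example.com'
        from: 'alertmanager@example.com'
        smarthost: 'smtp.example.com:587'
        auth_username: 'apikey'
        auth_password_file: /etc/secrets/smtp-password
        send_resolved: true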
I need to build a docker-compose file based on a YAML file. The following YAML contains the name, the image, and the version of each service:
"services":
- "service": "front"
"image": "acalls-caselog-web-app"
"version": "latest"
- "service": "back"
"image": "acalls-caselog-web-service"
"version": "latest"
- "service": "vb"
"image": "acalls-caselog-vb-service"
"version": "latest"
- "service": "salesforce"
"image": "acalls-caselog-salesforce-app-service"
"version": "latest"
- "service": "tts"
"image": "ydilo-tts-service"
"version": "latest"
- "service": "ai classifier"
"image": "acalls-caselog-ai-classifier-service"
"version": "latest"
Up to now I have used array positions to set each image in the docker-compose file, like this:
version: "3.3"
services:
front:
image: url/{{services[0].image}}:{{services[0].version}}
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
back:
image: url/{{services[1].image}}:{{services[1].version}}
ports:
- "20101:20101"
environment:
profile: preproduction
saleforce:
image: url/{{services[2].image}}:{{services[2].version}}
ports:
- "20103:20103"
environment:
profile: preproduction
But I need a way to generate this dynamically with a loop in the Ansible task, for example, without hard-coding array positions in the docker-compose file.
Main.yml
---
- name: stop container
  ignore_errors: yes
  become: True
  shell:
    cmd: "docker-compose down"
    chdir: dir

- name: set docker-compose
  template:
    src: docker-compose-acalls.yml.j2
    dest: dir/docker-compose.yml
    mode: 0700

- name: Run container
  become: True
  shell:
    cmd: "nohup docker-compose -f docker-compose.yml up -d"
    chdir: dir
You create a template file. You have to play with whitespace control ({%- and -%}) to adjust the positions; I have just given the general idea:
version: "3.3"
services:
{% for item in services %}
{{ item.service }}:
{% if item.image is defined %}
image: url/{{item.image}}:{{item.version}}
{%- endif %}
{% if item.ports is defined %}
ports:
- "{{ item.ports[0] }}"
{%- endif %}
{% if item.extra_hosts is defined %}
extra_hosts:
- "{{ item.extra_hosts[0] }}"
{%- endif %}
{% if item.environment is defined %}
environment:
- profile: {{ item.environment.profile }}
{% endif %}
{% endfor %}
your playbook:
- name: test
  hosts: localhost
  vars_files:
    - reference.yml
  tasks:
    - template:
        src: fileconf.j2
        dest: composedocker.yml
and your reference.yml file:
services:
  - service: "front"
    image: "acalls-caselog-web-app"
    ports:
      - "81:81"
    extra_hosts:
      - "backend:172.32.3.46"
    environment:
      profile: preproduction
    version: "latest"
  - service: "back"
    image: "acalls-caselog-web-service"
    version: "latest"
    ports:
      - "20101:20101"
    environment:
      profile: preproduction
  - service: "vb"
    image: "acalls-caselog-vb-service"
    version: "latest"
  - service: "salesforce"
    image: "acalls-caselog-salesforce-app-service"
    version: "latest"
    ports:
      - "20103:20103"
    environment:
      profile: preproduction
  - service: "tts"
    image: "ydilo-tts-service"
    version: "latest"
  - service: "ai classifier"
    image: "acalls-caselog-ai-classifier-service"
    version: "latest"
result:
version: "3.3"
services:
front:
image: url/acalls-caselog-web-app:latest
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
- profile: preproduction
back:
image: url/acalls-caselog-web-service:latest
ports:
- "20101:20101"
environment:
- profile: preproduction
vb:
image: url/acalls-caselog-vb-service:latest
salesforce:
image: url/acalls-caselog-salesforce-app-service:latest
ports:
- "20103:20103"
environment:
- profile: preproduction
tts:
image: url/ydilo-tts-service:latest
ai classifier:
image: url/acalls-caselog-ai-classifier-service:latest
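If you want a quick sanity check of the rendered file, docker-compose can validate it and print the resolved configuration:
docker-compose -f composedocker.yml config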
I'm trying to install Kubernetes on Google Cloud instances using Ansible, and it says it can't find a match over and over again when I run ansible-playbook -i inventory/mycluster/inventory.ini -v --become --become-user=root cluster.yml:
[WARNING]: Could not match supplied host pattern, ignoring: kube-master
PLAY [Add kube-master nodes to kube_control_plane] ***********************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node
PLAY [Add kube-node nodes to kube_node] **********************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: k8s-cluster
My inventory.ini:
[all]
instance-1 ansible_ssh_host=10.182.0.2 ip=34.125.199.45 etcd_member_name=etcd1
instance-2 ansible_ssh_host=10.182.0.3 ip=34.125.217.86 etcd_member_name=etcd2
instance-3 ansible_ssh_host=10.182.0.4 ip=34.125.112.124 etcd_member_name=etcd3
instance-4 ansible_ssh_host=10.182.0.5 ip=34.125.251.168
instance-5 ansible_ssh_host=10.182.0.6 ip=34.125.231.40
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
instance-1
instance-2
instance-3
[etcd]
instance-1
instance-2
instance-3
[kube-node]
instance-4
instance-5
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
My cluster.yml:
---
- name: Check ansible version
  import_playbook: ansible_version.yml

- name: Ensure compatibility with old groups
  import_playbook: legacy_groups.yml

- hosts: bastion[0]
  gather_facts: False
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }

- hosts: k8s_cluster:etcd
  strategy: linear
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os }

- name: Gather facts
  tags: always
  import_playbook: facts.yml

- hosts: k8s_cluster:etcd
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
    - { role: download, tags: download, when: "not skip_downloads" }

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
I changed - to _ and did some other renaming work, and it still doesn't find a match. I don't understand how this works... would you please help me fix it?
I had the same error, and noticed that the error does not exist in release-2.15 (for example), where node groups are initially written with "-", not "_". So if you don't care about the release number, use 2.15. At least it helped me.
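Alternatively, if you stay on a newer release, the groups have to be renamed consistently everywhere, including the :children section; a sketch of the inventory above with the underscore names that recent Kubespray releases expect (kube_control_plane is the name that replaced kube-master):
[kube_control_plane]
instance-1
instance-2
instance-3

[kube_node]
instance-4
instance-5

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr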
I have the following volume definition in a k8s Deployment manifest
volumes:
  - name: mypvc
    gcePersistentDisk:
      pdName: "{{ .Values.disk.name }}"
      fsType: "{{ .Values.disk.fsType }}"
Is it possible/allowed to add some mount options, as in
volumes:
  - name: mypvc
    gcePersistentDisk:
      pdName: "{{ .Values.disk.name }}"
      fsType: "{{ .Values.disk.fsType }}"
      mountOptions:
        - rsize=10240
        - wsize=10240
        - timeout=600
        - retry=5
The GCEPersistentDiskVolumeSource v1 currently allows only four configuration parameters:
fsType - ext4 | xfs | ntfs
partition - 0 | 1 | ...
pdName - <GCE PD name>
readOnly - true | false
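Mount options are supported one level up, on a PersistentVolume (spec.mountOptions), so one workaround is to wrap the disk in a PV and bind it through a claim. A minimal sketch with placeholder names, capacity, and options (note that NFS-style options like rsize would not apply to a GCE PD):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - noatime
  gcePersistentDisk:
    pdName: my-gce-disk
    fsType: ext4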