I want to know how multiple values files are applied in Helmsman.
I have something like the following in my Helmsman DSF file. Will values-common.yaml be applied first and then values-dev.yaml, so that the common values are overridden by the dev values? Or does the order not matter and the files simply get merged? If there is a conflict, e.g. the common values have enabled: false and the dev values have enabled: true, which one will be picked up?
prometheus-msteams:
  enabled: true
  group: infra
  namespace: kube-system
  chart: prometheus-msteams/prometheus-msteams
  version: 0.4.3
  valuesFiles:
    - config/prometheus-msteams/values-common.yaml
    - config/prometheus-msteams/values-dev.yaml
  priority: -450
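For concreteness, this is the kind of conflict I mean (hypothetical contents of the two files):

# config/prometheus-msteams/values-common.yaml
enabled: false

# config/prometheus-msteams/values-dev.yaml
enabled: true

Will the release end up with enabled: true because values-dev.yaml is listed last, or is the result undefined?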
I am trying to filter out the deployments that are not of my current version using Ansible.
- name: Filter and get old deployment
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Deployment
    namespace: "my_namespace"
    label_selectors:
      - curr_version notin (1.1.0)
  register: old_deployments
I expected the output to give the list of deployments whose curr_version is not equal to 1.1.0, but instead I am getting this error:
{"level":"error","ts":1665557104.5018141,"logger":"proxy","msg":"Unable to convert label selectors for the client","error":"invalid selector: [curr_version notin (1.1.0)]","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
I referenced the pattern matching from here: https://github.com/abikouo/kubernetes.core/blob/08596fd05ba7190a04e7112270a38a0ce32095dd/plugins/module_utils/selector.py#L39
According to that pattern, the above selector looks fine.
I even tried changing the selector line to the following (for testing purposes):
- curr_version notin ("1.1.0")
But then I get the errors below.
{"level":"error","ts":1665555657.2939646,"logger":"requestfactory","msg":"Could not parse request","error":"unable to parse requirement: values[0][curr_version]: Invalid value: \"\\\"1.1.0\\\"\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
{"level":"error","ts":1665555657.2940943,"logger":"proxy","msg":"Unable to convert label selectors for the client","error":"invalid selector: [curr_version notin (\"1.1.0\")]","stacktrace":"net/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/lib/golang/src/net/http/server.go:1930"}
I am not sure where I am going wrong, and I could not find a workaround anywhere.
My guess is that the second error occurs simply because the label selector is a string and the pattern does not allow quotes inside it, which is understandable.
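One fallback I am considering (an untested sketch; it moves the comparison to the Ansible side instead of the label selector, and the selectattr step drops any Deployment that has no curr_version label at all):

- name: Get all Deployments in the namespace
  kubernetes.core.k8s_info:
    api_version: apps/v1        # Deployments live in apps/v1
    kind: Deployment
    namespace: "my_namespace"
  register: all_deployments

- name: Keep only Deployments whose curr_version label is not 1.1.0
  set_fact:
    old_deployments: >-
      {{ all_deployments.resources
         | selectattr('metadata.labels.curr_version', 'defined')
         | rejectattr('metadata.labels.curr_version', 'equalto', '1.1.0')
         | list }}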
Other information which might be useful
kubernetes.core.k8s version - 2.2.3
operator-sdk version - 1.23.0
ansible version - 2.9.27
python - 3.6.8
EDIT
I am using the ose-ansible-operator image v4.10 to build an operator. I do not see this error when running locally, but I do see it when running inside the operator.
My k8s.yaml inventory file is:
plugin: k8s
connections:
  - kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml'
    context: 'user1#testeks.us-east-1.eksctl.io'
My Ansible playbook, test_new.yml:
- hosts: localhost
  tasks:
    - name: Create a k8s namespace
      k8s:
        name: testing3
        api_version: v1
        kind: Namespace
        state: present
It looks like the ansible-playbook command is not picking up the k8s.yaml inventory. I am also not sure why I am getting the "invalid characters {'-'} in group name" warnings.
Please let me know whether the above inventory file and playbook look good, or whether there is anything I am missing.
ansible-playbook -vvvv -i k8s.yaml -vvv ./test_new.yml
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /Users/user1/Documents/Learning/ansible/k8s.yaml as it did not pass its verify_file() method
script declined parsing /Users/user1/Documents/Learning/ansible/k8s.yaml as it did not pass its verify_file() method
Not replacing invalid character(s) "{'-', '9'}" in group name (909676E2B4F81625BF5994625D3353C9-yl4-us-east-1-eks-amazonaws-com)
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
Not replacing invalid character(s) "{'-'}" in group name (namespace_add-ons)
Not replacing invalid character(s) "{'-'}" in group name (namespace_add-ons_pods)
Not replacing invalid character(s) "{'.', '/', '-'}" in group name (label_app.kubernetes.io/instance_aws-cluster-autoscaler)
I'm not sure where you got that you need the Kubernetes parameters specified in your inventory file. If you look at the k8s module documentation it says that kubeconfig and context are specified in the playbook or as environment variables.
Your inventory should look something like this:
all:
  hosts:
    host.where.can.access.the.kubeapiserver.com:
Then your playbook:
- name: Create a k8s namespace
  k8s:
    name: testing3
    api_version: v1
    kind: Namespace
    state: present
    kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml' # 👈 this can be replaced by the K8S_AUTH_KUBECONFIG env variable
    context: 'user1#testeks.us-east-1.eksctl.io' # 👈 this can be replaced by the K8S_AUTH_CONTEXT env variable
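If you prefer the environment-variable route, one untested sketch is to set them via the task's environment keyword instead of exporting them in your shell, reusing the same paths from your post:

- name: Create a k8s namespace
  k8s:
    name: testing3
    api_version: v1
    kind: Namespace
    state: present
  environment:
    K8S_AUTH_KUBECONFIG: /Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml
    K8S_AUTH_CONTEXT: 'user1#testeks.us-east-1.eksctl.io'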
Based on the formatting of your post, it looks like your inventory file contains improper syntax. It should look like this:
plugin: k8s
connections:
  - kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml'
    context: 'user1#testeks.us-east-1.eksctl.io'
Remember that spaces are important.
For deprecation warnings, be sure to read up on these issues:
https://github.com/ansible/ansible/issues/56930
https://github.com/kubernetes-sigs/kubespray/issues/4830
Usage of hyphens in inventory group names was deprecated in Ansible 2.8 due to Python parser errors when using dot syntax. The auto-transformation can be disabled by adding force_valid_group_names = never to your Ansible config file. Similarly, deprecation warnings can be suppressed by adding deprecation_warnings = False, though this is not recommended.
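For reference, both settings live under the [defaults] section of your ansible.cfg; a minimal sketch:

[defaults]
force_valid_group_names = never
deprecation_warnings = False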
Below is how I'm trying to add a custom field name in my Filebeat 7.2.0 configuration.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - D:\Oasis\Logs\Admin_Log\*
      - D:\Oasis\Logs\ERA_Log\*
      - D:\OasisServices\Logs\*

processors:
  - add_fields:
      fields:
        application: oasis
With this, I'm expecting a new field called application whose value will be 'oasis', but I don't get one.
I also tried
fields:
  application: oasis/'oasis'
Help me with this.
If you want to add a custom field to every log entry, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis
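One thing to be aware of (at least in Filebeat 7.x): by default a custom field defined this way is nested under a top-level fields key in the event, so it shows up as fields.application rather than application. If you want it at the root of the event, you can additionally set fields_under_root on the input, for example:

  fields.application: oasis
  fields_under_root: true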
There are two ways to add custom fields on filebeat, using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
  - type: log
    paths:
      - 'D:/path/to/your/files/*'
    fields:
      custom_field: 'custom field value'
    fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
  - type: log
    paths:
      - 'D:/path/to/your/files/*'

processors:
  - add_fields:
      target: ''
      fields:
        custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value custom field value in the root of your document.
The fields option can be used per input and the add_fields processor is applied to all the data exported by the filebeat instance.
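To make that difference concrete, this is roughly how the resulting event differs depending on those options (sketched as YAML, with the other event fields omitted):

# without fields_under_root (and with the default add_fields target), the field is nested:
fields:
  custom_field: 'custom field value'

# with fields_under_root: true or add_fields target '', the field sits at the root:
custom_field: 'custom field value'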
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.
I'm trying to create a chart with multiple subcharts (two instances of ibm-db2oltp-dev). Is there a way to define different configuration for each instance in the same values.yaml file?
I need two databases:
db2inst.instname: user1
db2inst.password: password1
options.databaseName: dbname1

db2inst.instname: user2
db2inst.password: password2
options.databaseName: dbname2
I saw it could be done via an alias, but I didn't find an example explaining how to do it. Is it possible?
Yes, it is possible:
In Chart.yaml for Helm 3 or in requirements.yaml for Helm 2:
dependencies:
  - name: ibm-db2oltp-dev               # full chart name here
    repository: http://localhost:10191  # actual repository URL here
    version: 0.1.0                      # required version
    alias: db1inst                      # the name of the chart locally
  - name: ibm-db2oltp-dev
    repository: http://localhost:10191
    version: 0.1.0
    alias: db2inst
parentChart/values.yaml:

someParentChartValueX: x
someParentChartValueY: y
db1inst:
  instname: user1
  password: password1
db2inst:
  instname: user2
  password: password2
Actually, it cannot be achieved in Helm (even with aliases), because values resolution doesn't work for aliased charts. The only way is to define the values under the chart name:
<chart_name, not alias>:
  var1: value
  var2: value
The source issue: https://github.com/helm/helm/issues/7093
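Applied to this chart, that workaround would look something like the sketch below (the key layout is assumed from the dotted paths in the question). Note that because the values are keyed by the chart name, both dependency instances receive the same configuration, which is exactly the limitation tracked in that issue:

ibm-db2oltp-dev:
  db2inst:
    instname: user1
    password: password1
  options:
    databaseName: dbname1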
Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of these, only the last one will work as I expect in all situations:
- hosts: all
  tasks:
    - debug:
        msg: "not TAG (won't work if other tags specified)"
      tags: not TAG

    - debug:
        msg: "always, but not if TAG specified (doesn't work; always runs)"
      tags: always,not TAG

    - debug:
        msg: 'ALWAYS, but not if TAG in ansible_run_tags'
      when: "'TAG' not in ansible_run_tags"
      tags: always
Try it with different CLI options and you'll hopefully see why I find this a bit perplexing:
ansible-playbook tags-test.yml -l HOST
ansible-playbook tags-test.yml -l HOST -t TAG
ansible-playbook tags-test.yml -l HOST -t OTHERTAG
Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing?
I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags.
Amendment: it was suggested that I post my actual use case. I'm using Ansible to drive system updates on Debian-family systems. I want to notify at the end if a reboot is required, unless the reboot tag was supplied, in which case the play should reboot the machine (and wait for it to come back up). Here is the relevant snippet:
- name: check and perhaps reboot
  block:
    - name: Check if a reboot is required
      stat:
        path: /var/run/reboot-required
        get_md5: no
      register: reboot
      tags: always,reboot

    - name: Alert if a reboot is required
      fail:
        msg: "NOTE: a reboot is required to finish updates."
      when:
        - ('reboot' not in ansible_run_tags)
        - reboot.stat.exists
      tags: always

    - name: Reboot the server
      reboot:
        msg: rebooting after Ansible applied system updates
      when: reboot.stat.exists or ('force-reboot' in ansible_run_tags)
      tags: never,reboot,force-reboot
I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
For completeness, and since only #paul-sweeney has offered any alternative solution, I'll answer my own question with my current best solution and let people pick / up-vote their favorite:
---
- name: run only if 'TAG' not specified
  debug:
    msg: 'ALWAYS, but not if TAG in ansible_run_tags'
  when: "'TAG' not in ansible_run_tags"
  tags: always
I know it's an old(ish) question, but I had a similar requirement.
It's probably something best implemented another way ... but ... sometimes it can be useful.
I'd achieve it by setting a fact if the tag IS specified, then outputting the message only if the fact is not set, something like:
---
- name: "test task runs only if tag missing"
  hosts: all
  tasks:
    - name: "suppress message if tag given"
      set_fact: suppress_message=yes
      tags: reboot,never

    - name: "message"
      debug:
        msg: "You didn't say 'reboot'"
      when: suppress_message is not defined
I think we have states for controlling (e.g. started, restarted, stopped), states for installing (present, absent), and components (webserver, db, ...).
Ansible lacks a clean separation of those three dimensions, and mixing them in a single tag system leads to confusion.
For example, if you have a 'webserver' and a 'db' tag, you want a 'restart' tag to restart the DB and not the webserver.
But that won't work if the 'restart' tasks of the DB and the webserver sit in the same tasks file with the same 'restart' tag, because the 'restart' tag will then restart both the DB and the webserver...
So you will probably have to split the webserver and DB tasks into two separate files and apply the tag at the level of the include.
Using tags means that you have a tree of options, not a matrix of options.
I like the tag concept, but the fact that it cannot be used in conditional expressions makes it less appealing.
What I recommend is to declare tags in a role but map them into variables as a first task. The 'restart' and 'db' tags then become boolean variables in the role, and the remaining tasks use when: instead of tags:, as in the sketch below.
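A minimal sketch of that idea (the variable names and the postgresql service are made up for illustration):

- name: map tags into variables
  set_fact:
    do_restart: "{{ 'restart' in ansible_run_tags }}"
    is_db: "{{ 'db' in ansible_run_tags }}"
  tags: always

- name: restart the DB only when both tags were given
  service:
    name: postgresql          # hypothetical service name
    state: restarted
  when: do_restart | bool and is_db | bool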
ansible-playbook has a --skip-tags option. The example from the docs is:
ansible-playbook example.yml --skip-tags "packages"
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html