I am trying to use Kustomize with Helmfile by following the instructions in the README, but I get the error below when I run the sync command.
helmfile --environment dev --file=helmfile.yaml sync
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 4: cannot unmarshal !!seq into state.HelmState
helmfile.yaml
- name: common-virtual-services
  chart: ./common-virtual-services
  hooks:
  - events: ["prepare", "cleanup"]
    command: "./helmify"
    args: ["{{`{{if eq .Event.Name \"prepare\"}}build{{else}}clean{{end}}`}}", "{{`{{.Release.Chart}}`}}", "{{`{{.Environment.Name}}`}}"]
Environment:
helmfile version v0.119.0
kustomize - Version: 3.6.1
OS - Darwin DEM-C02X5AKLJG5J 18.7.0 Darwin Kernel Version 18.7.0
Please let me know if you need more details.
I am trying to run the following repository in my own environment:
https://github.com/piomin/sample-micronaut-kubernetes
I applied the Kubernetes YAML files in each k8s directory using kubectl apply -f file.yaml.
As indicated in the tutorial, I also ran:
kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
The tutorial is available at https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/
When I run skaffold dev in the employee folder, I get this error:
unable to decode "STDIN": Object 'Kind' is missing in '{"data":{"application.yaml":"mongodb:\n collection: employee\n database: admin\nin-memory-store.enabled: true\ntest.employees:\n - id: 1\n organizationId: 1\n departmentId: 1\n name: John Smith\n age: 22\n position: Developer\n - id: 2\n organizationId: 1\n departmentId: 2\n name: Paul Walker\n age: 33\n position: Tester"}}'
unable to decode "STDIN": Object 'Kind' is missing in '{"data":{"mongodb.uri":"bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4="}}'
It does look like Kind is indeed missing from both documents; what should the value of kind be here? I believe my version of Kubernetes is more recent than the one used in the tutorial. Is there any website/reference I could use to fix the discrepancies?
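From the data, my guess is that the first document was meant to be a ConfigMap and the second (with the base64-encoded value) a Secret, i.e. with headers roughly like these (the metadata names are invented):

apiVersion: v1
kind: ConfigMap
metadata:
  name: employee # hypothetical name
data:
  application.yaml: |
    # ...the YAML from the first error message...
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb # hypothetical name
type: Opaque
data:
  mongodb.uri: bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=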
Minikube is 1.29.0
Skaffold is 2.1.0
Kubectl: Client Version: v1.26.1
Kustomize Version: v4.5.7
Server Version: v1.26.1
The problem was that the file was using Windows line endings (CRLF) instead of Unix line endings (LF).
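For anyone hitting the same thing: a quick way to check for and fix Windows line endings, assuming dos2unix is available (the glob is just an example):

file k8s/*.yaml # affected files are reported as "with CRLF line terminators"
dos2unix k8s/*.yaml # converts them to LF in place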
I am currently developing a project where I need to get the pod names of a Kubernetes cluster running on Rancher using Ansible. The main thing here is that I have a couple of problems that are preventing me from advancing.
I am running a playbook to retrieve this information, rather than a CLI command, because I want to manipulate those Rancher machines later on (e.g. install an RPM file).
Here is the playbook that I am executing to try to retrieve the pods' names from Rancher:
---
- hosts: localhost
  connection: local
  remote_user: root
  roles:
    - role: ansible.kubernetes-modules
    - role: hello-world
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Gather openShift Dependencies
      python_requirements_facts:
        dependencies:
          - openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"
The problem is that I get a Kubernetes module error each time I try to run the playbook:
ERROR! the role 'ansible.kubernetes-modules' was not found in community.kubernetes:ansible.legacy:/etc/ansible/roles:/home/jcp/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/etc/ansible
The error appears to be in '/etc/ansible/GetKubectlPods': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- role: ansible.kubernetes-modules
^ here
If I remove that line from the playbook (the one where I try to use that role), I still get a similar error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [localhost]: FAILED! => {"changed": false, "error": "No module named 'kubernetes'", "msg": "Failed to import the required Python library (openshift) on localhost.localdomain's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
I have already tried installing the ansible-galaxy Kubernetes module and the openshift Python package on the machine.
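For reference, the commands I ran were roughly these (from memory, so the exact invocations may have differed):

pip3 install --user openshift
ansible-galaxy install ansible.kubernetes-modules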
I am not sure what I am doing wrong; there are so many possibilities for what could be failing here.
Ansible Version Output:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jcp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jcp/.local/lib/python3.6/site-packages/ansible
executable location = /home/jcp/.local/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
I've debugged the output of my python_requirements_facts task for the openshift dependency, and this is what I have:
ok: [localhost] => {
    "openshift_dependencies": {
        "changed": false,
        "failed": false,
        "mismatched": {},
        "not_found": [],
        "python": "/usr/bin/python3.6",
        "python_system_path": [
            "/tmp/ansible_python_requirements_info_payload_5_kb4a7s/ansible_python_requirements_info_payload.zip",
            "/usr/lib64/python36.zip",
            "/usr/lib64/python3.6",
            "/usr/lib64/python3.6/lib-dynload",
            "/home/jcp/.local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages/openshift-0.10.0.dev1-py3.6.egg",
            "/usr/lib64/python3.6/site-packages",
            "/usr/lib/python3.6/site-packages"
        ],
        "python_version": "3.6.8 (default, Nov 21 2019, 19:31:34) \n[GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]",
        "valid": {
            "openshift": {
                "desired": null,
                "installed": "0.10.0.dev1"
            }
        }
    }
}
Thanks for your help in advance!
Edit: the answer below was given for the OP's specific Ansible version (2.9.9) and is still valid if that is the version you use. Since version 2.10, you also need to install the relevant Ansible collection if it is not already present:
ansible-galaxy collection install kubernetes.core
See the latest module documentation for more information
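To verify that the collection is actually visible to Ansible 2.10+, something like:

ansible-galaxy collection list | grep kubernetes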
In Ansible 2.9.9, you're not supposed to do anything special to use the module except installing the needed Python dependencies. See the module documentation for your Ansible version.
Remove the line - role: ansible.kubernetes-modules, unless it is a role of your own, in which case you will have to tell us more, because as written this is not a correct declaration.
Remove the collections declaration.
Add the following task somewhere before using the module:
- name: Make sure python deps are installed
  pip:
    name: openshift
Your existing python_requirements_facts task does nothing more than report whether the dependency is found; register the result and debug it to see for yourself.
Now use the k8s_info module normally.
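Putting it all together, a trimmed version of the playbook from the question might look like this (a sketch reflecting the changes above, not tested against your cluster; note that json_query also requires the jmespath Python package on the controller):

---
- hosts: localhost
  connection: local
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  tasks:
    - name: Make sure python deps are installed
      pip:
        name: openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: /etc/ansible/RCCloudConfig
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Extract the pod names
      set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - name: Print pod names
      debug:
        var: pod_names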
I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit to GitHub. This works fine, but now I want to run SonarQube analysis on the code before the Docker image is built. For that I have added the SonarQube part to the cloudbuild.yaml file, but I am not able to get it to run.
I followed the steps provided at https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit to GitHub, Cloud Build is triggered automatically and runs the steps defined in the cloudbuild.yaml file.
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml:
steps:
- name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
    args:
    - '-Dsonar.host.url=sonarqube_url'
    - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
    - '-Dsonar.projectKey=sample-project'
    - '-Dsonar.sources=.'
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
- 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key
If the formatting you've pasted actually matches what you've got in your project, then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
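A quick way to catch this locally before pushing, assuming you have yamllint installed, is to lint the file; it reports this kind of syntax error with a line and column:

yamllint cloudbuild.yaml

Here is the corrected config: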
---
steps:
  - name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
    args:
      - "-Dsonar.host.url=sonarqube_url"
      - "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
      - "-Dsonar.projectKey=sample-project"
      - "-Dsonar.sources=."
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "-t"
      - "gcr.io/PROJECT_ID/html-css-website"
      - "."
images:
  - "gcr.io/PROJECT_ID/html-css-website"
How do I set up the Prometheus node-exporter to collect host metrics in Docker Swarm?
version: '3.3'
services:
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points'
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.enabled="conntrack,diskstats,entropy,filefd,filesystem,loadavg,mdadm,meminfo,netdev,netstat,stat,textfile,time,vmstat,ipvs"'
    ports:
      - 9100:9100
I am getting this error: node_exporter: error: unknown long flag '--collector.enabled', try --help
What is wrong with the last line under the command section of this docker-compose file, and if it is being passed incorrectly, how do I pass it correctly?
Try the --collector.[collector_name] flags (e.g. --collector.diskstats) instead of --collector.enabled, which no longer exists since version 0.15.
For multiple collectors in version 0.15 or later, pass one flag per collector:
--collector.processes --collector.ntp ...and so on
In versions older than 0.15, specific collectors were enabled with a single flag:
--collectors.enabled meminfo,loadavg,filesystem
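Applied to the compose file in the question, the command section would become something like this (a sketch; most of the collectors listed there are already enabled by default in recent node-exporter versions, so an explicit flag is only needed for non-default ones such as processes):

    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points'
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.processes' # an example of a collector that is not enabled by default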
Can I use a custom Kubernetes version in which I have made some code modifications? I wanted to use the --kubernetes-version string flag to point at a customized localkube binary. Is that possible?
Minikube documentation says:
--kubernetes-version string The kubernetes version that the minikube VM will use (ex: v1.2.3)
OR a URI which contains a localkube binary (ex: https://storage.googleapis.com/minikube/k8sReleases/v1.3.0/localkube-linux-amd64) (default "v1.7.5")
But even when I try that flag with official localkube binaries, it fails:
minikube start --kubernetes-version https://storage.googleapis.com/minikube/k8sReleases/v1.7.0/localkube-linux-amd64 --v 5
Invalid Kubernetes version.
The following Kubernetes versions are available:
- v1.7.5
- v1.7.4
- v1.7.3
- v1.7.2
- v1.7.0
- v1.7.0-rc.1
- v1.7.0-alpha.2
- v1.6.4
- v1.6.3
- v1.6.0
- v1.6.0-rc.1
- v1.6.0-beta.4
- v1.6.0-beta.3
- v1.6.0-beta.2
- v1.6.0-alpha.1
- v1.6.0-alpha.0
- v1.5.3
- v1.5.2
- v1.5.1
- v1.4.5
- v1.4.3
- v1.4.2
- v1.4.1
- v1.4.0
- v1.3.7
- v1.3.6
- v1.3.5
- v1.3.4
- v1.3.3
- v1.3.0
Many thanks!
Two options come to mind:
You can launch minikube with --vm-driver=none, so the binaries are installed on your local filesystem; replacing the binaries should then not be a difficult process (see the sketch after this list).
You can create your own minikube ISO and then use the --iso-url flag. To build the ISO, you can follow this guide: https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md
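A rough sketch of the first option; the localkube path and binary name here are assumptions from memory and may differ between minikube versions:

# run minikube directly on the host instead of inside a VM
sudo minikube start --vm-driver=none

# stop, swap in your custom build, then start again
sudo minikube stop
sudo cp ./localkube-custom /usr/local/bin/localkube # hypothetical path to your build
sudo minikube start --vm-driver=none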