ansible dynamic inventory kubernetes

I am trying to use the kubernetes plugin in Ansible to build a dynamic inventory based on my k8s cluster. I have followed this doc https://docs.ansible.com/ansible/latest/scenario_guides/kubernetes_scenarios/k8s_inventory.html, however I keep getting a failed-to-parse error.
# ansible-inventory --list -i k8s.yaml
[WARNING]: * Failed to parse /etc/ansible/k8s.yaml with ansible_collections.kubernetes.core.plugins.inventory.k8s plugin: Invalid value "kubernetes.core.k8s" for configuration option "plugin_type: inventory
plugin: ansible_collections.kubernetes.core.plugins.inventory.k8s setting: plugin ", valid values are: ['k8s']
[WARNING]: Unable to parse /etc/ansible/k8s.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}
extract from ansible.cfg
# egrep -i "\[inventory\]|kubernetes" ansible.cfg
[inventory]
enable_plugins = kubernetes.core.k8s
k8s.yaml
# cat k8s.yaml
plugin: kubernetes.core.k8s
The error suggests that kubernetes.core.k8s is an invalid value and that the valid values are ['k8s'], yet this is exactly what's in the documentation. I have tried all manner of alterations to the plugin name with no success.
Can anyone steer me on what I am missing here?

So I managed to get it working by editing /usr/lib/python3/dist-packages/ansible_collections/kubernetes/core/plugins/inventory/k8s.py. It seems my version only listed k8s as a plugin name; I replaced it with kubernetes.core.k8s and it worked:
options:
  plugin:
    description: token that ensures this is a source file for the 'k8s' plugin.
    required: True
    choices: ['kubernetes.core.k8s']
I did plan to raise it as a PR on the project, but it seems this was already updated several months back, so I must have just had outdated files.
https://github.com/ansible-collections/kubernetes.core/blob/60933457e81fcfa1000f556b2bc3425bbf080602/plugins/inventory/k8s.py#L27
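For anyone else hitting the same stale-file situation, re-installing the collection from Galaxy should pull in the updated plugin without hand-editing the source. A minimal sketch (--force makes ansible-galaxy re-fetch the collection even if one is already installed):
ansible-galaxy collection install kubernetes.core --force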

Related

Azure devops - ansible azure rm issue

I have an issue with running the pipeline: I'm trying to run Ansible using the dynamic inventory azure_rm. Does anyone have any idea how to fix this?
I get several error messages like the ones below:
[WARNING]: * Failed to parse
/home/vsts/work/1/s/ansible/inventory/myazure_rm.yml with auto plugin: name
'azure_cloud' is not defined
[WARNING]: * Failed to parse
/home/vsts/work/1/s/ansible/inventory/myazure_rm.yml with yaml plugin: Plugin
configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse
/home/vsts/work/1/s/ansible/inventory/myazure_rm.yml with ini plugin: Invalid
host pattern 'plugin:' supplied, ending in ':' is not allowed, this character
is reserved to provide a port.
[WARNING]: Unable to parse /home/vsts/work/1/s/ansible/inventory/myazure_rm.yml
as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
It is a pretty simple inventory:
plugin: azure_rm
include_vm_resource_groups:
  - rg-test
auth_source: auto
fail_on_template_errors: False
location: "france central"
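By analogy with the kubernetes.core question above, one thing worth checking is whether the installed collection expects the fully qualified plugin name. A sketch of the same inventory with the FQCN, assuming the azure.azcollection collection is installed on the agent:
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - rg-test
auth_source: auto
The auto plugin failure (name 'azure_cloud' is not defined) may also point at missing Azure Python dependencies on the build agent (e.g. the msrestazure package), which is worth verifying separately.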

Invalid characters were found in group names but not replaced ansible k8s

My k8s.yaml inventory file is:
plugin: k8s
connections:
  - kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml'
    context: 'user1@testeks.us-east-1.eksctl.io'
My ansible playbook, test_new.yml:
- hosts: localhost
  tasks:
    - name: Create a k8s namespace
      k8s:
        name: testing3
        api_version: v1
        kind: Namespace
        state: present
It looks like the ansible-playbook command is not picking up the inventory k8s.yaml. Also, I am not sure why I am getting the "Invalid characters were found in group names" warnings.
Please let me know if the above inventory file and playbook look good, or whether there is anything I am missing.
ansible-playbook -vvvv -i k8s.yaml -vvv ./test_new.yml
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /Users/user1/Documents/Learning/ansible/k8s.yaml as it did not pass its verify_file() method
script declined parsing /Users/user1/Documents/Learning/ansible/k8s.yaml as it did not pass its verify_file() method
Not replacing invalid character(s) "{'-', '9'}" in group name (909676E2B4F81625BF5994625D3353C9-yl4-us-east-1-eks-amazonaws-com)
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
Not replacing invalid character(s) "{'-'}" in group name (namespace_add-ons)
Not replacing invalid character(s) "{'-'}" in group name (namespace_add-ons_pods)
Not replacing invalid character(s) "{'.', '/', '-'}" in group name (label_app.kubernetes.io/instance_aws-cluster-autoscaler)
I'm not sure where you got that you need the Kubernetes parameters specified in your inventory file. If you look at the k8s module documentation it says that kubeconfig and context are specified in the playbook or as environment variables.
Your inventory should look something like this:
all:
  hosts:
    host.where.can.access.the.kubeapiserver.com:
Then your playbook:
- name: Create a k8s namespace
  k8s:
    name: testing3
    api_version: v1
    kind: Namespace
    state: present
    kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml' 👈 this can be replaced by the K8S_AUTH_KUBECONFIG env variable
    context: 'user1@testeks.us-east-1.eksctl.io' 👈 this can be replaced by the K8S_AUTH_CONTEXT env variable
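For instance, a sketch of driving that playbook with the environment variables instead of the hard-coded values (hypothetical shell session; assumes the static inventory above is saved as hosts.yml):
export K8S_AUTH_KUBECONFIG=/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml
export K8S_AUTH_CONTEXT='user1@testeks.us-east-1.eksctl.io'
ansible-playbook -i hosts.yml test_new.yml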
Based on the formatting of your post, it looks like your inventory file contains improper syntax. It should look like this:
plugin: k8s
connections:
  - kubeconfig: '/Users/user1/Documents/Learning/ansible/kubeconfig.test.yaml'
    context: 'user1@testeks.us-east-1.eksctl.io'
Remember that spaces are important.
For deprecation warnings, be sure to read up on these issues:
https://github.com/ansible/ansible/issues/56930
https://github.com/kubernetes-sigs/kubespray/issues/4830
Usage of hyphens in inventory group names was deprecated in Ansible 2.8 due to Python parser errors when using dot syntax. Auto-transformation can be disabled by adding force_valid_group_names = never to your Ansible config file. Similarly, deprecation warnings can be suppressed by adding deprecation_warnings = False though this is not recommended.
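For reference, a minimal ansible.cfg sketch carrying both settings mentioned above:
[defaults]
# keep group names exactly as the inventory plugin produced them
force_valid_group_names = never
# silence deprecation warnings (not recommended)
deprecation_warnings = False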

How to use Kubectl commands to Access a Rancher Cluster through Ansible

I am currently developing a project where I need to get the pod names of a Kubernetes cluster running on Rancher using Ansible. The main thing here is that I have a couple of problems that are preventing me from advancing.
I am currently executing a playbook to try to retrieve this information, instead of running a CLI command, because I want to manipulate those Rancher machines later on (e.g. install an rpm file).
Here is the playbook that I am executing to try to retrieve the pods' names from Rancher:
---
- hosts: localhost
  connection: local
  remote_user: root
  roles:
    - role: ansible.kubernetes-modules
    - role: hello-world
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Gather openShift Dependencies
      python_requirements_facts:
        dependencies:
          - openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"
The problem is that I am getting a Kubernetes module error each time I am trying to run the playbook:
ERROR! the role 'ansible.kubernetes-modules' was not found in community.kubernetes:ansible.legacy:/etc/ansible/roles:/home/jcp/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/etc/ansible
The error appears to be in '/etc/ansible/GetKubectlPods': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- role: ansible.kubernetes-modules
^ here
If I remove the line where I try to retrieve that role, I still get a similar error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [localhost]: FAILED! => {"changed": false, "error": "No module named 'kubernetes'", "msg": "Failed to import the required Python library (openshift) on localhost.localdomain's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
I have already tried installing the kubernetes module via ansible-galaxy on the machine, as well as openshift.
Not sure what I am doing wrong, since there are so many possibilities for what could be going wrong here.
Ansible Version Output:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jcp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jcp/.local/lib/python3.6/site-packages/ansible
executable location = /home/jcp/.local/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
I've debugged the python_requirements_facts output from the openshift dependencies task and this is what I have:
ok: [localhost] => {
    "openshift_dependencies": {
        "changed": false,
        "failed": false,
        "mismatched": {},
        "not_found": [],
        "python": "/usr/bin/python3.6",
        "python_system_path": [
            "/tmp/ansible_python_requirements_info_payload_5_kb4a7s/ansible_python_requirements_info_payload.zip",
            "/usr/lib64/python36.zip",
            "/usr/lib64/python3.6",
            "/usr/lib64/python3.6/lib-dynload",
            "/home/jcp/.local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages/openshift-0.10.0.dev1-py3.6.egg",
            "/usr/lib64/python3.6/site-packages",
            "/usr/lib/python3.6/site-packages"
        ],
        "python_version": "3.6.8 (default, Nov 21 2019, 19:31:34) \n[GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]",
        "valid": {
            "openshift": {
                "desired": null,
                "installed": "0.10.0.dev1"
            }
        }
    }
}
Thanks for your help in advance!
Edit: The answer below was given for OP's specific Ansible version (i.e. 2.9.9) and is still valid if you still use it. Since version 2.10, you also need to install the relevant ansible collection if not already present:
ansible-galaxy collection install kubernetes.core
See the latest module documentation for more information
In Ansible 2.9.9, you're not supposed to do anything special to use the module except installing the needed python dependencies. See the module documentation for your Ansible version
Remove the line - role: ansible.kubernetes-modules, unless it is a role of yours, in which case you have to tell us more, because this is not a correct declaration.
Remove the collections declaration.
Add the following task somewhere before using the module:
- name: Make sure python deps are installed
  pip:
    name: openshift
Your current python_requirements_facts task is doing nothing other than reporting that the dependency is not found. Register the result and debug it to see for yourself.
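For example, a quick sketch of registering and debugging that task's result, using the same dependency list as the question:
- name: Gather openShift Dependencies
  python_requirements_facts:
    dependencies:
      - openshift
  register: openshift_dependencies

- name: Show what was reported
  debug:
    var: openshift_dependencies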
Now use the k8s_info module normally.
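Putting it together, a minimal sketch of the corrected playbook, reusing the kubeconfig path and namespace from the question (adapt to your environment):
- hosts: localhost
  connection: local
  tasks:
    - name: Make sure python deps are installed
      pip:
        name: openshift

    - name: Get the pods in the redmine namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "{{ pod_list | json_query('resources[*].metadata.name') }}"
Note that the json_query filter needs the jmespath Python package on the controller.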

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "image",
    data = [":lib", "//:package.json"],
    entry_point = ":index.ts",
)

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "k8s_deployment",
    template = ":gateway.deployment.yaml",
    kind = "deployment",
    cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
    images = {
        "gcr.io/cents-ideas/gateway:latest": ":image"
    },
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with this issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
    "credHelpers": {
        "asia.gcr.io": "gcloud",
        "staging-k8s.gcr.io": "gcloud",
        "us.gcr.io": "gcloud",
        "gcr.io": "gcloud",
        "marketplace.gcr.io": "gcloud",
        "eu.gcr.io": "gcloud"
    }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
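Following the warning's own recommendation, the credential helper can be scoped to a single registry (a sketch; gcr.io assumed to be the registry in use):
gcloud auth configure-docker gcr.io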
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to install it via
sudo apt install google-cloud-sdk
Now it works fine.

Golang dep unable to resolve dependencies

I am using kubebuilder to create a kubernetes operator project. After running the project init command described in the quickstart guide
kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
dep ensure fails with the error log given below.
Solving failure: No versions of k8s.io/client-go met constraints:
    v8.0.0: Could not introduce k8s.io/client-go@v8.0.0, as it is not allowed by constraints from the following projects:
        kubernetes-1.10.1 from (root)
        kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
    v7.0.0: Could not introduce k8s.io/client-go@v7.0.0, as it is not allowed by constraints from the following projects:
        kubernetes-1.10.1 from (root)
        kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
    v6.0.0: Could not introduce k8s.io/client-go@v6.0.0, as it is not allowed by constraints from the following projects:
        kubernetes-1.10.1 from (root)
        kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
Try using the latest kubebuilder from here. It's likely that the dependencies for the version in the quick start are out of date.
It works fine for me with v1.0.3:
~/go/src/github.com $ kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
Run `dep ensure` to fetch dependencies (Recommended) [y/n]?
y
dep ensure
Running make...
make
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go all
CRD manifests generated under '/root/go/src/github.com/config/crds'
RBAC manifests generated under '/root/go/src/github.com/config/rbac'
go test ./pkg/... ./cmd/... -coverprofile cover.out
? github.com/pkg/apis [no test files]
? github.com/pkg/controller [no test files]
ok github.com/pkg/errors 0.207s coverage: 100.0% of statements
? github.com/cmd/manager [no test files]
go build -o bin/manager github.com/cmd/manager
Next: Define a resource with:
$ kubebuilder create api