Ansible module for EKS cluster - Kubernetes

I'm trying to automate deployments to an EKS cluster using the k8s Ansible module.
It seems that the k8s module doesn't support EKS.
Does anyone have an example of managing objects in EKS using the k8s Ansible module?
Thanks in advance.

Thanks everyone for your comments, it finally works.
I just reconfigured ~/.kube/kubeconfig and put the correct configuration in ~/.aws/.
Snippet of the Ansible task:
- name: "deploy app"
k8s:
kubeconfig: "{{ kube_config }}"
namespace: "default"
state: "present"
src: "{{ item }}"
with_items:
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_1.yml"
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_2.yml"
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_3.yml"

Related

Integration test for Kubernetes deployment with Helm on OpenShift

I am trying to use Ansible or helm test to verify that all resources are up and running after deploying Ansible Automation Platform (automation controller, private-automation-hub) on OpenShift.
Currently I am using an Ansible assertion to check the deployments, but it seems I could also use --atomic with the helm commands and check that all resources are up after the Helm deployment.
Can you help me check all the resources with Ansible (not only deployments but everything I deployed with the Helm chart)? Example code would be great, and some helm test examples too, if possible.
Thank you.
- name: Test deployment
  hosts: localhost
  gather_facts: false
  # vars:
  #   deployment_name: "pah-api"
  tasks:
    - name: gather all deployments
      shell: oc get deployment -o template --template '{{"{{"}}range.items{{"}}"}}{{"{{"}}.metadata.name{{"}}"}}{{"{{"}}"\n"{{"}}"}}{{"{{"}}end{{"}}"}}'
      register: deployed_resources
    # - name: print the output of deployments
    #   debug:
    #     var: deployed_resources.stdout_lines
    - name: Get deployment status
      shell: oc get deployment {{ item }} -o=jsonpath='{.status.readyReplicas}'
      with_items: "{{ deployed_resources.stdout_lines }}"
      register: deployment_status
      failed_when: deployment_status.rc != 0
    - name: Verify deployment is running
      assert:
        that:
          - deployment_status.stdout != 'null'
          - deployment_status.stdout != '0'
        fail_msg: 'Deployment {{ deployed_resources }} is not running.'
Currently I only check deployments, but it would be nice to check all the resources I deployed with the Helm chart, either with Ansible or via helm test.
You could use the Ansible Helm module. The atomic parameter is available out of the box: https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html
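A minimal sketch of what that could look like (the release name, chart path, and namespace below are placeholders, not values from the question):
- name: Deploy the chart and roll back if resources do not become ready
  kubernetes.core.helm:
    name: my-release                 # assumed release name
    chart_ref: ./charts/my-chart     # assumed path to the chart
    release_namespace: my-namespace  # assumed namespace
    atomic: true                     # roll the release back on failure
    wait: true                       # wait for the chart's resources to become ready
With wait and atomic enabled, the task fails and Helm rolls the release back if any of the chart's resources do not become ready, which covers more than just the deployments.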

How to create a Google Kubernetes (GKE) cluster in Ansible with a custom image?

I've used this pattern in the past to create a GKE cluster and it's worked great, but now I need to define a custom image type to use.
Here's the Ansible playbook I'm working with.
- name: GCE
  hosts: localhost
  gather_facts: no
  vars_files:
    - vars/default.yml
  tasks:
    - name: create cluster
      gcp_container_cluster:
        name: "{{ cluster_name }}"
        initial_node_count: "{{ node_count }}"
        initial_cluster_version: "{{ cluster_kubernetes_version }}"
        master_auth:
          username: admin
          password: "{{ cloud_admin }}"
        node_config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        location: "{{ cluster_zone }}"
        project: "{{ project }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ service_account_file }}"
        state: present
        scopes: "{{ scopes }}"
      register: cluster
    - name: create a node pool
      google.cloud.gcp_container_node_pool:
        name: default-pool
        autoscaling:
          enabled: yes
          min_node_count: "{{ node_count }}"
          max_node_count: "{{ max_node_count }}"
        initial_node_count: "{{ node_count }}"
        cluster: "{{ cluster }}"
        location: "{{ cluster_zone }}"
        config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        project: "{{ gce_project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ service_account_file }}"
        state: present
I'm trying to use an E2-based machine with 16 cores and 70 GB of RAM. The exact specs matter less than the fact that I can't specify a preconfigured machine type that matches what I need.
Is it possible to still use Ansible to create the cluster? Do I need to create a custom image type to reference?
Just to clarify, no errors are being thrown. Defining the machine_type as e2-medium simply doesn't let me allocate the resources I need for the instance. I'm asking how to, say, use e2-medium as a base and increase the RAM allocation to 70 GB, or whether that is even feasible.
IIUC, you should be able to reference your machine type as e2-custom-16-71680, i.e.:
- name: your-cluster
  google.cloud.gcp_container_cluster:
    ...
    node_config:
      machine_type: e2-custom-16-71680
      disk_size_gb: "{{ disk_size_gb }}"
    ...
The (somewhat hidden) documentation for specifying custom machine types:
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type#gcloud
The name follows the pattern e2-custom-<vCPUs>-<memory in MB>, so 16 cores and 70 GB of RAM become e2-custom-16-71680 (70 × 1024 = 71680).

How to connect to Kubernetes using Ansible?

I want to connect to Kubernetes using Ansible and run some playbooks that create Kubernetes objects such as roles and rolebindings with the k8s module. I want to know whether the Ansible k8s module is a standard Kubernetes client that can use a kubeconfig in the same way as helm and kubectl.
Please let me know how to configure a kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter in the Ansible YAML file (it defaults to ~/.kube/config). For example:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: "{{ k8s_no_log }}"
...
You can also make it a variable:
...
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '{{ k8s_kubeconfig }}'
...
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present

Why doesn't AWX see the pip module?

I use AWX 8.0.0.0 and have a job in my SCM that connects to GCP and creates an instance. When I run this job from the console with ansible-playbook job.yml it works fine, but when I run it from the web UI I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Please install the google-auth library"}
So it obviously means that I don't have this library. But I installed it with
pip install google-auth and it works fine when I run the playbook from the console. This is my playbook:
- name: Create jenkins vm
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    service_account_email: ansible#secret-app.iam.gserviceaccount.com
    credentials_file: /etc/conf/awx/awx.json
    project_id: geocitizen-app
    machine_type: f1-micro
    machine_name: jenkins-node-1
    image: https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20191014
    zone: europe-north1-a
  tasks:
    - name: Launch instances
      gcp_compute_instance:
        auth_kind: serviceaccount
        name: "{{ machine_name }}"
        machine_type: "{{ machine_type }}"
        #service_account_email: "{{ service_account_email }}"
        service_account_file: "{{ credentials_file }}"
        project: "{{ project_id }}"
        zone: "{{ zone }}"
        network_interfaces:
          - network:
            access_configs:
              - name: External NAT
                type: ONE_TO_ONE_NAT
        disks:
          - auto_delete: 'true'
            boot: 'true'
            initialize_params:
              source_image: "{{ image }}"
What am I doing wrong?
So the problem was that I was looking at my host machine. I installed AWX via Docker, so I needed to install the library inside the Docker container instead.
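For anyone hitting the same thing, a rough sketch of that fix as an ad-hoc playbook. The container name awx_task and the virtualenv path /var/lib/awx/venv/ansible are assumptions about a docker-compose based AWX install, so adjust them to your own setup:
- name: Install google-auth where AWX actually runs playbooks (sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: pip install google-auth inside the AWX task container
      # awx_task and the venv path are assumptions; check docker ps and your install
      command: docker exec awx_task /var/lib/awx/venv/ansible/bin/pip install google-auth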

Gitlab 10.1 Deploy to Google Kubernetes Engine

How does one deploy a Node app from GitLab CI to GKE? I already have cluster integration enabled and functional, but the documentation on what that actually means is almost non-existent. I don't know what variables having a GKE cluster connected gives me or how to use them in my CI.
Here's my gitlab-ci.yml. It puts the image in the GitLab registry, which means I'll have to copy it to Google or somehow set up GKE to use a private registry, which no one seems to have managed to do.
image: docker:git
services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: registry.gitlab.com/my-proj:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/my-proj:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

.test1:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run eslint

.test2:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run mocha

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

deploy:
  ??????
I haven't used the Auto DevOps integration, but I can try to generalize a working approach.
If you have Tiller installed on the k8s cluster, it's best to create a Helm chart for your application. If you haven't done that already, there is a tutorial on how to do that here:
https://github.com/kubernetes/helm/blob/master/docs/charts.md (check "Using Helm to Manage Charts")
A basic deployment.yaml managed by helm would look like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "name" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
and the corresponding entries in values.yaml:
image:
  repository: registry.gitlab.com/my-proj
  tag: latest
A sample .gitlab-ci.yml file should look like this:
...
deploy:
  stage: deploy
  script:
    - helm upgrade <your-app-name> <path-to-the-helm-chart> --install --set image.tag=$CI_BUILD_REF_NAME
The build phase publishes the Docker image and the deploy phase installs a Helm chart which tries to pull that image from registry.gitlab.com/my-proj.
I take it that the k8s cluster has access to that registry. If the registry is private, you need to create a secret in Kubernetes that holds the authorization token (unless it is created automatically):
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
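As a rough sketch of how the chart's pod spec would then reference such a secret (the name gitlab-registry is just a placeholder for whatever secret you create following the link above):
    spec:
      imagePullSecrets:
        - name: gitlab-registry   # placeholder name of the docker-registry secret holding the GitLab credentials
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
The secret itself is a docker-registry type secret (kubectl create secret docker-registry ...) as described in the linked page.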
The default pipeline image you're using (image: docker:git) doesn't have the helm CLI installed, so you should swap it for an image that has helm and kubectl available.
In the GitLab tutorial they seem to do the installation on each run:
https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml (check the install_dependencies() function)
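A hedged sketch of what the deploy job could look like with an image that already ships helm and kubectl (dtzar/helm-kubectl is one such public image; my-app and ./chart are placeholders), assuming the runner already has working cluster credentials:
deploy:
  stage: deploy
  image: dtzar/helm-kubectl   # example image with helm + kubectl preinstalled
  script:
    - kubectl get nodes       # quick sanity check that the cluster credentials work
    - helm upgrade my-app ./chart --install --set image.tag=$CI_BUILD_REF_NAME
  only:
    - master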