How to use AWS CLI and kubectl in .gitlab-ci.yml - kubernetes

I have a .gitlab-ci.yml file in which I want to do the following:
Build a Docker image and push it to AWS ECR
Restart a specific deployment in my EKS cluster that uses this Docker image
Building and pushing the Docker image works fine, however I'm failing to connect to my EKS cluster.
My idea is to use aws eks to update my kubeconfig file, and kubectl to restart my deployment, but I don't know how to use the AWS CLI and Kubectl in my .gitlab-ci.yml file.
I have AWS_ACCESS_KEY_ID, AWS_ACCOUNT_ID, and AWS_DEFAULT_REGION defined in my CI/CD variables. I've got the following .gitlab-ci.yml file:
stages:
  - build
  - deploy staging

<build stage omitted for brevity>

staging:
  stage: deploy staging
  image: bitnami/kubectl:latest
  only:
    - staging
  script: |
    # install AWS CLI
    apk add --no-cache python3 py3-pip \
      && pip3 install --upgrade pip \
      && pip3 install awscli \
      && rm -rf /var/cache/apk/*
    aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name
    kubectl rollout restart deployment my-deployment
This pipeline fails with the error:
error: unknown command "sh" for "kubectl"
Did you mean this?
set
cp
I've found this issue and solution, but changing the .gitlab-ci.yml file accordingly prevents me from using apk and installing the AWS cli:
stages:
  - build
  - deploy staging

<build stage omitted for brevity>

staging:
  stage: deploy staging
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  only:
    - staging
  script: |
    # install AWS CLI
    apk add --no-cache python3 py3-pip \
      && pip3 install --upgrade pip \
      && pip3 install awscli \
      && rm -rf /var/cache/apk/*
    aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name
    kubectl rollout restart deployment my-deployment
Results in the error:
$ # install AWS CLI # collapsed multi-line command
/bin/bash: line 140: apk: command not found
/bin/bash: line 144: aws: command not found
So that leads me to the following question: how do I use both the AWS CLI and Kubectl in my .gitlab-ci.yml file? Or is there another easier way that allows me to restart a deployment in my EKS cluster?

I solved it myself. For future readers: using the alpine/k8s image solved my problem. It has both kubectl and the AWS CLI installed.
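For reference, a minimal sketch of what the staging job could look like with that image (cluster and deployment names are the placeholders from the question, and the image tag is only an example — pick one close to your cluster version):
staging:
  stage: deploy staging
  image:
    name: alpine/k8s:1.24.16   # example tag; alpine/k8s tags track the bundled kubectl version
    entrypoint: [""]           # likely not needed for this image, but harmless in GitLab CI
  only:
    - staging
  script:
    # aws and kubectl are both preinstalled in alpine/k8s
    - aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name
    - kubectl rollout restart deployment my-deployment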

Related

Gitlab-agent with Helm: Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused

I installed the new GitLab agent for my Kubernetes cluster. kubectl works, but I get this error when I try to deploy to Azure Cloud with a Helm chart.
My .gitlab-ci.yml:
variables:
  #registry variable
  REGISTRY: registry.gitlab.com
  #docker-image tag
  DOCKER_IMAGE_TAG: ${CI_COMMIT_SHA}
  #target variable
  TARGET: metrix9/wysiwys-ic

stages:
  - build
  - package
  - deploy

#job to build gradle application and save the jar file in artifacts
build docker image:
  image: gradle
  stage: build
  before_script:
    - chmod +x ./gradlew
  script:
    - ./gradlew jib -Djib.to.auth.username=$CI_REGISTRY_USER -Djib.to.auth.password=$CI_REGISTRY_PASSWORD -Djib.from.auth.username=$CI_REGISTRY_USER -Djib.from.auth.password=$CI_REGISTRY_PASSWORD

# job to push file-server docker image
package wysiwys image:
  stage: package
  image: docker.io/library/docker
  #dependencies:
  #  - build
  services:
    - name: docker:dind
  before_script:
    - IMAGE=${CI_REGISTRY}/${TARGET}
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull "${IMAGE}:latest" || true
  script:
    #- docker build --tag "${IMAGE}:latest" .
    - docker push "${IMAGE}:latest"

#job to package and push the file-server helm chart
package wysiwys-ic helm:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD wysiwys-ci-repo https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/packages/helm/stable
    - helm plugin install https://github.com/chartmuseum/helm-push
  script:
    - helm package wysiwys-helm
    - helm cm-push ./wysiwys-helm-0.1.0.tgz wysiwys-ci-repo

#job to install convert2pdf with helm chart
install wysiwys-ic:
  stage: deploy
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add bitnami https://charts.bitnami.com/bitnami -n Convert2pdf-repo
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm
gitlab agent:
I tried exporting KUBECONFIG and running helm repo update in the pipeline, but the same error comes out.
I was struggling with the same issue. First, use an image that has both helm and kubectl (e.g. registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications), then try adding the following changes in the deployment part:
deploy app:
  stage: deploy-app
  variables:
    KUBE_CONTEXT: -->gitlabproject<--:-->name of the installed agent<--
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
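For example, a complete deploy job along those lines might look like this sketch (the project path and agent name in KUBE_CONTEXT are placeholders to replace with your own; the image is the one suggested above, which ships both helm and kubectl):
deploy app:
  stage: deploy
  image:
    name: registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications
    entrypoint: [""]
  variables:
    KUBE_CONTEXT: my-group/my-agent-project:my-agent   # placeholder: <agent config project path>:<agent name>
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm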

/bin/bash: line 123: kubectl: command not found

This is my first time using GitLab for EKS and I feel so lost. I've been following the docs and so far I have:
Created a project on GitLab that contains my kubernetes manifest files
Created a config.yaml in that project in the directory .gitlab/agents/stockagent
Here's the config.yaml, my project name is "Stock-Market-API-K8s" and my k8s manifests are in the root directory of that project
ci_access:
  projects:
    - id: "root/Stock-Market-API-K8s"
In my root directory of my project, I also have a .gitlab-ci.yml file and here's the contents of that
deploy:
  image:
    name: mpriv32/stock-api:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context .gitlab/agents/stockagent
    - kubectl get pods
Using the default example from the docs, it seems that the get-contexts script is the one that failed. Here's the full error from my logs
Executing "step_script" stage of the job script
00:01
Using docker image sha256:58ddf823e9d7ee4c0e75779b7e01dab9b11ac0d985d1b2d2fe6c6b95a849573d for mpriv32/stock-api:latest with digest mpriv32/stock-api@sha256:a2e79a2c3a57327f93e36ec55297a606626e4dc8d72e469dd4dc2f3c1f589bac ...
$ kubectl config get-contexts
/bin/bash: line 123: kubectl: command not found
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
Here's my job.yaml file for my kubernetes pod, just in case it plays a factor at all
apiVersion: v1
kind: Pod
metadata:
  name: stock-api
  labels:
    app: stock-api
spec:
  containers:
    - name: stock-api
      image: mpriv32/stock-api:latest
      envFrom:
        - secretRef:
            name: api-credentials
  restartPolicy: Never
In your case, I guess the image (mpriv32/stock-api:latest) that you are using doesn't provide kubectl as a global executable. Please use an image that contains kubectl, for example bitnami/kubectl:
deploy:
  image:
    name: bitnami/kubectl
The image keyword is the name of the Docker image the Docker executor uses to run CI/CD jobs.
For more information, see https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
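Put together with the script from the question, the job could look like this sketch (the context argument is kept exactly as in the question; adjust it to whatever kubectl config get-contexts actually prints):
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']   # override the image's kubectl entrypoint so the CI shell can run
  script:
    - kubectl config get-contexts
    - kubectl config use-context .gitlab/agents/stockagent
    - kubectl get pods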
Or you can build your docker image on top of bitnami/kubectl
FROM bitnami/kubectl:1.20.9 as kubectl
FROM ubuntu-or-whatever-image:tag
# Do whatever you need to with the
# ubuntu-or-whatever-image:tag image, then:
COPY --from=kubectl /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/
Or you can go with the approach of building an image from scratch, installing the dependencies you need there, something like:
FROM ubuntu:18.10
WORKDIR /root
COPY bootstrap.sh ./
RUN apt-get update && apt-get -y install --no-install-recommends \
gnupg \
curl \
wget \
git \
apt-transport-https \
ca-certificates \
zsh \
&& rm -rf /var/lib/apt/lists/*
ENV SHELL /usr/bin/zsh
# Install kubectl
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
apt-get update && apt-get -y install --no-install-recommends kubectl

CircleCI message "error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

I am facing an error while deploying a deployment in CircleCI. Please find the configuration file below.
When running the kubectl CLI, we got an error between kubectl and the EKS tooling of the aws-cli.
version: 2.1
orbs:
aws-ecr: circleci/aws-ecr@6.3.0
docker: circleci/docker@0.5.18
rollbar: rollbar/deploy@1.0.1
kubernetes: circleci/kubernetes@1.3.0
deploy:
version: 2.1
orbs:
aws-eks: circleci/aws-eks@1.0.0
kubernetes: circleci/kubernetes@1.3.0
executors:
default:
description: |
The version of the circleci/buildpack-deps Docker container to use
when running commands.
parameters:
buildpack-tag:
type: string
default: buster
docker:
- image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
description: |
A collection of tools to deploy changes to AWS EKS in a declarative
manner where all changes to templates are checked into version control
before applying them to an EKS cluster.
commands:
setup:
description: |
Install the gettext-base package into the executor to be able to run
envsubst for replacing values in template files.
This command is a prerequisite for all other commands and should not
have to be run manually.
parameters:
cluster-name:
default: ''
description: Name of the EKS Cluster.
type: string
aws-region:
default: 'eu-central-1'
description: Region where the EKS Cluster is located.
type: string
git-user-email:
default: "deploy#mail.com"
description: Email of the git user to use for making commits
type: string
git-user-name:
default: "CircleCI Deploy Orb"
description: Name of the git user to use for making commits
type: string
steps:
- run:
name: install gettext-base
command: |
if which envsubst > /dev/null; then
echo "envsubst is already installed"
exit 0
fi
sudo apt-get update
sudo apt-get install -y gettext-base
- run:
name: Setup GitHub access
command: |
mkdir -p ~/.ssh
echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
git config --global user.email "<< parameters.git-user-email >>"
git config --global user.name "<< parameters.git-user-name >>"
- aws-eks/update-kubeconfig-with-authenticator:
aws-region: << parameters.aws-region >>
cluster-name: << parameters.cluster-name >>
install-kubectl: true
authenticator-release-tag: v0.5.1
update-image:
description: |
Generates template files with the specified version tag for the image
to be updated and subsequently applies that template after checking it
back into version control.
parameters:
cluster-name:
default: ''
description: Name of the EKS Cluster.
type: string
aws-region:
default: 'eu-central-1'
description: Region where the EKS Cluster is located.
type: string
image-tag:
default: ''
description: |
The tag of the image, defaults to the value of `CIRCLE_SHA1`
if not provided.
type: string
replicas:
default: 3
description: |
The replica count for the deployment.
type: integer
environment:
default: 'production'
description: |
The environment/stage where the template will be applied. Defaults
to `production`.
type: string
template-file-path:
default: ''
description: |
The path to the source template which contains the placeholders
for the image-tag.
type: string
resource-name:
default: ''
description: |
Resource name in the format TYPE/NAME e.g. deployment/nginx.
type: string
template-repository:
default: ''
description: |
The fullpath to the repository where templates reside. Write
access is required to commit generated templates.
type: string
template-folder:
default: 'templates'
description: |
The name of the folder where the template-repository is cloned to.
type: string
placeholder-name:
default: IMAGE_TAG
description: |
The name of the placeholder environment variable that is to be
substituted with the image-tag parameter.
type: string
cluster-namespace:
default: sayway
description: |
Namespace within the EKS Cluster.
type: string
steps:
- setup:
aws-region: << parameters.aws-region >>
cluster-name: << parameters.cluster-name >>
git-user-email: dev@sayway.com
git-user-name: deploy
- run:
name: pull template repository
command: |
[ "$(ls -A << parameters.template-folder >>)" ] && \
cd << parameters.template-folder >> && git pull --force && cd ..
[ "$(ls -A << parameters.template-folder >>)" ] || \
git clone << parameters.template-repository >> << parameters.template-folder >>
- run:
name: generate and commit template files
command: |
cd << parameters.template-folder >>
IMAGE_TAG="<< parameters.image-tag >>"
./bin/generate.sh --file << parameters.template-file-path >> \
--stage << parameters.environment >> \
--commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
<< parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
REPLICAS=<< parameters.replicas >>
- kubernetes/create-or-update-resource:
get-rollout-status: true
namespace: << parameters.cluster-namespace >>
resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
resource-name: << parameters.resource-name >>
jobs:
test:
working_directory: ~/say-way/core
parallelism: 1
shell: /bin/bash --login
environment:
CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
KONFIG_CITUS__HOST: localhost
KONFIG_CITUS__USER: postgres
KONFIG_CITUS__DATABASE: sayway_test
KONFIG_CITUS__PASSWORD: ""
KONFIG_SPEC_REPORTER: true
docker:
- image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
aws_auth:
aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
- image: circleci/redis
- image: rabbitmq:3.7.7
- image: circleci/mongo:4.2
- image: circleci/postgres:10.5-alpine
steps:
- checkout
- run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
# This is based on your 1.0 configuration file or project settings
- restore_cache:
keys:
- v1-dep-{{ checksum "Gemfile.lock" }}-
# any recent Gemfile.lock
- v1-dep-
- run:
name: install correct bundler version
command: |
export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
gem install bundler --version $BUNDLER_VERSION
- run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
- run:
name: copy test.yml.sample to test.yml
command: cp config/test.yml.sample config/test.yml
- run:
name: Precompile and clean assets
command: bundle exec rake assets:precompile assets:clean
# Save dependency cache
- save_cache:
key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
paths:
- vendor/bundle
- public/assets
- run:
name: Audit bundle for known security vulnerabilities
command: bundle exec bundle-audit check --update
- run:
name: Setup Database
command: bundle exec ruby ~/sayway/setup_test_db.rb
- run:
name: Migrate Database
command: bundle exec rake db:citus:migrate
- run:
name: Run tests
command: bundle exec rails test -f
# By default, running "rails test" won't run system tests.
- run:
name: Run system tests
command: bundle exec rails test:system
# Save test results
- store_test_results:
path: /tmp/circleci-test-results
# Save artifacts
- store_artifacts:
path: /tmp/circleci-artifacts
- store_artifacts:
path: /tmp/circleci-test-results
build-and-push-image:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: aws-ecr/default
steps:
- checkout
- run:
name: Pull latest core images for cache
command: |
$(aws ecr get-login --no-include-email --region $AWS_REGION)
docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
- docker/build:
image: core
registry: "${AWS_ECR_ACCOUNT_URL}"
tag: "latest,${CIRCLE_SHA1}"
cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
- aws-ecr/push-image:
repo: core
tag: "latest,${CIRCLE_SHA1}"
deploy-production:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: deploy/default
steps:
- kubernetes/install-kubectl:
kubectl-version: v1.22.0
- rollbar/notify_deploy_started:
environment: report
- deploy/update-image:
resource-name: deployment/core-web
template-file-path: core-web-pod.yml
cluster-name: report
environment: report
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 3
- deploy/update-image:
resource-name: deployment/core-worker
template-file-path: core-worker-pod.yml
cluster-name: report
environment: report
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 4
- deploy/update-image:
resource-name: deployment/core-worker-batch
template-file-path: core-worker-batch-pod.yml
cluster-name: report
environment: report
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 1
- rollbar/notify_deploy_finished:
deploy_id: "${ROLLBAR_DEPLOY_ID}"
status: succeeded
deploy-demo:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: deploy/default
steps:
- kubernetes/install-kubectl:
kubectl-version: v1.22.0
- rollbar/notify_deploy_started:
environment: demo
- deploy/update-image:
resource-name: deployment/core-web
template-file-path: core-web-pod.yml
cluster-name: demo
environment: demo
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 2
- deploy/update-image:
resource-name: deployment/core-worker
template-file-path: core-worker-pod.yml
cluster-name: demo
environment: demo
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 1
- deploy/update-image:
resource-name: deployment/core-worker-batch
template-file-path: core-worker-batch-pod.yml
cluster-name: demo
environment: demo
template-repository: git@github.com:say-way/sw-k8s.git
replicas: 1
- rollbar/notify_deploy_finished:
deploy_id: "${ROLLBAR_DEPLOY_ID}"
status: succeeded
workflows:
version: 2.1
build-n-test:
jobs:
- test:
filters:
branches:
ignore: master
build-approve-deploy:
jobs:
- build-and-push-image:
context: Core
filters:
branches:
only: master
- approve-report-deploy:
type: approval
requires:
- build-and-push-image
- approve-demo-deploy:
type: approval
requires:
- build-and-push-image
- deploy-production:
context: Core
requires:
- approve-report-deploy
- deploy-demo:
context: Core
requires:
- approve-demo-deploy
There is an issue in aws-cli. It is already fixed.
Option 1:
In my case, updating aws-cli + updating the ~/.kube/config helped.
Update aws-cli (following the documentation)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
Update the kube configuration
mv ~/.kube/config ~/.kube/config.bk
aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}
Option 2:
Change v1alpha1 to v1beta1:
diff ~/.kube/config ~/.kube/config-backup
691c691
< apiVersion: client.authentication.k8s.io/v1beta1
---
> apiVersion: client.authentication.k8s.io/v1alpha1
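If you prefer to make that change non-interactively, a one-liner along these lines does it (GNU sed shown; back up the file first, and on macOS use sed -i ''):
cp ~/.kube/config ~/.kube/config.bk
sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#' ~/.kube/config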
We HAVE a fix here: https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885
Update the aws-cli (aws cli v1) to the version with the fix:
pip3 install awscli --upgrade --user
For aws cli v2 see this.
After that, don't forget to rewrite the kube-config with:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
This command should update the kube apiVersion to v1beta1
In my case, changing apiVersion to v1beta1 in the kube configuration file helped:
apiVersion: client.authentication.k8s.io/v1beta1
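For orientation, the entry to edit is the exec section under users in ~/.kube/config; after aws eks update-kubeconfig it looks roughly like this (cluster name, account and region are illustrative), and only the apiVersion line needs to change:
users:
- name: arn:aws:eks:eu-central-1:123456789012:cluster/my-cluster   # illustrative
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was v1alpha1
      command: aws
      args:
        - --region
        - eu-central-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster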
There is a glitch with the very latest version of kubectl.
For now, you can follow these steps to get rid of the issue:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo kubectl version
There is a problem with the latest kubectl and the aws-cli:
https://github.com/aws/aws-cli/issues/6920
An alternative is to update the AWS cli. It worked for me.
The rest of the instructions are from the answer provided by bigLucas.
Update the aws-cli (aws cli v2) to the latest version:
winget install Amazon.AWSCLI
After that, don't forget to rewrite the kube-config with:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
This command should update the kube apiVersion to v1beta1.
I changed the v1alpha1 value to v1beta1 in the configuration file, and it's working for me.
The simplest solution: (it appears here but in complicated words..)
Open your kube config file and replace all alpha instances with beta.
(Editors with find&replace are recommended: Atom, Sublime, etc..).
Example with Nano:
nano ~/.kube/config
Or with Atom:
atom ~/.kube/config
Then you should search for the alpha instances and replace them with beta and save the file.
Open ~/.kube/config
Search for the user within the cluster you have a problem with and replace the client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1
I was facing the same issue. To solve it, please follow the steps below:
Take a backup of the existing config file: mv ~/.kube/config ~/.kube/config.bk
Run the below command:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and then try again.
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
I was able to fix this on a MacBook Pro (M1 chip) by running (via Homebrew):
brew upgrade awscli
Try upgrading the AWS Command Line Interface:
Steps
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg ./AWSCLIV2.pkg -target /
You can use other ways from the AWS documentation: Installing or updating the latest version of the AWS CLI
Try updating your awscli (AWS Command Line Interface) version.
For Mac, it's brew upgrade awscli (Homebrew).
I got the same problem:
EKS version 1.22
kubectl works, and its version is v1.22.15-eks-fb459a0
helm version is 3.9+; when I execute helm ls -n $namespace I get the error
Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
From here: it is a helm version issue.
So I used the command
curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2
to downgrade the helm version. Helm works.
Fixed for me by only changing v1alpha1 to v1beta1 in the kubeconfig.
In case of Windows, first delete the configuration file in $HOME/.kube folder.
Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.
I just simplified the workaround by updating awscli to awscli-v2, but that also requires Python and pip to be upgraded. It requires minimum Python 3.6 and pip3.
apt install python3-pip -y && pip3 install awscli --upgrade --user
And then update the cluster configuration with awscli
aws eks update-kubeconfig --region <regionname> --name <ClusterName>
Output
Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config
Then check the connectivity with cluster
dev@ip-10-100-100-6:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
ip-X-XX-XX-XXX.ec2.internal Ready <none> 148m v1.21.5-eks-9017834
You can run the below command on your host machine where kubectl and aws-cli exist:
export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'
If using ‘sudo’ while running kubectl commands, then export this as root user.
apt install python3-pip -y
pip3 install awscli --upgrade --user
Try a different version of kubectl.
If the Kubernetes version is 1.23, then you can use a nearby kubectl version: 1.22, 1.23 or 1.24.

MariaDB Galera on Minikube: mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied

I want to deploy a MariaDB Galera instance onto a local Minikube cluster with 3 nodes via Helm.
I used the following command for that:
helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test
The problem is, if I do that I get the following error in the log:
mariadb 10:27:41.60
mariadb 10:27:41.60 Welcome to the Bitnami mariadb-galera container
mariadb 10:27:41.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb-galera
mariadb 10:27:41.60 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb-galera/issues
mariadb 10:27:41.61
mariadb 10:27:41.61 INFO ==> ** Starting MariaDB setup **
mariadb 10:27:41.64 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:27:41.67 INFO ==> Initializing mariadb database
mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied
The site of the image lists the possibility to use an extra init container to fix that (Link).
So I came up with the following configuration:
mariadb-galera-init-config.yaml
extraInitContainers:
  - name: initcontainer
    image: bitnami/minideb
    command: ["chown -R 1001:1001 /bitnami/mariadb/"]
The problem is that when I run the command with this configuration:
helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test -f mariadb-galera-init-config.yaml
I get the following error on the Minikube dashboard:
Error: failed to start container "initcontainer": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "chown -R 1001:1001 /bitnami/mariadb/": stat chown -R 1001:1001 /bitnami/mariadb/: no such file or directory: unknown
I don't know how to fix this configuration file, or if there is some other better way to get this working...
In case anyone has issues with this, may I suggest running an initContainer first:
initContainers:
  - name: mariadb-create-directory-structure
    image: busybox
    command:
      [
        "sh",
        "-c",
        "mkdir -p /bitnami/mariadb/data && chown -R 1001:1001 /bitnami",
      ]
    volumeMounts:
      - name: data
        mountPath: /bitnami
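If you want to keep the fix in a values file as in the question, the same idea can be expressed through the chart's extraInitContainers key — a sketch, assuming the chart's data volume is named data as in the snippet above:
extraInitContainers:
  - name: mariadb-create-directory-structure
    image: busybox
    command:
      - sh
      - -c
      - mkdir -p /bitnami/mariadb/data && chown -R 1001:1001 /bitnami
    volumeMounts:
      - name: data
        mountPath: /bitnami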
I agree with @ventsislav_rs, creating an initContainer will do the trick.

gitlab-ci and kubectl issue

I am trying to build and deploy a Node.js app using GitLab CI/CD and a Kubernetes cluster. The build passes successfully while the deployment fails. I added the Kubernetes cluster to GitLab (API URL, CA certificate and service token), and the error I get when running kubectl in the deploy stage is related to KUBECONFIG. Below is the gitlab-ci.yml that I am using:
stages:
  - build
  - deploy

services:
  - docker:dind

build_app:
  stage: build
  image: docker:git
  only:
    - master
    - develop
  script:
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH} .
    - docker tag ${CI_REGISTRY}/${CI_PROJECT_PATH} ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
  variables:
    DOCKER_HOST: tcp://docker:2375/

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    - CERTIFICATE_AUTHORITY_DATA=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | base64 -i -w0 -)
    - kubectl config set-cluster k8s --server="https://kubernetes.default.svc"
    - kubectl config set clusters.k8s.certificate-authority-data ${CERTIFICATE_AUTHORITY_DATA}
    - kubectl config set-credentials gitlab --token="${USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev
$ USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
Update: Creating an environment and attaching it to the stage solved the issue of identifying the cluster the deployment targets, so the cluster can receive and apply the command.
Creating an environment and attaching it to the stage solves the issue of identifying the cluster the deployment targets, so the cluster can receive and apply the command:
environment:
  name: production
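In the .gitlab-ci.yml above, that means adding the environment block to the deploy job, roughly like this sketch (names taken from the question):
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  environment:
    name: production
  script:
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev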