I'm trying to set up auto deploy with Kubernetes on GitLab. I've successfully enabled the Kubernetes integration in my project settings.
Well, the integration icon is green, and when I click "Test Settings" I see "We sent a request to the provided URL".
My deployment environment is Google Container Engine.
Here's the auto-deploy section of my gitlab-ci.yml config:
deploy:
  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
  stage: deploy
  script:
    - export
    - echo CI_PROJECT_ID=$CI_PROJECT_ID
    - echo KUBE_URL=$KUBE_URL
    - echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
    - echo KUBE_TOKEN=$KUBE_TOKEN
    - echo KUBE_NAMESPACE=$KUBE_NAMESPACE
    - kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
    - kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
    - kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
    - kubectl config use-context "$CI_PROJECT_ID"
When I look at the results, the deploy stage fails because all the KUBE_* variables are empty.
I'm not having much luck with the Kubernetes integration beyond this point. Am I missing something?
As it turns out, the Deployment Variables will not materialise unless you have configured and referenced an Environment.
Here's what the .gitlab-ci.yml file looks like with the environment keyword:
deploy:
  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
  stage: deploy
  environment: production
  script:
    - export
    - echo CI_PROJECT_ID=$CI_PROJECT_ID
    - echo KUBE_URL=$KUBE_URL
    - echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
    - echo KUBE_TOKEN=$KUBE_TOKEN
    - echo KUBE_NAMESPACE=$KUBE_NAMESPACE
    - kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
    - kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
    - kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
    - kubectl config use-context "$CI_PROJECT_ID"
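For reference, the environment keyword also accepts an expanded form; the deployment variables are injected the same way, and the url here is just an illustrative placeholder:
deploy:
  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
  stage: deploy
  environment:
    name: production
    url: https://example.com   # optional; shown on the project's Environments page
  script:
    # same kubectl config commands as above
    - kubectl config use-context "$CI_PROJECT_ID"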
Related
I am testing automation by applying GitLab CI/CD to a GKE cluster. The app deploys successfully, but source code changes are not applied (e.g. renaming the HTML title).
I have confirmed that the code has changed on the master branch of the GitLab repository; no other branch is involved.
CI/CD simply goes through the process below.
push code to master branch
builds the NextJS code
builds the docker image and pushes it to GCR
pulls the Docker image and deploys it to the cluster.
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy
image: docker:19.03.12
variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest
services:
  - docker:19.03.12-dind
build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    # Clean up untagged images in GCR: list digests that have no tags and delete them
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
        - name: frontweb-lesson-prod-app
          image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?
By default, imagePullPolicy is Always, but if nothing in the deployment manifest changes, applying it may not update the Deployment at all — and since you use the same latest tag every time, the manifest never changes. Note that kubectl apply and kubectl patch behave differently here.
What you can do is add a minor label or annotation change to the Deployment and check that the image gets updated by kubectl apply as well; otherwise kubectl apply will mostly report the Deployment as unchanged.
Ref: imagePullPolicy
You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.
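One way to make every pipeline run roll out the new code — a sketch based on the jobs above, not a drop-in replacement — is to tag images with the commit SHA instead of latest and point the Deployment at that tag explicitly:
variables:
  DOCKER_IMAGE_TAG: $CI_COMMIT_SHORT_SHA   # unique per commit instead of "latest"

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    # (service-account activation and gcloud config steps from the original job omitted here)
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    # "frontweb-lesson-prod" / "frontweb-lesson-prod-app" are the Deployment and container
    # names from deployment.yaml above; a new image reference forces a fresh rollout.
    - kubectl set image deployment/frontweb-lesson-prod frontweb-lesson-prod-app=$REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG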
I am trying to build and deploy a Node.js app using GitLab CI/CD and a Kubernetes cluster. The build passes successfully, but the deployment fails. I have added the Kubernetes cluster to GitLab (API URL, CA certificate and service token), and the error I get when running kubectl in the deploy job points to a KUBECONFIG problem. Below is the .gitlab-ci.yml I am using, followed by the failing output.
stages:
  - build
  - deploy
services:
  - docker:dind
build_app:
  stage: build
  image: docker:git
  only:
    - master
    - develop
  script:
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH} .
    - docker tag ${CI_REGISTRY}/${CI_PROJECT_PATH} ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
  variables:
    DOCKER_HOST: tcp://docker:2375/
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    - CERTIFICATE_AUTHORITY_DATA=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | base64 -i -w0 -)
    - kubectl config set-cluster k8s --server="https://kubernetes.default.svc"
    - kubectl config set clusters.k8s.certificate-authority-data ${CERTIFICATE_AUTHORITY_DATA}
    - kubectl config set-credentials gitlab --token="${USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev
$ USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
Update: Creating an Environment and attaching it to the deploy stage solved the issue. With an environment defined, GitLab can identify which cluster the deployment targets, so the cluster receives the credentials needed to run the commands:
environment:
  name: production
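With the cluster integration providing credentials once an environment is attached, the deploy job can be reduced to something like this minimal sketch (it assumes the manual kubectl config steps are no longer needed at that point):
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  environment:
    name: production   # attaches the job to the environment so the cluster credentials are injected
  script:
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev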
I have a CircleCI config.yml file that builds and deploys the code, and I want to run that configuration in an Azure DevOps pipeline, but I am getting the error below. Could you point out what I need to change so it runs in Azure DevOps? I am new to YAML configuration and to Azure DevOps, so any help is appreciated.
Error:
config.yml:
#
# Required variables
#
# Production:
# - GCLOUD_SERVICE_KEY_PRODUCTION
# - GCLOUD_PROJECT_ID_PRODUCTION
# - GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION
# - GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION
#
# Staging:
# - GCLOUD_SERVICE_KEY_STAGING
# - GCLOUD_PROJECT_ID_STAGING
# - GCLOUD_PROJECT_CLUSTER_ID_STAGING
# - GCLOUD_PROJECT_CLUSTER_ZONE_STAGING
#
gcp_runtime: &gcp_runtime
  docker:
    - image: boiyaa/google-cloud-sdk-nodejs

setup-production_credentials: &setup-production_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_PRODUCTION} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION} \
        --project ${GCLOUD_PROJECT_ID_PRODUCTION}

setup-staging_credentials: &setup-staging_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_STAGING} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_STAGING} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_STAGING} \
        --project ${GCLOUD_PROJECT_ID_STAGING}

setup-production-env: &setup-production-env
  run:
    name: Setup env for production
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_PRODUCTION}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_PRODUCTION}" >> .env

setup-staging-env: &setup-staging-env
  run:
    name: Setup env for staging
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_STAGING}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_STAGING}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_STAGING}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_STAGING}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_STAGING}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_STAGING}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_STAGING}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_STAGING}" >> .env

build_docker_images: &build_docker_images
  run:
    name: build and cache all docker images first and fail before deploying
    command: |
      true || docker build --build-arg CIRCLE_BUILD_NUM=${CIRCLE_BUILD_NUM:-0} -f ./Dockerfile -t web .

deploy_script_production: &deploy_script_production
  run:
    name: Deploy the application to prod
    command: bash ./deploy/deploy-all.sh prod

deploy_script_staging: &deploy_script_staging
  run:
    name: Deploy the application to staging
    command: bash ./deploy/deploy-all.sh staging

deploy-production: &deploy-production
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-production-env
    - *setup-production_credentials
    - *deploy_script_production

deploy-staging: &deploy-staging
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-staging-env
    - *setup-staging_credentials
    - *deploy_script_staging

version: 2
jobs:
  deploy_to_production:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: production
      SKIP_BASE: "true"
    <<: *deploy-production
  deploy_to_staging:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: staging
      SKIP_BASE: "true"
    <<: *deploy-staging

workflows:
  version: 2
  deploy_to_production:
    jobs:
      - deploy_to_production:
          filters:
            branches:
              only: production
  deploy_to_staging:
    jobs:
      - deploy_to_staging:
          filters:
            branches:
              only: staging
As stated in the Azure DevOps documentation:
Note: Azure Pipelines doesn't support all features of YAML, such as anchors, complex keys, and sets.
This means that you need to do away with all anchors (and aliases) in your YAML file. Moreover, you cannot expect a CircleCI configuration to be a valid Azure DevOps configuration. They are different tools and have a different configuration structure.
You should start by reading the Azure DevOps docs and then rewrite your file accordingly. This is not a trivial modification of the file.
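To give a sense of the direction, here is a very rough azure-pipelines.yml sketch of just the production deployment job. The GCLOUD_* variable names are carried over from your config; everything else (the agent image, the availability of the Google Cloud SDK on the agent, and how the pipeline variables are defined) is an assumption, not a working conversion:
# azure-pipelines.yml (sketch only)
trigger:
  branches:
    include:
      - production

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self
  - script: |
      # Assumes gcloud is installed on the agent and GCLOUD_* are defined as pipeline variables
      echo "$(GCLOUD_SERVICE_KEY_PRODUCTION)" > "$(Agent.TempDirectory)/gcp-key.json"
      gcloud auth activate-service-account --key-file "$(Agent.TempDirectory)/gcp-key.json"
      gcloud container clusters get-credentials "$(GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION)" \
        --zone "$(GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION)" \
        --project "$(GCLOUD_PROJECT_ID_PRODUCTION)"
      bash ./deploy/deploy-all.sh prod
    displayName: Deploy the application to prod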
I have a small cloudbuild.yaml file where I build a Docker image, push it to Google container registry (GCR) and then apply the changes to my Kubernetes cluster. It looks like this:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/$PROJECT_ID/frontend:latest || exit 0'
    ]
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-f",
        "./services/frontend/prod.Dockerfile",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:latest",
        ".",
      ]
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/frontend"]
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["rollout", "restart", "deployment/frontend-deployment"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
The build runs smoothly until the last step, args: ["rollout", "restart", "deployment/frontend-deployment"], which produces the following log output:
Already have image (with digest): gcr.io/cloud-builders/kubectl
Running: gcloud container clusters get-credentials --project="cents-ideas" --zone="europe-west3-a" "cents-ideas"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
Running: kubectl rollout restart deployment/frontend-deployment
error: unknown command "restart deployment/frontend-deployment"
See 'kubectl rollout -h' for help and examples.
Allegedly, restart is an unknown command. But it works when I run kubectl rollout restart deployment/frontend-deployment manually.
How can I fix this problem?
Looking at the Kubernetes release notes, the kubectl rollout restart command was introduced in v1.15. In your case, it seems Cloud Build is using an older version in which this command wasn't implemented yet.
After doing some tests, it appears Cloud Build uses a kubectl client version that depends on the cluster's server version. For example, when running the following build:
steps:
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["version"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=<cluster_zone>"
      - "CLOUDSDK_CONTAINER_CLUSTER=<cluster_name>"
if the cluster's master version is v1.14, Cloud Build uses a v1.14 kubectl client and returns the same unknown command "restart" error message. When master's version is v1.15, Cloud Build uses a v1.15 kubectl client and the command runs successfully.
So about your case, I suspect your "cents-ideas" cluster's master version is <1.15, which would explain the error you're getting. As for why it works when you run the command manually (locally, I understand), I suspect your kubectl may be authenticated to another cluster whose master version is >=1.15.
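If that's the case, one option is to upgrade the cluster's master so Cloud Build picks up a kubectl client that knows rollout restart. A sketch, using the cluster name and zone from your build; the target version is an assumption, so pick one listed by gcloud container get-server-config:
# Check which master version the cluster is actually running
gcloud container clusters describe cents-ideas --zone europe-west3-a --format="value(currentMasterVersion)"
# Upgrade the master to a 1.15+ version (choose one from 'gcloud container get-server-config --zone europe-west3-a')
gcloud container clusters upgrade cents-ideas --master --cluster-version 1.15 --zone europe-west3-a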
I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here is my deployment gitlab-ci job:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
    - port: 8082
      name: proj
      targetPort: 8082
      nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
        - name: projcontainer
          image: registry.gitlab.com/proj/subproj/api:v1
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "cloud"
          ports:
            - containerPort: 8082
      imagePullSecrets:
        - name: registry.gitlab.com
I have followed this article
There is a workaround: the image can be pushed to Google Container Registry and then pulled from GCR without extra credentials. We can push the image to GCR without the gcloud CLI by using the JSON key file. So .gitlab-ci.yml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:
gcr.io/proj/api:v1
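So the container spec in cloud-kubernetes.yml would end up roughly like this, and the imagePullSecrets entry can be dropped, since GKE nodes in the same project can pull from GCR by default:
containers:
  - name: projcontainer
    image: gcr.io/proj/api:v1
    imagePullPolicy: Always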
You must use --docker-server=$CI_REGISTRY — the same value you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that your docker registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job that references it.
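Concretely, the secret-creation line in the k8s-deploy job would become something like this (a sketch; the -n default flag is an assumption — use whichever namespace the Deployment is applied in):
kubectl create secret docker-registry registry.gitlab.com \
  --docker-server="$CI_REGISTRY" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com \
  -n default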