Gitlab CI/CD pipeline passed, but no changes were applied to the server - kubernetes

I am testing automation by applying GitLab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (e.g. renaming the HTML title).
I have confirmed that the code has been changed on the master branch of the GitLab repository; no other branch is involved.
CI/CD simply goes through the process below:
1. push code to the master branch
2. build the NextJS code
3. build the docker image and push it to GCR
4. pull the docker image and deploy it to the cluster
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy
image: docker:19.03.12
variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest
services:
  - docker:19.03.12-dind
build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
      - name: frontweb-lesson-prod-app
        image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?

By default imagePullPolicy will be Always, but if there is no change in the deployment file, applying it may not update the deployment, since you are using the same tag (latest) each time.
There is also a difference between the kubectl apply and kubectl patch commands.
What you can do is add a minor label or annotation change to the deployment and check whether the image gets updated with kubectl apply too; otherwise kubectl apply will mostly return an "unchanged" response.
Ref: imagePullPolicy
You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.
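One way to force the cluster to pick up the rebuilt image, as a minimal sketch (the deployment and container names come from the manifest above; $CI_COMMIT_SHORT_SHA is a predefined GitLab CI variable, and tagging with it instead of latest is the more robust fix):

# Option 1: tag the image with the commit SHA and point the deployment at it
docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA .
docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA
kubectl set image deployment/frontweb-lesson-prod frontweb-lesson-prod-app=$REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA

# Option 2: keep the latest tag, but force new pods after kubectl apply
kubectl rollout restart deployment/frontweb-lesson-prod
kubectl rollout status deployment/frontweb-lesson-prod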

Related

Gitlab-agent with Helm: Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused

I installed the new GitLab agent for the Kubernetes cluster. This works when I use kubectl, but gives this error when I try to deploy to Azure Cloud with a Helm chart.
My .gitlab-ci.yml:
variables:
  #registry variable
  REGISTRY: registry.gitlab.com
  #docker-image tag
  DOCKER_IMAGE_TAG: ${CI_COMMIT_SHA}
  #target variable
  TARGET: metrix9/wysiwys-ic
stages:
  - build
  - package
  - deploy
#job to build gradle application and save the jar file in artifacts
build docker image:
  image: gradle
  stage: build
  before_script:
    - chmod +x ./gradlew
  script:
    - ./gradlew jib -Djib.to.auth.username=$CI_REGISTRY_USER -Djib.to.auth.password=$CI_REGISTRY_PASSWORD -Djib.from.auth.username=$CI_REGISTRY_USER -Djib.from.auth.password=$CI_REGISTRY_PASSWORD
# job to push file-server docker-imagedocker
package wysiwys image:
  stage: package
  image: docker.io/library/docker
  #dependencies:
  #  - build
  services:
    - name: docker:dind
  before_script:
    - IMAGE=${CI_REGISTRY}/${TARGET}
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull "${IMAGE}:latest" || true
  script:
    #- docker build --tag "${IMAGE}:latest" .
    - docker push "${IMAGE}:latest"
#job to package and push the file-server helm chart
package wysiwys-ic helm:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD wysiwys-ci-repo https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/packages/helm/stable
    - helm plugin install https://github.com/chartmuseum/helm-push
  script:
    - helm package wysiwys-helm
    - helm cm-push ./wysiwys-helm-0.1.0.tgz wysiwys-ci-repo
#job to install convert2pdf with helm chart
install wysiwys-ic:
  stage: deploy
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add bitnami https://charts.bitnami.com/bitnami -n Convert2pdf-repo
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm
gitlab agent:
I tried exporting the KUBECONFIG and running helm repo update in the pipeline, but the same error comes out.
I was struggling with the same issue. First, use an image that contains both helm and kubectl (e.g. registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications) and try adding the following changes in the deployment part:
deploy app:
  stage: deploy-app
  variables:
    KUBE_CONTEXT: -->gitlabproject<--:-->name of the installed agent<--
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
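If you are unsure what the context is called, a quick way to check from inside the job (assuming the agent is already registered for the project; the project path and agent name below are placeholders) is:

# List the contexts visible to the job, then select the agent context
kubectl config get-contexts
kubectl config use-context "path/to/your/project:your-agent-name"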

Injecting variables to my NextJS app using Kubernetes

I'm trying to generate some env variables when I'm deploying my code with Kubernetes. What I'm trying to do is to generate a ConfigMap to get my variables, but it's not working.
I'm using Azure Pipelines to do my build and publish steps.
Dockerfile:
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm cache clean --force
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
My azure-pipelines.yml:
stages:
#Build Dev
- stage: BuildDev
  displayName: Build and Push Dev
  jobs:
  - job: Development
    displayName: Build and Push Dev
    timeoutInMinutes: 0
    pool:
      vmImage: ubuntu-18.04
    steps:
    - checkout: self
    - task: Docker@1
      displayName: Build Image
      inputs:
        azureSubscriptionEndpoint: my-subscription
        azureContainerRegistry: my-container-registry
        command: build
        imageName: tenant/front/dev:$(Build.BuildId)
        includeLatestTag: true
        buildContext: '**'
    - task: Docker@1
      displayName: Push Image
      inputs:
        azureSubscriptionEndpoint: my-subscription
        azureContainerRegistry: my-container-registry
        command: push
        imageName: tenant/front/dev:$(Build.BuildId)
        buildContext: '**'
#Deploy Dev
- stage: DeployDev
  displayName: Deploy Dev
  jobs:
  - deployment: Deploy
    displayName: Deploy Dev
    timeoutInMinutes: 0
    pool:
      vmImage: ubuntu-18.04
    environment: Development-Front
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Kubernetes@1
            displayName: 'kubectl apply'
            inputs:
              kubernetesServiceEndpoint: 'AKS (standard subscription)'
              command: apply
              useConfigurationFile: true
              configurationType: inline
              inline: |
                apiVersion: apps/v1beta1
                kind: Deployment
                metadata:
                  name: $(appNameDev)
                  labels:
                    app: $(appNameDev)
                spec:
                  replicas: 1
                  selector:
                    matchLabels:
                      app: $(appNameDev)
                  template:
                    metadata:
                      labels:
                        app: $(appNameDev)
                    spec:
                      containers:
                      - name: $(appNameDev)
                        image: tenant/front/dev:$(Build.BuildId)
                        imagePullPolicy:
                        env:
                        - name: NEXT_PUBLIC_APP_API
                          value: development
                        ports:
                        - name: http
                          containerPort: 80
                          protocol: TCP
                        volumeMounts:
                        - name: environment-variables
                          mountPath: /usr/src/app/.env
                          readOnly: true
                      volumes:
                      - name: environment-variables
                        configMap:
                          name: environment-variables
                          items:
                          - key: .env
                            path: .env
                ---
                apiVersion: v1
                kind: Service
                metadata:
                  name: $(appNameDev)
                  labels:
                    app: $(appNameDev)
                spec:
                  type: LoadBalancer
                  ports:
                  - port: 80
                    targetPort: 80
                    protocol: TCP
                    name: http
                  selector:
                    app: $(appNameDev)
                ---
                apiVersion: v1
                kind: ConfigMap
                metadata:
                  name: environment-variables
                data:
                  .env: |
                    NEXT_PUBLIC_APP_API=development
                    API=http://another.endpoint.com/serverSide
When I try to access this NEXT_PUBLIC_APP_API variable, I receive undefined. In my next.config.js, I'm exporting the variable as publicRuntimeConfig.
If you are using GitHub Actions, the first thing is to add a step in your image build process to include dynamic variables:
- name: Create variables
  id: vars
  run: |
    branch=${GITHUB_REF##*/}
    echo "API_URL=API_${branch^^}" >> $GITHUB_ENV
    echo "APP_ENV=APP_${branch^^}" >> $GITHUB_ENV
    echo "BASE_URL=BASE_${branch^^}" >> $GITHUB_ENV
    sed -i "s/GIT_VERSION/${{ github.sha }}/g" k8s/${branch}/api-deployment.yaml
The second step is to build the docker image with extra arguments; if you are using another CI, just add the variables directly in the build args as below:
--build-arg PROD_ENV=NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL]}}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}
Pay attention to the \n used to break lines, so that docker understands you are sending multiple variables to the build process.
The last thing is to declare the extra args inside the Dockerfile:
# Install dependencies only when needed
FROM node:16.13.0-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Rebuild the source code only when needed
FROM node:16.13.0-alpine AS builder
ARG PROD_ENV=""
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN printf "$PROD_ENV" >> .env.production
RUN yarn build
# Production image, copy all the files and run next
FROM node:16.13.0-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/.env* ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
RUN npx next telemetry disable
CMD ["yarn", "start"]
I send PROD_ENV as an extra build arg and then build a .env.production file on the fly with the required values.
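For reference, the build invocation that goes with the ARG above might look roughly like this (a sketch for a GitHub Actions run step; the image name and tag are placeholders, and the secret names follow the variables created in the first step):

docker build \
  --build-arg PROD_ENV="NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL] }}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}" \
  --tag registry.example.com/my-app:${{ github.sha }} \
  .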
Mark as answer if it helps you

Is it possible to add dependent environment variables in Google Cloud Run?

I would like to specify dependent environment variables on a Cloud Run service.
If the environment variables have been defined in a .env file it would look like this
DATABASE_NAME=my-database
DATABASE_USER=root
DATABASE_PASSWORD=P4SSw0rd!
DATABASE_PORT=5432
DATABASE_HOST="/socket/my-database-socket"
DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
In this example, DATABASE_URL depends on every other environment variables.
To deploy the service I run the following command:
gcloud run deploy my-service \
--image gcr.io/my-project/my-image:latest \
--region europe-west1 \
--port 80 \
--platform managed \
--allow-unauthenticated \
--set-env-vars 'DATABASE_NAME=my-database' \
--set-env-vars 'DATABASE_USER=root' \
--set-env-vars 'DATABASE_PASSWORD=P4SSw0rd!' \
--set-env-vars 'DATABASE_PORT=5432' \
--set-env-vars 'DATABASE_HOST="/socket/my-database-socket"' \
--set-env-vars 'DATABASE_URL="user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)"'
Here is the created YAML definition of the service (some values are omitted)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      name: ...
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      containers:
      - image: ...
        ports:
        - name: http1
          containerPort: 80
        env:
        - name: DATABASE_NAME
          value: my-database
        - name: DATABASE_USER
          value: root
        - name: DATABASE_PASSWORD
          value: P4SSw0rd!
        - name: DATABASE_HOST
          value: /socket/my-database-socket
        - name: DATABASE_URL
          value: user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)
The problem is that when the service is running, the env vars in DATABASE_URL do not seem to be interpolated.
I read that Kubernetes supports dependent env vars, but I can't figure out how to make this work in Cloud Run, and I am wondering whether it is supported in Cloud Run at all.
It's likely this would work in open-source Knative (which uses Kubernetes to execute pods) but not on Google Cloud Run (fully managed), which runs on a proprietary execution engine.
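Since the $(VAR) interpolation happens on the Kubernetes side rather than inside the container, one workaround (a sketch, not an official Cloud Run feature) is to drop the DATABASE_URL flag from the deploy command and assemble the dependent variable yourself in a small entrypoint script from the plain variables Cloud Run does set:

#!/bin/sh
# entrypoint.sh (hypothetical file, set as the container ENTRYPOINT):
# build the dependent variable at startup, then hand off to the real command.
export DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
exec "$@"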

Deploying Node.js apps with Kubernetes

I was trying to deploy a very basic Express app, a small server listening on 8080, on an EC2 server (Ubuntu 16.04) following this tutorial. On that server, a Kubernetes cluster was created through kops 1.8.0.
After that, I created a Dockerfile like the following:
FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
After that, I built the image with docker build -t ccastelli/stupid_server:test1, specified my credentials with docker login -u ccastelli, copied the image ID from docker images, tagged it with docker tag c549618dcd86 org/test:first_try and pushed it with docker push org/test to a private repository on cloud.docker.com.
After that I created a cluster secret with kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' --docker-email=myemail@gmail.com
After that I created a deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: org/test:first_try
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: ccastelli-regcred
I see from kubectl get pods that the image transitioned from ErrImagePull to ImagePullBackOff and is not ready. Anyway, the docker container was working on the client instance but not in the cluster. At this point, I'm a bit lost. What am I doing wrong?
Thanks
Edit: error message:
Failed to pull image "org/test:first_try": rpc error: code =
Unknown desc = Error response from daemon: repository pycomio/test not
found: does not exist or no pull access
Your --docker-server should be index.docker.io:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`
kubectl create secret docker-registry myregistrykey \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
--docker-email=$DOCKER_EMAIL
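Note that the deployment above references the secret as ccastelli-regcred in imagePullSecrets, so either keep that name when recreating the secret with the corrected server, or update the deployment to use myregistrykey. A quick check afterwards (a sketch using the variables defined above):

# Recreate the secret under the name the deployment expects, then verify it exists
kubectl create secret docker-registry ccastelli-regcred \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
kubectl get secret ccastelli-regcred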

kubectl pull image from gitlab unauthorized: HTTP Basic: Access denied

I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here is my deployment gitlab-ci job:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    name: proj
    targetPort: 8082
    nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
      - name: projcontainer
        image: registry.gitlab.com/proj/subproj/api:v1
        imagePullPolicy: Always
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "cloud"
        ports:
        - containerPort: 8082
      imagePullSecrets:
      - name: registry.gitlab.com
I have followed this article
There is a workaround: the image can be pushed to Google Container Registry and then pulled from GCR without extra registry credentials. We can push the image to GCR without the gcloud CLI by using the JSON key file. So .gitlab-ci.yml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:
gcr.io/proj/api:v1
You must use --docker-server=$CI_REGISTRY,
the same value as you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that your docker registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job.
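Putting that together, the secret creation line in the k8s-deploy job might look like this (a sketch; the default namespace is only an example, use whichever namespace the Deployment is created in):

kubectl create secret docker-registry registry.gitlab.com \
  --docker-server="$CI_REGISTRY" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com \
  --namespace=default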