I'm trying to provide some environment variables to my app when deploying it with Kubernetes. My approach is to generate a ConfigMap that supplies the variables, but it's not working.
I'm using Azure Pipelines for my build and publish steps.
Dockerfile:
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm cache clean --force
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
My azure-pipelines.yml:
stages:
  # Build Dev
  - stage: BuildDev
    displayName: Build and Push Dev
    jobs:
      - job: Development
        displayName: Build and Push Dev
        timeoutInMinutes: 0
        pool:
          vmImage: ubuntu-18.04
        steps:
          - checkout: self
          - task: Docker@1
            displayName: Build Image
            inputs:
              azureSubscriptionEndpoint: my-subscription
              azureContainerRegistry: my-container-registry
              command: build
              imageName: tenant/front/dev:$(Build.BuildId)
              includeLatestTag: true
              buildContext: '**'
          - task: Docker@1
            displayName: Push Image
            inputs:
              azureSubscriptionEndpoint: my-subscription
              azureContainerRegistry: my-container-registry
              command: push
              imageName: tenant/front/dev:$(Build.BuildId)
              buildContext: '**'
  # Deploy Dev
  - stage: DeployDev
    displayName: Deploy Dev
    jobs:
      - deployment: Deploy
        displayName: Deploy Dev
        timeoutInMinutes: 0
        pool:
          vmImage: ubuntu-18.04
        environment: Development-Front
        strategy:
          runOnce:
            deploy:
              steps:
                - task: Kubernetes@1
                  displayName: 'kubectl apply'
                  inputs:
                    kubernetesServiceEndpoint: 'AKS (standard subscription)'
                    command: apply
                    useConfigurationFile: true
                    configurationType: inline
                    inline: |
                      apiVersion: apps/v1beta1
                      kind: Deployment
                      metadata:
                        name: $(appNameDev)
                        labels:
                          app: $(appNameDev)
                      spec:
                        replicas: 1
                        selector:
                          matchLabels:
                            app: $(appNameDev)
                        template:
                          metadata:
                            labels:
                              app: $(appNameDev)
                          spec:
                            containers:
                              - name: $(appNameDev)
                                image: tenant/front/dev:$(Build.BuildId)
                                imagePullPolicy:
                                env:
                                  - name: NEXT_PUBLIC_APP_API
                                    value: development
                                ports:
                                  - name: http
                                    containerPort: 80
                                    protocol: TCP
                                volumeMounts:
                                  - name: environment-variables
                                    mountPath: /usr/src/app/.env
                                    readOnly: true
                            volumes:
                              - name: environment-variables
                                configMap:
                                  name: environment-variables
                                  items:
                                    - key: .env
                                      path: .env
                      ---
                      apiVersion: v1
                      kind: Service
                      metadata:
                        name: $(appNameDev)
                        labels:
                          app: $(appNameDev)
                      spec:
                        type: LoadBalancer
                        ports:
                          - port: 80
                            targetPort: 80
                            protocol: TCP
                            name: http
                        selector:
                          app: $(appNameDev)
                      ---
                      apiVersion: v1
                      kind: ConfigMap
                      metadata:
                        name: environment-variables
                      data:
                        .env: |
                          NEXT_PUBLIC_APP_API=development
                          API=http://another.endpoint.com/serverSide
When I try to access this NEXT_PUBLIC_APP_API variable, I get undefined. In my next.config.js, I expose the variable via publicRuntimeConfig.
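For reference, the relevant part of next.config.js is wired up roughly like this (simplified sketch):

// next.config.js (simplified sketch of the setup described above)
module.exports = {
  publicRuntimeConfig: {
    NEXT_PUBLIC_APP_API: process.env.NEXT_PUBLIC_APP_API,
    API: process.env.API,
  },
};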
If you are using GitHub Actions, the first thing is to add a step to your image build workflow that creates the dynamic variables:
- name: Create variables
  id: vars
  run: |
    branch=${GITHUB_REF##*/}
    echo "API_URL=API_${branch^^}" >> $GITHUB_ENV
    echo "APP_ENV=APP_${branch^^}" >> $GITHUB_ENV
    echo "BASE_URL=BASE_${branch^^}" >> $GITHUB_ENV
    sed -i "s/GIT_VERSION/${{ github.sha }}/g" k8s/${branch}/api-deployment.yaml
The second step is to build the Docker image with extra build arguments. If you are using another CI, just add the variables directly to the build args, as below:
--build-arg PROD_ENV=NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL]}}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}
Pay attention to the \n line breaks: they are what lets Docker understand that you are sending multiple variables into the build process.
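Put together, the build command looks roughly like this (a sketch; the image name and build context are placeholders):

docker build \
  --build-arg PROD_ENV="NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL] }}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}" \
  -t my-registry/my-app:${{ github.sha }} .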
The last thing is to consume the extra arg inside the Dockerfile:
# Install dependencies only when needed
FROM node:16.13.0-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Rebuild the source code only when needed
FROM node:16.13.0-alpine AS builder
ARG PROD_ENV=""
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN printf "$PROD_ENV" >> .env.production
RUN yarn build
# Production image, copy all the files and run next
FROM node:16.13.0-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/.env* ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# The following line disables telemetry; remove it if you prefer to keep telemetry enabled.
RUN npx next telemetry disable
CMD ["yarn", "start"]
I send PROD_ENV as an extra build arg and then build a .env.production file on the fly with the required values.
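With values like the ones above, the generated .env.production ends up looking something like this (illustrative values only):

NEXT_PUBLIC_API_URL=https://api.example.com
NEXT_PUBLIC_BASE_URL=https://app.example.com
NEXT_PUBLIC_APP_ENV=production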
Related
I'm currently doing a PoC to validate usage of Argo Workflows. I created a workflow spec with the following template (this is just a small portion of the workflow YAML):
templates:
  - name: dummy-name
    inputs:
      parameters:
        - name: params
    container:
      name: container-name
      image: <image>
      volumeMounts:
        - name: vault-token
          mountPath: "/etc/secrets"
          readOnly: true
      imagePullPolicy: IfNotPresent
      command: ['workflow', 'f10', 'reports', 'expiry', '.', '--days-until-expiry', '30', '--vault-token-file-path', '/etc/secrets/token', '--environment', 'corporate', '--log-level', 'debug']
The above way of passing the commands works without any issues upon submitting the workflow. However, if I replace the command with {{inputs.parameters.params}} like this:
templates:
  - name: dummy-name
    inputs:
      parameters:
        - name: params
    container:
      name: container-name
      image: <image>
      volumeMounts:
        - name: vault-token
          mountPath: "/etc/secrets"
          readOnly: true
      imagePullPolicy: IfNotPresent
      command: ['workflow', '{{inputs.parameters.params}}']
it fails with the following error:
DEBU[2023-01-20T18:11:07.220Z] Log line
content="Error: failed to find name in PATH: exec: \"workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug\":
stat workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug: no such file or directory"
Am I missing something here?
FYI: The Dockerfile that builds the container has the following ENTRYPOINT set:
ENTRYPOINT ["workflow"]
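For what it's worth, the error shows that the templated parameter is substituted as one single string and is not split into separate arguments, so the runtime tries to exec a binary whose name is the entire command line. A possible workaround (a sketch, assuming sh is available in the image) is to run the value through a shell so it gets word-split:

container:
  name: container-name
  image: <image>
  imagePullPolicy: IfNotPresent
  # sh -c word-splits the parameter; note this bypasses the image ENTRYPOINT,
  # so the parameter value should be the full command line, e.g.
  # "workflow f10 reports expiry . --days-until-expiry 30 ..."
  command: ['sh', '-c']
  args: ['{{inputs.parameters.params}}']

Alternatively, pass each flag as its own input parameter so that every element of the command list stays a separate argument.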
I am testing automation by applying GitLab CI/CD to a GKE cluster. The app is deployed successfully, but source code changes are not applied (e.g. renaming the HTML title).
I have confirmed that the code has been changed on the master branch of the GitLab repository. No other branch is involved.
The CI/CD pipeline simply goes through the steps below:
push code to the master branch
build the Next.js code
build the Docker image and push it to GCR
pull the Docker image and deploy it
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy

image: docker:19.03.12

variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest

services:
  - docker:19.03.12-dind

build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
        - name: frontweb-lesson-prod-app
          image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?
By default imagePullPolicy will be Always, but if nothing in the deployment file changes, applying it may not update the deployment at all, since you are using the same latest tag each time.
There is also a difference between the kubectl apply and kubectl patch commands.
What you can do is make a minor label or annotation change in the deployment and check whether the image gets updated by kubectl apply as well; otherwise kubectl apply will mostly return an unchanged response.
Ref: imagePullPolicy
You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.
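For example, bumping a pod-template annotation (or simply restarting the rollout) forces a new rollout that pulls the freshly pushed :latest image. Here is a sketch using the deployment name from the manifest above:

# Option 1: change a pod-template annotation so that kubectl apply/patch sees a diff
kubectl patch deployment frontweb-lesson-prod -p \
  '{"spec":{"template":{"metadata":{"annotations":{"deployed-at":"'"$(date +%s)"'"}}}}}'

# Option 2: restart the rollout directly (kubectl 1.15+)
kubectl rollout restart deployment/frontweb-lesson-prod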
I have a workflow that runs a DAG with 2 tasks.
Each task clones a repo.
I want the repo cloned by the first task to be available to the second task, which runs in another container.
How can I share the repos between them?
Using persistent volumes, artifacts, outputs?
kind: Workflow
metadata:
  name: create-configmap-dag-workflow
spec:
  entrypoint: create-configmap-dag
  templates:
    - name: create-configmap-dag
      dag:
        tasks:
          - name: automation-npm-packages
            template: run-shell-command
            arguments:
              parameters:
                - name: cmd
                  value: git clone https://github.com/org/pulumi-automation.git && cd automation && npm install
          - name: campaign-update-npm-packages
            template: run-shell-command
            depends: automation-npm-packages.Succeeded
            arguments:
              parameters:
                - name: cmd
                  value: git clone https://github.com/org/campaign-update.git && cd campaign-update/infra && npm install
    - name: run-shell-command
      inputs:
        parameters:
          - name: cmd
      container:
        image: amazonaws.com/jenkins-slave:ecs-global-node_master-3
        command: [ "sh", "-c" ]
        args: [ "{{inputs.parameters.cmd}}" ]
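One way to share the clones between the two tasks is a workflow-scoped volume that both task pods mount. A rough, untested sketch, assuming a dynamically provisioned PVC is acceptable:

spec:
  entrypoint: create-configmap-dag
  # one claim for the whole workflow; it is cleaned up when the workflow finishes
  volumeClaimTemplates:
    - metadata:
        name: workdir
      spec:
        accessModes: ["ReadWriteOnce"]  # with ReadWriteOnce both pods must land on the same node; ReadWriteMany avoids that
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: run-shell-command
      inputs:
        parameters:
          - name: cmd
      container:
        image: amazonaws.com/jenkins-slave:ecs-global-node_master-3
        command: [ "sh", "-c" ]
        # every task works under /workdir, so the second task sees the repo cloned by the first
        args: [ "cd /workdir && {{inputs.parameters.cmd}}" ]
        volumeMounts:
          - name: workdir
            mountPath: /workdir

Output artifacts (archiving the cloned directory from the first task and unpacking it as an input artifact of the second) would also work if you prefer not to manage volumes.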
We have a Tekton pipeline and want to replace the image tag in our deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
    spec:
      containers:
        - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot@sha256:5d8a03755d3c45a3d79d32ab22987ef571a65517d0edbcb8e828a4e6952f9bcd
          name: microservice-api-spring-boot
          ports:
            - containerPort: 8098
      imagePullSecrets:
        - name: gitlab-container-registry
Our Tekton pipeline uses the yq Task from Tekton Hub to replace the .spec.template.spec.containers[0].image with the "$(params.IMAGE):$(params.SOURCE_REVISION)" name like this:
- name: substitute-config-image-name
  taskRef:
    name: yq
  runAfter:
    - fetch-config-repository
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: files
      value:
        - "./deployment/deployment.yml"
    - name: expression
      value: .spec.template.spec.containers[0].image = \"$(params.IMAGE)\":\"$(params.SOURCE_REVISION)\"
Sadly the yq Task doesn't seem to work: it reports a green Step completed successfully, but shows the following errors:
16:50:43 safelyRenameFile [ERRO] Failed copying from /tmp/temp3555913516 to /workspace/source/deployment/deployment.yml
16:50:43 safelyRenameFile [ERRO] open /workspace/source/deployment/deployment.yml: permission denied
Any idea on how to solve the error?
The problem seems to be related to the way the Dockerfile of https://github.com/mikefarah/yq now handles file permissions (see for example this fix, among others). The 0.3 version of the Tekton yq Task uses the image https://hub.docker.com/layers/mikefarah/yq/4.16.2/images/sha256-c6ef1bc27dd9cee57fa635d9306ce43ca6805edcdab41b047905f7835c174005, which produces the error.
One work-around to the problem could be the usage of the yq Task version 0.2 which you can apply via:
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/yq/0.2/yq.yaml
This one uses the older docker.io/mikefarah/yq:4@sha256:34f1d11ad51dc4639fc6d8dd5ade019fe57cf6084bb6a99a2f11ea522906033b and works without the error.
Alternatively, you can simply create your own yq-based Task that won't have the problem, like this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: replace-image-name-with-yq
spec:
  workspaces:
    - name: source
      description: A workspace that contains the file which needs to be dumped.
  params:
    - name: IMAGE_NAME
      description: The image name to substitute
    - name: FILE_PATH
      description: The file path relative to the workspace dir.
    - name: YQ_VERSION
      description: Version of https://github.com/mikefarah/yq
      default: v4.2.0
  steps:
    - name: substitute-with-yq
      image: alpine
      workingDir: $(workspaces.source.path)
      command:
        - /bin/sh
      args:
        - '-c'
        - |
          set -ex
          echo "--- Download yq & add to path"
          wget https://github.com/mikefarah/yq/releases/download/$(params.YQ_VERSION)/yq_linux_amd64 -O /usr/bin/yq &&\
          chmod +x /usr/bin/yq
          echo "--- Run yq expression"
          yq e ".spec.template.spec.containers[0].image = \"$(params.IMAGE_NAME)\"" -i $(params.FILE_PATH)
          echo "--- Show file with replacement"
          cat $(params.FILE_PATH)
      resources: {}
This custom Task simply uses the alpine image as a base and installs yq via the plain binary download with wget. It also uses yq exactly as you would on the command line locally, which makes developing your expression so much easier!
As a bonus it outputs the file contents, so you can check the replacement results right in the Tekton pipeline!
You need to apply it with
kubectl apply -f tekton-ci-config/replace-image-name-with-yq.yml
You should now be able to use it like this:
- name: replace-config-image-name
  taskRef:
    name: replace-image-name-with-yq
  runAfter:
    - dump-contents
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: IMAGE_NAME
      value: "$(params.IMAGE):$(params.SOURCE_REVISION)"
    - name: FILE_PATH
      value: "./deployment/deployment.yml"
Inside the Tekton Dashboard the step then shows the processed file in its output, so you can verify the replacement right away.
I need to change the values of AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER and AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY via CI/CD variables. These values live in the airflow_template.yaml file. I tried substituting the CI/CD variables, but it is not working. If there is a better way to parameterize this, please let me know.
My project folder structure looks like below:
dataops
  -- docker
     -- base
        -- airflow.cfg
        -- **airflow_template.yaml**
        -- Dockerfile
     -- dag-image
        -- Dockerfile
  -- helm
     -- Chart.yaml
     -- values.yaml
     -- templates
        -- deployment.yaml
        -- svc.yaml
**airflow_template.yaml**
apiVersion: v1
kind: Pod
metadata:
  labels: {}
spec:
  containers:
    - args: []
      command: []
      env:
        - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
          value: $DEV_AIRFLOW_CONTAINER_REPO
        - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
          value: $DEV_AIRFLOW_LOG_FOLDER
      envFrom: []
      imagePullPolicy: Always
      name: base
      ports: []
      volumeMounts:
        - mountPath: /usr/local/airflow/logs
          name: airflow-logs
  hostNetwork: false
  imagePullSecrets: []
  initContainers: []
  nodeSelector: {}
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  serviceAccountName: default
  volumes:
    - emptyDir: {}
      name: airflow-logs
gitlab-ci.yml
stages:
  - build_and_upload
  - deploy_to_dev
  - tag_prod
  - deploy_to_prod

build_and_upload:
  stage: build_and_upload
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.14-dind
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - mkdir -p edfi/operation
    - cp -r airflow_dags/ dataops/docker/dag-image/airflow_dags/
    - cd dataops/docker/dag-image/
    - docker build -t "$DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA" --build-arg COMMIT_HASH=$CI_COMMIT_SHORT_SHA .
    - docker tag $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA $DEV_DAGS_IMAGE:latest
    - docker push $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $DEV_DAGS_IMAGE:latest
  only:
    refs:
      - develop
    # variables:
    #   - $CI_COMMIT_MESSAGE =~ /penguin/

deploy_to_dev:
  stage: deploy_to_dev
  image: $CI_REGISTRY_IMAGE:kube-image
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_CONTAINER_REPO="${DEV_AIRFLOW_CONTAINER_REPO}"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - gcloud auth activate-service-account $DEV_SERVICE_ACCOUNT --key-file=./service_account.json --project=$DEV_PROJECT_NAME
    - gcloud container clusters get-credentials $DEV_GKE_CLUSTER --region $REGION
    - echo $DEV_DB_CONN > dataops/helm/airflow-loadbalancer/files/secrets/airflow/AIRFLOW__CORE__SQL_ALCHEMY_CONN
    - cd dataops/helm/
    - helm upgrade airflow-dev airflow-loadbalancer/ --install --atomic --set dags_image.tag=$CI_COMMIT_SHORT_SHA
  only:
    refs:
      - develop
You could make it a Jinja2 template and use a small Python program to interpolate the values into the template.
That also gives you full flexibility to pull the values from environment variables or any other source.
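A minimal sketch of that approach (the script name, the .j2 template path, and the placeholder names are assumptions; adapt them to the layout above):

# render_template.py: render airflow_template.yaml from CI/CD environment variables.
# Assumes the template's $DEV_* values are replaced by Jinja2 placeholders such as
# {{ worker_container_repo }} and {{ remote_base_log_folder }}.
import os
from jinja2 import Template

with open("dataops/docker/base/airflow_template.yaml.j2") as f:
    template = Template(f.read())

rendered = template.render(
    worker_container_repo=os.environ["DEV_AIRFLOW_CONTAINER_REPO"],
    remote_base_log_folder=os.environ["DEV_AIRFLOW_LOG_FOLDER"],
)

with open("dataops/docker/base/airflow_template.yaml", "w") as f:
    f.write(rendered)

In the pipeline you would run pip install jinja2 && python render_template.py before building the base image, so the rendered airflow_template.yaml is baked in with the correct values.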