kubectl pull image from gitlab unauthorized: HTTP Basic: Access denied - kubernetes

I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab container registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here are my gitlab-ci build and deploy jobs:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    name: proj
    targetPort: 8082
    nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
      - name: projcontainer
        image: registry.gitlab.com/proj/subproj/api:v1
        imagePullPolicy: Always
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "cloud"
        ports:
        - containerPort: 8082
      imagePullSecrets:
      - name: registry.gitlab.com
I have followed this article

There is a workaround: the image can be pushed to Google Container Registry instead and then pulled from GCR without extra credentials. The image can be pushed to GCR without the gcloud CLI by logging in with the service account's JSON key file. So .gitlab-ci.yaml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:
gcr.io/proj/api:v1
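For reference, the relevant container section of cloud-kubernetes.yml would then look roughly like this (a sketch; it assumes the GKE nodes' service account keeps its default read access to GCR in the same project, so the imagePullSecrets entry can be dropped):
containers:
- name: projcontainer
  image: gcr.io/proj/api:v1
  imagePullPolicy: Always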

You must use --docker-server=$CI_REGISTRY, the same value you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that the docker-registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job that references it.
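For example, a corrected secret-creation step might look like this (a sketch; the namespace flag is shown only to make the point, use whichever namespace the Deployment lives in):
kubectl create secret docker-registry registry.gitlab.com \
  --docker-server="$CI_REGISTRY" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com \
  --namespace=default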

Related

Gitlab CI/CD pipeline passed, but no changes were applied to the server

I am testing automation by applying GitLab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (e.g. renaming the HTML title).
I have confirmed that the code has been changed in the GitLab repository's master branch. No other branch is involved.
The CI/CD pipeline simply goes through the process below:
push code to the master branch
build the NextJS code
build the docker image and push it to GCR
pull the docker image and deploy it to the cluster.
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy
image: docker:19.03.12
variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest
services:
  - docker:19.03.12-dind
build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
      - name: frontweb-lesson-prod-app
        image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?
By default imagePullPolicy will be Always, but if nothing has changed in the deployment file, applying it may not update the Deployment at all, since you are using the same :latest tag each time.
There is also a difference between the kubectl apply and kubectl patch commands.
What you can do is add a minor label or annotation change to the Deployment and check whether the image gets updated with kubectl apply too; otherwise kubectl apply will mostly return an unchanged response.
Ref: imagePullPolicy
You should avoid using the :latest tag when deploying containers in production, as it is harder to track which version of the image is running and more difficult to roll back properly.
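A rough sketch of the two usual fixes (illustrative commands; the Deployment name is taken from deployment.yaml above): either tag each image with something unique such as $CI_COMMIT_SHORT_SHA and reference that tag in the manifest, or keep :latest and force a fresh rollout after applying:
kubectl apply -f deployment.yaml
kubectl rollout restart deployment/frontweb-lesson-prod
kubectl rollout status deployment/frontweb-lesson-prod
With imagePullPolicy defaulting to Always for :latest, the restarted pods will pull the newly pushed image.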

Skaffold deploys an extra pod when deploying with helm

I have a skaffold build that creates 2 docker images, myimg1 and myimg2.
When I try to deploy via helm, skaffold deploys them fine, then tries to deploy an additional pod for myimg1. I can see that in the minikube dashboard. At first I see myimg1:<tag>, but in a few seconds that changes to myimg1-0.
deploy:
  kubectl:
    manifests:
      - k8s/pgadmin.yaml
  kubeContext: minikube
  helm:
    releases:
      - name: myrelease
        chartPath: charts/mychart
        artifactOverrides:
          myimg1.container.image: myimg1
      - name: myrelease
        chartPath: charts/mychart
        artifactOverrides:
          myimg2.container.image: myimg2
values.yaml looks like this:
myimg1:
  name: myimg1
  container:
    image: jfrog.host.com/docker_repo/myimg1:latest
    pullPolicy: IfNotPresent
    port: 5432
  service:
    type: ClusterIP
    port: 5432
myimg2:
  name: myimag2
  container:
    image: jfrog.host.com/docker_repo/myimg1:latest
    pullPolicy: IfNotPresent
    port: 8080
  service:
    type: ClusterIP
    port: 8080
    portName: http
Now, if I run helm manually and override the image tags via command line arguments, it deploys fine:
helm install --atomic --debug myrelease ./charts/mychart/ -f ./charts/mychart/values.yaml --set-string myimg1.container.image=myimg1:latest --set-string myimg2.container.image=myimg2:latest
When I run skaffold dev however, both containers are deployed, then the second image is deployed again and it tries to pull from the remote registry.
As far as I can tell everything is set up fine.
Is there any way to debug this?
I see the following log message:
time="2022-10-07T10:40:50-04:00" level=info msg="Streaming logs from pod: myimg1-0 container: myimg1" subtask=-1 task=DevLoop
time="2022-10-07T10:40:50-04:00" level=debug msg="Running command: [kubectl --context minikube logs --since=7s -f myimg1-0 -c myrelease --namespace default]" subtask=-1 task=DevLoop

Is it possible to add dependent environment variables in Google Cloud Run?

I would like to specify dependent environment variables on a Cloud Run service.
If the environment variables were defined in a .env file, they would look like this:
DATABASE_NAME=my-database
DATABASE_USER=root
DATABASE_PASSWORD=P4SSw0rd!
DATABASE_PORT=5432
DATABASE_HOST="/socket/my-database-socket"
DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
In this example, DATABASE_URL depends on every other environment variable.
To deploy the service I run the following command:
gcloud run deploy my-service \
--image gcr.io/my-project/my-image:latest \
--region europe-west1 \
--port 80 \
--platform managed \
--allow-unauthenticated \
--set-env-vars 'DATABASE_NAME=my-database' \
--set-env-vars 'DATABASE_USER=root' \
--set-env-vars 'DATABASE_PASSWORD=P4SSw0rd!' \
--set-env-vars 'DATABASE_PORT=5432' \
--set-env-vars 'DATABASE_HOST="/socket/my-database-socket"' \
--set-env-vars 'DATABASE_URL="user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)"'
Here is the created YAML definition of the service (some values are omitted)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      name: ...
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      containers:
      - image: ...
        ports:
        - name: http1
          containerPort: 80
        env:
        - name: DATABASE_NAME
          value: my-database
        - name: DATABASE_USER
          value: root
        - name: DATABASE_PASSWORD
          value: P4SSw0rd!
        - name: DATABASE_HOST
          value: /socket/my-database-socket
        - name: DATABASE_URL
          value: user=$(DATABASE_USER) password=$(DATABASE_PASSWORD) dbname=$(DATABASE_NAME) host=$(DATABASE_HOST)
The problem is that when the service is running, the variables referenced in DATABASE_URL are not interpolated.
I read that Kubernetes supports dependent env vars, but I can't figure out how to make this work in Cloud Run.
I am wondering whether it is supported in Cloud Run at all.
It's likely this may work in Knative open source (which uses Kubernetes to execute pods) but not on Google Cloud Run (fully hosted), which runs on a proprietary execution engine.
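If interpolation is indeed unavailable on fully managed Cloud Run, one possible workaround (an illustrative sketch, not something stated in the answer above) is to assemble DATABASE_URL at container startup in a small entrypoint script and keep only the individual variables in the service definition:
#!/bin/sh
# entrypoint.sh (hypothetical): build DATABASE_URL from the other variables, then start the app
export DATABASE_URL="user=${DATABASE_USER} password=${DATABASE_PASSWORD} dbname=${DATABASE_NAME} host=${DATABASE_HOST}"
exec "$@"
The Dockerfile would then set this script as the ENTRYPOINT so the original command still runs via exec "$@".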

How to parameterized pod_template_file (yaml file) using GitLab CI-CD variables?

I need to change the values of AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER and AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY via CI/CD variables. These values are present in the airflow_template.yaml file. I tried substituting the CI/CD variables, but it is not working. If there is a better way to parameterize this, please let me know.
#My project folder structure looks like below:
dataops
-- docker
   -- base
      -- airflow.cfg
      -- **airflow_template.yaml**
      -- Dockerfile
   -- dag-image
      -- Dockerfile
-- helm
   -- Chart.yaml
   -- values.yaml
   -- templates
      -- deployment.yaml
      -- svc.yaml
**airflow_template.yaml**
apiVersion: v1
kind: Pod
metadata:
  labels: {}
spec:
  containers:
    - args: []
      command: []
      env:
        - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
          value: $DEV_AIRFLOW_CONTAINER_REPO
        - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
          value: $DEV_AIRFLOW_LOG_FOLDER
      envFrom: []
      imagePullPolicy: Always
      name: base
      ports: []
      volumeMounts:
        - mountPath: /usr/local/airflow/logs
          name: airflow-logs
  hostNetwork: false
  imagePullSecrets: []
  initContainers: []
  nodeSelector: {}
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  serviceAccountName: default
  volumes:
    - emptyDir: {}
      name: airflow-logs
gitlab-ci.yml
stages:
  - build_and_upload
  - deploy_to_dev
  - tag_prod
  - deploy_to_prod
build_and_upload:
  stage: build_and_upload
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.14-dind
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - mkdir -p edfi/operation
    - cp -r airflow_dags/ dataops/docker/dag-image/airflow_dags/
    - cd dataops/docker/dag-image/
    - docker build -t "$DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA" --build-arg COMMIT_HASH=$CI_COMMIT_SHORT_SHA .
    - docker tag $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA $DEV_DAGS_IMAGE:latest
    - docker push $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $DEV_DAGS_IMAGE:latest
  only:
    refs:
      - develop
    # variables:
    #   - $CI_COMMIT_MESSAGE =~ /penguin/
deploy_to_dev:
  stage: deploy_to_dev
  image: $CI_REGISTRY_IMAGE:kube-image
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_CONTAINER_REPO="${DEV_AIRFLOW_CONTAINER_REPO}"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - gcloud auth activate-service-account $DEV_SERVICE_ACCOUNT --key-file=./service_account.json --project=$DEV_PROJECT_NAME
    - gcloud container clusters get-credentials $DEV_GKE_CLUSTER --region $REGION
    - echo $DEV_DB_CONN > dataops/helm/airflow-loadbalancer/files/secrets/airflow/AIRFLOW__CORE__SQL_ALCHEMY_CONN
    - cd dataops/helm/
    - helm upgrade airflow-dev airflow-loadbalancer/ --install --atomic --set dags_image.tag=$CI_COMMIT_SHORT_SHA
  only:
    refs:
      - develop
You could make it a jinja2 template and use a small Python program to interpolate the values into the template.
Then you also have all the flexibility to use environment variables or something else.
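A minimal sketch of that idea, assuming the placeholders in airflow_template.yaml are rewritten as Jinja2 expressions such as {{ worker_container_repo }} and {{ remote_base_log_folder }} (the script name and template file name below are illustrative):
# render_template.py: fill the pod template from CI/CD environment variables
import os
from jinja2 import Template

with open("airflow_template.yaml.j2") as f:
    template = Template(f.read())

rendered = template.render(
    worker_container_repo=os.environ["DEV_AIRFLOW_CONTAINER_REPO"],
    remote_base_log_folder=os.environ["DEV_AIRFLOW_LOG_FOLDER"],
)

with open("airflow_template.yaml", "w") as f:
    f.write(rendered)
Alternatively, since the template already uses $VAR style placeholders, running envsubst < airflow_template.yaml.tmpl > airflow_template.yaml in the CI script (envsubst is part of the gettext package) would achieve the same result without Python.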

Deploying Node.js apps with Kubernetes

I was trying to deploy a very basic Express app (a small server listening on 8080) to an EC2 server (Ubuntu 16.04), following this tutorial. On that server, a Kubernetes cluster was created through kops 1.8.0.
After that, I created a Dockerfile like the following:
FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
After that, I built the image with docker build -t ccastelli/stupid_server:test1, specified my credentials with docker login -u ccastelli, copied the image ID from docker images, tagged it with docker tag c549618dcd86 org/test:first_try, and pushed it with docker push org/test to a private repository on cloud.docker.com.
After that I created a cluster secret with kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' --docker-email=myemail@gmail.com
After that I created a deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: org/test:first_try
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: ccastelli-regcred
I see from kubectl get pods that the image transitioned from ErrImagePull to ImagePullBackOff and is not ready. The docker container works on the client instance but not in the cluster. At this point I'm a bit lost. What am I doing wrong?
Thanks
Edit: error message:
Failed to pull image "org/test:first_try": rpc error: code = Unknown desc = Error response from daemon: repository pycomio/test not found: does not exist or no pull access
Your --docker-server should be index.docker.io:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`

kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
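Applied to the setup in the question, the fix might look roughly like this (a sketch reusing the names from the question):
kubectl delete secret ccastelli-regcred --ignore-not-found
kubectl create secret docker-registry ccastelli-regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=ccastelli \
  --docker-password='pass' \
  --docker-email=myemail@gmail.com
# make sure the tagged image really exists in the registry, then let the pod retry the pull
docker push org/test:first_try
kubectl delete pod -l app=stupid-server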