I was trying to deploy a very basic Express app, a small server listening on 8080, to an EC2 instance (Ubuntu 16.04), following this tutorial. On that server, a Kubernetes cluster was created with kops 1.8.0.
After that, I created a Dockerfile like the following:
FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
After that, I built the image with docker build -t ccastelli/stupid_server:test1, logged in with docker login -u ccastelli, copied the image ID from docker images, tagged it with docker tag c549618dcd86 org/test:first_try, and pushed it with docker push org/test to a private repository on cloud.docker.com.
After that I created a cluster secret with kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' --docker-email=myemail@gmail.com
After that I created a deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: org/test:first_try
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: ccastelli-regcred
I see from kubectl get pods that the pod went from ErrImagePull to ImagePullBackOff and is not ready. The Docker container works on the client instance but not in the cluster. At this point I'm a bit lost. What am I doing wrong?
Thanks
Edit: the error message:
Failed to pull image "org/test:first_try": rpc error: code = Unknown desc = Error response from daemon: repository pycomio/test not found: does not exist or no pull access
Your --docker-server should be index.docker.io:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`
kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
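To double-check what actually ended up in the secret, you can decode it and verify that the auths key points at index.docker.io rather than docker.com (secret name taken from the question):
$ kubectl get secret ccastelli-regcred --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode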
I am testing automation by applying GitLab CI/CD to a GKE cluster. The app is deployed successfully, but source code changes are not reflected (e.g. renaming the HTML title).
I have confirmed that the code has been changed in the GitLab repository's master branch. No other branch is involved.
CI/CD simply goes through the process below:
push code to the master branch
build the NextJS code
build the docker image and push it to GCR
pull the docker image and deploy it to the cluster.
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy
image: docker:19.03.12
variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest
services:
  - docker:19.03.12-dind
build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
      - name: frontweb-lesson-prod-app
        image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?
By default imagePullPolicy is Always for the latest tag, but if nothing in the deployment file changes between applies, the Deployment itself may not be updated, because you are using the same tag (latest) each time.
There is also a difference between the kubectl apply and kubectl patch commands.
What you can do is add a minor label or annotation change to the deployment and check whether the image gets updated with kubectl apply as well; otherwise kubectl apply will mostly report the resource as unchanged (see the sketch after the quote below).
Ref : imagepullpolicy
You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.
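If you want to keep the latest tag for now, a sketch of forcing the pods to re-pull after each pipeline run (the Deployment name is taken from the manifest above) is to restart the rollout rather than relying on kubectl apply detecting a change:
$ kubectl rollout restart deployment/frontweb-lesson-prod
$ kubectl rollout status deployment/frontweb-lesson-prod
A cleaner long-term fix is to tag the image with something unique per pipeline, for example the predefined $CI_COMMIT_SHORT_SHA variable, instead of latest, so every kubectl apply actually changes the pod template.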
I want to download a container image, but I don't want to deploy/run it.
How can I write a pod spec that only downloads the images but does not create containers?
Is there any pod spec snippet for this?
As far as I know, there is no direct Kubernetes resource that only downloads an image of your choosing. To have the images of your applications on your Nodes, you can consider the following solutions/workarounds:
Use a DaemonSet with initContainer(s)
Use tools like Ansible to pull the images with a playbook
Use a Daemonset with InitContainers
Assuming the following situation:
You've created 2 images that you want to have on all of the Nodes.
You can use a DaemonSet (which spawns a Pod on each Node) with initContainers (using those images) that will run on all Nodes and ensure that the images are present on the machine.
An example of such setup could be following:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pull-images
  labels:
    k8s-app: pull-images
spec:
  # AS THIS DAEMONSET IS NOT SUPPOSED TO SERVE TRAFFIC I WOULD CONSIDER USING THIS UPDATE STRATEGY FOR SPEEDING UP THE DOWNLOAD PROCESS
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100
  selector:
    matchLabels:
      name: pull-images
  template:
    metadata:
      labels:
        name: pull-images
    spec:
      initContainers:
      # PUT HERE IMAGES THAT YOU WANT TO PULL AND OVERRIDE THEIR ENTRYPOINT
      - name: ubuntu
        image: ubuntu:20.04 # <-- IMAGE #1
        imagePullPolicy: Always # SPECIFY THE POLICY FOR SPECIFIC IMAGE
        command: ["/bin/sh", "-c", "exit 0"]
      - name: nginx
        image: nginx:1.19.10 # <-- IMAGE #2
        imagePullPolicy: IfNotPresent # SPECIFY THE POLICY FOR SPECIFIC IMAGE
        command: ["/bin/sh", "-c", "exit 0"]
      containers:
      # MAIN CONTAINER WITH AS SMALL AS POSSIBLE IMAGE SLEEPING
      - name: alpine
        image: alpine
        command: [sleep]
        args:
        - "infinity"
The Kubernetes DaemonSet controller will ensure that the Pod runs on each Node. Before the main image runs, the initContainers act as placeholders for the images: the images that you want to have on the Nodes are pulled, and their ENTRYPOINT is overridden so that they don't keep running. After that, the main container (alpine) runs with a sleep infinity command.
This setup will also work when the new Nodes are added.
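Once applied (the file name pull-images.yaml below is just an assumption for the manifest above), you can verify that the images were pulled on every Node with:
$ kubectl apply -f pull-images.yaml
$ kubectl rollout status daemonset/pull-images
$ kubectl get pods -l name=pull-images -o wide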
Following on from that, I would also consider checking the following documentation on imagePullPolicy:
Kubernetes.io: Docs: Concepts: Containers: Images: Updating images
A side note!
I've set the imagePullPolicy differently for the images in the initContainers to show that you can specify it independently for each container. Please use the policy that suits your use case best.
Use tools like Ansible to pull the images with a playbook
Assuming that you have SSH access to the Nodes, you can consider using Ansible with its community module (assuming that you are using Docker):
community.docker.docker_image
Citing the documentation for this module:
This plugin is part of the community.docker collection (version 1.3.0).
To install it use: ansible-galaxy collection install community.docker.
Synopsis
Build, load or pull an image, making the image available for creating containers. Also supports tagging an image into a repository and archiving an image to a .tar file.
-- Docs.ansible.com: Ansible: Collections: Community: Docker: Docker image module
You can use it with the following example:
hosts.yaml
all:
  hosts:
    node-1:
      ansible_port: 22
      ansible_host: X.Y.Z.Q
    node-2:
      ansible_port: 22
      ansible_host: A.B.C.D
playbook.yaml
- name: Playbook to download images
  hosts: all
  user: ENTER_USER
  tasks:
  - name: Pull an image
    community.docker.docker_image:
      name: "{{ item }}"
      source: pull
    with_items:
    - "nginx"
    - "ubuntu"
A side note!
Using the Ansible approach, I needed to install the docker Python package:
$ pip3 install docker
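With the inventory and playbook above, a run would then look roughly like this (assuming Ansible is already installed and ENTER_USER is replaced with your SSH user):
$ ansible-galaxy collection install community.docker
$ ansible-playbook -i hosts.yaml playbook.yaml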
Additional resources:
Kubernetes.io: Docs: Concepts: Workloads: Controllers: Daemonset
Kubernetes.io: Docs: Concepts: Workloads: Pods: initContainers
I have created a mock.Dockerfile which contains just one line:
FROM eu.gcr.io/some-org/mock-service:0.2.0
With that config and a reference to it in the build section, skaffold builds that Dockerfile using the private GCR registry. However, if I remove that Dockerfile, skaffold does not build the image, and when starting, skaffold only loads the images that are referenced in the build section (public images, like postgres, work as well). So in that local Kubernetes setup, like minikube, this results in an
ImagePullBackOff
Failed to pull image "eu.gcr.io/some-org/mock-service:0.2.0": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials
So basically, when I create a one-line Dockerfile and include it, skaffold builds that image and loads it into minikube. It is possible to change the minikube config so that the request to GCR succeeds, but the goal is that developers don't have to change their minikube config...
Is there any other way to get that image loaded into Minikube, without changing the config and without that one-line Dockerfile?
skaffold.yaml:
apiVersion: skaffold/v2beta8
kind: Config
metadata:
  name: some-service
build:
  artifacts:
    - image: eu.gcr.io/some-org/some-service
      docker:
        dockerfile: Dockerfile
    - image: eu.gcr.io/some-org/mock-service
      docker:
        dockerfile: mock.Dockerfile
  local: { }
profiles:
  - name: mock
    activation:
      - kubeContext: (minikube|kind-.*|k3d-(.*))
    deploy:
      helm:
        releases:
          - name: postgres
            chartPath: test/postgres
          - name: mock-service
            chartPath: test/mock-service
          - name: skaffold-some-service
            chartPath: helm/some-service
            artifactOverrides:
              image: eu.gcr.io/some-org/some-service
            setValues:
              serviceAccount.create: true
Although GKE comes pre-configured to pull from registries within the same project, Kubernetes clusters generally require special configuration at the pod level to pull from private registries. It's a bit involved.
Fortunately minikube introduced a registry-creds add-on that will configure the minikube instance with appropriate credentials to pull images.
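A minimal sketch of enabling it (the configure step prompts interactively for, among other things, your GCR service-account JSON; the exact prompts depend on your minikube version):
$ minikube addons configure registry-creds
$ minikube addons enable registry-creds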
I have tested the app locally using minikube and it works. When I use the same Dockerfile with deployment.yml, the pod goes into an Error state with the reason below:
Error: Cannot find module '/usr/src/app/server.js'
Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      serviceAccountName: opp-sa
      imagePullSecrets:
        - name: xxx
      containers:
        - name: nodejs-app
          image: registry.xxxx.net/k8s_app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
Assuming it could be a problem with node_modules, I ran ls in the WORKDIR inside the Dockerfile and it does show node_modules. Does anyone know what else to check to resolve this issue?
Since I can't give you this level of detail in a comment, I'm writing out a fully working example so you can compare it to yours and check if something is different.
Sources:
Your Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
Sample package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
sample server.js:
'use strict';
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
res.send('Hello World');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Build image:
$ ls
Dockerfile package.json server.js
$ docker build -t k8s_app .
...
Successfully built 2dfbfe9f6a2f
Successfully tagged k8s_app:latest
$ docker images k8s_app
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s_app latest 2dfbfe9f6a2f 4 minutes ago 118MB
Your deployment sample + service for easy access (called nodejs-app.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: web-app
          image: k8s_app
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: nodejs-app
  ports:
    - port: 8080
      targetPort: 8080
Note: I'm using the minikube docker registry for this example, that's why imagePullPolicy: Never is set.
Now I'll deploy it:
$ kubectl apply -f nodejs-app.yaml
deployment.apps/nodejs-app-dep created
service/web-app-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-dep-5d75f54c7d-mfw8x 1/1 Running 0 3s
Whenever you need to troubleshoot inside a pod you can use kubectl exec -it <pod_name> -- /bin/sh (or /bin/bash depending on the base image.)
$ kubectl exec -it nodejs-app-dep-5d75f54c7d-mfw8x -- /bin/sh
/api # ls
Dockerfile node_modules package-lock.json package.json server.js
The pod is running and the files are in the WORKDIR folder as stated in the Dockerfile.
Finally let's test accessing from outside the cluster:
$ minikube service list
|-------------|-------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------|--------------|-------------------------|
| default | web-app-svc | 8080 | http://172.17.0.2:31446 |
|-------------|-------------|--------------|-------------------------|
$ curl -i http://172.17.0.2:31446
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 11
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
Date: Thu, 14 May 2020 18:49:40 GMT
Connection: keep-alive
Hello World$
The Hello World is being served as desired.
To summarize:
I built the Docker image inside minikube ssh so it is cached (see the sketch after this list for an alternative using minikube docker-env).
I created the manifest containing the Deployment pointing to that image, and added the Service part to allow external access using NodePort.
NodePort routes all traffic to the minikube IP on the port assigned to the service (i.e. 31446) and delivers it to the pods matching the selector, listening on port 8080.
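As an alternative to building inside minikube ssh, you can point your local Docker client at minikube's Docker daemon before building, which achieves the same caching effect (a sketch, assuming the Docker runtime is used):
$ eval $(minikube docker-env)
$ docker build -t k8s_app .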
A few pointers for troubleshooting:
kubectl describe pod <pod_name>: provides precious information when the pod status shows any kind of error.
kubectl exec is great for troubleshooting inside the container as it's running; it's pretty similar to the docker exec command.
Review your code files to ensure there is no hard-coded path in them.
Try using WORKDIR /usr/src/app instead of /api and see if you get a different result.
Try using a .dockerignore file with node_modules in its contents (a minimal example follows this list).
Try these out and let me know in the comments if you need further help.
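For the .dockerignore suggestion above, a minimal example of its contents (adjust to your project) would be:
node_modules
npm-debug.log
.git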
@willrof, thanks for the detailed write-up. A reply to your response is limited to 30 characters, so I'm posting this as a new answer.
My problem was resolved yesterday. It was with COPY . .
It works perfectly fine locally, but when I tried to deploy to the cluster with the same Dockerfile, I kept running into the "cannot find module..." issue.
It finally worked when the directory path was spelled out instead of . . when copying files:
# copy the project, then set the workdir just before npm install
COPY /api /usr/app
WORKDIR /usr/app
RUN npm install
EXPOSE 3000
Moving the WORKDIR statement to before installing node_modules worked in my case. I'm surprised this was the problem, since it worked locally with COPY . .
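For reference, the full Dockerfile after the change looked roughly like this (the FROM and CMD lines are carried over from the original Dockerfile, so treat this as a sketch):
FROM node:13-alpine
# Copy the project into the image, then set the workdir before installing dependencies
COPY /api /usr/app
WORKDIR /usr/app
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]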
I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here is my deployment gitlab-ci job:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    name: proj
    targetPort: 8082
    nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
      - name: projcontainer
        image: registry.gitlab.com/proj/subproj/api:v1
        imagePullPolicy: Always
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "cloud"
        ports:
        - containerPort: 8082
      imagePullSecrets:
      - name: registry.gitlab.com
I have followed this article.
There is a workaround: the image can be pushed to Google Container Registry and then pulled from GCR without extra pull secrets. We can push the image to GCR without the gcloud CLI by using the JSON key file. So .gitlab-ci.yml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:
gcr.io/proj/api:v1
You must use --docker-server=$CI_REGISTRY,
the same value as you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that your docker registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job.
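A sketch of the corrected secret creation, reusing the secret name referenced by imagePullSecrets in the manifest (add -n <namespace> if the Deployment does not live in default):
kubectl delete secret registry.gitlab.com --ignore-not-found
kubectl create secret docker-registry registry.gitlab.com \
  --docker-server=$CI_REGISTRY \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com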