Set up React app to work with host domain in local development - Kubernetes

I have a create-react-app with minor adjustments to its configuration (adding a set of aliases with react-app-rewired -- see config-overrides.js below). I have added the following environment variables to my .env file:
HOST=mavata.dev
DANGEROUSLY_DISABLE_HOST_CHECK=true
DISABLE_ESLINT_PLUGIN=true
GENERATE_SOURCEMAP=false
SKIP_PREFLIGHT_CHECK=true
INLINE_RUNTIME_CHUNK=false
My index.js simply imports bootstrap.js, a file that renders the App component into the root div of my index.html.
When I try running the react app with npm run start, using a start script of set port=9000 && react-app-rewired start, it takes me to mavata.dev:9000 in the browser, where the page will not load due to an ERR_SSL_PROTOCOL_ERROR saying "This site can't provide a secure connection: mavata.dev sent an invalid response." I tried changing the start script to set port=9000 && set HTTPS=true && set SSL_CRT_FILE=./reactcert/cert.pem SSL_KEY_FILE=./reactcert/key.pem && react-app-rewired start after creating a certificate, but I get the same error.
When I try running the react app in my Kubernetes cluster using skaffold (port-forwarding) and navigate to mavata.dev, I get a 404 Not Found Nginx error saying "GET https://mavata.dev/ 404".
How can I get my react app to be hosted over mavata.dev (I have it pointed to both 127.0.0.1 and 0.0.0.0 in my hosts file on my local machine), whether through npm run start or from skaffold dev --port-forward?
*** config-overrides.js ***:
const path = require('path');

module.exports = {
  webpack: (config, env) => {
    config.resolve = {
      ...config.resolve,
      alias: {
        ...config.resolve.alias, // spread the existing aliases (config.alias does not exist on the webpack config)
        'containerMfe': path.resolve(__dirname, './src/'),
        'Variables': path.resolve(__dirname, './src/data-variables/Variables.js'),
        'Functions': path.resolve(__dirname, './src/functions/Functions.js'),
        'Company': path.resolve(__dirname, './src/roots/company'),
        'Auth': path.resolve(__dirname, './src/roots/auth'),
        'Marketing': path.resolve(__dirname, './src/roots/marketing'),
      },
    };
    return config;
  },
};
*** Client Deployment ***:
apiVersion: apps/v1
kind: Deployment # type of k8s object we want to create
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
        # tier: frontend
    spec:
      containers:
        - name: client-container
          # image: us.gcr.io/mavata/frontend
          image: bridgetmelvin/frontend
          ports:
            - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
    - name: client-container
      protocol: TCP
      port: 9000
      targetPort: 9000
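(For reference, since the question mentions skaffold dev --port-forward: a port-forward stanza in skaffold.yaml, which is not shown here, might look roughly like the sketch below. The apiVersion and localPort are assumptions.)
# Sketch of a skaffold.yaml portForward section (not part of the original setup above)
apiVersion: skaffold/v2beta29
kind: Config
portForward:
  - resourceType: service
    resourceName: client-srv   # the Service defined above
    port: 9000
    localPort: 9000            # assumed local port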
Ingress-Srv.yaml:
# RUN: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
# for GCP run: kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
# then run: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: mavata.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 4000
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-srv
                port:
                  number: 9000
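(One hedged option for the ERR_SSL_PROTOCOL_ERROR side of the question, not something taken from the setup above: terminate TLS at the ingress by referencing a TLS secret built from the same cert/key pair, e.g. created with kubectl create secret tls mavata-dev-tls --cert=reactcert/cert.pem --key=reactcert/key.pem. A sketch of the extra section in the Ingress spec, with mavata-dev-tls being a hypothetical secret name:)
spec:
  tls:
    - hosts:
        - mavata.dev
      secretName: mavata-dev-tls   # hypothetical secret name
  rules:
    # ...rules as above...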
*** Dockerfile.dev ***:
FROM node:16-alpine as builder
ENV CI=true
ENV WDS_SOCKET_PORT=0
ENV PATH /app/node_modules/.bin:$PATH
RUN apk add --no-cache sudo git openssh-client bash
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib
RUN git config --global user.email "bridgetmelvin42@gmail.com"
RUN git config --global user.name "theCosmicGame"
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan git.mdbootstrap.com >> ~/.ssh/known_hosts
RUN cd ~/.ssh && ls
WORKDIR /app
ENV DOCKER_RUNNING=true
ENV CI=true
COPY package.json ./
COPY .env ./
COPY postinstall.js ./
COPY pre-mdb-install.sh ./
COPY postinstall.sh ./
RUN npm config set unsafe-perm true
RUN npm install
RUN bash postinstall.sh
RUN ls node_modules
COPY . .
CMD ["npm", "run", "start"]

Related

Gitlab CI/CD pipeline passed, but no changes were applied to the server

I am testing automation by applying GitLab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (e.g. renaming the HTML title).
I have confirmed that the code has been changed in the gitlab repository master branch. No other branch.
CI/CD simply goes through the process below.
push code to master branch
builds the NextJS code
builds the docker image and pushes it to GCR
pulls the docker image and deploys it.
The contents of the manifest files are as follows.
.gitlab-ci.yml
stages:
  - build-push
  - deploy

image: docker:19.03.12

variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest

services:
  - docker:19.03.12-dind

build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
        - name: frontweb-lesson-prod-app
          image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."
Is there something I'm missing?
By default, imagePullPolicy will be Always, but if there is no change in the deployment file, applying it may not update the deployment, since you are using the same tag (latest) each time.
There is also a difference between the kubectl apply and kubectl patch commands.
What you can do is add a minor label or annotation change in the deployment and check that the image gets updated with kubectl apply too; otherwise kubectl apply will mostly return an "unchanged" response.
Ref: imagePullPolicy
You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.
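Building on that advice, here is a sketch of how the pipeline could tag images per commit instead of latest (CI_COMMIT_SHORT_SHA is a predefined GitLab CI variable; the kubectl set image step is an assumption about how you would roll the deployment forward, with names taken from deployment.yaml above):
variables:
  DOCKER_IMAGE_TAG: $CI_COMMIT_SHORT_SHA   # unique tag per commit instead of "latest"

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    # point the deployment at the freshly pushed tag so the rollout actually happens
    - kubectl set image deployment/frontweb-lesson-prod frontweb-lesson-prod-app=$REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG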

Worker unable to connect to the master and invalid args in webport for Locust

I am trying to set up a load test for an endpoint. This is what I have followed so far:
Dockerfile
FROM python:3.8
# Add the external tasks directory into /tasks
WORKDIR /src
ADD requirements.txt .
RUN pip install --no-cache-dir --upgrade locust==2.10.1
ADD run.sh .
ADD load_test.py .
ADD load_test.conf .
# Expose the required Locust ports
EXPOSE 5557 5558 8089
# Set script to be executable
RUN chmod 755 run.sh
# Start Locust using LOCUS_OPTS environment variable
CMD ["bash", "run.sh"]
# Modified from:
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/Dockerfile
run.sh
#!/bin/bash

LOCUST="locust"
LOCUS_OPTS="--config=load_test.conf"
LOCUST_MODE=${LOCUST_MODE:-standalone}

if [[ "$LOCUST_MODE" = "master" ]]; then
  LOCUS_OPTS="$LOCUS_OPTS --master"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
  LOCUS_OPTS="$LOCUS_OPTS --worker --master-host=$LOCUST_MASTER_HOST"
fi

echo "${LOCUST} ${LOCUS_OPTS}"
$LOCUST $LOCUS_OPTS
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/locust/run.sh
This is how I have written the Locust load test script:
import json
from locust import HttpUser, constant, task

class CategorizationUser(HttpUser):
    wait_time = constant(1)

    @task
    def predict(self):
        payload = json.dumps(
            {"text": "Face and Body Paint washable Rubies Halloween item 91#385"}
        )
        _ = self.client.post("/predict", data=payload)
I am invoking that with a configuration:
locustfile = load_test.py
headless = false
users = 1000
spawn-rate = 1
run-time = 5m
host = IP
html = locust_report.html
So, after building and pushing the Docker image and creating a k8s cluster on GKE, I am deploying it. This is what the deployment.yaml looks like:
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/kubernetes/templates/deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-master-deployment
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      name: locust
      role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: master
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-worker-deployment
  labels:
    name: locust
    role: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      name: locust
      role: worker
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER
              value: locust-master-service
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
After deployment, I am exposing the required ports like so:
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
  --type NodePort \
  --port 5558 \
  --target-port 5558 \
  --name locust-5558

kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
  --type NodePort \
  --port 5557 \
  --target-port 5557 \
  --name locust-5557

kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
  --type LoadBalancer \
  --port 80 \
  --target-port 8089 \
  --name locust-web
The cluster and the nodes provision successfully. But the moment I hit the IP of locust-web, I am getting:
Any suggestions on how to resolve the bug?
Since you are exposing your pods and trying to access them from outside the cluster (your web application), you have to port-forward them or add an Ingress in order to reach your Locust pods.
My first approach would be trying to ping or send requests to your Locust pods with a simple port-forward.
More info about port forwarding here.
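Separately, the worker deployment points LOCUST_MASTER at locust-master-service, but no Service with that name appears in the manifests above. A minimal sketch of such a Service, assuming the master pod labels shown earlier:
apiVersion: v1
kind: Service
metadata:
  name: locust-master-service
  namespace: default
spec:
  selector:
    name: locust
    role: master
  ports:
    - name: loc-master-p1
      port: 5557
      targetPort: 5557
    - name: loc-master-p2
      port: 5558
      targetPort: 5558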
Probably the environment variables set by k8s are colliding with Locust's (LOCUST_WEB_PORT specifically). Change your setup so that no containers are named "locust".
See https://github.com/locustio/locust/issues/1226 for a similar issue.
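As a hedged illustration of that suggestion (these tweaks are assumptions, not something from the question): Kubernetes injects docker-link style variables for Services in the namespace, so a Service named locust-web produces a LOCUST_WEB_PORT variable inside the pods, which Locust then tries to parse as its web port. Renaming the resources or disabling service links avoids the clash:
spec:
  enableServiceLinks: false    # stop injection of *_PORT / *_SERVICE_* variables
  containers:
    - name: locust-runner      # renamed so nothing is literally called "locust"
      image: gcr.io/PROJECT_ID/IMAGE_URI
      imagePullPolicy: Always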

Kubernetes pod in minikube can't access a service

I can't access a service from a pod. When I run the curl serviceIP:port command from my pod console, I get the following error:
root@strongswan-deployment-7bc4c96494-qmb46:/# curl -v 10.111.107.133:80
* Trying 10.111.107.133:80...
* TCP_NODELAY set
* connect to 10.111.107.133 port 80 failed: Connection timed out
* Failed to connect to 10.111.107.133 port 80: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to 10.111.107.133 port 80: Connection timed out
Here is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strongswan-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strongswan
  template:
    metadata:
      labels:
        app: strongswan
    spec:
      containers:
        - name: strongswan-container
          image: 192.168.39.1:5000/mystrongswan
          ports:
            - containerPort: 80
          command: ["/bin/bash", "-c", "--"]
          args: ["while true; do sleep 30; done;"]
          securityContext:
            privileged: true
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: strongswan-service
spec:
  selector:
    app: strongswan
  ports:
    - port: 80        # Port exposed to the cluster
      protocol: TCP
      targetPort: 80  # Port on which the pod listens
I tried with an Nginx pod, and this time it works: I am able to connect to the Nginx service with the curl command.
I don't see where the problem comes from, since it works for the Nginx pod. What did I do wrong?
I use minikube:
user@user-ThinkCentre-M91p:~/minikube$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
EDIT
My second pod's yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: godart-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: godart
  template:
    metadata:
      labels:
        app: godart
    spec:
      containers:
        - name: godart-container
          image: 192.168.39.1:5000/mygodart
          ports:
            - containerPort: 9020
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: godart-service
spec:
  selector:
    app: godart
  ports:
    - port: 9020        # Port exposed to the cluster
      protocol: TCP
      targetPort: 9020  # Port on which the pod listens
The error:
[root@godart-deployment-648fb8757c-6mscv /]# curl -v 10.104.206.191:9020
* About to connect() to 10.104.206.191 port 9020 (#0)
* Trying 10.104.206.191...
* Connection timed out
* Failed connect to 10.104.206.191:9020; Connection timed out
* Closing connection 0
curl: (7) Failed connect to 10.104.206.191:9020; Connection timed out
The Dockerfile:
FROM centos/systemd
ENV container docker
RUN yum -y update; yum clean all
RUN yum -y install systemd; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
COPY /godart* /home
RUN yum install -y /home/GoDart-3.3-b10.el7.x86_64.rpm
RUN yum install -y /home/GoDartHmi-3.3-b10.el7.x86_64.rpm
CMD ["/usr/sbin/init"]
EDIT EDIT:
I solved my problem by adding a file that can respond to an HTTP request; this is the file:
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(9020, "0.0.0.0");
To make it work you must have a Node.js environment installed.
Run the script with the command: node filename.js
And after that I am able to curl my services.
I don't really understand why it works now, does anyone have an explanation ?
Thank you
Your strongswan-container container is using bash -c -- "while true; do sleep 30; done;" as its command.
The sleep command obviously does not listen on any port.
When you try to curl your service on port 80, a TCP connection is attempted towards the Pod on port 80, but since nothing in the Pod is listening on that port, the connection attempt fails.
how can I fix this error without using the sleep command?
If I understand your question correctly, I know of two solutions to your problem. First, you can look at how CrashLoopBackOff works and then change the container restart policy. The most important field should be lastProbeTime, which is the timestamp of when the Pod condition was last probed.
The second solution would be creating a readiness probe. You can also read more about it here.
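A minimal readiness probe sketch, assuming the container eventually runs something that listens on port 80 (with the current sleep loop there is nothing to probe, so this is illustrative only); it would sit under the strongswan-container entry in the Deployment above:
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10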

write tasks to install nginx and postgresql using ansible-playbook

The .score.sh is given as:
#!/bin/bash
pass=0;
fail=0;
if [ $? -eq 0 ];then
  worker=`ps -eaf|grep nginx|grep worker`
  master=`ps -eaf|grep nginx|grep master`
  serverup=`curl -Is http://localhost:9090/|grep -i "200 OK"`
  serverurl=`curl -Is http://localhost:9090/|grep -io "google.com"`
  if [[ ! -z ${worker} ]];then
    ((pass++))
    echo "nginx is running as worker";
  else
    ((fail++))
    echo "nginx is not running as worker";
  fi;
  if [[ ! -z ${master} ]];then
    ((pass++))
    echo "nginx is running as master";
  else
    ((fail++))
    echo "nginx is not running as master";
  fi;
  if [[ ! -z ${serverup} ]];then
    ((pass++))
    echo "Nginx server is up";
  else
    ((fail++))
    echo "Nginx server is not up";
  fi;
  if [[ ! -z ${serverurl} ]];then
    ((pass++))
    echo "Nginx server is redirecting to google.com";
  else
    ((fail++))
    echo "Nginx server is not redirecting to google.com ";
  fi;
fi;
echo $pass $fail
score=$(( $pass * 25 ))
echo "FS_SCORE:$score%"
I was only able to install nginx and postgresql, but could not satisfy the conditions given in .score.sh.
Can someone help me with how to get nginx running as both master and worker and redirect it to google.com?
#installing nginx and postgresql
- name: Updating apt
  command: sudo apt-get update
- name: Install list of packages
  apt:
    pkg: ['nginx', 'postgresql']
    state: latest
- name: Start Nginx service
  service:
    name: nginx
    state: started
- name: Start PostgreSQL service
  service:
    name: postgresql
    state: started
If nginx has not started, use
'sudo service nginx restart'
This worked for me and the Fresco course did get passed for me.
---
#installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx
- name: restart nginx
  service:
    name: nginx
    state: started
- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL
- name: Start PostgreSQL
  service:
    name: postgresql
    state: started
---
#installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx
- name: restart nginx
  service:
    name: nginx
    state: started
- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL
- name: Start PostgreSQL
  service:
    name: postgresql
    state: started
I have tried the above one and am getting the below error message:
ERROR! 'apt' is not a valid attribute for a Play
The error appears to be in '/projects/challenge/fresco_nginx/tasks/main.yml': line 3, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
#installing nginx and postgresql
- name: Install nginx
^ here
All the answers given above work for installing nginx; the problem is that nginx is running on port 80 while the score script checks port 9090. If you curl using port 80 you will get a response, so you need to find some way to change the nginx conf file to use port 9090.
The below code worked for me:
Define your port number and the site you wish to redirect the nginx server to in a .j2 file in the templates folder under your role.
Include a task in the playbook to copy the template to /etc/nginx/sites-enabled/default. Include a notify for the handler defined in the
'handlers' folder.
In some cases, if the nginx server doesn't restart, use 'sudo service nginx restart' at the terminal before testing your code.
Ansible-Sibelius (Try it Out- Write a Playbook)
#installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx
- name: restart nginx
  service:
    name: nginx
    state: started
- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL
- name: Start PostgreSQL
  service:
    name: postgresql
    state: started
- name: Set the configuration for the template file
  template:
    src: /<path-to-your-roles>/templates/sites-enabled.j2
    dest: /etc/nginx/sites-enabled/default
  notify: restart nginx
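For completeness, the notify above expects a matching handler; a minimal sketch, assuming the conventional handlers/main.yml layout inside the same role:
# handlers/main.yml (assumed path)
- name: restart nginx
  service:
    name: nginx
    state: restarted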
I found the below code useful and it passed the FrescoPlay hands-on; the above-mentioned code also passes the hands-on in FrescoPlay.
- hosts: all
  tasks:
    - name: ensure nginx is at the latest version
      apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started
server {
    listen 9090;
    root /var/www/your_domain/html;
    index index.html;
    server_name google.com;

    location / {
        try_files $uri $uri/ =404;
        proxy_pass https://www.google.com;
    }
}

Cannot find module '/usr/src/app/server.js'

I have tested the app using minikube locally and it works. When I use the same Dockerfile with deployment.yml, the pod goes into an Error state with the reason below:
Error: Cannot find module '/usr/src/app/server.js'
Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      serviceAccountName: opp-sa
      imagePullSecrets:
        - name: xxx
      containers:
        - name: nodejs-app
          image: registry.xxxx.net/k8s_app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
Assuming it could be a problem with "node_modules", I ran "ls" on the WORKDIR inside the Dockerfile and it does show me "node_modules". Does anyone know what else to check to resolve this issue?
Since I can't give this level of suggestion in a comment, I'm writing you a fully working example so you can compare it to yours and check if there is something different.
Sources:
Your Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package*.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
Sample package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
Sample server.js:
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Build image:
$ ls
Dockerfile package.json server.js
$ docker build -t k8s_app .
...
Successfully built 2dfbfe9f6a2f
Successfully tagged k8s_app:latest
$ docker images k8s_app
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s_app latest 2dfbfe9f6a2f 4 minutes ago 118MB
Your deployment sample + service for easy access (called nodejs-app.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: web-app
          image: k8s_app
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: nodejs-app
  ports:
    - port: 8080
      targetPort: 8080
Note: I'm using the minikube docker registry for this example; that's why imagePullPolicy: Never is set.
Now I'll deploy it:
$ kubectl apply -f nodejs-app.yaml
deployment.apps/nodejs-app-dep created
service/web-app-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-dep-5d75f54c7d-mfw8x 1/1 Running 0 3s
Whenever you need to troubleshoot inside a pod you can use kubectl exec -it <pod_name> -- /bin/sh (or /bin/bash depending on the base image.)
$ kubectl exec -it nodejs-app-dep-5d75f54c7d-mfw8x -- /bin/sh
/api # ls
Dockerfile node_modules package-lock.json package.json server.js
The pod is running and the files are in the WORKDIR folder as stated in the Dockerfile.
Finally let's test accessing from outside the cluster:
$ minikube service list
|-------------|-------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------|--------------|-------------------------|
| default | web-app-svc | 8080 | http://172.17.0.2:31446 |
|-------------|-------------|--------------|-------------------------|
$ curl -i http://172.17.0.2:31446
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 11
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
Date: Thu, 14 May 2020 18:49:40 GMT
Connection: keep-alive
Hello World$
The Hello World is being served as desired.
To summarize:
I built the Docker image in minikube ssh so it is cached.
Created the manifest containing the deployment pointing to the image, and added the service part to allow external access using NodePort.
NodePort routes all traffic to the Minikube IP on the port assigned to the service (i.e. 31446) and delivers it to the pods matching the selector, listening on port 8080.
A few pointers for troubleshooting:
kubectl describe pod <pod_name>: provides valuable information when the pod status shows any kind of error.
kubectl exec is great for troubleshooting inside the container as it's running; it's pretty similar to the docker exec command.
Review your code files to ensure there is no baked-in path in them.
Try using WORKDIR /usr/src/app instead of /api and see if you get a different result.
Try using a .dockerignore file with node_modules in its content.
Try it out and let me know in the comments if you need further help.
@willrof, thanks for the detailed write-up. A reply to your response is limited to 30 characters, hence I'm posting as a new comment.
My problem was resolved yesterday. It was with COPY . .
It works perfectly fine locally, but when I tried to deploy onto the cluster with the same Dockerfile, I was running into the "cannot find module..." issue.
It finally worked when the directory path was specified instead of . . while copying files:
COPY /api /usr/app #copy project basically
WORKDIR /usr/app #set workdir just before npm install
RUN npm install
EXPOSE 3000
Moving the WORKDIR statement before installing node_modules worked in my case. I'm surprised this turned out to be the problem, since it worked locally with COPY . .