Kubernetes on Docker Desktop - local connectivity issue using a service between two pods

I have created an ASP.NET Core C# Kubernetes Microservice (Named: 'DemoApi') with an Angular Frontend app (Named: 'DemoApp').
Although it works when I run both containers through Docker Desktop or Docker Swarm mode, I have connectivity issues when I run them through Kubernetes on Docker Desktop.
To clarify the issue, I can display both
the front-end application at http://localhost:30005
the back-end application at http://localhost/demo
but while running the pods using Kubernetes, the front-end application does not display the data it should get from the API back-end.
Despite my search, I could not find a similar problem to mine. What could be the problem? Any assistance would be greatly appreciated.
The following is a short summary of the steps I took:
Defined the applications so that one reads from the other.
a. DemoApi:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddCors(options =>
    {
        options.AddPolicy("AllowAngularDevClient",
            builder =>
            {
                builder
                    .WithOrigins("http://localhost:4200")
                    .AllowAnyHeader()
                    .AllowAnyMethod();
            });
        options.AddPolicy("AllowAngularClient",
            builder =>
            {
                builder
                    .WithOrigins("http://localhost")
                    .AllowAnyHeader()
                    .AllowAnyMethod();
            });
    });
}
b. DemoApp (under app.component.ts):
export class AppComponent {
    title = 'DemoApp';
    response = "No data loaded, yet";

    constructor(private http: HttpClient)
    {
        this.http.get('http://localhost/demo', {responseType: 'text'}).subscribe((response: any) => {
            console.log(response);
            this.response = response;
        });
    }
}
Built the applications into images.
a. Built the ASP.NET Core C# image using the default Visual Studio Docker support (in Solution Explorer, right-click the Dockerfile > Build Docker Image).
b. Created a Dockerfile for the Angular application as follows:
FROM node:14-alpine as build-step
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build --prod
FROM nginx:1.21.0-alpine
COPY --from=build-step /app/dist/DemoApp /usr/share/nginx/html
then ran the command docker build -t demoapp .
✔️ Worked using Local Docker for Desktop
docker run --name demoapi-container demoapi
docker run --name demoapp-container --publish 4200:80 demoapp
✔️ Worked using Local Docker Swarm Mode
docker service create --name demoapi-svc -p 80:80 demoapi
docker service create --name demoapp-svc -p 4200:80 demoapp
❌ Did not work using Local Kubernetes for Docker Desktop
DemoApi:
apiVersion: v1
kind: Pod
metadata:
  name: demo-api-pod
  labels:
    name: demo-api-pod
    app: demo-api
spec:
  containers:
    - name: demo-api-container
      image: demoapi:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demoapi
  labels:
    name: demo-api-svc
    app: demo-api
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: demo-api-pod
    app: demo-api
DemoApp:
apiVersion: v1
kind: Pod
metadata:
  name: demo-angular-pod
  labels:
    name: demo-angular-pod
    app: demo-angular
spec:
  containers:
    - name: demo-angular-container
      image: demoapp:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  labels:
    name: demo-angular-svc
    app: demo-angular
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30005
  selector:
    name: demo-angular-pod
    app: demo-angular
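For completeness, a minimal sketch of how these manifests would typically be applied and checked (the file names demoapi.yaml and demoapp.yaml are assumptions; they are not stated in the original steps):

kubectl apply -f demoapi.yaml
kubectl apply -f demoapp.yaml

# verify the pods are running and the services have their ports assigned
kubectl get pods -o wide
kubectl get svc demoapi demoapp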
EDIT: I have completely re-configured everything starting from scratch thanks to @darwinawardee's suggestion:
At my DemoApi app I've added to the controller:
options.AddPolicy("AllowAngularKubernetesClient",
    builder =>
    {
        builder
            .WithOrigins("http://demoapp").WithOrigins("demoapp").WithOrigins("http://localhost:30005")
            .AllowAnyHeader()
            .AllowAnyMethod();
    });

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCors("AllowAngularKubernetesClient");
    [...]
}
At my DemoApp app I've added to app.component.ts:
constructor(private http: HttpClient)
{
    this.http.get('http://demoapi/demo', {responseType: 'text'}).subscribe((response: any) => {
        console.log(response + " -- from Kubernetes Cluster.");
        this.response = response;
    });
}
and re-built the images.
I've done the same process, yet the DemoApp does not get the information from the DemoApi.
Here are the logs from the entire process:
This is the result of using native Chrome's Inspect > Network tab:

You are facing a network routing issue.
In Kubernetes, each deployment runs in a pod that is assigned its own IP address, as is the associated service for each (which load-balances traffic across pods). Kubernetes maintains an internal DNS for routing between services. A service can be reached by its full name servicename.namespace.svc.cluster-domain.example or, within the same namespace, simply via the servicename itself.
So for the traffic to reach the DemoApi, you'll need the DemoApp to query http://demoapi/demo rather than http://localhost/demo.
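One way to confirm that the internal DNS name resolves is to run a throwaway pod inside the cluster (a hedged sketch; the pod names and busybox tag are arbitrary):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.35 -- nslookup demoapi
# and an in-cluster HTTP check against the service:
kubectl run http-test --rm -it --restart=Never --image=busybox:1.35 -- wget -qO- http://demoapi/demo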

Related

Worker unable to connect to the master and invalid args in webport for Locust

I am trying to set up a load test for an endpoint. This is what I have followed so far:
Dockerfile
FROM python:3.8
# Add the external tasks directory into /tasks
WORKDIR /src
ADD requirements.txt .
RUN pip install --no-cache-dir --upgrade locust==2.10.1
ADD run.sh .
ADD load_test.py .
ADD load_test.conf .
# Expose the required Locust ports
EXPOSE 5557 5558 8089
# Set script to be executable
RUN chmod 755 run.sh
# Start Locust using LOCUS_OPTS environment variable
CMD ["bash", "run.sh"]
# Modified from:
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/Dockerfile
run.sh
#!/bin/bash
LOCUST="locust"
LOCUS_OPTS="--config=load_test.conf"
LOCUST_MODE=${LOCUST_MODE:-standalone}
if [[ "$LOCUST_MODE" = "master" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --master"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --worker --master-host=$LOCUST_MASTER_HOST"
fi
echo "${LOCUST} ${LOCUS_OPTS}"
$LOCUST $LOCUS_OPTS
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/locust/run.sh
This is how I have written the load test locust script:
import json
from locust import HttpUser, constant, task

class CategorizationUser(HttpUser):
    wait_time = constant(1)

    @task
    def predict(self):
        payload = json.dumps(
            {
                "text": "Face and Body Paint washable Rubies Halloween item 91#385"
            }
        )
        _ = self.client.post("/predict", data=payload)
I am invoking that with a configuration:
locustfile = load_test.py
headless = false
users = 1000
spawn-rate = 1
run-time = 5m
host = IP
html = locust_report.html
So, after building and pushing the Docker image and creating a k8s cluster on GKE, I am deploying it. This is what the deployment.yaml looks like:
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/kubernetes/templates/deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-master-deployment
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      name: locust
      role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: master
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: locust-worker-deployment
  labels:
    name: locust
    role: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      name: locust
      role: worker
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
        - name: locust
          image: gcr.io/PROJECT_ID/IMAGE_URI
          imagePullPolicy: Always
          env:
            - name: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER
              value: locust-master-service
            - name: LOCUST_LOG_LEVEL
              value: DEBUG
After deployment, I am exposing the required ports like so:
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5558 \
--target-port 5558 \
--name locust-5558
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5557 \
--target-port 5557 \
--name locust-5557
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type LoadBalancer \
--port 80 \
--target-port 8089 \
--name locust-web
The cluster and the nodes provision successfully. But the moment I hit the IP of locust-web, I am getting:
Any suggestions on how to resolve the bug?
Since you are exposing your pods and trying to access them from outside the cluster (your web application), you have to port-forward them or add an Ingress in order to reach your Locust pods.
My first approach would be trying to ping or send requests to your Locust pods with a simple port-forward.
More info about port forwarding here.
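For example, a minimal port-forward sketch (using the master deployment name from the manifests above; adjust names and ports to your setup):

kubectl port-forward deployment/locust-master-deployment 8089:8089
# then open http://localhost:8089 to reach the Locust web UI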
The environment variables set by k8s are probably colliding with Locust's (LOCUST_WEB_PORT specifically). Change your setup so that no containers are named "locust".
See https://github.com/locustio/locust/issues/1226 for a similar issue.
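To see what Kubernetes injected and then remove the clash, a rough sketch (Kubernetes injects Docker-link-style variables such as LOCUST_WEB_PORT into containers for a Service named locust-web, which Locust then picks up as configuration; the pod and service names below are taken from the commands above, the replacement name is arbitrary):

kubectl exec -it locust-master-deployment-f9d4c4f59-8q6wk -- env | grep -i locust

kubectl delete service locust-web
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
  --type LoadBalancer \
  --port 80 \
  --target-port 8089 \
  --name loadtest-web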

Cannot find module '/usr/src/app/server.js'

I have tested the app using minikube locally and it works. When I use the same Dockerfile with deployment.yml, the pod goes into an Error state with the reason below:
Error: Cannot find module '/usr/src/app/server.js'
Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      serviceAccountName: opp-sa
      imagePullSecrets:
        - name: xxx
      containers:
        - name: nodejs-app
          image: registry.xxxx.net/k8s_app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
Assuming it could be a problem with "node_modules", I ran "ls" in the WORKDIR inside the Dockerfile and it does show "node_modules". Does anyone know what else to check to resolve this issue?
Since I can't give this level of suggestions in a comment, I'm writing out a fully working example so you can compare it to yours and check if anything is different.
Sources:
Your Dockerfile:
FROM node:13-alpine
WORKDIR /api
COPY package*.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
Sample package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
sample server.js:
'use strict';
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Build image:
$ ls
Dockerfile package.json server.js
$ docker build -t k8s_app .
...
Successfully built 2dfbfe9f6a2f
Successfully tagged k8s_app:latest
$ docker images k8s_app
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s_app latest 2dfbfe9f6a2f 4 minutes ago 118MB
Your deployment sample + service for easy access (called nodejs-app.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-dep
  labels:
    app: nodejs-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: web-app
          image: k8s_app
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: nodejs-app
  ports:
    - port: 8080
      targetPort: 8080
Note: I'm using the minikube docker registry for this example, that's why imagePullPolicy: Never is set.
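As an aside, another common way to make a locally built image visible to minikube is to point the Docker client at minikube's Docker daemon before building (a sketch; shell syntax may vary):

eval $(minikube docker-env)
docker build -t k8s_app .
# revert to the host daemon afterwards:
eval $(minikube docker-env --unset)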
Now I'll deploy it:
$ kubectl apply -f nodejs-app.yaml
deployment.apps/nodejs-app-dep created
service/web-app-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-dep-5d75f54c7d-mfw8x 1/1 Running 0 3s
Whenever you need to troubleshoot inside a pod you can use kubectl exec -it <pod_name> -- /bin/sh (or /bin/bash depending on the base image.)
$ kubectl exec -it nodejs-app-dep-5d75f54c7d-mfw8x -- /bin/sh
/api # ls
Dockerfile node_modules package-lock.json package.json server.js
The pod is running and the files are in the WORKDIR folder as stated in the Dockerfile.
Finally let's test accessing from outside the cluster:
$ minikube service list
|-------------|-------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------|--------------|-------------------------|
| default | web-app-svc | 8080 | http://172.17.0.2:31446 |
|-------------|-------------|--------------|-------------------------|
$ curl -i http://172.17.0.2:31446
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 11
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
Date: Thu, 14 May 2020 18:49:40 GMT
Connection: keep-alive
Hello World$
The Hello World is being served as desired.
To summarize:
I built the Docker image inside minikube (via minikube ssh) so it is cached there.
Created the manifest containing the deployment pointing to the image, and added the Service so the app can be accessed externally using NodePort.
The NodePort service routes all traffic arriving at the minikube IP on the assigned port (i.e. 31446) and delivers it to the pods matching the selector, which listen on port 8080.
A few pointers for troubleshooting:
kubectl describe pod <pod_name>: provides valuable information when the pod status shows any kind of error.
kubectl exec is great for troubleshooting inside the container as it's running; it's pretty similar to the docker run command.
Review your code files to ensure there is no hard-coded path in them.
Try using WORKDIR /usr/src/app instead of /api and see if you get a different result.
Try using a .dockerignore file with node_modules in it (a minimal example is sketched just below).
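A minimal .dockerignore sketch for that last pointer (extend as needed for your project):

node_modules
npm-debug.log
.git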
Try it out and let me know in the comments if you need further help.
@willrof, thanks for the detailed write-up. A reply to your response is limited to 30 characters, so I'm posting this as a new answer.
My problem was resolved yesterday. It was with COPY . .
It works perfectly fine locally but, when I tried to deploy onto the cluster with the same Dockerfile, I was running into the "cannot find module..." issue.
It finally worked when the directory paths were spelled out instead of using . . while copying files:
# copy the project
COPY /api /usr/app
# set WORKDIR just before npm install
WORKDIR /usr/app
RUN npm install
EXPOSE 3000
Moving the WORKDIR statement to just before installing "node_modules" worked in my case. I'm surprised this turned out to be the problem, since it worked locally with COPY . .

How to create a mongo database per service with Docker

I am working towards having multiple services (NodeJS, Spring Boot) that each have their own MongoDB database-server-per-service (eventually targeting GCP & K8s) so that I can keep the data separate. I will be using Docker Compose to launch each service and its database together. However, when I run multiple services I naturally get port collisions. Here is a typical docker-compose file:
version: '3'
# Define the services/containers to be run
services:
  myapp: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
  database: # name of the service
    image: mongo # specify image to build container from
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
I am looking for an example of how to do this. My thinking is that each compose file will have its own ports and each service will map to those ports internally?
You can write a deployment YAML that declares all your containers in one pod (a Pod is a group of containers). Your deployment may look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: application
          image: application:version
          ports:
            - containerPort: 3000
        - name: database
          image: database:version
          ports:
            - containerPort: 27017
This is just the deployment inside your cluster. You still need to expose it outside the cluster; I recommend using an Ingress for that.
Here you have the database inside the pod. Alternatively, you can create two deployments, one for the database and one for your app, in the same namespace.
Also, you need to build the Docker images manually or use a CI tool for that. You can manage environments (prod, pre-prod, dev, test) with namespaces; one namespace per environment gives you full isolation. To manage all of this, I recommend tools like Helm or kops.
There are a lot of differences between Kubernetes and Docker Compose, but the main difference is design. In Kubernetes you have more entities for each level of the application and you can manage them individually; in Docker Compose you configure everything as one set of services in one place, and it is usually harder to manage specific pieces.
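As a rough sketch of the two-deployments approach for the database side (names, image tag, and the omitted volume/credentials are placeholders, not a complete production setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-mongo
  template:
    metadata:
      labels:
        app: myapp-mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-mongo
spec:
  selector:
    app: myapp-mongo
  ports:
    - port: 27017
      targetPort: 27017

The application, deployed separately, can then reach its own database at mongodb://myapp-mongo:27017; each service gets its own database deployment under a different name, so ports never collide inside the cluster.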

What is the equivalent for depends_on in kubernetes

I have a docker compose file with the following entries
version: '2.1'
services:
  mysql:
    container_name: mysql
    image: mysql:latest
    volumes:
      - ./mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3306"]
      interval: 30s
      timeout: 10s
      retries: 5
  test1:
    container_name: test1
    image: test1:latest
    ports:
      - '4884:4884'
      - '8443'
    depends_on:
      mysql:
        condition: service_healthy
    links:
      - mysql
The test1 container depends on mysql and needs it to be up and running.
In Docker this can be controlled using the healthcheck and depends_on attributes.
The health check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control the container startup order within the pods?
Any directions on this would be greatly appreciated.
My Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: deployment
    spec:
      containers:
        - name: mysqldb
          image: "dockerregistry:mysqldatabase"
          imagePullPolicy: Always
          ports:
            - containerPort: 3306
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 15
            periodSeconds: 10
        - name: test1
          image: "dockerregistry::test1"
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
That's the beauty of Docker Compose and Docker Swarm: their simplicity.
We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using a side-car (initContainer), which is just another container in the same pod that runs first; when it completes, Kubernetes automatically starts the [main] container. We made it a simple shell script that loops until Elasticsearch is up and running, then exits, and Kibana's container starts.
Below is an example of a side-car that waits until Grafana is ready.
Add this 'initContainer' block just above your other containers in the Pod:
spec:
  initContainers:
    - name: wait-for-grafana
      image: darthcabs/tiny-tools:1
      args:
        - /bin/bash
        - -c
        - >
          set -x;
          while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
            echo '.'
            sleep 15;
          done
  containers:
    .
    .
    (your other containers)
    .
    .
This was purposefully left out. The reason is that applications should be responsible for their own connect/re-connect logic when connecting to services such as a database. This is outside the scope of Kubernetes.
While I don't know a direct answer to your question except for this link (k8s-AppController), I don't think it's wise to use the same deployment for the DB and the app. You are tightly coupling your db with the app and losing the great k8s option to scale either one as needed. Furthermore, if your db pod dies, you lose your data as well.
Personally, what I would do is have a separate StatefulSet with a Persistent Volume for the database, a Deployment for the app, and a Service to handle the communication between them.
Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale them as needed. And my data is persistent as well!
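A bare-bones sketch of that layout (image tag, storage size, password handling, and names are placeholders, not a production configuration):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None   # headless service backing the StatefulSet
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password   # placeholder; use a Secret in practice
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

The application then lives in its own Deployment and connects to mysql:3306 through the service.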
As mentioned, you should run the database and the application containers in separate pods and connect them with a service.
Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a small utility that solved this problem for us.
Here's the link to the tool we've developed: https://github.com/Opsfleet/depends-on
It lets you make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends_on functionality.
In Kubernetes terminology, your docker-compose set corresponds to a Pod.
So there is no depends_on equivalent. Kubernetes checks all containers in a pod: they all have to be alive to mark the pod as healthy, and they will always run together.
In your case, you would prepare a Deployment configuration like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-and-db
    spec:
      containers:
        - name: app
          image: nginx
          ports:
            - containerPort: 80
        - name: db
          image: mysql
          ports:
            - containerPort: 3306
Once the pod starts, the database will be available to your application on the localhost interface, because of the pod networking model:
Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory.
But, as @leninhasda mentioned, it is not a good idea to run a database and an application in the same pod without a Persistent Volume. Here is a good tutorial on how to run a stateful application in Kubernetes.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
What about liveness and readiness probes? They support commands, HTTP requests, and more:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
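For the ordering question, a readinessProbe can also use an HTTP check instead of a command; a small sketch (path and port are placeholders):

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10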

Connect to Google Cloud SQL from Container Engine with Java App

I'm having a tough time connecting to a Cloud SQL Instance from a Java App running in a Google Container Engine Instance.
I whitelisted the external instance IP from the Access Control of CloudSQL. Connecting from my local machine works well, however I haven't managed to establish a connection from my App yet.
I'm configuring the Container as (cloud-deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APPNAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: APPNAME
    spec:
      imagePullSecrets:
        - name: APPNAME.com
      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 111.111.111.111 # the cloud sql instance ip
            - name: MYQL_ENV_MYSQL_PASSWORD
              value: THEPASSWORD
            - name: MYQL_ENV_MYSQL_USER
              value: THEUSER
          ports:
            - containerPort: 9000
              name: APPNAME
using the connection URL jdbc:mysql://111.111.111.111:3306/databaseName, resulting in:
Error while executing: Access denied for user 'root'@'ip address of the instance' (using password: YES)
I can confirm that the Container Engine external IP is set on the SQL instance.
I don't want to use the Cloud Proxy Image for now as I'm still in development stage.
Any help is greatly appreciated.
You must use the cloud SQL proxy as described here: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/README.md
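A rough sketch of what the proxy looks like as a sidecar in the existing deployment (the instance connection name, proxy version, and secret name are placeholders; the linked README is the authoritative reference):

      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 127.0.0.1   # the app now talks to the proxy over localhost
          ports:
            - containerPort: 9000
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.16
          command: ["/cloud_sql_proxy",
                    "-instances=PROJECT:REGION:INSTANCE=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials

The JDBC URL then becomes jdbc:mysql://127.0.0.1:3306/databaseName, and no public IP whitelisting is needed.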