I have a Flask app running on a remote Kubernetes cluster, and when I access it from inside the cluster it works. However, when I try to access it from outside, nothing happens.
I'm using kind to create the cluster. Locally I can access the Flask app via the node's IP address.
I don't know how to access the service from the outside. Do I need to do something else to be able to reach the app?
apiVersion: v1
kind: Service
metadata:
name: iweblens-svc
labels:
app: flaskapp
spec:
type: NodePort
ports:
- port: 5000
targetPort: 5000
protocol: TCP
selector:
app: flaskapp
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
version: v1beta2
kind: ClusterConfiguration
patch: |
- op: add
path: /apiServer/certSANs/-
value: my-hostname
nodes:
- role: control-plane
- role: worker
apiVersion: apps/v1
kind: Deployment
metadata:
name: flaskapp
labels:
app: flaskapp
spec:
replicas: 1
selector:
matchLabels:
app: flaskapp
template:
metadata:
labels:
app: flaskapp
spec:
containers:
- name: flaskapp
image: myimage
imagePullPolicy: Never
ports:
- containerPort: 5000
resources:
limits:
cpu: "0.5"
requests:
cpu: "0.5"
Create a NodePort or LoadBalancer (works only on supported cloud providers) Service to expose the Deployment outside the cluster.
Here is a guide on how to use a NodePort Service.
To be able to access an app via a NodePort Service, the node IP needs to be reachable (i.e. it should be on the same network) from the system where you are accessing it.
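Since the cluster here was created with kind, the nodes are Docker containers and their IPs are usually only reachable from the machine running Docker. One common way to reach a NodePort from outside that machine is to map a host port to the node port in the kind config and pin the Service's nodePort. A minimal sketch, showing only the parts relevant to port mapping (the 30000 value is an arbitrary choice from the default NodePort range, and the file name is just an example):
# kind-config.yaml -- merge into your existing kind config and recreate the cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000   # must match the Service's nodePort below
        hostPort: 30000        # port opened on the machine running kind
        protocol: TCP
  - role: worker
# service.yaml -- same Service as above, but with an explicit nodePort
apiVersion: v1
kind: Service
metadata:
  name: iweblens-svc
  labels:
    app: flaskapp
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30000
      protocol: TCP
  selector:
    app: flaskapp
After recreating the cluster (kind create cluster --config kind-config.yaml) the app should be reachable from the host at http://localhost:30000. Alternatively, kubectl port-forward svc/iweblens-svc 5000:5000 gives quick access without touching the cluster config at all.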
I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, which deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-mongo-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
volumes:
- name: auth-mongo-data
persistentVolumeClaim:
claimName: auth-mongo-pvc
containers:
- name: auth-mongo
image: mongo
ports:
- containerPort: 27017
name: 'auth-mongo-port'
volumeMounts:
- name: auth-mongo-data
mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-ip-srv
spec:
selector:
app: auth-mongo
type: ClusterIP
ports:
- name: auth-mongo-db
protocol: TCP
port: 27017
targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: auth-mongo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: isimmons33/ticketing-auth
env:
- name: MONGO_URI
value: 'mongodb://auth-mongo-ip-srv:27017/auth'
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
name: auth-ip-srv
spec:
selector:
app: auth
type: ClusterIP
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-ip-srv
port:
number: 3000
My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it's not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the linked post is quite old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well.
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: auth-mongo-depl
spec:
replicas: 1
serviceName: auth-mongo
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
containers:
- name: auth-mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: auth-mongo-data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: auth-mongo-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-ip-srv
spec:
selector:
app: auth-mongo
type: ClusterIP
ports:
- name: auth-mongo-db
protocol: TCP
port: 27017
targetPort: 27017
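With volumeClaimTemplates, the StatefulSet creates one PVC per replica, named <template-name>-<statefulset-name>-<ordinal>, and by default that PVC is not deleted when the pod or the StatefulSet is recreated, which is what keeps the data across skaffold restarts. A quick way to verify, sketched as comments around the command:
# PVCs created by volumeClaimTemplates follow the pattern <template>-<statefulset>-<ordinal>,
# e.g. auth-mongo-data-auth-mongo-depl-0 for the first replica of this StatefulSet.
kubectl get pvc
# The claim stays Bound across pod restarts, and deleting the StatefulSet (e.g. on a
# skaffold restart) does not delete it, so /data/db is re-attached when the pod comes back.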
I have found several pages with a similar question, and most answers tell us to whitelist our IP. However, I have allowed access from anywhere (0.0.0.0/0) in Atlas, and have installed the latest version of Mongoose (6.2.6), which is supposed to support the mongodb+srv protocol.
The connection works perfectly when I run locally using npm start or even from a dockerized container. But, when I deploy to a k8s cluster, I get an error saying:
querySrv ENOTFOUND _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
The deployment and service file are as:
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
replicas: 2
selector:
matchLabels:
app: my-workflow-api
template:
metadata:
labels:
app: my-workflow-api
spec:
containers:
- name: my-workflow-api
image: "myname/my-workflow-api:1.0.0"
ports:
- containerPort: 3000
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "256m"
The service.yaml has the contents:
apiVersion: v1
kind: Service
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
selector:
app: my-workflow-api
type: LoadBalancer
ports:
- name: http
port: 8000
targetPort: 3000
protocol: TCP
The namespace.yaml has the contents:
apiVersion: v1
kind: Namespace
metadata:
name: ns-my-workflow-api
I also tried the deployment.yaml with the dns rule:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
replicas: 2
selector:
matchLabels:
app: my-workflow-api
template:
metadata:
labels:
app: my-workflow-api
spec:
dnsPolicy: Default # <------ this rule
containers:
- name: my-workflow-api
image: "myname/my-workflow-api:1.0.0"
ports:
- containerPort: 3000
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "256m"
Once I changed the connection URL to the 2.0.14-or-earlier format, I was able to connect. That connection string starts with mongodb://....
While I have managed to make the connection work with this workaround of using an old-style connection string, and it seems to be some sort of DNS resolution issue, how do I make the newer protocol work to connect to Atlas from inside the cluster? Thanks in advance.
I was able to solve it by using this to start minikube:
minikube start --driver=docker
It seems there's some DNS resolution issue with the underlying Oracle VirtualBox driver (maybe a configuration and setup issue as well).
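If you want to confirm whether SRV lookups work from inside the cluster before switching drivers, a throwaway pod with DNS tools can help. A rough sketch, reusing the hostname from the error above (the image is just one example of a container that ships dig):
kubectl run -it --rm dns-test --image=nicolaka/netshoot --restart=Never -- \
  dig _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net SRV +short
# If this returns no records while the same lookup works on the host,
# the problem is SRV resolution inside the cluster, not Atlas IP whitelisting.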
Question, first part:
Every time I deploy the latest service my data is lost, so I want to mount a volume so that my data is not lost, and I also want to sync the same volume mount to multiple services for data sharing.
Question, second part:
How can I access the service using a public IP? It's accessible on the local network but not publicly.
Here is the deployment file I wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
name: setup
labels:
app: setup
spec:
replicas: 1
selector:
matchLabels:
app: setup
template:
metadata:
labels:
app: setup
spec:
containers:
- name: setup
image: esmagic/setup:1.0
ports:
- containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: setup
labels:
app: setup
spec:
ports:
- name: tcp-8081-8081-setup
port: 8081
targetPort: 8081
protocol: TCP
selector:
app: setup
type: LoadBalancer
externalIPs:
- 192.168.1.249
sessionAffinity: None
externalTrafficPolicy: Cluster
status:
loadBalancer: {}
For the first part, you have to define volumeMounts in the container spec and volumes in the pod spec; for the second part, you have to add the public IP under externalIPs.
Deployment file after changes:
apiVersion: apps/v1
kind: Deployment
metadata:
name: setup
labels:
app: setup
spec:
replicas: 1
selector:
matchLabels:
app: setup
template:
metadata:
labels:
app: setup
spec:
containers:
- name: setup
image: esmagic/setup:1.0
ports:
- containerPort: 8081
volumeMounts:
- name: sync-dir
mountPath: /abc/xyz #Any Real path like (/opt/software)
volumes:
- name: sync-dir
hostPath:
path: /abc/xyz
---
apiVersion: v1
kind: Service
metadata:
name: setup
labels:
app: setup
spec:
ports:
- name: tcp-8081-8081-setup
port: 8081
targetPort: 8081
protocol: TCP
selector:
app: setup
type: LoadBalancer
externalIPs:
- 192.168.1.249 #Here comes public IP or write both
sessionAffinity: None
externalTrafficPolicy: Cluster
status:
loadBalancer: {}
Also visit this guide for more details about volume mounts, as mentioned by Arghya Sadhu.
Follow this guide for using a volume mount in a pod. Specifically for logging, you can write the logs to a volume and have Fluentd or Fluent Bit ship them to a centralized log aggregation system (a sidecar sketch is shown below).
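One common pattern for that logging setup is a sidecar: the app writes its log files to a shared emptyDir volume and a Fluent Bit container in the same pod reads and ships them. A rough sketch, trimmed to the parts relevant to logging (the image tag, the /var/log/app path, and the Fluent Bit configuration itself are assumptions you would adapt):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: setup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: setup
  template:
    metadata:
      labels:
        app: setup
    spec:
      containers:
        - name: setup
          image: esmagic/setup:1.0
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app      # the app should write its log files here
        - name: log-shipper
          image: fluent/fluent-bit:2.2     # sidecar that tails the shared directory
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true
          # Fluent Bit still needs its own config (typically mounted from a ConfigMap)
          # telling it to tail /var/log/app/*.log and where to forward the records.
      volumes:
        - name: app-logs
          emptyDir: {}                     # shared between the two containers, per pod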
You can use a NodePort or LoadBalancer type Service to expose pods outside the cluster. If you are using NodePort, then the Kubernetes cluster nodes need to have a public IP. Follow this guide for NodePort.
I have deployed 2 pods which need to talk to another pod (let's say pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for pod A.
As the IP addresses are dynamic (i.e. they change if a pod crashes), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
and set those IP addresses in the config property file, then build the Dockerfile, push it, and use that image for the deployment.
This is my deployment and svc file, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
selector:
app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
spec:
replicas: 1
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
nodeSelector:
disktype: ssd
So How to set IP's of those 2 pods dynamically in config file and build and push docker image.
I think you should look into using headless Services.
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here are the yaml I've used:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
clusterIP: None
selector:
app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
replicas: 3
selector:
matchLabels:
app: spring-boot-demo-pricing
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
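As a follow-up on how the config file can consume this: instead of baking pod IPs into the image, the entrypoint can resolve the headless Service at startup, or the property file can simply reference the Service DNS name so nothing has to be rebuilt when pods are rescheduled. A small sketch (the getent call assumes the base image provides it, and the property key is purely hypothetical):
# Resolve every pod IP behind the headless Service at container startup:
getent ahosts spring-boot-demo-pricing | awk '{print $1}' | sort -u
# ...or skip resolving entirely and put the DNS name in the property file, e.g.
#   pricing.endpoints=spring-boot-demo-pricing.default.svc.cluster.local:8084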
How do I correctly write the YAML configuration for a Kubernetes pod and service in a minikube cluster with the docker driver, with one requirement: port 80 of the container must be accessible from the host machine? The solution with nodePort doesn't work as expected:
type: NodePort
ports:
- port: 80
targetPort: 8006
selector:
app: blogapp
The app: blogapp label is set on the container. Can you show a correct configuration, for example for an nginx image, with the port accessible from the host?
You should create a Kubernetes Deployment instead of only creating a NodePort Service. Once you create the Deployment (which will also create a ReplicaSet and Pod automatically), you can expose it. The blogapp will not be available to the outside world by default, so you must expose it if you want to be able to access it from outside the cluster.
Exposing the deployment will automatically create a service as well.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: blogapp
labels:
app: blogapp
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
app: blogapp
spec:
containers:
- image: <YOUR_NGINX_IMAGE>
name: blogapp
ports:
- containerPort: 8006
resources: {}
restartPolicy: Always
status: {}
Create the deployment
kubectl create -f deployment.yml
Expose the deployment
kubectl expose deployment blogapp --name=blogapp --type=LoadBalancer --target-port=8006
Get the exposed URL
minikube service blogapp --url
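If you just need to reach the container from the host during development, port-forwarding is another option that sidesteps Service types entirely; a quick sketch:
# Forward local port 8006 to port 8006 of a pod in the blogapp deployment
kubectl port-forward deployment/blogapp 8006:8006
# then, from the host:
curl http://localhost:8006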
You can use the below configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-app-server-instance
labels:
app: blog-app
spec:
replicas: 1
selector:
matchLabels:
app: blog-app
template:
metadata:
labels:
app: blog-app
spec:
containers:
- name: blog-app-server-instance
image: blog-app-server
ports:
- containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
name: blog-app-service
labels:
app: blog-app
spec:
selector:
app: blog-app
type: NodePort
ports:
- port: 80
nodePort: 31364
targetPort: 8006
protocol: TCP
name: http
I guess you were missing spec.ports[0].nodePort.
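Two details worth noting when testing this: nodePort values must fall in the default 30000-32767 range (31364 does), and with the minikube docker driver the node IP is not always reachable directly from the host, in which case minikube can open a tunnel for you. A quick check, assuming the Service above:
# If the node IP is routable from your host (Linux with the docker driver usually is):
curl http://$(minikube ip):31364
# Otherwise let minikube create a tunnel and print a reachable URL:
minikube service blog-app-service --url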