How to create a service on minikube with a YAML configuration which is accessible from the host? - minikube

How do I write a correct YAML configuration for a Kubernetes pod and service in a minikube cluster (docker driver) with one requirement: port 80 of the container must be accessible from the host machine. A solution with nodePort doesn't work as expected:
type: NodePort
ports:
- port: 80
  targetPort: 8006
selector:
  app: blogapp
The label app: blogapp is set on the container. Can you show a correct configuration, for example for an nginx image, with the port accessible from the host?

You should create a Kubernetes Deployment instead of just a NodePort Service. Once you create the Deployment (which will also create a ReplicaSet and a Pod automatically), you can expose it. The blogapp will not be available to the outside world by default, so you must expose it if you want to be able to access it from outside the cluster.
Exposing the deployment will automatically create a service as well.
deployment.yml
apiVersion: apps/v1 # extensions/v1beta1 was removed in Kubernetes 1.16+
kind: Deployment
metadata:
  name: blogapp
  labels:
    app: blogapp
spec:
  replicas: 1
  selector: # required with apps/v1
    matchLabels:
      app: blogapp
  strategy: {}
  template:
    metadata:
      labels:
        app: blogapp
    spec:
      containers:
      - image: <YOUR_NGINX_IMAGE>
        name: blogapp
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
Create the deployment
kubectl create -f deployment.yml
Expose the deployment
kubectl expose deployment blogapp --name=blogapp --type=LoadBalancer --target-port=8006
Get the exposed URL
minikube service blogapp --url
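With the docker driver, the printed URL is a tunnel on localhost and the terminal has to stay open while you use it; a rough sketch of the check from the host (the port below is illustrative, minikube assigns it):

# in one terminal (keep it open with the docker driver, since it runs a tunnel):
minikube service blogapp --url
# in another terminal, using whatever URL it printed:
curl http://127.0.0.1:55233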

You can use the below configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server-instance
        image: blog-app-server
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
I guess you were missing spec.ports[0].nodePort.
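To reach it from the host, something like the following should work (with the docker driver minikube service sets up the tunnel for you; the minikube ip form only works with VM-based drivers):

minikube service blog-app-service --url
# or, with a VM-based driver, straight to the NodePort:
curl http://$(minikube ip):31364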

Related

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second one.
However, the first deployment can't find the second one. When it tries to call it using the product-app host it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You can then connect to the product-app pods from the composite-app using the product-service name. (You could also reach a pod directly by its IP without a Service, but that is not recommended, since pod IPs change across pod updates.)
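For instance, once product-service exists, you can verify in-cluster DNS from the composite pod (the pod name here is a placeholder):

kubectl exec -it <composite-app-pod> -- curl http://product-service:8080/product/123
# the fully qualified form product-service.default.svc.cluster.local:8080 also resolves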

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command to expose it properly.
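For the manifests above, that would be something like:

minikube service identityold-svc --url
# with the docker driver this opens a tunnel and prints a http://127.0.0.1:<port> URL to use from the browser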

GKE NodePort service refusing incoming traffic

I have created a NodePort service in Google Cloud with the following specification. I have a firewall rule that allows traffic from 0.0.0.0/0 on port 30100, and I have verified in the Stackdriver logs that the traffic is allowed, but when I hit http://<node-ip>:30100 with curl or from a browser I get no response. I couldn't work out how to debug the issue either... can someone please advise?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Thanks.
You need to fix the container port: it must be 80, because the nginx image listens on that port, as you can see here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Also, you need to create a firewall rule to permit the traffic to the node, as mentioned by #danyL in comments:
gcloud compute firewall-rules create test-node-port --allow tcp:30100
Get the node IP with the command
kubectl get nodes -o wide
And them try to access the nginx page with:
curl http://<NODEIP>:30100
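If it still doesn't answer, a useful next check is whether the Service actually has endpoints; an empty ENDPOINTS column means the selector matches no pods:

kubectl get endpoints nginxv1
kubectl get pods -l app=nginxv1 -o wide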

How to define volume mounts for logs in Kubernetes deployments and access the service through a public IP?

Question, first part:
Every time I deploy the latest service my data is lost, so I want to mount a volume so that my data persists, and I also want to share the same volume mount across multiple services.
Question, second part:
How can I access the service using a public IP? It's accessible on the local network but not publicly.
Here is the deployment file I wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: setup
  labels:
    app: setup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: setup
  template:
    metadata:
      labels:
        app: setup
    spec:
      containers:
      - name: setup
        image: esmagic/setup:1.0
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: setup
  labels:
    app: setup
spec:
  ports:
  - name: tcp-8081-8081-setup
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: setup
  type: LoadBalancer
  externalIPs:
  - 192.168.1.249
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}
For the first part, define volumeMounts in the container spec and a matching volumes entry in the pod spec; for the second part, add the public IP to the service's externalIPs.
Deployment file after changes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: setup
  labels:
    app: setup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: setup
  template:
    metadata:
      labels:
        app: setup
    spec:
      containers:
      - name: setup
        image: esmagic/setup:1.0
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: sync-dir
          mountPath: /abc/xyz # any real path, e.g. /opt/software
      volumes:
      - name: sync-dir
        hostPath:
          path: /abc/xyz
---
apiVersion: v1
kind: Service
metadata:
  name: setup
  labels:
    app: setup
spec:
  ports:
  - name: tcp-8081-8081-setup
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: setup
  type: LoadBalancer
  externalIPs:
  - 192.168.1.249 # the public IP goes here (or list both)
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}
Also visit this guide for more details about volume mounts, as Arghya Sadhu said.
Follow this guide for using a volume mount in a pod. Specifically for logging, you can write the logs to a volume and use Fluentd or Fluent Bit to send them to a centralized log aggregation system; a minimal sidecar sketch follows below.
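As a rough sketch of that pattern (the pod name and log path here are illustrative assumptions, not from the question): the app writes logs to a shared emptyDir volume, and a Fluent Bit sidecar reads the same directory to ship them.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar # hypothetical name
spec:
  volumes:
  - name: app-logs
    emptyDir: {} # shared scratch volume for log files
  containers:
  - name: app
    image: esmagic/setup:1.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app # assumed log directory
  - name: log-shipper
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true # sidecar only reads the logs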
You can use a NodePort or LoadBalancer type Service to expose pods outside the cluster. If you are using NodePort, the Kubernetes cluster nodes need to have public IPs. Follow this guide for NodePort.
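Once the Service above is applied, you can confirm what was exposed; the EXTERNAL-IP column should show the 192.168.1.249 address from the manifest:

kubectl get svc setup
curl http://192.168.1.249:8081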

How to expose nginx on a public IP using a NodePort service in Kubernetes?

I'm executing kubectl create -f nginx.yaml, which creates the pods successfully, but the pods aren't exposed on the public IP of my instance. The following is the YAML I used, with the service type set to NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx
What could be incorrect in my approach or in the above YAML file, such that the deployed pods aren't exposed on the public IP?
PS: The firewall and ACLs are open to the internet on all TCP ports.
The endpoints were not getting added. On debugging I found a mismatch between the Deployment's labels and the Service's selector, so I changed the label key from "app" to "name" and it worked.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    name: nginx
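A quick way to confirm the labels now line up is to check that the Service has endpoints; an empty ENDPOINTS column would mean the selector still matches nothing:

kubectl get endpoints nginx
kubectl get pods -l name=nginx -o wide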
Jeel is right: your Service selector doesn't match the Pod labels. If you fix that, as Jeel already showed in this answer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    name: nginx
Your Service will be exposed on the node's IP address, because your Service type is NodePort.
If your node IP is, let's say, 35.226.16.207, you can connect to your Pod using that IP and the NodePort:
$ curl 35.226.16.207:30080
In this case your node must have a public IP; otherwise you can't access it.
As a second option, you can create a LoadBalancer type Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  selector:
    name: nginx
This will provide you a public IP.
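On a cloud provider the external IP can take a minute to be provisioned; you can watch for it and then test (the address is whatever EXTERNAL-IP eventually shows):

kubectl get svc nginx -w
curl http://<EXTERNAL-IP>/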
For more details, check this