How to define volume mounts for logs in Kubernetes deployments and access a service through a public IP? - kubernetes

Question, first part:
Every time I deploy the latest service my data is lost, so I want to mount a volume so that the data survives redeployments, and I also want to share the same volume mount across multiple services.
Question, second part:
How can I access the service using a public IP? It is accessible on the local network but not publicly.
Here is the deployment file I wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: setup
  labels:
    app: setup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: setup
  template:
    metadata:
      labels:
        app: setup
    spec:
      containers:
      - name: setup
        image: esmagic/setup:1.0
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: setup
  labels:
    app: setup
spec:
  ports:
  - name: tcp-8081-8081-setup
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: setup
  type: LoadBalancer
  externalIPs:
  - 192.168.1.249
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}

For the first part, define volumeMounts under the container spec and volumes under the pod spec; for the second, add the public IP under externalIPs.
Deployment file after the changes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: setup
  labels:
    app: setup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: setup
  template:
    metadata:
      labels:
        app: setup
    spec:
      containers:
      - name: setup
        image: esmagic/setup:1.0
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: sync-dir
          mountPath: /abc/xyz # any real path, e.g. /opt/software
      volumes:
      - name: sync-dir
        hostPath:
          path: /abc/xyz
---
apiVersion: v1
kind: Service
metadata:
  name: setup
  labels:
    app: setup
spec:
  ports:
  - name: tcp-8081-8081-setup
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: setup
  type: LoadBalancer
  externalIPs:
  - 192.168.1.249 # the public IP goes here (or list both)
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}
Also see this guide for more details about volume mounts, as mentioned by Arghya Sadhu.
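Note that a hostPath volume lives on a single node, so it only "shares" data between pods scheduled on that same node. If the pods that should share data can land on different nodes, a PersistentVolumeClaim is the more portable option. A minimal sketch, assuming the cluster has a StorageClass that supports ReadWriteMany (the claim name here is illustrative):

```yaml
# Hypothetical PVC; requires a StorageClass supporting ReadWriteMany (e.g. NFS-backed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sync-dir-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# Then, in each Deployment that should share the data, replace the hostPath volume with:
#   volumes:
#   - name: sync-dir
#     persistentVolumeClaim:
#       claimName: sync-dir-pvc
```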

Follow this guide for using a volume mount in a pod. Specifically for logging, you can write the logs to a volume and use Fluentd or Fluent Bit to ship them to a centralized log aggregation system.
You can use a NodePort or LoadBalancer type Service to expose pods outside the cluster. If you are using NodePort, the Kubernetes cluster nodes need to have public IPs. Follow this guide for NodePort.
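For the setup deployment above, the NodePort route could be sketched like this (the node IP and assigned port are placeholders, so this is illustrative rather than copy-paste ready):

```shell
# Expose the deployment on a node port (assigned from 30000-32767 by default)
kubectl expose deployment setup --type=NodePort --port=8081 --name=setup-nodeport

# Find the assigned node port and the nodes' external IPs
kubectl get svc setup-nodeport
kubectl get nodes -o wide

# Reach the pod from outside the cluster (requires the node to have a public IP)
curl http://<node-public-ip>:<node-port>
```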

Related

Digitalocean kubernetes cluster load balancer doesn't work properly as round robin

I created a DigitalOcean Kubernetes cluster and a service with 4 replicas; the service type is LoadBalancer, and the load balancer is created. But when I post requests to the target endpoint using Postman (the response includes the pod's host name), I get the response from the same pod every time. If the load balancer were balancing, the requests should reach each of the pods, but that is not happening as I expected.
My manifest file is below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-deployment
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: myRepo/ms-user-service:1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: proud
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Please help me resolve this problem.
You have to disable the keepalive feature (service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive) when you create the service. You can configure it under metadata.annotations as shown below; see the DigitalOcean docs for more detail.
apiVersion: v1
kind: Service
metadata:
  name: user-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "false"
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has worked OK so far.
The first part that has not worked involves the exercises on CoreDNS, which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
- containerPort: 8080
And, in fact, the deployment for the tutorial is from a Google sample.
Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?
The deployment YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
To expand on what Gari comments: when exposing a service outside your cluster, the Service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, which is why you're not getting any response. To change this, update your YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
After redeploying your service, run kubectl get all -o wide in Cloud Shell to validate that a NodePort-type service has been created, along with its node port and target port.
To test your deployment, curl the external IP of one of your nodes, including the node port that was assigned; the command should look something like:
curl <node_IP_address>:<node_port>

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command in order to expose it properly.
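For the identityold-svc above, the minikube flow might look like this (service name taken from the manifest; the URL printed will depend on your cluster):

```shell
# Prints a reachable URL such as http://192.168.49.2:30036
minikube service identityold-svc --url

# Or open it directly in the default browser
minikube service identityold-svc
```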

Not able to access the application using Load Balancer service in Azure Kubernetes Service

I have created a small nginx deployment with type LoadBalancer in Azure Kubernetes Service, but I was unable to access the application using the LoadBalancer service. Can someone provide a solution?
I have already updated the security group to allow all traffic, but to no avail.
Do I need to update any security group to access the application?
Please find the deployment file below.
cat nginx.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: nginx:latest
        ports:
        - containerPort: 8080
The nginx container uses port 80 by default, and you are trying to connect to port 8080 where nothing is listening, thus getting connection refused.
Take a look at the nginx container's Dockerfile. Which port do you see?
All you need to do to make it work is change the target port as follows:
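One quick way to check which port an image exposes, without reading its Dockerfile, is docker inspect (for nginx:latest this typically shows 80/tcp):

```shell
docker pull nginx:latest
docker inspect nginx:latest --format '{{json .Config.ExposedPorts}}'
```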
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: hello-kubernetes
Additionally, it would be good to change containerPort as follows:
spec:
  containers:
  - name: hello-kubernetes
    image: nginx:latest
    ports:
    - containerPort: 80

How to create a service on minikube with a YAML configuration that is accessible from the host?

How do I write a correct YAML configuration for a Kubernetes pod and service in a minikube cluster (using the docker driver) with one requirement: port 80 of the container must be accessible from the host machine? The solution with nodePort doesn't work as expected:
type: NodePort
ports:
- port: 80
  targetPort: 8006
selector:
  app: blogapp
The label app: blogapp is set on the container. Can you show a correct configuration, for example with the nginx image, with the port accessible from the host?
You should create a Kubernetes Deployment instead of just a NodePort service. Once you create the deployment (which also creates a ReplicaSet and Pod automatically), you can expose it. The blogapp will not be available to the outside world by default, so you must expose it if you want to be able to access it from outside the cluster.
Exposing the deployment will automatically create a service as well.
deployment.yml
apiVersion: apps/v1 # extensions/v1beta1 was removed in Kubernetes 1.16
kind: Deployment
metadata:
  name: blogapp
  labels:
    app: blogapp
spec:
  replicas: 1
  selector: # required with apps/v1
    matchLabels:
      app: blogapp
  strategy: {}
  template:
    metadata:
      labels:
        app: blogapp
    spec:
      containers:
      - image: <YOUR_NGINX_IMAGE>
        name: blogapp
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
Create the deployment
kubectl create -f deployment.yml
Expose the deployment
kubectl expose deployment blogapp --name=blogapp --type=LoadBalancer --target-port=8006
Get the exposed URL
minikube service blogapp --url
You can use the below configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server-instance
        image: blog-app-server
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
I guess you were missing spec.ports[0].nodePort.
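As a recap of the three port fields involved (values taken from the manifest above):

```yaml
ports:
- port: 80         # the Service's own port, used inside the cluster via the ClusterIP
  targetPort: 8006 # the containerPort that traffic is forwarded to
  nodePort: 31364  # the port opened on every node; must be in 30000-32767 by default
```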