Kubernetes service getting auto-deleted after each deployment

We are facing unexpected behavior in Kubernetes. When we run the command:
kubectl apply -f my-service_deployment.yml
we notice that the existing service associated with the same pods gets deleted automatically. We also noticed that when we apply the deployment file, instead of reporting "deployment configured" (as it is already running), the output says "deployment created". Is there some problem here?
We have also sometimes noticed that the service is recreated with a different timestamp than when we created it, and with a different IP.
What may be the reasons for this unexpected behavior of the service?
Note: there is another pod and service running in the same cluster, with pod name "my-stage-my-service" and service name "my-stage-service-v1". Could this have any impact?
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myacr/myservice:v1-dev
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: my-az-secret
Service file:
apiVersion: v1
kind: Service
metadata:
  name: my-stage-service
spec:
  selector:
    app: my-service
  ports:
    - protocol: TCP
      port: 8880
      targetPort: 8080
  type: LoadBalancer
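A few commands that can help diagnose this kind of behavior (a hedged sketch: the resource names are taken from the files above, and whether anything here applies depends on how your cluster is deployed to):

```shell
# Check whether something else (e.g. a CI/CD pipeline running
# "kubectl apply --prune" or "kubectl replace --force") is deleting
# and recreating objects behind your back:
kubectl get events --sort-by=.metadata.creationTimestamp

# "deployment created" instead of "configured" means the Deployment did not
# exist at apply time; inspect its creation timestamp and any ownerReferences:
kubectl get deployment my-service -o yaml | grep -A 3 -E 'creationTimestamp|ownerReferences'

# A recreated Service of type LoadBalancer is assigned a new external IP,
# which would explain the changing IPs; compare the AGE column:
kubectl get svc my-stage-service -o wide
```

If the Service's age resets after each deploy, the deploy process itself (not Kubernetes) is deleting it.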

Related

Load balancer not reachable after creating it as a service

I have deployed a simple app (NGINX) and a LoadBalancer service in Kubernetes.
I can see that the pods are running, as well as the service, but calling the LoadBalancer external IP gives a server error: "site can't be reached". Any suggestions, please?
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx
P.S. - terminal output attached.
If you are using Minikube to access the service, you might need to run one extra command. But if this is on a cloud provider, then you have an error in your service file.
Please ensure that you use two spaces for indentation in your YAML file; your indentation is messed up because you only added one space. You also made a mistake in the last line of the service.yaml file.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx
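For the Minikube case mentioned above, the "one extra command" is the one that makes a LoadBalancer service reachable, since Minikube cannot provision a real cloud load balancer (a sketch, assuming the service name from the file above):

```shell
# Opens a tunnel to the service and prints a URL you can reach from the host:
minikube service nginx-service

# Alternatively, run this in a separate terminal so the LoadBalancer
# service gets an external IP assigned instead of staying <pending>:
minikube tunnel
```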

Kubernetes: Remove NetworkPolicies

I have been experimenting with network policies, and now pods can no longer communicate with each other though I have deleted all the policies.
Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    env: staging
Service A
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: busybox:1.33.1
          command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-a
  ports:
    - port: 8080
Service B
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
        - name: service-b
          image: busybox:1.33.1
          command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-b
  ports:
    - port: 8080
Testing Communication
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b
Expected behaviour is that they can contact each other, but instead there is a timeout. So I check whether there are any network policies left.
kubectl -n staging get networkpolicy
>No resources found in staging namespace.
What I have tried
I have deleted the namespace, recreated it and recreated the two services.
I have gone through all namespaces looking for network policies to delete, but there are none!
Before I started experimenting with network policies everything worked fine, but now I cannot get things working again. For the network controller I am using Cilium.
I am pretty dumb: I simply forgot to pass the port the second time around. It should be:
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b 8080

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind - though it has been working OK so far.
The first part where it has not worked has been the exercises regarding CoreDNS - which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
  - containerPort: 8080
And, in fact, the deployment for the tut is from a google sample.
Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
Expanding on what Gari comments: when exposing a service outside your cluster, the service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, which is why you are not getting any response. To change this, you can modify your YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
After redeploying your service, you can run the command kubectl get all -o wide in Cloud Shell to validate that a NodePort-type service has been created with a node port and target port.
To test your deployment, just send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<node_port>

How to access a service from another namespace in kubernetes

I have the zipkin deployment and service below. As you can see, zipkin is located under the monitoring namespace. I have an env variable called ZIPKIN_URL in each of my pods, which run under the default namespace; this variable takes the URL http://zipkin:9411/api/v2/spans, but since zipkin is running in another namespace I tried this:
http://zipkin.monitoring.svc.cluster.local:9411/api/v2/spans
I also tried this format:
http://zipkin.monitoring:9411/api/v2/spans
But when I check the logs of my pods, I see a connection refused exception.
When I exec into one of my pods and try curl http://zipkin.tools.svc.cluster.local:9411/api/v2/spans,
it shows me: Mandatory parameter is missing: serviceName
Here is the zipkin resource:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zipkin
  namespace: monitoring
spec:
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
        - name: zipkin
          image: openzipkin/zipkin:2.19.3
          ports:
            - containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  namespace: monitoring
spec:
  selector:
    app: zipkin
  ports:
    - name: http
      port: 9411
      protocol: TCP
  type: ClusterIP
What you have is correct; your issue is likely not DNS. You can confirm this by doing just a DNS lookup and comparing the result to the IP of the Service.
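A sketch of that check (the pod name is a placeholder; the service and namespace names come from the manifests above):

```shell
# Resolve the service's DNS name from inside one of the pods in the
# default namespace:
kubectl exec -it <some-pod> -- nslookup zipkin.monitoring.svc.cluster.local

# Compare the resolved address against the Service's ClusterIP:
kubectl -n monitoring get svc zipkin -o jsonpath='{.spec.clusterIP}'
```

If the two addresses match, DNS is working and the problem lies elsewhere (for example, in the request itself, as the "Mandatory parameter is missing" response suggests the service was in fact reached).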

Kubernetes connect service and deployment

I am wondering what to specify in a separate deployment in order to have it access a DB deployment/service. Here is the DB deployment/service:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  labels:
    app: oracle-db
spec:
  ports:
    - name: oracle-db
      port: 1521
      protocol: TCP
      targetPort: 1521
  selector:
    app: oracle-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oracle-db-depl
  labels:
    app: oracle-db
spec:
  selector:
    matchLabels:
      app: oracle-db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      containers:
        - name: oracle-db
          image: oracledb:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1521
          env:
            ...
How exactly do I specify the connection in the separate deployment? Do I specify the oracle-db service name somewhere? So far I only specify a containerPort in the container.
If the other app deployment is in the same namespace, you can refer to the oracle service as oracle-db. Here is an example of a WordPress application using Oracle.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: oracle-db
          ports:
            - containerPort: 80
              name: wordpress
As you can see, the oracle service is referred to as oracle-db via an environment variable.
If the service is in a different namespace than the app deployment, you can refer to it as oracle-db.namespacename.svc.cluster.local.
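For example, a quick connectivity check from a pod in another namespace could use the fully qualified name (a sketch; the namespace name "db" is hypothetical):

```shell
# If oracle-db lived in a namespace called "db", a client pod anywhere in
# the cluster could test the TCP connection to the listener like this:
nc -vz oracle-db.db.svc.cluster.local 1521
```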
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Services in Kubernetes are an "abstract way to expose an application running on a set of Pods as a network service" (k8s documentation).
You can access your Pod by the IP and port that Kubernetes has given it, but that is not good practice, as Pods can die and another one will be created (if controlled by a Deployment/ReplicaSet). When the new one is created, a new IP will be used, and everything in your app will start to fail.
To solve this you can expose your Pod using a Service (as you have already done), and use the service-name:service-port assigned to the Service to access your Pod. In this case, even if the Pod dies and a new one is created, Kubernetes will keep forwarding the traffic to the right Pod.
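A quick way to verify that stable, name-based access from a throwaway pod (a sketch using the service name defined earlier):

```shell
# Even if the oracle-db pod is recreated with a new IP, this keeps working,
# because the Service forwards traffic to whichever pod matches its selector:
kubectl run dbcheck --rm -it --image=busybox:1.33.1 --restart=Never -- \
  nc -vz oracle-db 1521
```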