Kubernetes: Only one service endpoint working

I've deployed my Django/React app into K8s and exposed both deployments as a service (ClusterIP).
Whenever I try to call the API service through its ClusterIP:8000, it sometimes refuses the connection. So I checked its endpoints, and only one of the three existing endpoints returns what I expect. I understand that when calling the ClusterIP, the request is forwarded to one of those three endpoints.
Is there any way to 'debug' an incoming service request? Can I change the number of endpoints (so I could limit it to the only working one)? Is there any other way to see logs of the service, to find out why only one of the endpoints is working?
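A minimal way to inspect this (the Service name below is a placeholder; swap in your own):
# Compare the Pod IPs the Service selected with the Pods you expect to back it
kubectl get endpoints <service-name>
kubectl describe service <service-name>
kubectl get pods -o wide --show-labels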

I was able to fix it:
I deployed a three-tier application (Django/React/DB) and used the same selector for every Deployment, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-xxx-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
So when exposing this with "kubectl expose deployment/...", the Service matched every Pod carrying the app: myapp label and created an endpoint for each match. Since I have three Deployments (DB/React/Django), three endpoints were created.
Changing the Deployment YAML like this fixed my error, and only one endpoint was created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: myapp-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-web
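As a quick sanity check after relabeling (a sketch; it assumes the Service was recreated with "kubectl expose deployment/nginx-deployment" and so carries the default name nginx-deployment):
kubectl get endpoints nginx-deployment
kubectl get pods -l app=myapp-web -o wide   # should list exactly the one backing Pod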

Using ClusterIP:8000 directly does not seem right.
You could use http://$(serviceName).$(namespace):8000/ instead to address the Service correctly.
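For illustration, with hypothetical names (a Service called myapp-api in namespace default, container port 8000), that would look like:
curl http://myapp-api.default:8000/
# or the fully qualified in-cluster DNS name
curl http://myapp-api.default.svc.cluster.local:8000/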

Related

Load distribution: All HTTP requests are getting redirected to a single pod in a k8 cluster

I have created a very simple Spring Boot application with only one REST service. This app is converted into a Docker image ("springdockerimage:1") and deployed in the Kubernetes cluster with 3 replicas. The contents of my "Deployment" definition are as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springapp
  labels:
    app: distributiondemo
spec:
  selector:
    matchLabels:
      app: distributiondemo
  replicas: 3
  template:
    metadata:
      labels:
        app: distributiondemo
    spec:
      containers:
      - name: spring-container
        image: springdockerimage:1
I have created a Service for my above Deployment as follows:
apiVersion: v1
kind: Service
metadata:
  name: springservice
  labels:
    app: distributiondemo
spec:
  selector:
    app: distributiondemo
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: spring-port
    nodePort: 32000
  type: NodePort
After deploying both of the above YAML files (Deployment and Service), I noticed that everything has been deployed as expected, i.e., 3 replicas are created and my Service has 3 endpoints as well.
Since I am using minikube for my local testing, I am port-forwarding and accessing the application with kubectl port-forward deployment.apps/springapp 40002:8080.
But one thing I noticed is that all my HTTP requests are getting redirected to only one pod.
while true ; do curl http://localhost:40002/docker-java-app/test ;done
I am not getting where exactly I am doing it wrong. Any help would be appreciated. Thank you.
Load balancing might not work with port-forwarded ports, since port forwarding sends traffic directly to a single Pod. The K8s Service is the feature that gives you the load-balancing capability.
So you can try one of the options below instead:
Use http://your_service_dns_name:8080/docker-java-app/test
Use http://service_cluster_ip:8080/docker-java-app/test
Use http://any_host_ip_from_k8s_cluster:32000/docker-java-app/test
Options 1 and 2 work only if you are accessing those URLs from a host that is part of the K8s cluster. Option 3 just needs connectivity to the target host and port from wherever you are accessing the URL.
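As a sketch of options 2 and 3 on minikube (reusing the springservice Service above; the ClusterIP value is whatever the first command prints):
# Option 2: resolve the Service's ClusterIP, then curl it from inside the cluster
kubectl get svc springservice -o jsonpath='{.spec.clusterIP}'
curl http://<cluster-ip>:8080/docker-java-app/test
# Option 3: let minikube hand you a reachable NodePort URL
curl "$(minikube service springservice --url)/docker-java-app/test"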

Can I use multiple names for a Kubernetes service?

In Kubernetes, one communicates with a specific service X by making HTTP requests to http://X:9999, where X is the application name. I wonder, can one add multiple names, or aliases, that will point to http://X:9999? I.e., can I forward/point http://Y:9999 to http://X:9999?
Answer
Yes you can have multiple hostnames point to the same Pod(s).
You can achieve this by creating multiple Services with the same label selectors.
Background
A service creates endpoints to Pod IPs based on label selectors.
Services will match their selectors against Pod labels.
If multiple Services (with different names) have the same label selectors, they will create multiple endpoints to the same Pods.
Example
First service:
apiVersion: v1
kind: Service
metadata:
  name: nginx1
  namespace: nginx
spec:
  selector:
    app: nginx
  ...
Second service:
apiVersion: v1
kind: Service
metadata:
  name: nginx2
  namespace: nginx
spec:
  selector:
    app: nginx
  ...
Each Service will create an endpoint pointing to any Pods with the label app: nginx.
So you could hit the same Pods using nginx2:<port> and nginx1:<port>.
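A quick way to confirm this (a sketch, assuming both Services live in the nginx namespace as above) is to compare their endpoints; both should list the same Pod IPs:
kubectl get endpoints nginx1 nginx2 -n nginx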

Kubernetes - can a Deployment contain a Service?

Just finished reading Nigel Poulton's The Kubernetes Book, but I am somewhat puzzled with Services.
Could a Service be added to the Deployment manifest below somehow? Or does the Service have to be POSTed on its own? Isn't the whole purpose of a Deployment to specify everything needed for the app to run?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
They're different objects and you have to submit them separately (HTTP POST, kubectl apply, ...).
There are a couple of tricks you can do to minimize the impact of this:
You can use a multi-document YAML file and submit that as a single thing, like
---
apiVersion: apps/v1
kind: Deployment
...
---
apiVersion: v1
kind: Service
...
There is an undocumented kind: List that can embed multiple objects:
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
  kind: Deployment
  ...
- apiVersion: v1
  kind: Service
  ...
You can use a higher-level deployment manager such as Helm that lets you keep each object in a separate file, but deploy them in a single command.
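With the multi-document file approach, for example, a single command creates (or updates) both objects, assuming the file is saved as app.yaml (a hypothetical name):
kubectl apply -f app.yaml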
It's perhaps unfortunate that a couple of Kubernetes objects have names that are different from their plain English meanings (a Deployment doesn't cover all of the steps or parts of deploying a whole application; a Service is just an IP/DNS pointer and not a service implementation) but that's the way it is. I tend to capitalize the Kubernetes object names when it will disambiguate things.
Isn't the whole purpose of a deployment to specify everything needed for the app to run?
The whole purpose of "Deployment" is to manage the deployment of pods/replicasets including replication, scaling, rolling update, rollbacks. The DeploymentController is part of the master node's controller manager, and it makes sure that the current state always matches the desired state.
does the Service have to be POSTed on its own?
If you are familiar with load-balancer terminology, Services are frontends and Pods are their backends. Since the Service is the frontend, it forwards requests to its backends (the Pods).

kubernetes: Role of selector in service vs deployment

From the official example of Kubernetes documentation site on deploying a Wordpress application with mysql:
The service definition of mysql:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
The deployment definition of mysql
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
My question is the following:
The Deployment definition has a matchLabels selector, so that it will match the Pod defined below, which has the app: wordpress and tier: mysql labels.
Why does the Service selector not require a matchLabels directive for the same purpose? What is the Service's "selection" performed upon?
According to the K8S documentation on Labels and Selectors.
The API currently supports two types of selectors: equality-based and set-based.
Newer resources, such as Job, Deployment, Replica Set, and Daemon Set, support set-based requirements as well.
It looks like newer resources such as Deployment support the more featureful set-based selectors (with matchLabels/matchExpressions), while older resources such as Service follow the equality-based style (without matchLabels).
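Side by side, using the wordpress-mysql labels from above (the matchExpressions form is only shown to illustrate set-based syntax; the example manifest does not use it):
# Equality-based, as used in the Service:
selector:
  app: wordpress
  tier: mysql
# Set-based, as supported by Deployment/ReplicaSet/Job/DaemonSet:
selector:
  matchLabels:
    app: wordpress
  matchExpressions:
  - {key: tier, operator: In, values: [mysql]}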
The Service is a concept that makes your container (in this case hosting WordPress) available on a given port. It maps an external port (the Node's port) to an internal port (the container/Pod's port). It does this by using the Pod's networking capabilities. The selector is a way of specifying in the Service which Pods the port should be opened on. The Deployment is actually just a way of grouping things together; the Pod itself holds the WordPress container, and the port that's defined in the Service is available through the Pod networking.
This is a simple explanation; there are different kinds of Services.
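As an illustration of that port mapping (a hypothetical NodePort Service for the WordPress frontend, not part of the original example):
apiVersion: v1
kind: Service
metadata:
  name: wordpress-web      # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: frontend
  ports:
  - port: 80               # the Service's own (cluster-internal) port
    targetPort: 80         # the container port on the selected Pods
    nodePort: 30080        # the external port opened on every node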

Is it possible to move the running pods from ReplicationController to a Deployment?

We are using an RC to run our workload and want to migrate to a Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running Pods under a Deployment?
As #matthew-l-daniel answered, the answer is yes. But I am more than 80% certain about it, because I have tested it.
Now, what's the process we need to follow?
Let's say I have a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Question: can we move these running pods under Deployment?
Let's follow these steps to see if we can.
Step 1:
Delete this RC with --cascade=false. This will leave the Pods behind.
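For example (a sketch; on recent kubectl versions the equivalent flag is --cascade=orphan):
kubectl delete rc nginx --cascade=false
kubectl get pods -l app=nginx   # the three Pods should still be running, now unmanaged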
Step 2:
Create a ReplicaSet first, with the same labels as the ReplicationController:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
So now these Pods are under the ReplicaSet.
Step 3:
Now create a Deployment with the same labels:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
And the Deployment will find that a ReplicaSet already exists, so our job is done.
Now we can try increasing the replicas to see if it works.
And it works.
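For instance (a sketch; the replica count is arbitrary):
kubectl scale deployment nginx --replicas=5
kubectl get pods -l app=nginx   # the original Pods stay, and new ones are added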
Which way it doesn't work:
After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no ReplicaSet and will create a new one with an additional label, which will not match your existing Pods.
I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use --cascade=false (the default is true) in kubectl delete, whose help even speaks to your very question:
--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
If you delete the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful: if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.
I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.