load balancer not reachable after creating a service - kubernetes

I have deployed a simple app - NGINX - and a LoadBalancer service in Kubernetes.
I can see that the pods are running, as well as the service, but calling the LoadBalancer's external IP gives a server error: "site can't be reached". Any suggestions, please?
app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
service.yaml:
apiVersion: v1
kind: Service
metadata:
 name: nginx-service
spec:
 type: LoadBalancer
 ports:
 - port: 80
 selector:
 app: nginx
P.S. - Attached the outcome from the terminal.

If you are using Minikube to access the service, you might need to run one extra command. But if this is on a cloud provider, then you have an error in your service file.
Please make sure you use two spaces per indentation level in the YAML file; your indentation is off because you have only used one space. You have also made a mistake in the last line of the service.yaml file.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
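If you are on Minikube, the extra command mentioned above is typically one of the following (a sketch; minikube tunnel has to stay running in its own terminal):

minikube tunnel                       # assigns an external IP to LoadBalancer services
minikube service nginx-service --url  # or: print a URL that reaches the service directly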

Related

tunnel for service target port empty in Kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a Minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command in order to expose it properly.
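For example (a minimal sketch, assuming the manifests above have been applied):

minikube service identityold-svc -n default        # opens the service in the browser
minikube service identityold-svc -n default --url  # or just prints the reachable URL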

Kubernetes MetalLB External IP not reachable from browser

I have an nginx deployment with a service of type LoadBalancer.
I got an external IP which is accessible from the master and worker nodes.
However, I am not able to access it from a browser.
What am I missing?
You can follow the steps below to access it from the browser.
Deploy Nginx in your Kubernetes environment by executing the YAML file below:
kubectl create -f {YAML file location}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Execute the nginx-service YAML below to make it accessible from the browser:
kubectl create -f {YAML file location}
# Service
# nginx-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 192.168.1.155
Now you can access Nginx from your browser:
http://192.168.1.155/ (please use your own external IP)
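Before trying the browser, it is worth confirming that the address was actually assigned:

kubectl get service nginx-service

If the EXTERNAL-IP column still shows <pending>, no external address has been handed out yet and the browser will not be able to reach the service.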
I have had the same issue, but I am running Minikube, so changing the Minikube driver helped me.
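For example (the docker driver is just one option; use whichever suits your machine):

minikube delete
minikube start --driver=docker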

kubernetes service getting auto deleted after each deployment

We are facing unexpected behavior in Kubernetes. When we run the command:
kubectl apply -f my-service_deployment.yml
we notice that the existing service associated with the same pod gets deleted automatically. We also noticed that when applying the deployment file, instead of reporting that the deployment was configured (as it is already running), the output says "deployment created". Is there some problem here?
Sometimes the service is also recreated with a different timestamp than the one we created, and with a different IP.
What may be the reasons for this unexpected behavior?
Note: there is another pod and service running in the same cluster, with the pod named "my-stage-my-service" and the service named my-stage-service-v1. Could this have any impact?
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: myacr/myservice:v1-dev
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: my-az-secret
Service file:
apiVersion: v1
kind: Service
metadata:
  name: my-stage-service
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    port: 8880
    targetPort: 8080
  type: LoadBalancer

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The front service is basically a Node.js app that only makes a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call fails and it is unable to find the requested endpoint. Is there any additional config I'm currently missing for the API service to be discovered?
P.S. Both services are running and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster.

Something to watch out for with frontends is where the HTTP call takes place. It could be in the server with Node, but given you're seeing this issue I'd suggest it's happening in the browser. (If you can see the call in the browser console, then it is.) If the frontend's call is made in the browser, then it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve.

To handle this, you could make the backend service a LoadBalancer and pass the external name into the frontend.
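A minimal sketch of that last suggestion, assuming the frontend reads the backend address from an environment variable (the variable name API_URL and the placeholder IP are assumptions, not part of the original setup):

# 1. Make the backend service type: LoadBalancer, then look up its address:
kubectl get svc myapp-api   # note the EXTERNAL-IP column

# 2. Pass that address into the frontend deployment:
spec:
  template:
    spec:
      containers:
      - name: myappfront-deployment
        image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        env:
        - name: API_URL                  # hypothetical variable the app reads
          value: "http://<EXTERNAL-IP>"  # value from kubectl get svc above

The frontend code would then call axios.get(API_URL) instead of the cluster-internal name; how the value reaches the browser bundle is app-specific (for example, injected at build time).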

Handling long Pod response times in a Service

Given the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: server
  selector:
    app: nginx
How would one configure the Service and Deployment here (or, if needed, an Ingress object) so that when a Pod takes more than n seconds to return an HTTP response, the Service retries the request on another nginx-deployment Pod?
Kubernetes Services are based on simple iptables rules.
Traffic is only NAT'ed to a destination Pod; there is no layer in the Service itself where you can adjust timeouts or set quality of service.
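That kind of timeout-and-retry behavior therefore has to live at a higher layer, typically an ingress controller. A sketch, assuming the ingress-nginx controller is installed (the host name is a placeholder): its proxy annotations can time out a slow Pod and retry on the next one:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # give up on a backend pod after 5 seconds without a response...
    nginx.ingress.kubernetes.io/proxy-read-timeout: "5"
    # ...and retry the request on another upstream pod on timeout
    nginx.ingress.kubernetes.io/proxy-next-upstream: "timeout"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
spec:
  rules:
  - host: example.local   # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80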