I recently updated my Kubernetes cluster to 1.18. Afterwards I had to recreate a (previously functional) LoadBalancer service. It seemed to come up properly, but I was unable to access the external IP afterwards. Looking at the output of kubectl describe service, I don't see the "LoadBalancer Ingress" field that I see on other services that weren't recreated.
apiVersion: v1
kind: Service
metadata:
  name: search-master
  labels:
    app: search
    role: master
spec:
  selector:
    app: search
    role: master
  ports:
  - protocol: TCP
    port: 9200
    targetPort: 9200
    name: serviceport
  - port: 9300
    targetPort: 9300
    name: dataport
  type: LoadBalancer
  loadBalancerIP: 10.95.96.43
I tried adding this (to no avail):
status:
  loadBalancer:
    ingress:
    - ip: 10.95.96.43
What have I missed here?
Updates:
The cluster is running in a datacenter: 10 machines plus 1 master (a VM).
"No resources found"
Another odd thing: when I dump the service as YAML, I get this entry at the top:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  ...
  spec:
    clusterIP: <internal address>
    ...
    type: LoadBalancer
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Is something wrong with my YAML?
For a distant observer: this was likely due to a MetalLB version conflict. Note that 1.17 -> 1.18 introduced some breaking changes.
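For anyone checking this on their own cluster: a quick way to see which MetalLB version is running (assuming the default metallb-system namespace) is to inspect the controller's image tag:

# List the MetalLB pods, then print the controller image (and thus its version)
kubectl -n metallb-system get pods
kubectl -n metallb-system get deployment controller -o jsonpath='{.spec.template.spec.containers[0].image}'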
Related
I have two pods. One pod (main) has an endpoint that queries the other pod (other) to get the result. Both pods have Services of type ClusterIP, and the main pod also has an Ingress. The main pod is not able to connect to the other pod to query the given endpoint.
The / endpoint works, but the /other endpoint fails.
Below are the config files:
# main-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: main-service
  labels:
    name: main-service-label
spec:
  selector:
    app: main # label selector of the pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8001
    protocol: TCP
    targetPort: 8001
# other-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: other-service
  labels:
    name: other-service-label
spec:
  selector:
    app: other # label selector of the pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8002
    protocol: TCP
    targetPort: 8002
All the Docker images, deployment files, Ingress, etc. are available at: this repo.
Note:
I entered the other pod using kubectl exec, and I am able to make a curl request to the main pod, but not vice versa. Not sure what is going wrong.
All pods and services are in the default namespace.
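For reference, a quick way to test this connectivity in both directions (the pod names are placeholders, and curl must be available in the images) is to exec into each pod and hit the other pod's Service by its DNS name:

# From the main pod to the other pod's Service
kubectl exec -it <main-pod> -- curl http://other-service:8002/
# From the other pod to the main pod's Service
kubectl exec -it <other-pod> -- curl http://main-service:8001/

If the first call fails, it is worth verifying that the other pod actually carries the label the Service selects on, e.g. kubectl get pods -l app=other.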
I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the Node IP I should be connecting to -- which yielded:
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at the port parameter, which is the port the service exposes inside the cluster, even when using the NodePort type.
The parameter you were looking for is called nodePort, which can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will
allocate a port from a range (default: 30000-32767)
Since you didn't specify the nodePort, one in that range was allocated automatically. You can check which one with:
kubectl get svc -owide
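For illustration, the output might look something like this (the cluster IP and the allocated node port 31234 are made up; yours will differ):

NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
microserviceone-service   NodePort   10.96.115.34   <none>        30008:31234/TCP   5m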
And then access your service externally at that port.
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
name: microserviceone-service
spec:
ports:
- port: 30008
targetPort: 2019
nodePort: 30008
protocol: TCP
selector:
app: microservice-one-app-label
type: NodePort
But bear in mind that you may need to delete and recreate the service in order to change the allocated nodePort.
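For example (assuming the manifest file name used in the question):

# Delete the old service and recreate it with the explicit nodePort
kubectl delete service microserviceone-service
kubectl apply -f service-definition.yaml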
I think the port in your service is wrong. Your Pod definition is:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
and your service should look like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 2019
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
If you want to try Ingress with Minikube, you can also access your app after enabling the Minikube ingress addon:
minikube addons enable ingress
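Note also that with some Minikube drivers (the Docker driver in particular) the node IP may not be reachable directly from the host. In that case Minikube can open a tunnel and print a reachable URL for the service:

minikube service microserviceone-service --url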
I'm having issues accessing a service in another namespace.
I have 2 namespaces (in the same cluster) airflow-dev and dask-dev.
In the dask-dev namespace, I have a Dask cluster (Dask scheduler and workers) deployed. I also created a Service (ClusterIP) for the dask-scheduler pod. I'm able to access the dask-scheduler pod from Chrome using kubectl port-forward:
kubectl port-forward --namespace dask-dev svc/dask-dev-scheduler 5002:80
However, I am not able to access the service (or the dask-scheduler pod) from a pod (airflow-scheduler) in the airflow-dev namespace. I get a 'Host or service not found' error when trying to access it using the address below:
dask-dev-scheduler.dask-dev.svc.cluster.local:8786
Below is the service that I have created for dask-dev-scheduler. Could you please let me know how to access the service from the airflow-dev namespace?
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-scheduler
  namespace: dask-dev
  labels:
    app: dask-dev
    app.kubernetes.io/managed-by: Helm
    chart: dask-dev-4.5.7
    component: scheduler
    heritage: Helm
    release: dask-dev
  annotations:
    meta.helm.sh/release-name: dask-dev
    meta.helm.sh/release-namespace: dask-dev
spec:
  ports:
  - name: dask-dev-scheduler
    protocol: TCP
    port: 8786
    targetPort: 8786
  - name: dask-dev-webui
    protocol: TCP
    port: 80
    targetPort: 8787
  selector:
    app: dask-dev
    component: scheduler
    release: dask-dev
  clusterIP: 10.0.249.111
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
You can use a local service to reference a service in a different namespace using the ExternalName service type.
ExternalName services do not have selectors or any defined ports or endpoints; therefore, you can use an ExternalName service to direct traffic to an external service.
apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: namespace-b
spec:
  selector:
    app: my-app-b
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
apiVersion: v1
kind: Service
metadata:
  name: service-b-ref
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: service-b.namespace-b.svc.cluster.local
Any traffic in namespace-a that connects to service-b-ref:<port> will be routed to service-b in namespace-b (service-b.namespace-b.svc.cluster.local).
Therefore, a call to service-b-ref:3000 will route to our service-b.
In your example, you'd just need to create a service in airflow-dev that will route traffic to the dask-dev-scheduler in the dask-dev namespace:
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-svc
  namespace: airflow-dev
spec:
  type: ExternalName
  externalName: dask-dev-scheduler.dask-dev.svc.cluster.local
Therefore, all airflow-dev resources that need to connect to the dask-dev-scheduler would call dask-dev-svc:8786.
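You can verify the name resolves from the airflow-dev namespace with a throwaway test pod (busybox is used here just for the lookup):

kubectl -n airflow-dev run dns-test --rm -it --image=busybox --restart=Never -- nslookup dask-dev-svc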
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-scheduler
  namespace: dask-dev
spec:
  ports:
  - name: dask-dev-scheduler
    protocol: TCP
    port: 8786
    targetPort: 8786
  # ...
  selector:
    app: dask-dev
The cluster domain doesn't always have to be cluster.local. Try just using dask-dev-scheduler.dask-dev.svc. Assuming Airflow respects the ndots setting in the resolv.conf generated and mounted into the pod, that should resolve.
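You can check which search domains the pod actually gets (and hence what the cluster domain is) with something like the following; the pod name is a placeholder and the output shown is only typical:

kubectl -n airflow-dev exec <airflow-scheduler-pod> -- cat /etc/resolv.conf
# Typical output:
# nameserver 10.0.0.10
# search airflow-dev.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5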
Are you upgrading CoreDNS to 1.8.3? I just faced the same issue with my EKS cluster after upgrading from 1.19 to 1.20.
Downgrading CoreDNS to 1.8.0 solved the issue.
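To check which CoreDNS version a cluster is running, and to roll the image back (the registry path is cluster-specific, so <registry> is a placeholder):

# Print the current CoreDNS image
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'
# Roll back to 1.8.0
kubectl -n kube-system set image deployment/coredns coredns=<registry>/coredns:v1.8.0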
I have the cluster setup below in AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hpa-example
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
The idea of this is to test autoscaling; a common way to drive load against it is sketched below.
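For reference, a load-generator loop along the lines of the Kubernetes HPA walkthrough (using the hpa-example Service and its port 31001 from the manifests above) would be:

# Hammer the service from inside the cluster to drive CPU usage up
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://hpa-example:31001; done"
# Watch the autoscaler react
kubectl get hpa hpa-example-autoscaler -w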
I need to have this available externally, so I added:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001
  type: LoadBalancer
This now gives me an external IP; however, I cannot connect to it in Postman or via a browser.
What have I missed?
I have tried changing the ports between 80 and 31001, but that makes no difference.
As posted by user David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that could prevent you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
The Service of type LoadBalancer, on the other hand, is another story:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001 # <--- CULPRIT
  type: LoadBalancer
This definition sends your traffic directly to the pods on port 31001, but it should send it to port 80 (the port your app responds on). You can change it to either:
targetPort: 80
targetPort: http-port
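Putting that together, a corrected LoadBalancer Service, keeping the external port at 31001 and targeting the named container port from the Deployment, might look like:

apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: http-port # resolves to containerPort 80 in the Deployment
  type: LoadBalancer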
You could also change the NodePort Service (hpa-example) to type LoadBalancer, as pointed out by user David Maze!
After changing this definition you will be able to run:
$ kubectl get service
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
load-balancer-autoscaler   LoadBalancer   10.4.32.146   AA.BB.CC.DD   31001:31497/TCP   9m41s
You can then curl AA.BB.CC.DD:31001 and get the reply of OK!
I encourage you to look at these additional resources regarding Kubernetes Services:
Docs.microsoft.com: AKS: Network: Services
Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types
Kubernetes.io: Docs: Concepts: Service
I installed a Kubernetes cluster on my 3 VirtualBox VMs. All 3 VMs run Ubuntu 14.04 with ufw disabled. The Kubernetes version is 1.6. Here are my config files for creating the pod and service.
Pod pod.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      imagePullSecrets:
      - name: regsecret
      containers:
      - name: frontend
        image: hub.allinmoney.com/kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
Service service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 31000
    nodePort: 31000
  selector:
    name: frontend
I create the service with type NodePort. When I run kubectl create -f service.yaml, it prints the message below, and I can't find the exposed port 31000 on any of the kube nodes:
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31000) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
Could anyone tell me how to solve this or give me any tips?
As the message says, you need to set up firewall rules for your nodes to accept traffic on the node ports (default range: 30000-32767).
Firewall rule example
Name: [firewall-rule-name]
Targets: [node-target-name, node-target2-name]
Source filters: IP ranges: 0.0.0.0/0
Protocols / ports: tcp:80,443,30000-32767
Action: Allow
Priority: 1000
Network: default
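For instance, if the nodes happened to run on GCP, a rule matching the fields above could be created with something like this (the rule name and target tags are placeholders):

gcloud compute firewall-rules create <firewall-rule-name> --network=default --allow=tcp:80,tcp:443,tcp:30000-32767 --source-ranges=0.0.0.0/0 --target-tags=<node-target-name>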
Your targetPort is also incorrect: it needs to point to the corresponding port in the Pod (port 80).
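A corrected service.yaml, assuming the frontend container listens on port 80 as declared in the ReplicationController above, would be:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80 # matches containerPort 80 in the pod template
    nodePort: 31000
  selector:
    name: frontend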