I installed a Kubernetes cluster on my 3 VirtualBox VMs. All 3 VMs run Ubuntu 14.04 with ufw disabled. The Kubernetes version is 1.6. Here are my config files for creating the pod and service.
Pod pod.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      imagePullSecrets:
        - name: regsecret
      containers:
        - name: frontend
          image: hub.allinmoney.com/kubeguide/guestbook-php-frontend
          env:
            - name: GET_HOSTS_FROM
              value: env
          ports:
            - containerPort: 80
Service service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 31000
      nodePort: 31000
  selector:
    name: frontend
I create the service with type NodePort. When I run kubectl create -f service.yaml, it outputs the message below, and I can't find the exposed port 31000 on any of the kube nodes:
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31000) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
Could anyone tell me how to solve this or give me any tips?
As the message says, you need to set up firewall rules for your nodes to accept traffic on the node ports (default range: 30000-32767).
Firewall rule example
Name: [firewall-rule-name]
Targets: [node-target-name, node-target2-name]
Source filters: IP ranges: 0.0.0.0/0
Protocols / ports: tcp:80,443,30000-32767
Action: Allow
Priority: 1000
Network: default
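On self-managed Ubuntu/VirtualBox nodes like yours there is no cloud firewall console, so the equivalent would be allowing the NodePort range on each node. A minimal sketch, assuming iptables is what filters traffic on your VMs (with ufw disabled it may well be that nothing is blocking at all):
sudo iptables -A INPUT -p tcp --dport 30000:32767 -j ACCEPT   # allow the default NodePort range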
Your targetPort is also incorrect; it needs to point to the corresponding container port in the Pod (port 80).
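A corrected service.yaml along those lines might look like this (keeping your nodePort of 31000, but pointing targetPort at the container's port 80):
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80   # must match the containerPort of the Pod
      nodePort: 31000
  selector:
    name: frontend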
Related
I am trying to deploy an application to Minikube. However, I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via browser by executing the command minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the response status is (failed)net::ERR_EMPTY_RESPONSE.
If I access the frontend via cmd, executing kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and inside it run curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial", I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node and forwarding traffic from that port to the service, and that ClusterIPs, on the other hand, are used to expose services only within the cluster and are not directly accessible from outside. What I don't understand is why, when reaching the frontend via browser, it is not able to connect internally to the backend. Once I am interacting with the frontend, I assumed I was inside the cluster...
I tried to expose the cluster using other services such as Ingress or LoadBalancer, but I didn't succeed in connecting to the frontend, so I rolled back to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
    - name: http2
      port: 8000
      targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
        - name: fastapi-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 8000
          env:
            - name: DB_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: DB_USERNAME
            [...]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
  externalIPs:
    - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
    - name: http1
      port: 3000
      targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
        - name: react-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 3000
          env:
            - name: BASELINE_API_URL
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: BASELINE_API_URL
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
    - port: 3000
      targetPort: 3000
  externalIPs:
    - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: sfs.baseline
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-entrypoint
                port:
                  name: http1
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-entrypoint
                port:
                  name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URLs inside your cluster.
For production, an Ingress configuration is needed, so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically.
Inside your code, define the paths with environment variables and use them there.
On Kubernetes, define a ConfigMap with the paths and bind it to your container, as in the sketch below.
So when you call the frontend it will have the right paths.
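A minimal sketch of that, reusing the app-variables ConfigMap your Deployments already reference (the URL value here is only an illustration and depends on how you expose the API):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  BASELINE_API_URL: "http://mydomain.local/api"   # illustrative value; point it at your ingress or NodePort
The container then reads it exactly as in your Deployment, via configMapKeyRef on the BASELINE_API_URL key.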
Locally, your frontend can be reached with the IP address of your master node and the nodePort.
To avoid using the IP address, you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend,
and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable Ingress inside your cluster, create an ingress resource in your deployment, and use a service of type LoadBalancer, then you can call the frontend and API without the nodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMaps, and Ingress if you decide to use it.
UPDATE
Check if you have the ingress addon enabled on Minikube.
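For example, you can list the addons and enable the ingress controller with:
minikube addons list
minikube addons enable ingress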
Modify your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: mydomain.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:            # backend block completed along the lines of the ingress_service.yaml above
              service:
                name: frontend-entrypoint
                port:
                  name: http1
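To make mydomain.local resolve from your machine, one option is to point it at the Minikube IP in your local hosts file, for example:
echo "$(minikube ip) mydomain.local" | sudo tee -a /etc/hosts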
I'm not sure if Minikube supports ClusterIP.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
If you verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Check whether the external IP is shown as pending; if so, add externalIPs:
ports:
  - port: 8000
    targetPort: 8000
externalIPs:
  - xxx.xxx.xxx.xxx # your minikube ip
However, I would first try to create a service of type NodePort and then access it with xxx.xxx.xxx.xxx:NodePort.
In the YAMLs you posted I see only the backend.
Keep in mind that whether you use Ingress or NodePort, your code must be adapted:
your frontend must call the API via the API's ingress domain (mydomain.local/api) or via xxx.xxx.xxx.xxx:NodePort.
I think you are not understanding how the frontend part of a website works.
To display the website in your browser, you actually download all the required content from the server/pod where you host the frontend.
Once the frontend is downloaded and displayed in your browser, you then try to hit the backend directly from your browser/computer, using either an IP address or a URL.
When using a URL, your browser will try to resolve the hostname using a DNS server.
So when you load a website in your browser you are not in the server/pod, and you cannot resolve the URL because it is not mapped to any IP address (or not to your server).
That is also why it works when you go inside the pod using kubectl exec: there you are inside the cluster network and you are using the internal DNS.
As David said, to make this work you need to call the backend from the frontend using some kind of ingress.
And you will also need to create a DNS entry on your domain (if using a URL).
If not, you can directly use the IP of the pod/service.
I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
    - name: microservice-one
      image: vismarkjuarez1994/microserviceone
      ports:
        - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
    - port: 30008
      targetPort: 2019
      protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the Node IP I should be connecting to -- which yielded:
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at the port parameter, which is the port at which the service is exposed inside the cluster, even when using the NodePort type.
The parameter you were looking for is called nodePort, which can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will
allocate a port from a range (default: 30000-32767)
Since you didn't specify the nodePort, one in the range was automatically picked up. You can check which one by:
kubectl get svc -owide
And then access your service externally at that port.
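For example, a quick way to read the allocated nodePort and test it against the Minikube node IP from the question (the jsonpath index 0 assumes the single port shown in your Service):
NODE_PORT=$(kubectl get svc microserviceone-service -o jsonpath='{.spec.ports[0].nodePort}')
curl 192.168.49.2:$NODE_PORT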
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
    - port: 30008
      targetPort: 2019
      nodePort: 30008
      protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
But keep in mind that you may need to delete your service and create it again in order to change the allocated nodePort, for example:
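Assuming the file name from your question:
kubectl delete service microserviceone-service
kubectl apply -f service-definition.yaml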
I think you got the ports wrong in your Service:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
    - name: microservice-one
      image: vismarkjuarez1994/microserviceone
      ports:
        - containerPort: 2019
and your service should be like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
    - port: 2019
      targetPort: 2019
      nodePort: 30008
      protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
You can also access your app after enabling the Minikube ingress addon, if you want to try Ingress with Minikube:
minikube addons enable ingress
I updated my k8s cluster to 1.18 recently. Afterwards I had to recreate a (previously functional) LoadBalancer service. It seemed to come up properly, but I was unable to access the external IP afterwards. Looking at the output of kubectl describe service, I don't see the "LoadBalancer Ingress" field that I see on other services that didn't get recreated.
apiVersion: v1
kind: Service
metadata:
  name: search-master
  labels:
    app: search
    role: master
spec:
  selector:
    app: search
    role: master
  ports:
    - protocol: TCP
      port: 9200
      targetPort: 9200
      name: serviceport
    - port: 9300
      targetPort: 9300
      name: dataport
  type: LoadBalancer
  loadBalancerIP: 10.95.96.43
I tried adding this (to no avail):
status:
  loadBalancer:
    ingress:
      - ip: 10.95.96.43
What have I missed here?
Updates:
Cluster is running in a datacenter. 10 machines + 1 master (vm)
"No resources found"
Another odd thing: when I dump the service as yaml I get this entry at the top:
apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    ...
    spec:
      clusterIP: <internal address>
      ...
      type: LoadBalancer
    status:
      loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Is something wrong with my YAML?
For a distant observer: this is likely due to a MetalLB version conflict. Note that 1.17 -> 1.18 introduces some breaking changes.
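One way to check which MetalLB version is actually running (assuming it is installed in its default metallb-system namespace) is to look at the controller/speaker images:
kubectl -n metallb-system get pods
kubectl -n metallb-system describe pods | grep Image: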
I can't access to public IP assigned by MetalLB load Balancer
I created a Kubernetes cluster on Contabo. It's 1 master and 2 workers, and each one has its own public IP.
I set it up with kubeadm + flannel. Later I installed MetalLB to use load balancing.
I used this manifest for installing nginx:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
It works, pods are running. I see the external IP address after:
kubectl get services
From each node/host I can curl that IP and port and I get nginx's:
<h1>Welcome to nginx!</h1>
So far, so good. BUT:
What I still can't do is access that service (nginx) from my computer.
I can try to access each node (master + 2 workers) by IP:PORT and nothing happens. The final goal is to have a domain that points to that service, but I can't guess which IP I should use.
What am I missing?
Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server, such as a reverse proxy?
I'm asking this here because all the articles/tutorials on bare metal/VPS (non-AWS, GKE, etc...) do this on a localhost cluster and miss this basic issue.
Thanks.
I have the very same hardware layout:
a 3-node Kubernetes cluster, here with these 3 IPs:
- 123.223.149.27
- 22.36.211.68
- 192.77.11.164
running on (different) VPS providers (joined to a running cluster via JOIN, of course).
Target: "expose" nginx via MetalLB, so I can access my web app from outside the cluster via a browser, via the IP of one of my VPSs.
Problem: I do not have a "range of IPs" I could declare for MetalLB.
Steps done:
create one .yaml file for the Loadbalancer, the kindservicetypeloadbalancer.yaml
create one .yaml file for the ConfigMap, containing the IPs of the 3 nodes, the kindconfigmap.yaml
### start of the kindservicetypeloadbalancer.yaml
### for ensuring a unique name: loadbalancer name nginxloady
apiVersion: v1
kind: Service
metadata:
  name: nginxloady
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
below, the second .yaml file to be added to the Cluster:
# start of the kindconfigmap.yaml
## info: the "production-public-ips" can be found
## within the annotations section of the kind: Service type: LoadBalancer / the kindservicetypeloadbalancer.yaml
## as well... ...namespace: metallb-system & protocol: layer2
## note: as you can see, I added a /32 after each of my node IPs
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
      - 192.77.11.164/32
add the LoadBalancer:
kubectl apply -f kindservicetypeloadbalancer.yaml
add the ConfigMap:
kubectl apply -f kindconfigmap.yaml
Check the status of the pods in the metallb-system namespace ("-n"):
kubectl describe pods -n metallb-system
PS:
actually it is all there:
https://metallb.universe.tf/installation/
and here:
https://metallb.universe.tf/usage/#requesting-specific-ips
What you are missing is a routing policy.
Your external IP addresses must belong to the same network as your nodes; alternatively, you can add a route to your external address at your default gateway level and use a static NAT for each address.
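As a rough sketch of the second option, on the default gateway in front of the cluster you would add a host route for the load-balanced address and a NAT rule towards a node; all addresses below are placeholders, not values from this question:
ip route add 203.0.113.10/32 via 192.168.1.21   # 203.0.113.10 = external/LB address, 192.168.1.21 = a cluster node
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.1.21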
I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888), but I can't get the client to communicate with the server. In fact, if lucky-word-client communicates with lucky-word-server the result is the word "Evviva", otherwise the word is "Default". When I run minikube service lucky-client in the terminal, the output is Default instead of Evviva. I want the client to communicate with the server through DNS. I followed the guide https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ but without success. How can I modify the service or pods to link client and server?
This is the pod of lucky-word-client:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-client
  namespace: default
spec:
  containers:
    - image: lucky-client-img
      imagePullPolicy: IfNotPresent
      name: lucky-client
This is the pod of lucky-word-server:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-server
  namespace: default
spec:
  containers:
    - image: lucky-server-img
      imagePullPolicy: IfNotPresent
      name: lucky-server
This is the service where the lucky-word-client communicates with lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
  type: NodePort
If you want to have DNS-based service discovery to communicate with the server, follow the steps below:
Enable the kube-dns addon via the minikube addons enable kube-dns command. This will enable service discovery in your Kubernetes cluster.
Make sure that the kube-dns addon is enabled using the minikube addons list command.
In your client application code, change the server URL endpoint to the following: http://lucky-server:8888. "lucky-server" is the metadata.name property of your Kubernetes server Service YAML definition.
Alternatively, instead of lucky-server you can use the fully qualified name lucky-server.default.svc.cluster.local in the server URL, since you are deploying your service in the default namespace.
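For http://lucky-server:8888 to resolve at all, a Service named lucky-server has to exist. A minimal sketch, assuming the server Pod is given a matching label such as app: lucky-server (the Pod definitions above do not carry any labels yet):
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server   # assumes the lucky-server Pod is labeled accordingly
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888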
You need a Service for your lucky-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888
      port: 80
  type: NodePort
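Note that this selector only matches Pods labeled app: lucky-server, and the lucky-server Pod in the question has no labels, so you would need to add one; a sketch of the adjusted Pod metadata:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-server
  namespace: default
  labels:
    app: lucky-server   # must match the Service selector
With this Service in place, the client reaches the server inside the cluster at http://lucky-server (service port 80, forwarded to the container's 8888).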