I'm studying Kubernetes on my laptop (no cloud). I started with minikube by following the docs at kubernetes.io. I'm not sure what I'm missing, since I can only access my web app on the high TCP port 32744 and not the standard port 80. I want to be able to access my web app in a web browser by visiting http://ipaddress.
Here is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodeapp
        image: localregistry/nodeapp:1.0
        ports:
        - containerPort: 3000
$ minikube service webapp-deployment
|-----------|-------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------------|-------------|---------------------------|
| default | webapp-deployment | 3000 | http://192.168.64.2:32744 |
|-----------|-------------------|-------------|---------------------------|
This is just how Kubernetes works.
3000 is the port inside your container and 32744 is the NodePort where your application got exposed. There are multiple reasons for this. One is that ports 80 and 443 are the standard ports for web services, and Kubernetes has to run many containers and services side by side, so each NodePort service gets its own port from a high range (30000-32767 by default). Another is that ports below 1024 are restricted to root on *nix systems.
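For context, that NodePort comes from a Service object. A minimal sketch of such a Service for the deployment above could look like this (the name and the explicit nodePort are assumptions; normally the nodePort is auto-assigned):
apiVersion: v1
kind: Service
metadata:
  name: webapp-deployment   # assumed name, matching the `minikube service` output
spec:
  type: NodePort
  selector:
    app: nodejs             # must match the pod labels of the deployment
  ports:
  - port: 3000              # port of the Service inside the cluster
    targetPort: 3000        # containerPort of the node app
    nodePort: 32744         # usually auto-assigned from the 30000-32767 range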
If you really want to serve it on port 80 on your local machine, I would just set up something like Nginx and proxy the traffic to http://192.168.64.2:32744:
# nginx.conf
...
location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://192.168.64.2:32744;
}
...
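Assuming Nginx runs on the laptop itself, after reloading it the app should answer on the standard port, e.g.:
$ sudo nginx -s reload
$ curl -I http://localhost/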
Or you can port-forward from a non-restricted local port, as the other answer here suggests.
✌️
You can use kubectl port-forward for this.
$ kubectl port-forward -n <namespace> pod/mypod 8888:5000
What does it do?
It listens on port 8888 locally, forwarding to port 5000 in the pod.
NB: You can also use port-forward with k8s services, using kubectl port-forward svc/<service-name> LOCAL_PORT:SERVICE_PORT.
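For example, for the deployment in the question (assuming the Service is also named webapp-deployment and listens on port 3000), forwarding local port 8080 would look like:
$ kubectl port-forward svc/webapp-deployment 8080:3000
The app is then reachable at http://localhost:8080.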
Related
I have pod1 and pod2 in the same namespace.
pod2 is running an HTTP server.
How can I easily get pod2 be seen as pod2.mydomain.com from pod1?
In this way the HTTPS certificate would work with no problem.
This can be achieved within the Kubernetes cluster; the key point is serving a valid SSL certificate for that hostname.
By creating a Kubernetes Service of type ClusterIP or NodePort in the same namespace as the pods, you expose pod2's HTTP server at a consistent IP and port. You can then configure your DNS to map pod2.mydomain.com to the Service's IP address.
Or, if your cluster supports load balancing, you can create a Service of type LoadBalancer to expose pod2's HTTP server on a public IP.
You can also create an Ingress that routes traffic to pod2 based on the hostname, as sketched below.
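As an illustration only, a minimal Ingress for the hostname-based option could look roughly like this (the Service name pod2-service and the TLS secret name are assumptions; pod2 must first be exposed by that Service):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pod2-ingress
spec:
  tls:
  - hosts:
    - pod2.mydomain.com
    secretName: pod2-tls        # assumed Secret holding the certificate for pod2.mydomain.com
  rules:
  - host: pod2.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod2-service  # assumed Service in front of pod2
            port:
              number: 80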
For more information, please check the official documentation.
You can hit the pod directly by its IP, as you are thinking, but it is better to put a Service in front of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
The Service routes traffic to the pods whose labels match its selector, so from pod1 you send the request to service-1, which forwards it to pod2 and returns the response:
https://service-1.<namespace-name>.svc.cluster.local
HTTPS through the Service will also work the way you asked, as long as the pod itself serves TLS with a certificate valid for that DNS name. To test it with curl, start a curl pod:
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
then send a curl request to the service:
curl https://service-1.<namespace-name>.svc.cluster.local
Service reference doc: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
Extra:
You can also use an ingress and a service mesh, which will make your scenario a little simpler if you don't want to manage SSL/TLS certificates for the app yourself.
A service mesh supports mTLS authentication, so you can enforce a policy with little effort, but it adds extra components to manage.
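As an illustration only (assuming Istio as the mesh), a namespace-wide policy that enforces mTLS might look roughly like this:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # assumed namespace
spec:
  mtls:
    mode: STRICT            # reject plain-text traffic between sidecars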
I am trying to understand the Kubernetes API Gateway for my Microservices. I have multiple microservices and those are deployed with the Kubernetes deployment type along with its own services.
I also have a front-end application that basically tries to communicate with the above APIs to complete the requests.
Overall, below is something I like to achieve and I like your opinions.
Is my understanding correct in the diagram below? (i.e. should we have an API gateway on top of all my microservices, and should the web application use this API gateway to reach any of those services?)
If yes, how can I make that possible? I tried the Istio Gateway and it's somehow not working.
Here are my Istio gateway and virtual service.
On the other side, below is my service (catalog service) configuration:
apiVersion: v1
kind: Service
metadata:
  name: catalog-api-service
  namespace: local-shoppingcart-v1
  labels:
    version: "1.0.0"
spec:
  type: NodePort
  selector:
    app: catalog-api
  ports:
  - nodePort: 30001
    port: 30001
    targetPort: http
    protocol: TCP
    name: http-catalogapi
Also, in the hosts file (Windows: drivers\etc\hosts) I have entries for the local DNS:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 localshoppingcart.com
On the Istio service side, see the following screenshot.
I am not sure what is going wrong, but when I try localhost:30139/catalog or localhost/catalog it always gives me a connection refused or not found error.
If you are on minikube, you have to get the minikube IP and the port using these commands, as mentioned in the documentation.
Get IP of minikube
export INGRESS_HOST=$(minikube ip)
Document
You can get the HTTP and HTTPS port details:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
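Once both variables are set, you can test the gateway from outside the cluster; the /catalog path here is only an assumption based on your routes:
curl -I "http://$INGRESS_HOST:$INGRESS_PORT/catalog"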
If you are on Docker Desktop, try forwarding traffic using kubectl:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Open localhost:8080 in the browser.
Read more
I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a service + ingress for accessing it from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep note of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep the service and pod cluster IPs.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the iptables routes, which are managed by kube-proxy, and that would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which Google would need to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
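As a sketch, the pod template in the deployment above could gain a readinessProbe like this (the probe path and timings are just examples; the path must be one the app answers with a 200):
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
        readinessProbe:
          httpGet:
            path: /              # must return HTTP 200; use another path if "/" redirects
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10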
I have a timeout problem with my site hosted on Kubernetes cluster provided by DigitalOcean.
u#macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u#pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from node:
u#node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of the "Does the Service work by IP?" part of the page, I tried the following command and got the expected output.
u#node$ curl 10.245.16.96
*correct response*
Which should mean that everything is fine with DNS and service. I confirmed that kube-proxy is running with the following command:
u#node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But I have something wrong with iptables rules:
u#node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from node, see the screenshot below)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that with a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have an NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean Kubernetes manages the existing firewall and restores its own set of rules.
ClusterIP services are only accessible from within the cluster. If you want to access a service from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl run --image=rancher/hello-world hello-world --replicas 2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080
I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in a 10.x.x.x range (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs without exposing them.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the master components running on that same node. It provides a way to learn how Kubernetes works with minimal hardware, so it is ideal for testing purposes and runs seamlessly on a laptop. Minikube still ships the mature network stack from Kubernetes, which means ports are exposed by services and services communicate with pods.
To understand what is communicating with what, let me explain what ClusterIP does: it exposes the service on an internal IP in the cluster, which makes the service reachable only from within the cluster.
You can get the cluster IP with the command:
kubectl get services test_service
So, after you create a new service, you want to establish connections to its ClusterIP.
Basically, there are three ways to connect to the backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and simply streams TCP and UDP to a backend, or to a set of backends in more advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables that specify the ports opened by the service proxy. There is an optional add-on that provides cluster DNS for these cluster IPs. The user must create a service via the apiserver API to configure the proxy.
The example below shows how a label selector defines which pods receive connections on port 50000 of the ClusterIP - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, then find the service port from the ClusterIP configuration.
kubectl get svc | grep test_service
Let's assume the service test_service works on port 5555; to do the port forwarding, run the command:
kubectl port-forward svc/test_service 5555:5555
After that, your service will be available on localhost:5555.
3/ If you are familiar with the concept of pod networking, you can declare ports in the pod's manifest file. A user can connect to the pod network by defining a manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When a container starts with a manifest file like the one above, the pod listens on TCP port 8080. Note that containerPort by itself is only informational; for the node to forward a host port to the pod you also need a hostPort entry (or a Service / port-forward), as sketched below.
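A minimal sketch of the hostPort variant (assuming the CNI in use supports hostPort; the pod name is made up, and the container port matches what nginx actually listens on):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80   # port nginx listens on inside the container
      hostPort: 8080      # the node forwards its port 8080 to the container's port 80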
Please keep in mind that many internal services of the cluster rely on ClusterIPs for the cluster to work properly. I think it is not good practice to treat a ClusterIP like a regular network service - in the worst-case scenario, it can break the cluster quickly by putting the internal network connections into an invalid state.