I created a local Kubernetes cluster with a master and 2 workers using VMs (Ubuntu 16.04).
I am also using Calico for networking, and I am exploring Istio at the moment.
My problem is that the ingress load balancer doesn't get an external IP. To my understanding I should use a NodePort to access the ingress load balancer, but I can't find how to do so.
Should I have done it when installing, or can I add it now, and how?
Kubernetes version: v1.11.1
Calico version: v3.1
Istio version: 0.8.0
If you don't have a service attached to your deployment, you can use kubectl expose:
kubectl expose deployment istio --type=NodePort --name=istio-service
If you already deployed a service, you can edit the service spec and add type: "NodePort". The quickest way to do this is using kubectl patch:
kubectl patch svc istio-service -p '{"spec":{"type":"NodePort"}}'
More info about NodePort services can be found in the Kubernetes documentation.
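Once the service is of type NodePort, Kubernetes assigns it a port in the 30000-32767 range. A quick way to look it up and test access (a sketch reusing the istio-service name from above; the node IP and port are placeholders):

kubectl get svc istio-service -o jsonpath='{.spec.ports[0].nodePort}'

curl http://<node-ip>:<node-port>/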
What I'm trying to do
I have deployed an ASP.NET Core gRPC service on Docker for Desktop (Kubernetes enabled). To do client-side load balancing, I want to expose it via a headless service. The deployment and service definitions are as provided in the linked files, viz. Deployment.yaml, service.yaml, and the PV and PVC .yaml. When the deployment runs, two replicas are created. Now I want to expose them via a headless service, do a DNS lookup of the pods' IP addresses, and do client-side load balancing. For this, I installed the Bitnami external-dns Helm chart. I did not make any modifications to the default chart values. Now when I try to do an nslookup of my service, it is not working.
My expectation
Deploy the Bitnami external-dns on Docker for Desktop with Kubernetes enabled, and configure the service to be exposed as DNS on the load balancer. I was expecting the nslookup to succeed and return the pod IPs.
Can someone help me get this working?
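For reference, a minimal sketch of a headless service for this scenario; the name, labels, and port below are illustrative assumptions, not taken from the linked YAML files. Setting clusterIP: None is what makes the in-cluster DNS lookup return the individual pod IPs instead of a single virtual service IP:

apiVersion: v1
kind: Service
metadata:
  name: grpc-server          # hypothetical name; match your deployment
spec:
  clusterIP: None            # headless: DNS returns pod IPs directly
  selector:
    app: grpc-server         # must match the pod labels of the deployment
  ports:
  - port: 8080               # hypothetical gRPC port
    targetPort: 8080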
I am working with Rancher Server and K3s to improve my knowledge of these solutions.
I want to expose container services on the LAN network; this is why I used kubectl port-forward.
kubectl port-forward --namespace=ns-name --address LAN-IP service/hello 30104:8080
But I can see in several web resources that this is not a reliable solution, and is meant only for local testing purposes.
I tried to replace it with an Ingress, but I am a bit lost between Ingress, DNS, and nginx-ingress, in addition to the Rancher components.
I understand that a LoadBalancer service needs a cloud provider, to obtain a public IP for instance and to resolve the <pending> state of the load balancer.
Can you advise me on how to replace port-forward on a LAN without a cloud provider?
[edit #Rajesh Dutta]
I already use NodePort, but without port-forward the service is exposed as NODE_IP:PORT, not LAN_IP:PORT. My need is to reach it from outside the cluster.
So this is what I did:
1 - create deployment
kubectl create deployment hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080 --replicas=2
2 - expose the deployment (create a service)
kubectl expose deployment hello --type=NodePort
3 - forward service
kubectl port-forward --namespace=ns-name --address local-ip service/hello 30104:8080
(figure: IP schema)
Now, considering that I will have several services, I would like to find the best way to replace port-forward.
To start with, I would recommend using a NodePort service. This will expose your application on a node port (30000-32767). Later, if you want, you can switch to an Ingress.
Assuming that you are working with a Deployment-type object, the command is:
kubectl expose deployment deployment-name --type=NodePort --port=8080 --target-port=<rancher server port>
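Since a NodePort binds on every node, any machine on the LAN that can route to a node IP can reach the service directly, with no port-forward needed. A quick check (the hello service is from the steps above; the assigned node port value is a placeholder):

kubectl get svc hello        # the PORT(S) column shows something like 8080:30104/TCP

curl http://NODE_IP:30104/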
I am using Azure Kubernetes. I installed Istio 1.6.1. It installed the istio-ingressgateway with a LoadBalancer. I don't want to use the Istio ingress gateway because I want to use Kong ingress.
I tried to run the command below to change the istio-ingressgateway service from LoadBalancer to ClusterIP, but I am getting errors.
$ kubectl patch svc istio-ingressgateway -p '{"spec": {"ports": "type": "ClusterIP"}}' -n istio-system
Error from server (BadRequest): invalid character ':' after object key:value pair
I'm not sure whether I can make this change, or whether I should delete and re-create the istio-ingress service?
The better option would be to reinstall Istio without the ingress gateway. Do not install the default profile, as it installs the ingress gateway along with the other components. Check the various settings mentioned on the Istio installation page and disable the ingress gateway.
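With Istio 1.6, one way to do that is through the IstioOperator API; a minimal sketch, assuming you install with istioctl, that keeps the default profile but disables the ingress gateway:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: false

Apply it with istioctl install -f operator.yaml (the file name is arbitrary).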
Also check the documentation on using Istio and Kong together on Kubernetes, and see what needs to be done during the Kong installation to enable communication between Kong and the other services.
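If you would rather keep the current installation and just patch the existing service, note that the JSON in your original command is malformed: "ports" is left dangling before the "type" key, which is what produces the BadRequest error. The type field sits directly under spec, so a corrected patch would be:

kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "ClusterIP"}}'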
I deployed Traefik 2.0 in a Kubernetes v1.16.0 cluster using a Deployment. The Traefik service IP is 10.96.0.15, and I can access the service from a Kubernetes node using this command:
curl -v -k --header 'Host:apollo.xxx.net' http://10.96.0.15
Now my user traffic is received by a cloud (Alibaba Cloud) load balancer and forwarded to an internal IP. Must I install nginx on the node to receive the traffic and forward it to Traefik, or is it possible to forward traffic from the cloud load balancer to the Traefik service IP in Kubernetes?
If you deploy Traefik 2.x using a Kubernetes Deployment, Traefik by default is not listening on the host machine.
The first way: add nginx on your machine and forward traffic to Traefik.
The second way: make Traefik listen on some ports on your host machine, and have the cloud load balancer forward traffic to those host ports.
If you want Traefik to listen on the host machine, add this config to your Deployment YAML:
hostNetwork: true
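For clarity, hostNetwork goes under the pod template's spec, not at the top level of the Deployment; a minimal sketch with illustrative names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik              # illustrative name
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true      # pod shares the node's network namespace
      containers:
      - name: traefik
        image: traefik:v2.0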
I chose the second way. After adding the config, use this command to check whether it took effect on your host machine:
sudo lsof -i:{port}
I have a Kubernetes 1.9.3 cluster and deployed Istio 1.0.12 on it. I created a namespace with the istio-injection=enabled label and created a deployment in that namespace. I don't see the Envoy proxy getting automatically injected into the pods created by the deployment.
Istio relies on admission webhooks in kube-apiserver to inject the Envoy proxy into pods. Two admission plugins need to be enabled in kube-apiserver for proxy injection to work.
kube-apiserver runs as a static pod, and its manifest is available at /etc/kubernetes/manifests/kube-apiserver.yaml. Update the line as shown below to include the MutatingAdmissionWebhook and ValidatingAdmissionWebhook plugins (available since Kubernetes 1.9).
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
The kubelet will detect the change and re-create the kube-apiserver pod automatically.
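Once the kube-apiserver pod is back, a quick way to verify the setup (the demo namespace name is an example; note that pods created before the plugins were enabled must be re-created to get the sidecar):

kubectl get namespace -L istio-injection    # the label column should show 'enabled'

kubectl -n demo get pods                    # READY shows 2/2 when istio-proxy was injected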