Istio egress rules to access services directly - kubernetes

In the guide https://istio.io/docs/tasks/traffic-management/egress.html, there is a way to let non-HTTP traffic bypass the sidecar using --includeIPRanges. However, when I follow the instructions I am still unable to access anything. Should this rule allow me to bypass Istio for egress, as I think it should, or am I missing something?
I run a version of this command:
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=172.30.0.0/16,172.20.0.0/16,10.10.10.0/24)
This is the command the guide suggests for Bluemix users, but it does not work for me. When I try to store data from my app in Cloud Object Storage I get a 500 error; with no Istio sidecar, the store function works perfectly.

It works fine for me using Istio 0.2.10 on a Bluemix free tier cluster. Are you saying that the egress task doesn't even work for you, i.e., you can't do the curl suggested in the task?
kubectl exec -it $SOURCE_POD -c sleep -- curl http://httpbin.org/headers
A 500 error doesn't sound like a reachability problem anyway; it sounds like the server you're calling is crashing.
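For what it's worth, testing the object-store endpoint directly from inside the pod can help narrow down whether it's an egress problem at all; the host below is a placeholder for your Cloud Object Storage endpoint:
# expect a TLS handshake and some HTTP response if the egress bypass works
kubectl exec -it $SOURCE_POD -c sleep -- curl -v https://<your-object-store-endpoint>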

Related

Permission denied using kubectl but able to run helm

I am facing permission-denied errors when using kubectl for all commands, be it get pods or apply, but I am able to use helm and log in with k9s to perform destructive actions. I am using the same context for all of these actions.
kubectl get nodes
# error: You must be logged in to the server (Unauthorized)
kubectl apply -f some-manifest.yaml
# error: You must be logged in to the server (the server has asked for the client to provide credentials)
Does anyone have a hint as to why this is happening or what to look further into? I am using a managed k8s on Vultr, a smaller cloud provider.
I don't know what specifically the issue was, but I rebuilt my .kube/config file slowly with all my contexts and it ended up working again.
Very strange that helm worked and kubectl didn't, though...
I am pretty sure that this is a "kubernetes context" problem
Check the solution here: helm and kubectl context mismatch
Solution for k9s can be found here: https://k9scli.io/topics/commands/
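If it is indeed a context mismatch, a quick sanity check looks something like this (the context name below is a placeholder for whatever your cluster's context is called):
# see which context (cluster + user) kubectl is currently using
kubectl config current-context
# list all contexts and switch to the one helm/k9s are using
kubectl config get-contexts
kubectl config use-context my-vultr-context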

Recovery from kubectl crash

What is the best way to troubleshoot when kubectl doesn't respond or exits with a timeout? How do I get it working again?
Both kubectl and helm went down on my cluster while installing a helm chart.
General advice:
Check whether your kubectl is connecting to the correct kube-apiserver endpoint. Take a look at your kubeconfig; by default it is stored in $HOME/.kube/config. Try a simple curl against the endpoint to make sure it is not a DNS problem, etc.
Take a look at your nodes' logs by SSHing into the nodes you have; see this for more detailed instructions and log locations.
Once you have more information, you can start investigating the problem.
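As a minimal sketch of the first check, assuming a standard kubeconfig layout (the host below is a placeholder for your API server):
# print the API server endpoint from the active kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# probe the endpoint directly, bypassing kubectl (-k skips TLS verification)
curl -k https://<api-server-host>:6443/healthz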

Kubectl documentation without starting Kubernetes

I have installed a K8s cluster on my laptop using kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running just to view the documentation, as shown below.
praveensripati@praveen-ubuntu:~$ kubectl explain pods
Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host
Any workaround for this?
See "kubectl explain — #HeptioProTip"
Behind the scenes, kubectl just made an API request to my Kubernetes cluster, grabbed the current Swagger documentation of the API version running in the cluster, and output the documentation and object types.
Try kubectl help as an offline alternative, but that won't be as complete (it is limited to kubectl itself).
So the rather sobering news is that AFAIK there's no out-of-the-box way to do it, though you could totally write a kubectl plugin (that has become rather trivial as of 1.12). But for now, the best I can offer is the following:
# figure out which endpoint kubectl uses to retrieve docs:
$ kubectl -v9 explain pods
# from above I learn that in my case it's apparently
# https://192.168.64.11:8443/openapi/v2 so let's curl that:
$ curl -k https://192.168.64.11:8443/openapi/v2 > resources-docs.json
From here you can, for example, use jq to query for the descriptions. It's not as nice as a proper explain, but it's a good-enough workaround until someone writes an offline docs-query kubectl plugin.
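For example, something like this pulls out the description of the Pod object (the definition key follows the standard Kubernetes OpenAPI v2 naming, so treat the exact key as an assumption for your cluster version):
# query the Pod description from the downloaded schema
jq '.definitions["io.k8s.api.core.v1.Pod"].description' resources-docs.json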
The 'explain' documentation lives in the kube-apiserver and its resource definitions, hence the need to connect to it through kubectl explain to get any docs. This is different from the standard, very basic CLI help from kubectl, which lives in the kubectl Golang code itself.
So there is no real workaround other than setting up a dummy Kubernetes cluster and pointing kubectl at it. Please note that help for CRDs might not be available, since it lives in the deployed CRDs themselves.
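If you go the dummy-cluster route, a throwaway local cluster is one option; this sketch assumes the kind binary is installed (minikube works similarly):
kind create cluster --name docs   # also switches the current kubectl context
kubectl explain pods              # now served from the local dummy cluster
kind delete cluster --name docs   # tear it down when done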

How can I access an internal HTTP port of a Kubernetes node in Google Cloud Platform

I have a load-balanced service running in a Kubernetes cluster on the Google Cloud Platform. The individual servers expose some debugging information via a particular URL path. I would like to be able to access those individual server URLs, otherwise I just get whichever server the load balancer sends the request to.
What is the easiest way to get access to those internal nodes? Ideally, I'd like to be able to access them via a browser, but if I can only access via a command line (e.g. via ssh or Google Cloud Shell) I'm willing to run curl to get the debugging info.
I think the simplest tool for you would be kubectl proxy, or the even simpler kubectl port-forward. With the first you use a single endpoint and the API server's ability to proxy to a particular pod by providing the appropriate URL.
kubectl proxy
After running kubectl proxy you should be able to open http://127.0.0.1:8001/ in your local browser and see a bunch of paths available on the API server. From there you can proceed with a URL like e.g. http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod-name:80/proxy/, which will proxy to port 80 of your particular pod.
kubectl port-forward
This does something similar, but forwards directly to a port on your pod: kubectl port-forward my-pod-name 8081:80. At that point any request to 127.0.0.1:8081 will be forwarded to your pod's port 80.
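Putting it together for the debugging use case, something like this should work (the debug path is a placeholder for whatever your servers expose):
# forward a local port to the pod, fetch the debug page, then clean up
kubectl port-forward my-pod-name 8081:80 &
curl http://127.0.0.1:8081/<your-debug-path>
kill %1   # stop the background port-forward when done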
Port forwarding can be used as described in the answer from Radek, and it has many advantages. The disadvantage is that it is quite slow, and if you have a script making many calls, there is another option for you.
kubectl run curl-mark-friedman --image=radial/busyboxplus:curl -i --tty --rm
This will create a new POD on your network with a busybox image that includes the curl command. You can now use interactive mode in that POD to execute curl commands to other PODs from within the network.
You can find many images on Docker Hub with the tools you like included. If you, for example, need jq, there is an image for that:
kubectl run curl-jq-mark-friedman --image=devorbitus/ubuntu-bash-jq-curl -i --tty --rm
The --rm option is used to remove the POD when you are done with it. If you want the POD to stay alive, just remove that option. You may then attach to that POD again using:
kubectl get pods | grep curl-mark-friedman   # get your <POD ID> from here
kubectl attach <POD ID> -c curl-mark-friedman -i -t

Access to Mongodb in Kubernetes

I created a Mongodb service according to the Kubernetes tutorial.
Now my question is: how do I gain access to the database itself, with a client like Robomongo or similar? Just for making backups or exploring what data has been entered.
The mongo-pod and service only have an internal endpoint, and a single mount.
Is there any way to safely access this instance with no public endpoint?
Internally the URI is mongo:27***
You can use kubectl port-forward mypod 27017:27017 and then just connect your mongodb client to localhost:27017.
If you want to stop, just hit Ctrl+C on the same cmd window to stop the process.
The Kubernetes command-line tool provides this functionality, as @ainlolcat stated:
kubectl get pods
Retrieves the pod names currently running and with:
kubectl exec -i mongo-controller-* bash
you get a basic bash, which lets you execute
mongo
to get into the database to create dumps and so on. The bash is very basic and has no features like tab completion. I have not found a better shell, but it does the job.
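From that shell you can, for example, take a dump; this sketch assumes mongodump is installed in the image, and "mydb" is a placeholder database name:
# dump one database to a directory inside the pod
mongodump --db mydb --out /tmp/dump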
When you create a service in Kubernetes you give it a name, say for example "mymongo". After the service is created, the DNS service of Kubernetes (on by default) will ensure that any pod can discover this service simply by its name, so you can set your URI like:
uri: mongodb://mymongo:27017/mong
In addition, the service IP and port will be set as environment variables in the running pod:
MYMONGO_SERVICE_HOST
MYMONGO_SERVICE_PORT
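As a quick illustration from inside any pod in the same namespace ("mymongo" being the service name above, "mydb" a placeholder database name):
echo "$MYMONGO_SERVICE_HOST:$MYMONGO_SERVICE_PORT"   # injected by the kubelet at pod start
mongo "mongodb://mymongo:27017/mydb"                 # hostname resolved via cluster DNS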
I have in fact written a blog post that shows a step-by-step example of an app with a Node.js web server and Mongo that may explain this further:
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
feedback welcome!
The answer from @grchallenge is correct but is deprecated as of 2021.
All newcomers please use:
kubectl exec -it mongo-pod-name -- bash