I have Docker and the OpenShift client installed on Ubuntu 16.04.3 LTS.
[vagrant#desktop:~] $ docker --version
Docker version 18.01.0-ce, build 03596f5
[vagrant#desktop:~] $ oc version
oc v3.7.1+ab0f056
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://127.0.0.1:8443
openshift v3.7.1+282e43f-42
kubernetes v1.7.6+a08f5eeb62
[vagrant#desktop:~] $
Notice server URL https://127.0.0.1:8443.
I can start a cluster using oc cluster up
[vagrant#desktop:~] $ oc cluster up --public-hostname='ocp.devops.ok' --host-data-dir='/var/lib/origin/etcd' --use-existing-config --routing-suffix='cloudapps.lab.example.com'
Starting OpenShift using openshift/origin:v3.7.1 ...
OpenShift server started.
The server is accessible via web console at:
https://ocp.devops.ok:8443
I can access the server using https://ocp.devops.ok:8443, but then OCP redirects to https://127.0.0.1:8443. So it redirects to the Kubernetes server URL, I think.
This raises a question about public-hostname. What does it do? I don't think it is used by OpenShift, because the redirect goes to the Kubernetes server URL.
How do I change this setting in Kubernetes?
I think that because --public-hostname does not specify the IP to be bound, and that IP currently is 127.0.0.1, some of the config is set to that value, and hence the OAuth challenge redirects you there. I hope it might be solved in 3.10.
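If you want to see where the old value ended up, here is a sketch; the container name origin and the config path are assumptions about how oc cluster up laid things out, not something from the question:
# Container name and path are assumptions; oc cluster up may keep the config elsewhere
docker exec origin grep -i publicURL /var/lib/origin/openshift.local.config/master/master-config.yaml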
See this issue described in OpenShift's Origin GitHub.
The problem, as it turns out, is --use-existing-config. If I remove that flag from the command there is no redirect.
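A minimal sketch of the working invocation, i.e. the same flags from the question minus --use-existing-config (presumably the config that still points at 127.0.0.1 then gets regenerated instead of reused):
# Same oc cluster up invocation as above, without --use-existing-config
oc cluster up --public-hostname='ocp.devops.ok' --host-data-dir='/var/lib/origin/etcd' --routing-suffix='cloudapps.lab.example.com'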
I am learning Kubernetes and minikube, and I am following this tutorial:
https://minikube.sigs.k8s.io/docs/handbook/accessing/
But I am running into a problem: I am not able to load the exposed service. Here are the steps I take:
minikube start
The cluster info returns:
Kubernetes control plane is running at https://127.0.0.1:50121
CoreDNS is running at https://127.0.0.1:50121/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Then I am creating a deployment
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
and exposing it as a service
kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
When I list the services, I don't have a URL:
minikube service list
| NAMESPACE | NAME            | TARGET PORT | URL |
|-----------|-----------------|-------------|-----|
| default   | hello-minikube1 | 8080        |     |
and when I try to get the URL, I am not getting it; it seems to be empty:
minikube service hello-minikube1 --url
This is the response (the first line is empty):
🏃 Starting tunnel for service hello-minikube2.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
Why am I not getting the URL, and why can I not connect to the service? What did I miss?
Thanks!
Please use the minikube ip command to get the IP of minikube, and then use the port number with it.
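For example, a sketch assuming the hello-minikube1 NodePort service from the question:
# IP of the minikube node
minikube ip
# NodePort that Kubernetes assigned to the service
kubectl get service hello-minikube1 -o jsonpath='{.spec.ports[0].nodePort}'
# Then try http://<minikube-ip>:<node-port> in a browser or with curl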
Also, refer to the link below:
https://minikube.sigs.k8s.io/docs/handbook/accessing/#:~:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system.
As per this issue, the Docker driver needs an active terminal session. macOS users have been getting the Docker driver by default for a few releases now, if no local configuration is found. I believe you can get your original behavior back by using the hyperkit driver on macOS:
minikube start --driver=hyperkit
You can also set it to the default using:
minikube config set driver hyperkit
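If it helps, you can confirm which driver your current profile is using (a sketch; the default profile is named minikube):
# Lists profiles together with the driver each one uses
minikube profile list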
This will help you to solve your issue.
I have a running cluster on Google Cloud Kubernetes Engine and I want to access it using kubectl from my local system.
I tried installing kubectl with gcloud but it didn't work. Then I installed kubectl using apt-get. When I try to see its version using kubectl version, it says
Unable to connect to server EOF. I also don't have the file ~/.kube/config, and I am not sure why. Can someone please tell me what I am missing here? How can I connect to the already running cluster in GKE?
gcloud container clusters get-credentials ... will authenticate you against the cluster using your gcloud credentials.
If successful, the command adds the appropriate configuration to ~/.kube/config so that you can use kubectl.
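A sketch, where the cluster name, zone, and project are placeholders for your own values:
# Fetches the cluster endpoint and credentials and writes them into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
# Quick check that kubectl can now reach the cluster
kubectl get nodes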
Newbie setup:
Created my first project in GCP.
Created a cluster with the defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
In the GCP / GKE page of my cloud console, I clicked "Discovery and load balancing" and could see the "kubernetes-dashboard" service with a green tick, but I cannot access it through the IP listed. I tried ports 8001 and 9090, and /ui, and nothing worked.
I am not using Cloud Shell or gcloud commands on my local laptop. Everything is done in the console.
Questions:
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear; are the dashboard components incorporated in the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run "kubectl proxy" and then open
"http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access it using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI using 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
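In other words (a sketch, assuming kubectl is already configured for the cluster), the flow from the question stays the same; only the path changes:
# Start the local API proxy as in the tutorial
kubectl proxy
# Then browse to the proxy-style path instead of /ui:
# http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard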
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
The dashboard is running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to, via an SSH tunnel. It's possible that it isn't working because the SSH tunnels are not running; you should verify that your project has the newly created SSH firewall rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
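As a sketch of that check (the name filter is just a guess at how the auto-created rules are named in your project):
# Lists firewall rules; the SSH-tunnel rules GKE creates usually have "gke" in their name
gcloud compute firewall-rules list --filter="name~gke"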
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
I'm trying to ssh into my pod with this command:
kubectl --namespace=default exec -ti pod-name /bin/bash
I get this error:
Content-Type specified (plain/text) must be 'application/json'
The process gets stuck and I have to close the terminal.
I was able to ssh into my pods before I reinstalled Kubernetes on my machine. Is this an issue with the latest Kubernetes releases?
You're not trying to "ssh"; you're forwarding your standard input and receiving standard output over HTTP through the Kubernetes API.
That said, you're using Docker 1.10, which Kubernetes doesn't support yet. Check this out: https://github.com/kubernetes/kubernetes/issues/19720
Edit:
Kubernetes has supported Docker 1.10+ since the 1.3.0 release.
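If you want to confirm which runtime version your nodes are actually reporting (a sketch; it just greps the node description), you can ask the cluster directly:
# System Info in the node description includes e.g. "Container Runtime Version: docker://1.10.3"
kubectl describe nodes | grep -i "container runtime"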
I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl go over an HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
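To see what the cluster scripts generated (a sketch; nothing here is specific to the Vagrant setup), you can inspect the config through kubectl itself:
# Shows the merged kubeconfig, with certificate data masked
kubectl config view
# Shows which context (cluster + user) kubectl is currently pointed at
kubectl config current-context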
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set; see the docs.
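For example (a sketch; the server address, credentials, and names below are placeholders, not values taken from your Vagrant setup):
# Define a cluster entry, a user entry, and a context that ties them together
kubectl config set-cluster vagrant-cluster --server=https://10.245.1.2:443 --insecure-skip-tls-verify=true
kubectl config set-credentials vagrant-admin --username=admin --password=admin
kubectl config set-context vagrant --cluster=vagrant-cluster --user=vagrant-admin
kubectl config use-context vagrant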
The Getting started with Vagrant section of the docs should contain everything you need.