Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
I tried two ways to install a Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
The agent hosts sometimes disconnected.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
I set some labels on the Rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both methods have the same issue: kubernetes-ingress-lbs shows 0 services and 0 containers, and then I can't access the Kubernetes dashboard.
Why wasn't it installed by Rancher?
And is it necessary to add those labels for a Kubernetes cluster?
Here is a correctly deployed Kubernetes cluster on a Rancher server:
After turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
Since this deployment uses Kubernetes v1.5.4, you should pull the required Docker images in advance.
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (such as docker-compose.yml and rancher-compose.yml) yourself.
When you enable "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not load.
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the Kubernetes default template >> enable the add-on service and save the new template with your preferred name.
Now launch the cluster using the customized template.
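As a quick check once the cluster is up, a minimal sketch (assuming kubectl is configured against the cluster, or run from Rancher's kubectl shell):

# The dashboard pod should appear in the kube-system namespace
kubectl get pods --namespace=kube-system

# The dashboard service should be listed there as well
kubectl get service kubernetes-dashboard --namespace=kube-system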
Related
After trying all possible configurations, I'm asking here: does anyone know how to enable ttlAfterFinished=true?
I'm using Kubernetes version 1.17.1.
You need to enable it via the feature gate on both the kube-controller-manager and the kube-apiserver. If they are deployed as static pods, you can find the manifests at
/etc/kubernetes/manifests/kube-controller-manager.yaml
and
/etc/kubernetes/manifests/kube-apiserver.yaml
on the master node.
Edit both manifest files and add this line at the bottom of the command section:
- --feature-gates=TTLAfterFinished=true
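For context, here is a minimal sketch of where that flag lands inside /etc/kubernetes/manifests/kube-controller-manager.yaml (fields abbreviated; the existing flags will differ per cluster):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --bind-address=127.0.0.1              # example of an existing flag
    - --feature-gates=TTLAfterFinished=true # the added line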
After the YAML is edited and saved, the kube-controller-manager and kube-apiserver pods will be automatically recreated with this feature enabled.
You can verify by checking the logs of the kube-controller-manager pod, where you should see the line below:
I0308 06:04:43.886097 1 ttlafterfinished_controller.go:105] Starting TTL after finished controller
Tip: you can specify multiple feature gates separated by commas, for example:
--feature-gates=TTLAfterFinished=true,OtherFeature=true
I'm new to OpenShift and Kubernetes.
I need to access the kube-apiserver on an existing OpenShift environment.
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
How do I know whether kube-apiserver is already installed, or how do I get it installed?
I checked all the containers, and there isn't even such a path as /etc/kubernetes/manifests.
Here is the list of Docker processes on all cluster nodes; could it be hiding behind one of these?
k8s_fluentd-elasticseark8s_POD_logging
k8s_POD_tiller-deploy
k8s_api_master-api-ip-...ec2.internal_kube-system
k8s_etcd_master-etcd-...ec2.internal_kube-system
k8s_POD_master-controllers
k8s_POD_master-api-ip-
k8s_POD_kube-state
k8s_kube-rbac-proxy
k8s_POD_node-exporter
k8s_alertmanager-proxy
k8s_config-reloader
k8s_POD_alertmanager_openshift-monitoring
k8s_POD_prometheus
k8s_POD_cluster-monitoring
k8s_POD_heapster
k8s_POD_prometheus
k8s_POD_webconsole
k8s_openvswitch
k8s_POD_openshift-sdn
k8s_POD_sync
k8s_POD_master-etcd
If you just need to verify that the cluster is up and running, you can simply run oc get nodes, which communicates with the kube-apiserver to retrieve information.
oc config view will show where the kube-apiserver is hosted, under the clusters -> cluster -> server section. On that host machine you can run docker ps to display the running containers, which should include the kube-apiserver.
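A minimal sketch of that check, assuming the oc CLI is already logged in to the cluster (container names vary per install):

# Confirms connectivity to the kube-apiserver
oc get nodes

# The clusters -> cluster -> server field shows the API endpoint
oc config view

# On the master host itself: the k8s_api_master-api-... container
# in your list is the API server
docker ps | grep master-api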
I have been working on this for quite some time now, but Google Container Engine has some missing documentation on installing addons.
At first I thought I would create my addons as YAML files and install them into the kube-system namespace.
But the addon manager apparently removes everything from the kube-system namespace that, in its opinion, does not belong there.
How do I add any kubernetes addon to my google container engine cluster?
For example I would like to install:
cluster-monitoring (heapster, influxdb, grafana addon)
The add-on manager removes everything from the kube-system namespace that carries the label addonmanager.kubernetes.io/mode: Reconcile but doesn't exist in a "source of truth" location. Since your resources aren't in the source of truth, they get removed.
You can remove that label, and the add-on manager should leave your deployments alone. But I'd recommend running them in a different namespace instead.
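For illustration, a minimal sketch of a Deployment kept in its own namespace so the add-on manager ignores it (the namespace, names, and image tag here are illustrative, not GKE's shipped manifests):

apiVersion: v1
kind: Namespace
metadata:
  name: custom-monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: custom-monitoring   # not kube-system
  # Deliberately no addonmanager.kubernetes.io/mode label, so the
  # add-on manager has no claim on this resource
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heapster
  template:
    metadata:
      labels:
        app: heapster
    spec:
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.4   # illustrative tag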
I'm referring to this [1] document about creating NFS mounts on Kubernetes, but I have a couple of issues to be clarified.
What is meant by privileged containers?
How can I set allow-privileged to true on a Kubernetes setup installed on a bare-metal Ubuntu machine? I set it up using the kube-up.sh script.
[1] http://kubernetes.io/v1.1/examples/nfs/index.html
https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/pods.md#privileged-mode-for-pod-containers
That refers to the corresponding field in the yaml: https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2089
And it should be on by default per the 1.1 docs (https://github.com/kubernetes/kubernetes/blob/b9cfab87e33ea649bdd13a1bd243c502d76e5d22/cluster/saltbase/pillar/privilege.sls#L2); you can confirm by creating the pod and running kubectl describe as the docs say.
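For reference, privileged mode is requested per container in the pod spec; a minimal sketch (the pod name is illustrative, the image is the one from the linked NFS example):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
spec:
  containers:
  - name: nfs-server
    image: gcr.io/google_containers/volume-nfs   # from the linked example
    securityContext:
      privileged: true   # needs allow-privileged=true on the cluster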
I'm setting up a Kubernetes cluster on DigitalOcean Ubuntu machines. I got the cluster up and running by following the Ubuntu getting-started guide. During the setup, the ENABLE_NODE_LOGGING, ENABLE_CLUSTER_LOGGING and ENABLE_CLUSTER_DNS variables are set to true in config-default.sh.
However, no controllers or services were created for Elasticsearch/Kibana. I did have to run deployAddon.sh manually for SkyDNS; do I need to do the same for logging and monitoring, or am I missing something in the default configuration?
By default, the logging and monitoring services are not in the default namespace.
You should be able to see if the services are running with kubectl cluster-info.
To look at the individual services/controllers, specify the kube-system namespace:
kubectl get service --namespace=kube-system
By default, logging and monitoring are not enabled when installing Kubernetes on Ubuntu machines. It looks like the config-default.sh script was copied from another folder, so the ENABLE_NODE_LOGGING and ENABLE_CLUSTER_LOGGING variables are present but are not actually used to bring up the relevant logging deployments and services.
As @Jon Mumm said, kubectl cluster-info gives you that info. But if you want to install the logging service, go to
kubernetes/cluster/addons/fluentd-elasticsearch
and run
kubectl create -f es-controller.yaml -f es-service.yaml -f kibana-controller.yaml -f kibana-service.yaml
with the right setup. Change the YAML files to suit your configuration and ensure kubectl is in your path.
Update 1: This will bring up the Elasticsearch and Kibana services.
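As a quick sanity check afterwards, a minimal sketch (the namespace depends on how the YAML files are written, default vs. kube-system):

# Elasticsearch/Kibana endpoints show up here once the addon is running
kubectl cluster-info

# Inspect the created controllers and services directly
kubectl get pods,services --namespace=kube-system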