Recently, I tried to set up Jenkins X on a Kubernetes cluster, but I ran into some problems during installation.
jx create cluster offers several provider options, such as aks (create with AKS), aws (create with AWS), minikube (create with Minikube), and so on.
However, there is no option to use an existing local Kubernetes cluster. I want to set up Jenkins X on my own cluster.
Can I get some advice?
Thanks.
When you have your cluster set up so that you can run kubectl commands against it, you can run jx boot to set up your jx installation. You don't need jx create cluster, since your cluster already exists.
To install Jenkins X on an already existing cluster, use the following command:
jx install --provider=kubernetes --on-premise
The above command will install jx on your cluster.
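Put together, and assuming kubectl is already pointed at the existing cluster, the flow looks like this:

```shell
# Confirm kubectl is talking to the intended cluster first
kubectl config current-context
kubectl get nodes

# Then install Jenkins X onto the existing cluster (no jx create cluster needed)
jx install --provider=kubernetes --on-premise
```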
Related
We want to use TDengine on Kubernetes, but I don't see any docs. Is there any problem running it in k8s?
Since Helm charts are a popular way to deploy services on Kubernetes, it would be wonderful if we could run helm install tdengine to install a cluster on Kubernetes.
If possible, I can contribute a Helm chart and test it in my cluster.
https://github.com/taosdata/TDengine-Operator/tree/3.0/helm/tdengine
How about these two resources?
https://docs.tdengine.com/deployment/helm/
They should help you deploy a TDengine database cluster on K8s with Helm.
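As a minimal sketch, assuming the chart path from the linked TDengine-Operator repo (the release name and namespace below are illustrative):

```shell
# Fetch the repo that ships the Helm chart
git clone https://github.com/taosdata/TDengine-Operator.git
cd TDengine-Operator/helm

# Install a TDengine cluster into its own namespace
helm install tdengine ./tdengine --namespace tdengine --create-namespace

# Watch the pods come up
kubectl get pods -n tdengine -w
```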
We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this Rancher cluster. However, running sonobuoy returns an error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). Is it possible to run the k8s conformance tests on a Rancher cluster?
You should have the Kubernetes cluster's kubeconfig locally on the machine where you are running sonobuoy.
From the Rancher documentation, How to Manage Kubernetes With Kubectl:
RKE:
When you create a Kubernetes cluster with RKE, RKE creates a
kube_config_rancher-cluster.yml file in the local directory that
contains credentials to connect to your new cluster with tools like
kubectl.
You can copy this file to $HOME/.kube/config or, if you are working
with multiple Kubernetes clusters
Rancher-Managed Kubernetes Clusters:
Within Rancher, you can download a kubeconfig file through the web UI
and use it to connect to your Kubernetes environment with kubectl.
From the Rancher UI, click on the cluster you would like to connect to
via kubectl. On the top right-hand side of the page, click the
Kubeconfig File button: Click on the button for a detailed look at
your config file as well as directions to place in ~/.kube/config.
Upon copying your configuration to ~/.kube/config, you will be able to
run kubectl commands without having to specify the --kubeconfig file
location:
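In practice, the copy step looks like this (kube_config_rancher-cluster.yml is the file RKE writes; merging several files via KUBECONFIG is optional):

```shell
# Make kubectl's default config directory and copy the RKE-generated file in
mkdir -p ~/.kube
cp kube_config_rancher-cluster.yml ~/.kube/config

# Alternatively, keep several clusters side by side and switch contexts
export KUBECONFIG=~/.kube/config:~/clusters/kube_config_rancher-cluster.yml
kubectl config get-contexts
```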
Check First launch with sonobuoy requests for a configuration; it may be useful for you.
Also, have a look at Conformance tests for Rancher 2.x Kubernetes:
Run Conformance Test
Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
Download a sonobuoy binary release of the CLI, or build it yourself by running:
$ go get -u -v github.com/heptio/sonobuoy
Configure your kubeconfig file by running:
$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
Run sonobuoy:
$ sonobuoy run
Watch the logs:
$ sonobuoy logs
Check the status:
$ sonobuoy status
Once the status command shows the run as completed, you can download the results tar.gz file:
$ sonobuoy retrieve
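Once retrieved, the tarball can be unpacked and inspected. The e2e log path below matches older heptio/sonobuoy releases and is an assumption; adjust it if your layout differs:

```shell
# sonobuoy retrieve prints the name of the downloaded tarball
results=$(sonobuoy retrieve)

# Unpack it and look at the end of the end-to-end test log
mkdir -p ./results
tar xzf "$results" -C ./results
tail -n 20 ./results/plugins/e2e/results/e2e.log
```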
I have a running cluster on Google Kubernetes Engine and I want to access it using kubectl from my local system.
I tried installing kubectl with gcloud, but it didn't work. Then I installed kubectl using apt-get. When I check its version with kubectl version, it says
Unable to connect to server EOF. I also don't have the file ~/.kube/config, and I'm not sure why. Can someone please tell me what I am missing here? How can I connect to the already running cluster in GKE?
gcloud container clusters get-credentials ... will authenticate you against the cluster using your gcloud credentials.
If successful, the command adds the appropriate configuration to ~/.kube/config so that you can run kubectl.
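A concrete sequence, with hypothetical cluster, zone, and project names:

```shell
# Fetch credentials and write a context into ~/.kube/config
gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a --project my-project

# Verify the context and connectivity
kubectl config current-context
kubectl get nodes
```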
I'm new to OpenShift and Kubernetes.
I need to access the kube-apiserver on an existing OpenShift environment.
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
How do I know whether kube-apiserver is already installed, or how can I get it installed?
I checked all the containers and there is not even a /etc/kubernetes/manifests path.
Here is the list of Docker processes on all clusters; could it be hiding behind one of these?
k8s_fluentd-elasticseark8s_POD_logging
k8s_POD_tiller-deploy
k8s_api_master-api-ip-...ec2.internal_kube-system
k8s_etcd_master-etcd-...ec2.internal_kube-system
k8s_POD_master-controllers
k8s_POD_master-api-ip-
k8s_POD_kube-state
k8s_kube-rbac-proxy
k8s_POD_node-exporter
k8s_alertmanager-proxy
k8s_config-reloader
k8s_POD_alertmanager_openshift-monitoring
k8s_POD_prometheus
k8s_POD_cluster-monitoring
k8s_POD_heapster
k8s_POD_prometheus
k8s_POD_webconsole
k8s_openvswitch
k8s_POD_openshift-sdn
k8s_POD_sync
k8s_POD_master-etcd
If you just need to verify that the cluster is up and running, you can simply run oc get nodes, which communicates with the kube-apiserver to retrieve information.
oc config view will show where the kube-apiserver is hosted, under the clusters -> cluster -> server section. On that host machine you can run docker ps to list the running containers, which should include the kube-apiserver.
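For example (the jsonpath query prints just the API server URL the clients talk to):

```shell
# Quick liveness check against the kube-apiserver
oc get nodes

# Print only the API server endpoint from the client config
oc config view -o jsonpath='{.clusters[0].cluster.server}'

# On that host, list running containers and look for the apiserver
docker ps --format '{{.Names}}' | grep -i api
```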
Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
I tried two ways to install a Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
Agent hosts disconnected sometimes.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
Set some labels on the Rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both methods have this issue: kubernetes-ingress-lbs shows 0 services and 0 containers, and then I can't access the Kubernetes dashboard.
Why wasn't it installed by Rancher?
Also, is it necessary to add those labels for a Kubernetes cluster?
Here is what a correctly deployed Kubernetes cluster on Rancher server looks like:
After turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
Since the Kubernetes version in use is v1.5.4, you should pull the required Docker images in advance.
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (such as docker-compose.yml, rancher-compose.yml, and so on) yourself.
When you enable "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not be able to load.
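You can confirm this from the CLI as well; the label selector below is the one the dashboard manifests commonly use, so treat it as an assumption for your deployment:

```shell
# Dashboard pod and service live in the kube-system namespace
kubectl get pods -n kube-system -l app=kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system
```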
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the default Kubernetes template >> enable the add-on service and save the new template with your preferred name.
Then launch the cluster using the customized template.