I have a Kubernetes cluster in Google Cloud created by the gcloud container clusters create command. I want to use Elasticsearch logging. How should I install the "fluentd-elasticsearch" addon? Or is there another way?
When launching the cluster, you can disable the default logging to Cloud Logging by passing the --no-enable-cloud-logging flag. Once that is done, you can install the fluentd-elasticsearch cluster addon from the Kubernetes repo.
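For reference, a minimal sketch of both steps, assuming the addon manifests still live under cluster/addons/fluentd-elasticsearch in the Kubernetes repo (the cluster name is a placeholder, and the directory layout may differ between releases):

    # Create the cluster without the default Cloud Logging integration
    gcloud container clusters create my-cluster --no-enable-cloud-logging

    # Fetch the addon manifests from the Kubernetes repo and apply them all
    git clone https://github.com/kubernetes/kubernetes.git
    kubectl apply -f kubernetes/cluster/addons/fluentd-elasticsearch/

The addon bundles Elasticsearch, Kibana, and a fluentd DaemonSet that ships each node's container logs to that Elasticsearch instance.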
I'm investigating whether it's possible to use Cloud Code (inside VS Code) on a private RKE cluster. In VS Code, the only options for connecting to clusters seem to involve GCP (or the other large cloud providers) or Minikube. kubectl is all set up and working great on the cluster - there's just no support in Cloud Code for running, debugging, etc. Am I out of luck?
Thanks.
It should be possible to connect an RKE cluster to Cloud Code. If you authenticate against the cluster using the command line, it will create an entry in your ~/.kube/config that will allow Cloud Code access to your cluster.
(And it can't hurt to double check that the Default KubeConfig in Cloud Code's Kubernetes Explorer is pointing to the correct ~/.kube/config file.)
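A quick way to verify that from a terminal (the context name below is a placeholder):

    # List every context your kubeconfig knows about
    kubectl config get-contexts

    # Show which one is active; Cloud Code reads the same file by default
    kubectl config current-context

    # Switch to the RKE cluster's context if it isn't the active one
    kubectl config use-context my-rke-cluster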
I have a Kubernetes cluster created using the kubeadm tool on an AWS instance, with the Weave network plugin. I just saw the amazon-vpc-cni-k8s plugin in the EKS documentation. Can I use this network plugin in my cluster created by the kubeadm tool?
Not out of the box - you would need to deploy the amazon-vpc-cni-k8s plugin separately:
kubectl apply -f aws-k8s-cni.yaml from here
Alternatively, you could use Kubespray, which allows you to configure the CNI plugin as part of the deployment.
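A sketch of the manual route (the manifest URL is illustrative - pick the file matching your plugin release from the amazon-vpc-cni-k8s repo - and you would want to remove the existing Weave deployment first so the two CNI plugins don't conflict):

    # Download the CNI manifest from the amazon-vpc-cni-k8s repo
    # (the path is illustrative; choose the version you need)
    curl -LO https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/aws-k8s-cni.yaml

    # Install the plugin; the nodes must be EC2 instances whose IAM role
    # is allowed to manage ENIs, as described in the EKS documentation
    kubectl apply -f aws-k8s-cni.yaml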
A K8saaS cluster in IBM Cloud comes with fluentd preinstalled. May I use it for my own purposes, too?
We are thinking about a logging strategy that is independent of the IBM infrastructure, and we want to store the information in Elasticsearch. May I reuse the fluentd installation provided by IBM to send my log information, or should I install my own fluentd? If so, can I install fluentd on the nodes via the Kubernetes API, without any access to the nodes themselves?
The fluentd that is installed and managed by IBM Cloud Kubernetes Service will only connect to the IBM cloud logging service.
There is nothing to stop you from installing your own fluentd as well, though, to send your logs to your own logging service, running either inside your cluster or outside it. This is best done via a DaemonSet so that it can collect logs from every node in the cluster.
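As a rough illustration, a trimmed-down DaemonSet along these lines (the image tag, namespace, and Elasticsearch endpoint are placeholders, and a real deployment also needs a fluentd configuration and RBAC):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: fluentd
      template:
        metadata:
          labels:
            app: fluentd
        spec:
          containers:
          - name: fluentd
            # Tag is illustrative; pick a current one from Docker Hub
            image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch
            env:
            # Point these at your own Elasticsearch, inside or outside the cluster
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.example.com"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            volumeMounts:
            - name: varlog
              mountPath: /var/log
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
    EOF

Because it is a DaemonSet, the scheduler places one fluentd pod on every node for you, which is exactly the "no access to the nodes themselves" property you are after.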
I am trying to do some experiments with Kubernetes in google cloud.
I have a Docker image in Google Container Registry and need to deploy that image to a Kubernetes cluster.
Here are the steps I need to perform.
Create a Kubernetes cluster.
Copy the image from GCR and deploy to Kubernetes cluster.
Expose the cluster to internet via load balancer.
I know it is possible to do this via the Google Cloud SDK CLI. Is there a way to do these steps via Java/Node.js?
There is a RESTful Kubernetes Engine API:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization
e.g. create a cluster:
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.zones.clusters/create
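For example, creating a cluster with curl, reusing gcloud's credentials for the bearer token (project ID, zone, and cluster settings are placeholders; the v1 endpoint is shown, and v1beta1 works the same way):

    # Grab an OAuth access token from the local gcloud installation
    TOKEN=$(gcloud auth print-access-token)

    # POST a minimal cluster definition to the Kubernetes Engine API
    curl -X POST \
      -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"cluster": {"name": "my-cluster", "initialNodeCount": 1}}' \
      "https://container.googleapis.com/v1/projects/my-project/zones/us-central1-a/clusters"

The Java and Node client libraries for Google APIs wrap these same endpoints, so the same call can be made from either language.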
The Container Registry side should just be the standard Docker registry APIs.
Both Java and Node have kubernetes clients:
https://github.com/kubernetes-client/java
https://github.com/godaddy/kubernetes-client
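Once the cluster exists, steps 2 and 3 are plain Kubernetes API objects (a Deployment and a LoadBalancer Service), which those clients can create programmatically; the kubectl equivalent, with placeholder names and image path, looks like this:

    # Deploy the image from Google Container Registry
    kubectl create deployment my-app --image=gcr.io/my-project/my-image:latest

    # Expose it to the internet through a cloud load balancer
    kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080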
I want to enable Stackdriver logging with my Kubernetes cluster on GKE.
It's stated here: https://kubernetes.io/docs/user-guide/logging/stackdriver/
This article assumes that you have created a Kubernetes cluster with cluster-level logging support for sending logs to Stackdriver Logging. You can do this either by selecting the Enable Stackdriver Logging checkbox in the create cluster dialogue in GKE, or by setting the KUBE_LOGGING_DESTINATION flag to gcp when manually starting a cluster using kube-up.sh.
But my cluster was created without this option enabled.
How do I change the environment variable while my cluster is running?
Unfortunately, logging isn't a setting that can be enabled or disabled on a cluster once it is running. This is something that we hope to change in the near future, but in the meantime your best bet is to delete and recreate your cluster (sorry!).
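With gcloud that amounts to something like the following (cluster name and zone are placeholders, flag spellings have shifted across gcloud releases, and note that deleting a cluster destroys its workloads):

    # Delete the existing cluster
    gcloud container clusters delete my-cluster --zone us-central1-a

    # Recreate it with Stackdriver logging enabled
    gcloud container clusters create my-cluster --zone us-central1-a --enable-cloud-logging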