I want to set up federation between clusters, but the differences between the documentation on the Kubernetes website and the docs in the federation repo have left me a little confused.
The website says "Use of Federation v1 is strongly discouraged.", yet its own link points to v1 releases (v1.10.0-alpha.0, v1.9.0-alpha.3, v1.9.0-beta.0), and the latest release there is two years old:
v1.10.0-alpha.0:
federation-client-linux-amd64.tar.gz 11.47 MB application/x-tar Standard 2/20/18, 8:44:21 AM UTC+1
federation-client-linux-amd64.tar.gz.sha 103 B application/octet-stream Standard 2/20/18, 8:44:20 AM UTC+1
federation-server-linux-amd64.tar.gz 131.05 MB application/x-tar Standard 2/20/18, 8:44:23 AM UTC+1
federation-server-linux-amd64.tar.gz.sha 103 B application/octet-stream Standard 2/20/18, 8:44:20 AM UTC+1
On the other hand, I followed the installation instructions and installed kubefedctl-0.1.0-rc6-linux-amd64.tgz, but it doesn't have the init command that is mentioned on the official Kubernetes website.
Kubernetes website:
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com."
Latest release kubefedctl help:
$ kubefedctl -h
kubefedctl controls a Kubernetes Cluster Federation. Find more information at https://sigs.k8s.io/kubefed.
Usage:
kubefedctl [flags]
kubefedctl [command]
Available Commands:
disable Disables propagation of a Kubernetes API type
enable Enables propagation of a Kubernetes API type
federate Federate creates a federated resource from a kubernetes resource
help Help about any command
join Register a cluster with a KubeFed control plane
orphaning-deletion Manage orphaning delete policy
unjoin Remove the registration of a cluster from a KubeFed control plane
version Print the version info
Flags:
--alsologtostderr log to standard error as well as files
-h, --help help for kubefedctl
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--skip-headers If true, avoid header prefixes in the log messages
--stderrthreshold severity logs at or above this threshold go to stderr
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Use "kubefedctl [command] --help" for more information about a command.
And then there is the Helm chart, which says "It builds on the sync controller (a.k.a. push reconciler) from Federation v1 to iterate on the API concepts laid down in the brainstorming doc and further refined in the architecture doc." If I'm not mistaken, that means the Helm chart is based on Federation v1, which is deprecated.
Also, the user guide in the repo is not helpful here. It shows how to "enable propagation of a Kubernetes API type", but says nothing about setting up a host cluster (something equivalent to kubefed init).
Can someone please let me know how I can set up a federated multi-cluster Kubernetes on bare metal and join another cluster to it?
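For context, what I have pieced together so far (unverified) is that in KubeFed v2 the control plane is installed from the Helm chart rather than with an init command, and member clusters are then registered with kubefedctl join, roughly like this (cluster1/cluster2 are placeholder context names, and I'm assuming Helm 3):
# add the KubeFed chart repo and install the control plane on the host cluster
helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm install kubefed kubefed-charts/kubefed --namespace kube-federation-system --create-namespace
# register a second cluster with the host control plane
kubefedctl join cluster2 --cluster-context cluster2 --host-cluster-context cluster1 --v=2
Is this the intended replacement for kubefed init, or is there still a separate host-cluster setup step I'm missing?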
I installed the traffic-manager using the official Helm chart by Datawire, together with the Flux components. But when I try to list interceptable services with telepresence list, I get the following message:
No Workloads (Deployments, StatefulSets, or ReplicaSets)
First I used the default namespace ambassador without further configuration. Then I tried to activate the RBAC users and restricted the namespaces. The cluster has several namespaces with different purposes, such as flux-system and kube-system. The services I want to intercept are deployed in the same namespace, so I tried to install the traffic-manager directly into that namespace, but the same message occurred (I also configured my kubeconfig so the traffic-manager can be found, as the documentation says).
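The kubeconfig extension I added looks roughly like this (the cluster name and namespace are placeholders), following my reading of the Telepresence docs:
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.example.com
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: my-apps   # namespace the traffic-manager is installed into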
In the logs of the traffic-manager I get the following warning:
agent-configs : Issuing a systema request without ApiKey or InstallID may result in an error
What does that mean? Could that be part of the issue?
I am new to cluster topics in general and couldn't find anything through research, hence I decided to ask the community.
Some hints would be very helpful, because I don't know what to try next. For a start, it would be enough if it worked across the whole cluster without restrictions.
telepresence version:
Client: v2.6.6 (api v3)
Root Daemon: v2.6.6 (api v3)
User Daemon: v2.6.6 (api v3)
kubernetes: v1.22.6
We have an on-premises Kubernetes deployment in our data center. I just finished deploying the pods for Dex, hooked it up with our LDAP server to allow LDAP-based authentication via Dex, ran tests, and was able to retrieve the OpenID Connect token for authentication.
Now I would like to change our on-premises k8s API server startup parameters to enable OIDC and point it at the Dex container.
How do I enable OIDC in the API server startup command without downtime to our k8s cluster? I was reading this doc, https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it just says "Enable the required flags" without listing the steps.
Thanks!
I installed Dex + Active Directory integration a few months ago on a cluster installed with kubeadm.
Let's assume that Dex is now running and is reachable at https://dex.example.com.
In this case, enabling OIDC at the API server level takes 3 steps.
These steps have to be done on each of your Kubernetes master nodes.
1- SSH to your master node.
$ ssh root@master-ip
2- Edit the Kubernetes API configuration.
Add the OIDC parameters and modify the issuer URL accordingly.
$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
command:
- /hyperkube
- apiserver
- --advertise-address=x.x.x.x
...
- --oidc-issuer-url=https://dex.example.com # <-- 🔴 Please focus here
- --oidc-client-id=oidc-auth-client # <-- 🔴 Please focus here
- --oidc-username-claim=email # <-- 🔴 Please focus here
- --oidc-groups-claim=groups # <-- 🔴 Please focus here
...
3- The Kubernetes API server will restart by itself (it runs as a static pod, so the kubelet picks up the manifest change).
I also recommend checking a full guide, like this tutorial.
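To double-check that the change was picked up (assuming a kubeadm-style setup where the static pod carries the component=kube-apiserver label), you can run:
$ kubectl -n kube-system get pod -l component=kube-apiserver
$ kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep oidc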
The OIDC flags are for the Kubernetes API server. You have not mentioned how you installed Kubernetes on-prem. Ideally you should have multiple master nodes fronted by a load balancer.
So you would disable traffic to one master node at the load balancer, log in to that master node, edit the API server manifest in /etc/kubernetes/manifests, and add the OIDC flags. Once you change the manifest, the API server pod will be restarted automatically.
You repeat the same process for all master nodes, and since at any given point in time at least one master node is available, there should not be any downtime.
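Once all masters carry the flags, a quick end-to-end check (just a sketch; $OIDC_ID_TOKEN stands for whatever ID token you retrieved from Dex) is to call the API with that token:
$ kubectl --token="$OIDC_ID_TOKEN" get pods
$ kubectl config set-credentials ldap-user --token="$OIDC_ID_TOKEN"   # or store it in a kubeconfig user
A 403 Forbidden at this point usually just means authentication succeeded but no RBAC bindings exist yet for that user or group.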
Using the official Datadog docs, I am able to see the K8s stdout/stderr logs in the Datadog UI. My goal is to ship the application logs that my Spring Boot application writes to a certain location in my pod.
Configuration done in the cluster:
Created a ServiceAccount in my cluster along with a ClusterRole and ClusterRoleBinding
Created a K8s Secret to hold the Datadog API key
Deployed the Datadog Agent as a DaemonSet on all nodes
Configuration done in the app:
Downloaded datadog.jar and instrumented my app with it at startup
Exposed ports 8125 and 8126
Added the environment variables DD_TRACE_SPAN_TAGS and DD_TRACE_GLOBAL_TAGS in the deployment file
Changed the pattern in logback.xml
Added the logs config in the deployment file
Added env tags in the deployment file
After doing the above configuration I can see stdout/stderr logs, whereas what I want is to see the application log files in the Datadog UI.
If someone has done this, please let me know what I am missing here.
If required, I can share the configuration as well. Thanks in advance.
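To be concrete, by "logs config in the deployment file" I mean a pod-template annotation roughly like this (the container name my-app and the source/service values are just examples, not necessarily what is needed):
metadata:
  annotations:
    ad.datadoghq.com/my-app.logs: '[{"source": "java", "service": "my-app"}]'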
When installing Datadog in your K8s cluster, you install a node logging agent as a DaemonSet with various volume mounts on the hosting nodes. Among other things, this gives Datadog access to the pod logs at /var/log/pods and the container logs at /var/lib/docker/containers.
Kubernetes and the underlying Docker engine will only include output from stdout and stderr in those two locations (see here for more information). Everything that containers write to log files residing inside the containers will be invisible to K8s, unless more configuration is applied to extract that data, e.g. by applying the sidecar container pattern.
So, to get things working in your setup, configure logback to log to stdout rather than /var/app/logs/myapp.log.
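For example, a minimal logback.xml that sends everything to stdout could look like this (a sketch; keep whatever pattern and log levels you already use):
<configuration>
  <!-- console appender: writes to stdout so the container runtime and Datadog can pick it up -->
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>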
Also, if you don't use APM there is no need to instrument your code with the datadog.jar and do all that tracing setup (setting up ports etc).
I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to reserve compute resources on a node using the kubelet command and the flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube for macOS, and as far as I can tell, minikube is driven with the kubectl command along with the minikube command.
For local learning purposes on minikube I don't need this set (maybe it can't even be done on minikube), but how could this be done on a node in, say, a K8s development environment?
This could be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
In order to get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, please consider reconfiguring it using the steps "Generate the configuration file" and "Push the configuration file to the control plane", as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
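As a concrete illustration of point 1, a minimal kubeadm config file that also reserves resources could look roughly like this (the resource values are arbitrary examples and the apiVersions assume a reasonably recent kubeadm):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reserve resources for Kubernetes system daemons (kubelet, container runtime, ...)
kubeReserved:
  cpu: 200m
  memory: 512Mi
# reserve resources for OS system daemons (sshd, systemd, ...)
systemReserved:
  cpu: 200m
  memory: 512Mi
# hard eviction thresholds
evictionHard:
  memory.available: "200Mi"
You would pass it with kubeadm init --config <file>.yaml. For minikube, the equivalent is presumably something like minikube start --extra-config=kubelet.kube-reserved=cpu=200m,memory=512Mi, but as noted in point 3 I haven't tested that.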
Hope this helps.
Additional community resources:
Memory usage in kubernetes cluster
Without using Heapster, is there any way to collect metrics, like CPU or disk usage, for a node within a Kubernetes cluster?
How does Heapster even collect those metrics in the first place?
Kubernetes monitoring is detailed in the documentation here, but that mostly covers tools using heapster.
Node-specific information is exposed through the cAdvisor UI which can be accessed on port 4194 (see the commands below to access this through the proxy API).
Heapster queries the kubelet for stats served at <kubelet address>:10255/stats/ (other endpoints can be found in the code here).
Try this:
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl -X "POST" -d '{"containerName":"/","subcontainers":true,"num_stats":1}' localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/container
...
Note that these endpoints are not documented as they are intended for internal use (and debugging), and may change in the future (we eventually want to offer a more stable versioned endpoint).
Update:
As of Kubernetes version 1.2, the Kubelet exports a "summary" API that aggregates stats from all Pods:
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/summary
...
I would recommend using Heapster to collect metrics. It's pretty straightforward. However, in order to access those metrics, you need to add "type: NodePort" in the heapster.yaml file. I modified the original Heapster files and you can find them here. See my readme file for how to access the metrics. More metrics are available here.
Metrics can be accessed in a web browser at http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/cpu/usage_rate. The same result can be obtained by executing the following command:
$ curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/cpu/usage_rate
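For reference, the "type: NodePort" change mentioned above amounts to a Heapster Service roughly like this (the ports and selector follow the stock heapster.yaml; treat it as a sketch rather than the exact file):
apiVersion: v1
kind: Service
metadata:
  name: heapster
  namespace: kube-system
spec:
  type: NodePort        # expose Heapster on a port of every node
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster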