How to set API Server parameters on kubespray deployment - kubernetes

I am using kubespray for the deployment of a Kubernetes cluster and
want to set some API server parameters for the deployment. Specifically, I want to configure authentication via OpenID Connect (e.g., set the oidc-issuer-url parameter). I saw that kubespray has some vars to set (https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md), but not the ones I am looking for.
Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g., by editing the /etc/kubernetes/manifests/kube-apiserver.yaml files).
Thanks for your help

At the bottom of the page you are referring to, there is a description of how to define custom flags for the various components of Kubernetes:
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
- "--eviction-soft-grace-period=memory.available=30s"
- "--eviction-soft=memory.available<300Mi"
The possible vars are:
apiserver_custom_flags
controller_mgr_custom_flags
scheduler_custom_flags
kubelet_custom_flags
kubelet_node_custom_flags
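For the OIDC use case from the question, a minimal sketch could look like the following (the issuer URL below is a placeholder, not a value from the original post):
apiserver_custom_flags:
  - "--oidc-issuer-url=https://your-issuer.example.com"
  - "--oidc-client-id=kubernetes"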

The k8s-cluster.yml file has some parameters which allow you to set the OIDC configuration:
kube_oidc_auth: true
...
kube_oidc_url: https:// ...
kube_oidc_client_id: kubernetes
kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
kube_oidc_username_claim: sub
kube_oidc_username_prefix: oidc:
kube_oidc_groups_claim: groups
kube_oidc_groups_prefix: oidc:
These parameters are the counterparts of the OIDC API server parameters.
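Assuming kubespray renders these vars the usual way (a sketch; verify against the kube-apiserver template of your kubespray version), they map roughly to the following kube-apiserver flags:
kube_oidc_url             -> --oidc-issuer-url
kube_oidc_client_id       -> --oidc-client-id
kube_oidc_ca_file         -> --oidc-ca-file
kube_oidc_username_claim  -> --oidc-username-claim
kube_oidc_username_prefix -> --oidc-username-prefix
kube_oidc_groups_claim    -> --oidc-groups-claim
kube_oidc_groups_prefix   -> --oidc-groups-prefix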

Related

Alter auto-generation of host for routes in OpenShift on a namespace basis

I am trying to adjust the template for generating routes within a single namespace.
Basically, what OpenShift does when I create a route without setting the host via YAML is generate a route in the following way:
${name}-${namespace}.myapps.mycompany.com
I would prefer to have a base domain for many routes which differs in the path, e.g.:
${namespace}.myapps.mycompany.com/${name}
Is this possible? Especially if I am not an admin of OpenShift at my company, but a dev whose team is responsible for just a few namespaces?
For context: we want to use ArgoCD + Git for GitOps, but do not want to hardcode any infrastructure knowledge like the host or domain in our Git repo. We came from using Ingresses, but if we omit the host there, no routes are generated at all...
Thanks in advance for any help!
You can have path-based routes, e.g., [host]/[path]. If you don't provide your own value for [host], OpenShift will use the same ${name}-${namespace}.myapps.mycompany.com based value.
I'm not sure that you can change OpenShift's default route template, but you can definitely provide your own path values.
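For reference, a minimal path-based Route sketch (all names and the host here are placeholders, not values from your cluster):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myroute                            # hypothetical name
spec:
  host: mynamespace.myapps.mycompany.com   # placeholder host
  path: /myname                            # the [path] part of [host]/[path]
  to:
    kind: Service
    name: myservice                        # hypothetical service name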

Config server with Vault backend - fetch secrets from multiple paths

We are using config server with Vault backend to fetch application secrets.
The config server project is using the spring-vault-core dependency and the spring-vault-dependencies dependency management for Vault.
The Vault-related config in the application YAML file is as follows:
spring:
  cloud:
    config:
      server:
        vault:
          order: 0
          uri: <complete URI>
          connection-timeout: 5000
          read-timeout: 15000
          kvVersion: 2
          backend: secret
          defaultKey: config
This works fine and fetches the Vault secrets in secret/config.
I am unable to add secret fetching from multiple paths in Vault (secret/config + secret/customFolder). I have tried adding comma-separated application names etc., as suggested across various posts, but it does not work. Has anyone tried something similar?
You can take a look at the composite profile.
There are a lot of additional questions here: what exactly are you trying to do, and why do you want this?
For us, for example, it was important to split the infra services' configurations and also to split the microservices' configurations themselves. And, as an important requirement, to be able to "overwrite" them (in case of migrations, for instance).
We achieved that with two things:
on the config server side, we are using a composite configuration (with exactly the same type and uri, but a slightly different backend and keys),
on the config client's side, we are specifying several values for the spring.cloud.config.name property (a comma-separated list).
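A minimal sketch of such a composite setup (the second backend key is hypothetical, and property support may vary by Spring Cloud Config version):
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          - type: vault
            uri: <complete URI>
            kvVersion: 2
            backend: secret
            defaultKey: config
          - type: vault
            uri: <complete URI>
            kvVersion: 2
            backend: secret
            defaultKey: customFolder   # hypothetical second path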

Syndesis (Fuse Online) integration build failed for unknown host "repo1.maven.org"

We installed Fuse Online 7.4 on OpenShift 3.11. We created an integration containing an OpenApiProvider connection and an SQL connection.
When we publish the integration, the build fails with the following error:
"repo1.maven.org: Name or service not known: Unknown host repo1.maven.org: Name or service not known"
OpenShift is installed behind an enterprise HTTP proxy.
The image registry.access.redhat.com/fuse7/fuse-ignite-s2i is pulled correctly since Docker is configured with the proxy.
The syndesis-server DeploymentConfig has been set with the proxy environment variables.
I suppose that, since the BuildConfig for the integration is created dynamically, it is not possible to inject the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY env variables into the build pod.
We read https://docs.openshift.com/container-platform/3.11/install_config/http_proxies.html#s2i-builds but since we don't have any rights to modify the s2i image we cannot proceed.
Is there any way to provide proxy information during the Fuse Online integration build?
Finally, we succeeded in injecting the HTTP proxy environment variables into the dynamically created build pods.
We modified the syndesis-server-config config map, adding the proxy variables to the mavenOptions key like this:
mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts="
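For illustration only, with hypothetical values (proxy.example.com, port 3128, and the nonProxyHosts list are placeholders, not our real settings), the populated key might look like:
mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128 -Dhttp.nonProxyHosts=localhost|*.svc"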
Thanks for the support
Let me know if you have any other ideas for resolving the issue.
Can you check the DNS of your network connection? Not sure why, but sometimes I have to use one of the "reliable" DNS servers on my machine (like Google's 8.8.8.8) to make sure repo1.maven.org is reachable.
You can check whether this is the problem by trying a simple
$ ping repo1.maven.org
If that doesn't work, you have to check your DNS.
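You can also query a specific resolver directly to compare (8.8.8.8 is used here just as an example public DNS server):
$ nslookup repo1.maven.org 8.8.8.8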

nginx-ingress within kubernetes / how to enable and use geoip?

I just realized that GeoIP is present by default within the nginx-ingress in the context of Kubernetes. I looked around, but being new to nginx GeoIP, I don't have much of a clue about how to benefit from it.
Firstly, is there any declarative setup to effectively get it working? A ConfigMap setup, or similar?
Secondly, how is such info passed from the nginx-ingress to an app? Is the info present in the headers? Is there any extra setup to apply?
Thanks a lot for any experienced input.
You can find useful documentation below about how to configure GeoIP2 for an nginx-ingress Kubernetes deployment.
Example Nginx Configuration ConfigMap
You will find the expected ConfigMap name in the nginx controller container's entrypoint or environment variables. Furthermore, you can override this name; the way to do so depends on your nginx installation/deployment method.
ConfigMap: Nginx supported configurations
There you will find a list of all the supported configs/properties, plus a short description of each and how to use it.
For this specific question, the property to configure GeoIP2 is "use-geoip2" (link below).
Enable GeoIP2
Remark: you will need a MaxMind license and must add a flag providing it to the nginx entry command.
The ngx_http_geoip_module module creates variables with values depending on the client IP address, using the precompiled MaxMind databases.
This module is not built by default; it should be enabled with the --with-http_geoip_module configuration parameter.
The module looks at the client address, connects to the defined database, fetches the localization information, and offers variables based on it, like the
country or city of the connection's origin. Some examples:
$geoip_country_code - two-letter country code
$geoip_city - city name
$geoip_postal_code - postal code
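As a rough sketch of the declarative setup (the ConfigMap name and namespace below are assumptions; match them to your installation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption; check your controller's --configmap argument
  namespace: ingress-nginx         # assumption
data:
  use-geoip2: "true"               # enables the GeoIP2 module; a MaxMind license key must also be provided to the controller
The GeoIP variables can then be forwarded to your app as request headers, for example by adding a proxy_set_header directive through a configuration snippet.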

Issue when deploying basic auth in Kubernetes dashboard

I am trying to add basic authentication to my Kubernetes cluster by changing the file
/etc/kubernetes/manifests/kube-apiserver.yaml
There I add 3 flags:
--basic-auth-file=/etc/kubernetes/basic-auth.csv
--authorization-mode=ABAC
--authentication-mode=basic
But when I add those lines and restart my system, my Kubernetes freezes and won't start. Is this the right way to add flags to an already running Kubernetes cluster? Is this the right way to add basic authentication to the Kubernetes dashboard?
I used this tutorial for the basic authentication: https://github.com/kubernetes/dashboard/wiki/Access-control#basic
Conceptually you are doing everything right, but the problem is that for modern Kubernetes versions, at least for 1.9, authentication-mode is not a valid CLI flag for the API server. You can check all available flags in the documentation.
The documentation in the repo is a bit outdated. Actually, basic authentication will be enabled when you provide the basic-auth-file option.
So, just remove the authentication-mode flag and use only basic-auth-file and authorization-mode. It should help.
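The manifest would then keep only the two flags quoted in the question:
--basic-auth-file=/etc/kubernetes/basic-auth.csv
--authorization-mode=ABAC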
To enable user/password authorization, based on the Dashboard documentation, you need to add the authentication-mode CLI arg to the Dashboard itself.
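A sketch of where that flag goes (the Deployment and container names vary by Dashboard version, so treat them as assumptions):
# In the kubernetes-dashboard Deployment's pod spec:
containers:
  - name: kubernetes-dashboard
    args:
      - --authentication-mode=basic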