According to the current installation options for Istio 1.1.4, it should be possible to define a default node selector that gets added to all Istio deployments.
The documentation does not show a dedicated example of how the selector has to be defined, only {} as the value.
So far I have not been able to find a working format for passing the values to the Helm charts using --set, e.g.:
--set global.defaultNodeSelector="{cloud.google.com/gke-nodepool:istio-pool}"
I tried several variations, with and without escapes, as a JSON map, etc., but everything results in the same Helm error message:
2019/05/06 15:58:10 Warning: Merging destination map for chart 'istio'. Cannot overwrite table item 'defaultNodeSelector', with non table value: map[]
Istio version 1.1.4
Helm 2.13.1
The expectation would be more detailed documentation on the Istio side, with some examples.
When specifying overrides with --set, multiple key/value pairs are deeply merged based on their keys. In your case this means that only the last item will be present in the generated template. The same happens even if you override with the -f (YAML file) option.
Here is an example of the -f option used with a custom_values.yaml containing distinct keys:
#custom_values.yaml
global:
  defaultNodeSelector:
    cloud.google.com/bird: stork
    cloud.google.com/bee: wallace
helm template . -x charts/pilot/templates/deployment.yaml -f custom_values.yaml
Snippet of the rendered Istio Pilot deployment.yaml manifest:
      volumes:
      - name: config-volume
        configMap:
          name: istio
      - name: istio-certs
        secret:
          secretName: istio.istio-pilot-service-account
          optional: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
                - ppc64le
                - s390x
              - key: cloud.google.com/bee
                operator: In
                values:
                - wallace
              - key: cloud.google.com/bird
                operator: In
                values:
                - stork
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
The same can be achieved with --set:
--set global.defaultNodeSelector."cloud\.google\.com/bird"=stork,global.defaultNodeSelector."cloud\.google\.com/bee"=wallace
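For completeness, the same render command as above with the --set flags in place of the values file (split across two flags only for readability):

helm template . -x charts/pilot/templates/deployment.yaml \
  --set global.defaultNodeSelector."cloud\.google\.com/bird"=stork \
  --set global.defaultNodeSelector."cloud\.google\.com/bee"=wallace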
After searching for some hours I found a solution right after posting the question, by digging through the Istio commits.
I'll leave my findings here as a reference; maybe someone can save some time that way.
Setting a default node selector works, at least for me, by separating the keys with dots and escaping the dots that belong to the label itself with \:
--set global.defaultNodeSelector.cloud\\.google\\.com/gke-nodepool=istio-pool
To create a defaultNodeSelector for a node pool labeled with
cloud.google.com/gke-nodepool: istio-pool
I was not able to add multiple values that way; the {} notation for passing lists in Helm doesn't seem to be respected here.
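For reference, a sketch of how this looks as part of a full install from the Istio 1.1 release directory (chart path, release name and namespace are assumptions, adjust to your setup):

helm upgrade --install istio install/kubernetes/helm/istio \
  --namespace istio-system \
  --set global.defaultNodeSelector.cloud\\.google\\.com/gke-nodepool=istio-pool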
I have read the Helm docs and various StackOverflow questions - this is not (I hope!) a lazy question. I'm having an issue overriding a single particular value in a Helm chart, not having trouble with the concept in general.
I'm trying to install the Gitea helm chart on a k8s cluster on Raspberry Pis (that is, on arm64 architecture). Since the default memcached dependency chart is from Bitnami, who don't support arm64, I have overridden the image appropriately (to arm64v8/memcached).
However, this new image has a different entrypoint - /entrypoint.sh instead of /run.sh. Referencing the relevant part of the template, I believed I needed to override memcached.args, but that didn't work as expected:
$ cat values.yaml
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  args:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false
$ helm template gitea-charts/gitea --values values.yaml
[...]
# Source: gitea/charts/memcached/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-memcached
  namespace: gitea
  labels: [...]
spec:
  selector:
    matchLabels: [...]
  replicas: 1
  template:
    metadata:
      labels: [...]
    spec:
      [...]
      serviceAccountName: release-name-memcached
      containers:
        - name: memcached
          image: docker.io/arm64v8/memcached:1.6.17
          imagePullPolicy: "IfNotPresent"
          args:
            - /run.sh # <----- this should be `/entrypoint.sh`
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: memcache
              containerPort: 11211
      [...]
However, when I instead overrode memcached.arguments, the expected behaviour occurred - the contents of memcached.arguments rendered in the template's args (or, if memcached.arguments was empty, no args were rendered)
Where is this mapping from arguments to args taking place?
Note in particular that the Bitnami chart docs refer to args, so this is unexpected - though note also that the Bitnami chart's values.yaml refers to arguments in the comment (this is what prompted me to try this "obviously wrong" approach!). In the "Upgrade to 5.0.0 notes", we see "arguments has been renamed to args." - but the Gitea chart is using a >5.0.0 version of the Bitnami chart.
Your reasoning is correct, and the current parameter name is definitely called args (arguments is deprecated; someone just forgot to update the comment here).
Now, why does arguments work for you and not args? I think you're just using the old version, from before it was renamed. I checked it and:
Gitea chart uses version 5.9.0 from the repo https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
This corresponds to the following Helm Chart: https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz (you can check it here).
When you extract this chart archive, you can see it is the old version of the chart (with arguments not yet renamed to args).
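If you want to confirm this yourself, one quick way is to download and inspect the packaged subchart (just a sketch of the commands):

curl -sLO https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz
tar -xzf memcached-5.9.0.tgz
grep -n "arguments" memcached/values.yaml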
I have an Argo CD ApplicationSet created with the following merge keys set up:
generators:
  - merge:
      mergeKeys:
        - path
      generators:
        - matrix:
            generators:
              - git:
                  directories:
                    - path: aws-ebs-csi-driver
                    - path: cluster-autoscaler
                  repoURL: >-
                    ...
                  revision: master
              - clusters:
                  selector:
                    matchLabels:
                      argocd.argoproj.io/secret-type: cluster
        - list:
            elements:
              - path: aws-ebs-csi-driver
                namespace: system
              - path: cluster-autoscaler
                namespace: system
Syncing the application set however generates:
- lastTransitionTime: "2022-08-08T21:54:05Z"
  message: the parameters from a generator were not unique by the given mergeKeys,
    Merge requires all param sets to be unique. Duplicate key was {"path":"aws-ebs-csi-driver"}
  reason: ApplicationGenerationFromParamsError
  status: "True"
Any help is appreciated.
The matrix generator is producing one set of parameters for each combination of directory and cluster.
If there is more than one cluster, then there will be one parameter set with path: aws-ebs-csi-driver for each cluster.
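As an illustration (the extra parameter name here is what the clusters generator typically emits; your exact values will differ), the matrix generator ends up with parameter sets like:

- path: aws-ebs-csi-driver
  name: cluster-one
- path: aws-ebs-csi-driver
  name: cluster-two
- path: cluster-autoscaler
  name: cluster-one
- path: cluster-autoscaler
  name: cluster-two

so path alone no longer identifies a single parameter set.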
The merge generator requires the values of the merge keys to be unique across all parameter sets of a generator. That mode was the original design of the merge generator, but more modes may be supported in the future.
Argo CD v2.5 will support go templated ApplicationSets, which might provide an easier way to solve your problem.
UPDATE: Apologies for perhaps causing controversy, but it seems there was another cronjob running that was also calling a function that grabbed those apiKeys from the DB; I was not sure until I separated out the part where it grabs them from the environment variables ;_;.
So basically this whole post is wrong, and one container was not grabbing env variables from another container. I am so ashamed I wanted to delete this question, but I'm not sure whether that is a good idea or not.
A Kubernetes pod running two copies of basically the same NodeJS application seems to be taking environment variables from the other container. I logged the variable and it logged the correct one, but when the app makes a request it seems to show two different results.
These variables are taken from two different secrets.
I have checked inside each container that they do indeed have different env variables, but for some reason, when NodeJS makes these requests to a third-party API, it grabs both of the variables.
Yes, they do have the same name.
In the image below you can see some logs; these entries show the Authorization header for an HTTP request, and this header is taken from an environment variable. Technically it should always stay the same, but for some reason it grabs the other one as well.
Here is the pod in YAML:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: <REDACTED>/32
    cni.projectcalico.org/podIPs: <REDACTED>/32
    kubectl.kubernetes.io/restartedAt: '2021-01-20T15:29:12Z'
  labels:
    app: mimercado-api
    pod-template-hash: 77fb65575
  name: mimercado-deployment-77fb65575-tpbsp
  namespace: default
spec:
  containers:
    - envFrom:
        - secretRef:
            name: secrets-mimercado-a
      image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      imagePullPolicy: Always
      name: mimercado-a
      ports:
        - containerPort: 8080
      volumeMounts:
        - mountPath: /srv/mindi-mimercado/logfiles
          name: mindi-mimercado-a-logdir
    - envFrom:
        - secretRef:
            name: secrets-mimercado-b
      image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      imagePullPolicy: Always
      name: mimercado-b
      ports:
        - containerPort: 8085
      volumeMounts:
        - mountPath: /srv/mindi-mimercado/logfiles
          name: mindi-mimercado-b-logdir
  imagePullSecrets:
    - name: regcred
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  serviceAccountName: default
  tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
  volumes:
    - hostPath:
        path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-a/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
        type: DirectoryOrCreate
      name: mindi-mimercado-a-logdir
    - hostPath:
        path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-b/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
        type: DirectoryOrCreate
      name: mindi-mimercado-b-logdir
There are still a lot of unknowns regarding your overall config, but if it helps, here are the potential issues that I see.
The fact that your requests return each secret in such a consistent manner leads me to believe that your pod configuration might be fine, but something else is routing your requests to both containers. This is easy to verify: display the logs of both containers simultaneously by running the following commands in two different terminals:
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-a
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-b
Send some requests like you did in your screenshot. If your requests appear to be distributed to both containers, it means that something is misconfigured in your service or ingress.
You might have old resources still around with slightly different configurations, or your service label selector is matching more than just this pod. Check that only this pod, only one service and only one ingress are present. Also check that you don't have other deployments/pods/services with labels that might overlap with your pod.
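A few commands that can help with that check (the service name is a placeholder, use your own):

kubectl get svc -o wide                              # spot old or duplicate services
kubectl get endpoints <your-mimercado-service>       # which pod IPs the service actually targets
kubectl get pods -l app=mimercado-api --show-labels  # everything matched by the app label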
You are using envFrom, which loads all the entries from your secret into your environment. Check that you don't have both entries in one of your secrets. You can also switch to the env form to be safe:
env:
  - name: MY_SECRET
    valueFrom:
      secretKeyRef:
        name: secrets-mimercado-a
        key: my-secret-key
This is probably not even possible but... I don't see any config to change the port on which your app is listening. containerPort only tells Kubernetes which port your container is using, not which port your Node app should bind to. It shouldn't be possible for both containers to bind to the same port of the pod, but if you are running a deployment and not a single pod, some pods of your deployment might have different containers bound to a specific port.
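If that turns out to be the issue, one possible approach (assuming your Node app reads its listening port from an environment variable such as PORT, which is purely an assumption here) is to pin each container to its own port explicitly:

- name: mimercado-b
  env:
    - name: PORT   # hypothetical variable name, depends on your app
      value: "8085"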
I've been running some tests with a Kubernetes cluster, and I installed the loki-promtail stack by means of the Helm loki/loki-stack chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard promtail config.
According to the Promtail documentation I tried to customise the values.yaml in this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test just to get familiar with this environment).
What I see is that this configuration is correctly applied to the loki config-map, but without any effect: the log lines look exactly as if this additional configuration weren't there.
The loki-stack chart version is 0.39.0 which installs loki 1.5.0.
I cannot see any error in the loki/promtail logs... Any suggestion?
I finally discovered the issue, so I'll post what I found in case it helps anyone else with the same problem.
In order to modify the log text or to add custom labels, the correct values.yaml section to provide is pipelineStages instead of extraScrapeConfigs. Then, the previous snippet must be changed in the following way:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
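To apply and verify the change, something along these lines should work (the release name and the name of the rendered config object are assumptions, adjust to your installation):

helm upgrade --install loki loki/loki-stack -f values.yaml
kubectl get configmap loki-promtail -o yaml | grep -A6 pipeline_stages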
We wanted to set up a highly available Keycloak cluster on Kubernetes (with LDAP as a user federation). We decided to use codecentric's Helm charts since we had been trying them out for a single-instance Keycloak setup and that worked well. For the cluster we ran into a few issues while trying to set everything up correctly, and didn't find the best sources on the wide internet. Therefore I decided to write a short summary of what our main issues were and how we got through them.
Solutions to our problems were described on this website (amongst others), but things are described very briefly and felt partly incomplete.
Issues we faced were:
Choosing the correct jgroups.discoveryProtocol
Adding the correct discoveryProperties
Parts that need to be overridden in your own values.yaml
Bonus issues (we already faced with the single instance setup):
Setting up a truststore to connect LDAP as a user federation via ldaps
Adding a custom theme for keycloak
I will try and update this if things change due to codecentric updating their Helm charts.
Thanks to codecentrics for providing the helm charts by the way!
Disclaimer:
This is the way we set it up. I hope it is helpful, but I do not take responsibility for configuration errors and resulting security flaws. We also went through many different sources on the internet; I'm sorry that I can't give credit to all of them, but it has been a few days since then and I can't piece them together anymore...
CODECENTRIC CHART VERSION < 9.0.0
The main issues:
1. Choosing the correct jgroups.discoveryProtocol:
I will not explain things here but for us the correct protocol to use was org.jgroups.protocols.JDBC_PING. Find out more about the protocols (and general cluster setup) here.
discoveryProtocol: org.jgroups.protocols.JDBC_PING
With JDBC_PING, JGroups manages instance discovery. For this (and for caching user sessions) the database provided for Keycloak is extended with extra tables, e.g. JGROUPSPING.
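Once more than one replica is running, you can sanity-check the discovery by looking at that table; with a PostgreSQL database, for example (connection details are your own, and column names may vary slightly between JGroups versions):

psql -h <db-host> -U keycloak -d keycloak \
  -c 'SELECT own_addr, cluster_name FROM jgroupsping;'

One row per running Keycloak instance is expected.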
2. Setting up the discoveryProperties:
This needs to be set to
discoveryProperties: >
  "datasource_jndi_name=java:jboss/datasources/KeycloakDS"
to avoid an error like:
java.lang.IllegalStateException: java.lang.IllegalArgumentException:
Either the 4 configuration properties starting with 'connection_' or
the datasource_jndi_name must be set
3. Other parts that need to be set (mostly as described in the readme of codecentric's GitHub repo and in the comments of the values.yaml there as well):
setting the clusterDomain according to your cluster
setting the number of replicas to greater than 1 to enable clustering
setting the service.type: we went with ClusterIP, but other setups like LoadBalancer can also work depending on your environment
optional but recommended: setting either maxUnavailable or minAvailable so that sufficient pods are always available according to your needs (a consolidated sketch of these values follows after the ingress snippet below)
setting up our Ingress (which looks pretty much standard):
ingress:
  enabled: true
  path: /
  annotations: {
    kubernetes.io/ingress.class: nginx
  }
  hosts:
    - your.host.org
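Put together, the clustering-related values looked roughly like this for us (a sketch only; the exact key nesting and numbers depend on your chart version and needs):

clusterDomain: cluster.local
replicas: 2
service:
  type: ClusterIP
podDisruptionBudget:
  maxUnavailable: 1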
Bonus issues:
1. The truststore:
To have Keycloak communicate with ldap via ldaps we had to set up a truststore with the certificate of our ldap in it:
Retrieve the certificate from your LDAP server and save it somewhere:
openssl s_client -connect your.ldap.domain.org < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /path/to/ldap.cert
Create a new keystore:
keytool -genkey -alias replserver \
  -keyalg RSA -keystore /path/to/keystore.jks \
  -dname "CN=AddCommonName, OU=AddOrganizationalUnit, O=AddOrganisation, L=AddLocality, S=AddStateOrProvinceName, C=AddCountryName" \
  -storepass use_the_same_password \
  -keypass use_the_same_password \
  -deststoretype pkcs12
Add the downloaded certificate to the keystore:
keytool -import -alias ldaps -file /path/to/ldap.cert -storetype JKS -keystore path/to/keystore.jks
Type in the required password: use_the_same_password.
Trust the certificate by typing 'yes'.
Provide the keystore in a configmap:
kubectl create configmap cert-keystore --from-file=path/to/keystore.jks
Enhance your values.yaml for the truststore:
Add and mount the config map:
extraVolumes: |
  - name: cert-keystore
    configMap:
      name: cert-keystore
extraVolumeMounts: |
  - name: cert-keystore
    mountPath: "/keystore/"
    readOnly: true
Tell Java to use it:
javaToolOptions: >-
  -[maybe some other settings of yours]
  -Djavax.net.ssl.trustStore=/keystore/keystore.jks
  -Djavax.net.ssl.trustStorePassword=<<keystore_password>>
Since we didn't want to upload the keystore password to git we added a step to our pipeline where it gets sed into the values.yaml, replacing the <<keystore_password>>.
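That pipeline step can be as simple as something like this (file path and variable name are just examples):

sed -i "s/<<keystore_password>>/${KEYSTORE_PASSWORD}/g" values.yaml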
2. Adding a custom theme:
Mainly we are providing a docker container with our custom theme in it:
extraInitContainers: |
  - name: theme-provider
    image: docker_repo_url/themeContainer:version
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying theme..."
        cp -R /custom-theme/* /theme
    volumeMounts:
      - name: theme
        mountPath: /theme
Add and mount the theme:
extraVolumes: |
  - name: theme
    emptyDir: {}
extraVolumeMounts: |
  - name: theme
    mountPath: /opt/jboss/keycloak/themes/custom-theme
You now should be able to choose the custom theme in the Keycloak admin UI via Realm Settings -> Themes.
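A quick way to check that the theme actually ended up where Keycloak expects it (the pod name is just an example, use your own):

kubectl exec <keycloak-pod> -- ls /opt/jboss/keycloak/themes/custom-theme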
CODECENTRIC CHART VERSION 9.0.0 to 9.3.2 (and maybe higher)
1. Clustering
We are still going with JDBC_PING since we had problems with DNS_PING as described in the Codecentric Repo readme:
extraEnv: |
  ## KEYCLOAK CONFIG
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  ### CLUSTERING
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: org.jgroups.protocols.JDBC_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: 'datasource_jndi_name=java:jboss/datasources/KeycloakDS'
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"
With the service set up as ClusterIP:
service:
  annotations: {}
  labels: {}
  type: ClusterIP
  loadBalancerIP: ""
  httpPort: 80
  httpNodePort: null
  httpsPort: 8443
  httpsNodePort: null
  httpManagementPort: 9990
  httpManagementNodePort: null
  extraPorts: []
2. 502 Error Ingress Problem
We encountered a 502 error with codecentric's chart 9.x.x, and the fix took a while to figure out. A solution is also described here, which is where we took our inspiration, but for us the following ingress setup was enough:
ingress:
  enabled: true
  servicePort: http
  # Ingress annotations
  annotations: {
    kubernetes.io/ingress.class: nginx,
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k,
  }
CODECENTRIC CHART VERSION 9.5.0 (and maybe higher)
Updating to 9.5.0 still needs to be tested, especially if you want to go with KUBE_PING and maybe even autoscaling.
I will update this after testing if anything changes significantly.