If I add a custom image pull secret to the default serviceaccount (in k8s or OKD) by an oc/kubectl patch like this, and given that I cannot effectively delete the default image pull secret (it gets automatically regenerated), how can I determine and control the order in which imagePullSecrets are used, so that the custom one is tried first, before the default one?
Note that after such patching of the default serviceaccount with the custom Secret (here named readonly-docker-hub-cred-for-mirekphd), this secret is always inserted at the top, above the default-dockercfg-* ones that get auto-regenerated each time I delete them. But does this top position also give it priority when pulling images, over the default one with its much stricter pull rate limits?
$ oc get sa default -o yaml
apiVersion: v1
imagePullSecrets:
- name: readonly-docker-hub-cred-for-mirekphd
- name: default-dockercfg-?????
kind: ServiceAccount
metadata:
  creationTimestamp: 20??-08-09T06:57:32Z
  name: default
  namespace: ??
  resourceVersion: "370297143"
  selfLink: /api/v1/namespaces/??/serviceaccounts/default
  uid: 7d5ac6cf-9ba1-11e8-bfd8-0050569f????
secrets:
- name: default-token-?????
- name: default-dockercfg-?????
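For reference, a patch of the kind mentioned above might look like the following sketch (the namespace placeholder is mine; imagePullSecrets uses a strategic merge patch keyed on name, so the entry is merged into the existing list rather than replacing it):

$ kubectl patch serviceaccount default -n "$YOUR_NAMESPACE" \
    -p '{"imagePullSecrets": [{"name": "readonly-docker-hub-cred-for-mirekphd"}]}'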
Related
I have a number of repeated values in my Kubernetes YAML file and I'm wondering if there is a way I could store variables somewhere in the file, ideally at the top, that I can reuse further down.
Sort of like
variables:
- appName: &appname myapp
- buildNumber: &buildno 1.0.23
that I can reuse further down like
labels:
  app: *appname
  tags.datadoghq.com/version: *buildno
containers:
- name: *appname
  ...
  image: 123456.com:*buildno
if those are possible.
I know anchors are a thing in YAML; I just couldn't find anything on setting variables.
You can't do this in Kubernetes manifests alone, because you need a processor to manipulate the YAML files. You can, however, share anchors within the same YAML manifest, like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &cmname myconfig
  namespace: &namespace default
  labels:
    name: *cmname
    deployedInNamespace: *namespace
data:
  config.yaml: |
    [myconfig]
    example_field=1
This will result in:
apiVersion: v1
data:
  config.yaml: |
    [myconfig]
    example_field=1
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-25T10:06:27Z"
  labels:
    deployedInNamespace: default
    name: myconfig
  name: myconfig
  namespace: default
  resourceVersion: "147712"
  uid: 4039cea4-1e64-4d1a-bdff-910d5ff2a485
As you can see, the labels name and deployedInNamespace have the values resulting from the anchor evaluation.
Based on your use case description, what you would need is to go down the Helm chart path and template your manifests. You can then leverage helper functions and easily customize these fields. From my experience, when you have a use case like this, Helm is the way to go, because it will help you customize everything within your manifests when you later decide to change something else.
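For illustration, a minimal sketch of that Helm approach, reusing the names from your example (appName and buildNumber become keys in a hypothetical values.yaml):

# values.yaml
appName: myapp
buildNumber: "1.0.23"

# templates/deployment.yaml (fragment)
metadata:
  labels:
    app: {{ .Values.appName }}
    tags.datadoghq.com/version: {{ .Values.buildNumber | quote }}
spec:
  containers:
  - name: {{ .Values.appName }}
    image: 123456.com:{{ .Values.buildNumber }}

Rendering this with helm template substitutes the values everywhere they are referenced, which is exactly the variables-at-the-top behavior you described.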
I guess there is a similar question with an answer. Please check below:
How to reuse an environment variable in a YAML file?
I have a k8s Secret YAML definition with some data items already applied in the cluster. After removing some data items from the YAML file and updating the Secret with kubectl apply, those removed data items still persist in the Secret object existing in the k8s cluster, and I am not able to remove them without deleting and recreating the Secret from scratch. However, this is not the usual behavior and only happens on rare occasions. Any idea why this is happening and how I can fix it without deleting the whole Secret?
Example:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  user: foo
  password: bar
EOF
The secret is created with data items user and password.
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  password: bar
EOF
After removing user from the secret definition, the secret is updated with kubectl apply, but the user data item still remains in the secret.
I'm going through this post, where we bind a Role to a Service Account and then query the API Server using said Service Account. The role only has list permission to the pods resource.
I did an experiment where I mounted a random Secret into a Pod that uses the above Service Account. My expectation was that the Pod would attempt to query the Secret and fail the creation process, but the Pod is actually running successfully with the Secret mounted in place.
So I'm left wondering when a pod actually needs to query the API Server for resources, or whether the pod creation process is special and gets the resources through other means.
Here is the actual list of resources I used for my test:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rb
subjects:
- kind: ServiceAccount
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
data:
  password: c3RhY2tvdmVyZmxvdw==
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: webserver
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /mysecrets
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
...
I must admit that at first I didn't quite get your point, but when I read your question again I think I can now see what it's all about. First of all, I must say that your initial interpretation is wrong. Let me explain it.
You wrote:
I did an experiment where I mounted a random Secret into a Pod that
is using the above Service Account
Actually, the key word here is "I". The question is: who creates the Pod and who mounts a random Secret into this Pod? And the answer to that question from your perspective is simple: me. When you create a Pod, you don't use the above-mentioned ServiceAccount; you authorize your access to the Kubernetes API through entries in your .kube/config file. During the whole Pod creation process, the ServiceAccount you created is not used a single time.
and my expectation was that the
Pod would attempt to query the Secret and fail the creation process,
but the pod is actually running successfully with the secret mounted
in place.
Why would it query the Secret if it doesn't use it?
You can test it in a very simple way. You just need to kubectl exec into your running Pod and try to run kubectl, query the Kubernetes API directly, or use one of the officially supported Kubernetes client libraries. Then you will see that you're allowed to perform only the specific operations listed in your Role, i.e. list Pods. If you attempt to run kubectl get secrets from within your Pod, it will fail.
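A minimal sketch of such a test, assuming the default namespace and that curl is available in the nginx image (the token path is the standard projected ServiceAccount token location):

$ kubectl exec -it example-pod -- /bin/sh
# inside the Pod:
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# listing pods is allowed by the Role, so this succeeds:
$ curl -sSk -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/default/pods
# secrets are not listed in the Role, so this returns a 403 Forbidden:
$ curl -sSk -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/default/secrets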
The result you get is totally expected, and there is nothing surprising in the fact that a random Secret is successfully mounted and the Pod is created successfully every time. It's you who queries the Kubernetes API and requests creation of a Pod with a Secret mounted. It's not the Pod's ServiceAccount.
So I'm left wondering when does a pod actually needs to query the API
Server for resources or if the pod creation process is special and
gets the resources through other means.
Unless your Pod runs specific queries, e.g. written in Python using the Kubernetes Python client library, or you use the kubectl command from within such a Pod, you won't see it making any queries to the Kubernetes API: all the queries needed for its creation process are performed by you, with the permissions given to your user.
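You can also verify what the Pod's ServiceAccount is allowed to do without entering the Pod at all, via impersonation (a sketch, assuming the default namespace):

$ kubectl auth can-i list pods --as=system:serviceaccount:default:example-sa
yes
$ kubectl auth can-i list secrets --as=system:serviceaccount:default:example-sa
no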
(using Kubernetes v1.15.7 in minikube, a matching client version, and minikube 1.9.0)
If I kubectl apply a secret like this:
apiVersion: v1
data:
  MY_KEY: dmFsdWUK
  MY_SECRET: c3VwZXJzZWNyZXQK
kind: Secret
metadata:
  name: my-secret
type: Opaque
then subsequently kubectl apply a secret removing the MY_SECRET field, like this:
apiVersion: v1
data:
  MY_KEY: dmFsdWUK
kind: Secret
metadata:
  name: my-secret
type: Opaque
The data field in the result is what I expect when I kubectl get the secret:
data:
  MY_KEY: dmFsdWUK
However, if I do the same thing using stringData instead for the first kubectl apply, it does not remove the missing key on the second one:
First kubectl apply:
apiVersion: v1
stringData:
  MY_KEY: value
  MY_SECRET: supersecret
kind: Secret
metadata:
  name: my-secret
type: Opaque
Second kubectl apply (stays the same, except replacing MY_KEY's value with b2hubyEK to show that the configuration DID change):
apiVersion: v1
data:
  MY_KEY: b2hubyEK
kind: Secret
metadata:
  name: my-secret
type: Opaque
kubectl get result after applying the second case:
data:
  MY_KEY: b2hubyEK
  MY_SECRET: c3VwZXJzZWNyZXQ=
The field also does not get removed if the second apply uses stringData instead. So it seems that once stringData has been used, it's impossible to remove a field without deleting the secret. Is this a bug? Or should I be doing something differently when using stringData?
kubectl apply needs to merge/patch the changes here. How this works is described in "How apply calculates differences and merges changes".
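You can inspect what apply diffs against by viewing the last-applied configuration that kubectl apply stores on the object, for example:

$ kubectl apply view-last-applied secret my-secret

If a field is missing from that record, apply has no evidence it ever owned the field and will not delete it from the live object.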
I would recommend using kustomize with kubectl apply -k and using the secretGenerator to create a unique secret name for every change. Then you are practicing immutable infrastructure and do not get this kind of problem.
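A minimal sketch of that approach (file contents are illustrative):

# kustomization.yaml
secretGenerator:
- name: my-secret
  literals:
  - MY_KEY=value

Applying this with kubectl apply -k . generates a Secret with a content hash appended to its name (e.g. my-secret-7g2ht5b89d), so every change produces a brand new Secret instead of an in-place patch of the old one.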
A brand new tool for config management is kpt, and kpt live apply may also be an interesting solution for this.
The problem is that stringData is a write-only field. It doesn't have convergent behavior, so it breaks the merge patch generation system. Most high-level tools fix this by converting to normal data before dealing with the patch.
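In other words, you can sidestep the problem by doing that conversion yourself and always submitting data rather than stringData, e.g.:

$ echo -n 'supersecret' | base64
c3VwZXJzZWNyZXQ=

# then reference the encoded form in the manifest:
data:
  MY_SECRET: c3VwZXJzZWNyZXQ=

Since data is then a regular field on both sides of the diff, kubectl apply can detect removals again.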
I'm playing around with Knative currently and bootstrapped a simple installation using Gloo and glooctl. Everything worked fine out of the box. However, I asked myself if there is a possibility to change the generated URL where the service is made available.
I already changed the domain, but I want to know if I could select a domain name that does not contain the namespace, so that helloworld-go.namespace.mydomain.com would become helloworld-go.mydomain.com.
The current YAML definition looks like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  labels:
    name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: Go Sample v1
Thank you for your help!
This is configurable via the ConfigMap named config-network in the namespace knative-serving. See the ConfigMap in the deployment resources:
apiVersion: v1
data:
  _example: |
    ...
    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
    ...
kind: ConfigMap
metadata:
  labels:
    serving.knative.dev/release: "v0.8.0"
  name: config-network
  namespace: knative-serving
Therefore, your config-network should look like this:
apiVersion: v1
data:
  domainTemplate: "{{.Name}}.{{.Domain}}"
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
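Equivalently, instead of editing the ConfigMap by hand, you could apply just this key with a merge patch (a sketch):

$ kubectl patch configmap config-network -n knative-serving \
    --type merge \
    -p '{"data":{"domainTemplate":"{{.Name}}.{{.Domain}}"}}'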
You can also have a look at config-domain and customize it to configure the domain name that is appended to your services.
Assuming you're running Knative over an Istio service mesh, there's an example in the Knative docs of how to use an Istio VirtualService to accomplish this at the level of an individual service.
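For a rough idea of the shape of such a VirtualService (a sketch only; the gateway and ingress service names below are the usual defaults of an Istio-based Knative install and are assumptions to verify against your setup):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-go-vanity
  namespace: default
spec:
  gateways:
  - knative-serving/knative-ingress-gateway
  hosts:
  - helloworld-go.mydomain.com
  http:
  - rewrite:
      # rewrite the Host header to the domain Knative generated for the service
      authority: helloworld-go.default.mydomain.com
    route:
    - destination:
        host: istio-ingressgateway.istio-system.svc.cluster.local
        port:
          number: 80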