I wasn't really sure how to title this question, because I'd be happy with either of the solutions described below (inheritance between containers, or defining parameters once for the entire workflow without explicitly setting them in each step template).
I'm currently working with Argo YAMLs, and I want to define certain values that are set once (and are optional) and used by every pod in the workflow.
I'm sure there's a better way to do this than what I've found so far, but I can't find anything in the docs.
Currently, the only way I've seen is to define each value as a workflow argument, and then, for every container, declare it again as an input parameter/env parameter.
My question is this: isn't there a way to define those env variables once, at the top level of the workflow, so that every pod uses them without my explicitly telling it to?
Or, alternatively, could I create one container that has those arguments defined, so that every other container I define inherits from it and I wouldn't have to repeat those parameters as input/env for each one I add?
I don't want to add these three values to every container I define; it makes the YAML very large and hard to read and maintain.
container:
  env:
    - name: env_config
      value: "{{workflow.parameters.env_config}}"
    - name: flow_config
      value: "{{workflow.parameters.flow_config}}"
    - name: flow_type_config
      value: "{{workflow.parameters.flow_type_config}}"
I'd love to get your input, even if it's just pointing me in the direction of the right doc to read, as I haven't found anything close to it yet.
Thanks!
I just realised I hadn't posted an update, so for anyone interested, what I ended up doing was setting an anchor inside a container template:
templates:
  # This template exists only to define, via an anchor, the env parameters each container uses.
  - name: env-template
    container:
      env: &env_parameters
        - name: env_config
          value: "{{workflow.parameters.env_config}}"
        - name: flow_config
          value: "{{workflow.parameters.flow_config}}"
        - name: run_config
          value: "{{workflow.parameters.run_config}}"
and then using that anchor in each container:
container:
  image: image
  imagePullPolicy: Always
  env: *env_parameters
You could use a templating tool like Kustomize or Helm to cut down on the duplication.
You could also write the params to a JSON file, pull it into each Pod as an artifact, and then have a script loop over the values and assign them to env vars. But for this to be worth the additional write step and artifact YAML, you'd need to be dealing with a lot of env vars.
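A rough sketch of that artifact-based pattern as an Argo script template (the template name, artifact name, and image are hypothetical; it assumes sh and jq are available in the image, and that no values contain characters that need extra quoting):

```yaml
# Hypothetical step: pulls params.json in as an input artifact,
# then exports each key/value pair as an env var inside the script.
- name: use-shared-params
  inputs:
    artifacts:
      - name: params
        path: /tmp/params.json
  script:
    image: some-image-with-jq   # assumption: contains sh and jq
    command: [sh]
    source: |
      eval "$(jq -r 'to_entries[] | "export \(.key)=\"\(.value)\""' /tmp/params.json)"
      echo "env_config is $env_config"
```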
If you're using the exact same inputs for a large number of steps, it's probably worth considering whether those steps are similar enough to abstract out to one parameterized template. Perhaps you could loop over an array like ["mode1", "mode2", "mode3"...] instead of writing the steps out in series.
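In Argo, that kind of loop can be sketched with withItems (the template and parameter names here are made up):

```yaml
# Hypothetical: run one parameterized template once per mode
steps:
  - - name: run-mode
      template: process
      arguments:
        parameters:
          - name: mode
            value: "{{item}}"
      withItems: ["mode1", "mode2", "mode3"]
```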
Honestly, duplication isn't the worst thing ever. An IDE with a nice find/replace feature should make it simple enough to make changes as necessary.
I would like to have something like this:
- ${{ if or(eq(parameters.RunTestsOnPRBuildOnly, false), eq(variables.Build.Reason, 'PullRequest')) }}:
  - template: ps-module-run-tests.yml
This does not work, as variables.Build.Reason is empty. Is it possible at all?
Note that I know how to modify the ps-module-run-tests.yml template to express this as a runtime condition instead.
What I am curious is whether Build.Reason can be used in a compile time condition, so that these steps are not even rendered. On the surface, there is no inherent problem with that, because the value is known right at the start, but it depends on when the template is compiled. If too early, then it is impossible, but I am unaware of such details. Maybe I cannot do it, because I am missing something.
So, is it possible?
Instead of
variables.Build.Reason
try to use
variables['Build.Reason']
According to https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#build-variables-devops-services
Build.Reason is available in template expressions at compile time
I have looked at https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#conditional-insertion. There is an example there, and there is also a note that property dereference syntax can only be used when the property name contains nothing but letters, digits, and underscores,
so the dot (between Build and Reason) is not acceptable, and the index syntax is required instead.
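Putting the two together, the compile-time condition from the question would be written with index syntax, roughly like this (same hypothetical parameter and template names as in the question):

```yaml
steps:
  # Compile-time insertion: the step is only rendered when the condition holds.
  - ${{ if or(eq(parameters.RunTestsOnPRBuildOnly, false), eq(variables['Build.Reason'], 'PullRequest')) }}:
    - template: ps-module-run-tests.yml
```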
I'm struggling a little with Fn::Sub and Fn::FindInMap in order to create a dynamic value for a resource.
I would like something in the mapping like
Mappings:
  Mapping01:
    common:
      SyncName: "archive-uploader-${AWS::Region}-synchronisation-a2"
Then I'd like to use it something like
Name: !Sub !FindInMap [Mapping01, common, SyncName]
Which I know I can't do, because the Sub function cannot take an intrinsic function as its string parameter. But I cannot see a neat way to do this. At the moment I just use Sub with the hardcoded string everywhere I need it.
I'd prefer to have a single place for the string and then use it with Sub where I need it. How can I do that in a CFT?
I don't want to have a large map that just varies the name's region. That's what most of the documentation shows.
As you pointed out, you can't do this in plain CFN. Your only options for working around it are to develop a CFN macro or a custom resource.
The alternative is simply not to use CFN: there are far superior IaC tools (Terraform, AWS CDK), you can pre-process all templates before applying them, or you can keep hardcoding these values in your templates.
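If you go the pre-processing route, even something as small as sed can give you a single source of truth for the string. A minimal sketch, assuming you mark the shared value in the template with a made-up placeholder like __SYNC_NAME__:

```shell
# __SYNC_NAME__ is a hypothetical placeholder, not CFN syntax.
# Single quotes keep ${AWS::Region} literal so CloudFormation's !Sub resolves it at deploy time.
SYNC_NAME='archive-uploader-${AWS::Region}-synchronisation-a2'
printf 'Name: __SYNC_NAME__\n' > template.yml   # stand-in for the real template
sed "s|__SYNC_NAME__|${SYNC_NAME}|g" template.yml > rendered.yml
cat rendered.yml
```

The rendered template still contains ${AWS::Region}, so you would wrap the value in !Sub at the usage site as before.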
Let's say I need to create environment variables or ConfigMap entries like this:
- name: JDBC_URL
  value: "jdbc:db2://alice-service-a:50000/db1"
- name: KEYCLOAK_BASE_URL
  value: "http://alice-keycloak:8080/auth"
Where alice- is the namePrefix. How do I do this using Kustomize?
The containers I use actually do need references to other containers that are string concatenations of "variables" like above.
It doesn't look like Kustomize's vars can do this. The documentation entry Unstructured Edits seems to describe this and is under a heading called "Eschewed Features", so I guess that isn't going to happen. A similar feature request, #775 Support envsubst style variable expansion was closed.
Coming from Helm, that was easy.
What are my options if I want to move from Helm to Kustomize, but need to create an env or ConfigMap entry like e.g. jdbc:db2://${namePrefix}-service-b:${dbPort}/${dbName} (admittedly a contrived example)?
I'm guessing I'll have to resort to functionality external to Kustomize, like envsubst. Are there any best practices for cobbling this together, or am I writing my own custom-deploy-script.sh?
I'm afraid I've come up against one of the limitations of Kustomize.
The State of Kubernetes Configuration Management: An Unsolved Problem | by Jesse Suen | Argo Project has this to say under "Kustomize: The Bad":
No parameters & templates. The same property that makes kustomize applications so readable, can also make it very limiting. For example, I was recently trying to get the kustomize CLI to set an image tag for a custom resource instead of a Deployment, but was unable to. Kustomize does have a concept of “vars,” which look a lot like parameters, but somehow aren’t, and can only be used in Kustomize’s sanctioned whitelist of field paths. I feel like this is one of those times when the solution, despite making the hard things easy, ends up making the easy things hard.
Instead, I've started using gomplate: A flexible commandline tool for template rendering in addition to Kustomize to solve the challenge above, but having to use two tools that weren't designed to work together is not ideal.
EDIT: We ended up using ytt for this instead of gomplate.
I can heavily recommend the article: The State of Kubernetes Configuration Management: An Unsolved Problem. Nice to know I'm not the only one hitting this road block.
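For reference, the ytt version of the contrived example above looks roughly like this (a sketch; the data value names are our own, supplied from a separate values file passed to ytt with -f):

```yaml
#@ load("@ytt:data", "data")
#! Expects a separate data-values file, e.g.:
#!   #@data/values
#!   ---
#!   namePrefix: alice
#!   dbPort: 50000
#!   dbName: db1
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  JDBC_URL: #@ "jdbc:db2://{}-service-b:{}/{}".format(data.values.namePrefix, data.values.dbPort, data.values.dbName)
```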
I would like to implement functionality (or, even better, reuse existing libraries/APIs!) that intercepts a kubectl command that creates an object and performs some pre-creation validation on it before allowing the kubectl command to proceed.
For example:
- check various values in the YAML against an external DB
- check that a label conforms to the internal naming convention
- and so on
Is there an accepted pattern, or are there existing tools, etc.?
Any guidance appreciated.
The way to do this is by creating a ValidatingAdmissionWebhook. It's not for the faint of heart, and even a brief example would be overkill as an SO answer. A few pointers to start:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
https://banzaicloud.com/blog/k8s-admission-webhooks/
https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/
I hope this helps :-)
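To give a flavour of the registration side, a ValidatingWebhookConfiguration looks along these lines (all names, the namespace, and the service are placeholders; the actual webhook server and its TLS setup are the hard part):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: naming-convention-check        # placeholder name
webhooks:
  - name: naming.example.com           # placeholder webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        name: naming-webhook           # placeholder Service running your validator
        namespace: default
        path: /validate
      caBundle: ""                     # base64-encoded CA cert for the webhook's TLS cert
```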
I usually append --dry-run to the kubectl command to check and validate the YAML config.
I have a GKE cluster with a number of different namespaces. I would like to be able, in effect, to namespace my images the same way my other resources are namespaced. That is, I would like pods in different namespaces to be able to reference an image by the same name but get different images depending on which namespace they are in. One way to achieve this (if it were supported) might be to substitute the name of the namespace into the image name in the YAML, e.g.:
containers:
  - image: eu.gcr.io/myproject/$(NAMESPACE)-myimage
    name: myimage
Then I could push eu.gcr.io/myproject/mynamespace-myimage to make my image available to namespace mynamespace.
Is there any tidy way to achieve this kind of thing? If not, and since I've been unable to find anybody else asking similar questions, is there some way in which this is a bad thing to want to do?
I don't think this is possible. Kubernetes supports expansion on fields like command and args. This lets you use substitutions with a variable set in container's env field, which can come from configmap/secret. Example: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments
However I don't think variable expansion works for the image field. :(
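For completeness, the supported expansion looks like this: the $(VAR) syntax is resolved in command/args from the container's env, but not in the image field (the env var here is populated from the downward API):

```yaml
containers:
  - name: myimage
    image: eu.gcr.io/myproject/myimage   # $(...) would NOT be expanded here
    env:
      - name: NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    args: ["--namespace=$(NAMESPACE)"]   # $(NAMESPACE) IS expanded here
```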
What you're trying to do does not seem like a great idea: having different images across multiple environments defeats the purpose of having test/staging environments.
Instead, you should probably use the same image on all platforms and vary behaviour through env vars, ConfigMaps, etc.