I'm confused about when to use the --reuse-values option while upgrading a Helm chart.
As I understand it, --reuse-values reuses the last release's values and merges in any overrides from the command line via --set and -f. But when I use it while upgrading a chart, for example stable/elasticsearch-curator, the upgrade fails with:
Error: UPGRADE FAILED: CronJob.batch "es-curator-elasticsearch-curator" is invalid: spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"
So when should I use it, and when shouldn't I? And what is the effect of not using it while upgrading?
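For reference, a minimal sketch of the difference between the two upgrade modes (the release name es-curator and the cronjob.schedule override are illustrative, not taken from the question):

# Without --reuse-values: Helm starts from the chart's default values
# and merges in only the -f/--set overrides given on this command line.
helm upgrade es-curator stable/elasticsearch-curator -f my-values.yaml

# With --reuse-values: Helm starts from the previous release's values
# and merges the new -f/--set overrides on top of them.
helm upgrade es-curator stable/elasticsearch-curator --reuse-values \
  --set cronjob.schedule="0 1 * * *"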
Related
I am using the pluck function to insert per-environment values from my values.yaml. The chart passes validation but throws the error below on a dry-run installation.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "extraVolumeMounts" in io.k8s.api.core.v1.Container, ValidationError(StatefulSet.spec.template.spec): unknown field "extraVolumes" in io.k8s.api.core.v1.PodSpec]
Can someone tell me what I am missing?
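One common cause: extraVolumes and extraVolumeMounts are chart values, not Kubernetes API fields, so they must be rendered inside the pod spec at the correct indentation rather than emitted as literal fields. A minimal sketch of a pluck pattern that does this (the env key and the per-environment layout are assumptions, not taken from the question):

# values.yaml (hypothetical layout, keyed by environment)
env: dev
extraVolumes:
  dev:
    - name: config
      configMap:
        name: app-config

# templates/statefulset.yaml fragment: pluck the current environment's
# list and splice it into the pod spec at the right indentation
spec:
  template:
    spec:
      volumes:
{{ pluck .Values.env .Values.extraVolumes | first | toYaml | indent 8 }}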
I am running helm3 2to3 convert --dry-run elasticsearch, but it runs into an issue. Any idea what could be causing it?
2021/08/18 15:52:41 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/08/18 15:52:41 Run without --dry-run to take the actions described below:
2021/08/18 15:52:41
2021/08/18 15:52:41 Release "elasticsearch" will be converted from Helm v2 to Helm v3.
2021/08/18 15:52:41 [Helm 3] Release "elasticsearch" will be created.
Error: elasticsearch has no deployed releases
Error: plugin "2to3" exited with error
In Chart.yaml I have:
kubeVersion: ">=1.10.1"
The cluster nodes have the version below:
v1.18.0-rc.1
and the Helm installation fails with this error:
Error: chart requires kubeVersion: >=1.10.1 which is incompatible with Kubernetes v1.18.0-rc.1
I tried changing kubeVersion to 1.10.1-rc.1, but got a new error:
error unmarshaling JSON: while decoding JSON: json: cannot unmarshal bool into Go value of type releaseutil.SimpleHead
# helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
To allow prereleases (such as rc versions), at least in Helm, the constraint needs the suffix -0. For example, >=1.20.0-0 allows versions greater than or equal to 1.20.0, including any prereleases.
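Applied to the chart above, a minimal sketch of the fix:

# Chart.yaml: the -0 suffix makes the constraint match prerelease
# builds such as v1.18.0-rc.1 as well as stable releases.
kubeVersion: ">=1.10.1-0"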
On Kubernetes 1.7, I am trying to create an ExternalAdmissionHookConfiguration. I have tried to set failurePolicy: Fail, but I get the following error:
The ExternalAdmissionHookConfiguration "policy-agent" is invalid: externalAdmissionHooks[0].failurePolicy: Unsupported value: "Fail": supported values: Ignore
The documentation suggests that Fail is a valid option.
https://kubernetes.io/docs/admin/extensible-admission-controllers/
It is valid as of 1.9.
I'd recommend building on the 1.9 admission webhooks. The pre-1.9 versions were discontinued at alpha level and reworked into validating and mutating webhook configurations in 1.9.
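For illustration, a minimal sketch of the 1.9-era replacement with failurePolicy: Fail (the webhook name, rules, service, and CA placeholder are hypothetical):

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-agent
webhooks:
  - name: policy.example.com
    failurePolicy: Fail        # supported alongside Ignore as of 1.9
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: default
        name: policy-agent
        path: /validate
      caBundle: <base64-encoded CA certificate>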
We are using some community custom resource types (https://github.com/ljfranklin/terraform-resource and https://github.com/cloudfoundry/bosh-deployment-resource). After upgrading to Concourse 3.3.0, we've begun consistently seeing the following error on a few of our jobs, always at the same step: json: unsupported type: map[interface {}]interface {}.
This is fairly hard to debug, as there is no log output other than that. We are unsure what is incompatible between those resources and Concourse.
Notes about our pipeline:
We had originally substituted all of our usages of {{}} with (()), but reverting that did not make the error go away.
We upgraded Concourse from v3.0.1.
The failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
We are using a resource called elsa-aws-storage-terraform, found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
That resource is of a custom resource-type terraform found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L45-L48
A similar failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L871-L886
This is related to the issue of not being able to define nested maps in resource configuration: https://github.com/concourse/concourse/issues/1345
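The error text itself comes from Go's encoding/json, which cannot serialize the map[interface{}]interface{} values that YAML parsers produce for nested mappings, so a map nested anywhere in a resource's configuration can trigger it. A hypothetical sketch of the kind of params block that trips it (keys and values are illustrative, not taken from the pipeline above):

# A map nested inside a resource's params can trigger
# "json: unsupported type: map[interface {}]interface {}" on 3.3.0.
- put: elsa-aws-storage-terraform
  params:
    terraform:
      vars:            # nested map
        region: us-east-1
        tags:          # deeper nesting
          team: capi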