Is there any command that can be used to apply new changes? Because when I apply new changes with:
istioctl manifest apply --set XXX.XXXX=true
it overwrites the current values and resets them to defaults.
That might be because you have used istioctl manifest apply, which is deprecated; it has been istioctl install since Istio 1.6.
Quoted from the documentation:
Note that istioctl install and istioctl manifest apply are exactly the same command. In Istio 1.6, the simpler install command replaces manifest apply, which is deprecated and will be removed in 1.7.
AFAIK there are two ways to apply new changes in Istio:
istioctl install
To enable the Grafana dashboard on top of the default profile, set the addonComponents.grafana.enabled configuration parameter with the following command:
$ istioctl install --set addonComponents.grafana.enabled=true
In general, you can use the --set flag in istioctl as you would with Helm. The only difference is that you must prefix the setting paths with values., because this is the path to the Helm pass-through API in the IstioOperator API.
Istio Operator
In addition to installing any of Istio’s built-in configuration profiles, istioctl install provides a complete API for customizing the configuration.
The IstioOperator API
The configuration parameters in this API can be set individually using --set options on the command line. For example, to enable the control plane security feature in a default configuration profile, use this command:
$ istioctl install --set values.global.controlPlaneSecurityEnabled=true
Alternatively, the IstioOperator configuration can be specified in a YAML file and passed to istioctl using the -f option:
$ istioctl install -f samples/operator/pilot-k8s.yaml
For backwards compatibility, the previous Helm installation options, with the exception of Kubernetes resource settings, are also fully supported. To set them on the command line, prepend the option name with “values.”. For example, the following command overrides the pilot.traceSampling Helm configuration option:
$ istioctl install --set values.pilot.traceSampling=0.1
Helm values can also be set in an IstioOperator CR (YAML file) as described in Customize Istio settings using the Helm API, below.
If you want to set Kubernetes resource settings, use the IstioOperator API as described in Customize Kubernetes settings.
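For illustration, an IstioOperator overlay of that kind might look like the sketch below (the resource values here are placeholders, not recommendations; samples/operator/pilot-k8s.yaml in the Istio repo is the authoritative example):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 500m
            memory: 2048Mi

You would pass such a file to istioctl install -f as shown above.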
Related documentation and examples for the Istio Operator:
https://istio.io/latest/docs/setup/install/istioctl/#customizing-the-configuration
https://istio.io/latest/docs/setup/install/standalone-operator/#update
https://stackoverflow.com/a/61865633/11977760
https://github.com/istio/operator/blob/master/samples/pilot-advanced-override.yaml
The way I've managed to upgrade is this:
1. Run istioctl upgrade to upgrade the control plane in place.
2. Apply your custom configuration over the upgraded control plane.
It's far from ideal, but Istio does a relatively bad job of dealing with customizations. In command form, the process could look like the sketch below.
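Here overlay.yaml stands in for whatever custom configuration file you used originally (an assumption; adjust to your setup):

$ istioctl upgrade -f overlay.yaml    # upgrade the control plane in place
$ istioctl install -f overlay.yaml    # re-apply your customizations on top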
Related
I want to intercept the helm YAML and customize it using a Python script, and then install it. I have been doing something like helm template | python3 script... | kubectl apply -f - but of course this doesn't create a helm release in my cluster, so I lose out on helm rollback etc.
I have considered using Kustomize but it doesn't have the features that I'd like.
Is there a way to take pre-generated YAML, like that from helm template or helm install --dry-run and then install/upgrade that using helm?
Isn't that what post-renderers are for?
See https://helm.sh/docs/topics/advanced/#post-rendering
A post-renderer can be any executable that accepts rendered Kubernetes manifests on STDIN and returns valid Kubernetes manifests on STDOUT. It should return a non-0 exit code in the event of a failure. This is the only "API" between the two components. It allows for great flexibility in what you can do with your post-render process.
A post renderer can be used with install, upgrade, and template. To use a post-renderer, use the --post-renderer flag with a path to the renderer executable you wish to use:
$ helm install mychart stable/wordpress --post-renderer ./path/to/executable
I haven't used it myself yet, but it looks interesting if you want to run your own alternative to kustomize.
See https://github.com/vmware-tanzu/carvel-ytt/tree/develop/examples/helm-ytt-post-renderer for an example that is not kustomize.
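For the pipeline described in the question, the post-renderer could be a tiny wrapper around the existing Python script, for example (a sketch; script.py is the asker's own transform, which must read manifests on STDIN and write them to STDOUT, per the contract quoted above):

#!/bin/sh
# post-render.sh: receive rendered manifests on STDIN, transform
# them with the Python script, and emit the result on STDOUT.
exec python3 ./script.py

With that in place, Helm keeps its release bookkeeping, so helm rollback works again:

$ helm upgrade myrelease ./mychart --post-renderer ./post-render.sh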
How to do a canary upgrade of an existing customised Istio setup?
Requirements:
We have an existing customised setup of Istio 1.7.3 (installed using the istioctl method, with no revision set) on AKS 1.18.14.
Now we need to upgrade to Istio 1.8 with no (or minimal) downtime.
The upgrade should be safe and must not break our prod environment in any way.
How we installed the current customised Istio environment:
Created the manifest:
istioctl manifest generate --set profile=default -f /manifests/overlay/overlay.yaml > $HOME/generated-manifest.yaml
Installed Istio:
istioctl install --set profile=default -f /manifests/overlay/overlay.yaml
Verified Istio against the generated manifest:
istioctl verify-install -f $HOME/generated-manifest.yaml
Planned upgrade process (Reference):
1. Precheck for the upgrade:
istioctl x precheck
2. Export the currently used Istio configuration to a YAML file with the command below:
kubectl -n istio-system get iop installed-state-install -o yaml > /tmp/iop.yaml
3. Download the Istio 1.8 binary, extract it, and navigate to the directory containing the 1.8 istioctl binary:
cd istio1.8\istioctl1.8
4. From the new version's directory, create a new control plane for Istio 1.8 with a proper revision set, using the previously exported installed-state iop.yaml:
./istioctl1.8 install --set revision=1-8 --set profile=default -f /tmp/iop.yaml
Expect that it will create a new control plane with the existing customised configuration; now we will have two control plane deployments and services running side-by-side:
kubectl get pods -n istio-system -l app=istiod
NAME READY STATUS RESTARTS AGE
istiod-786779888b-p9s5n 1/1 Running 0 114m
istiod-1-8-6956db645c-vwhsk 1/1 Running 0 1m
5. After this, we need to change the existing label on all our cluster namespaces where the Istio proxy containers should be injected: remove the old istio-injection label, and add the istio.io/rev label pointing to the canary revision 1-8.
kubectl label namespace test-ns istio-injection- istio.io/rev=1-8
6. Hopefully at this point the environment is still stable with the old Istio configuration, and we can decide which app pods to restart to pick up the new control plane, according to our downtime windows; at this point it is fine to run some apps against the old control plane and others against the new one. E.g.:
kubectl rollout restart deployment -n test-ns (first)
kubectl rollout restart deployment -n test-ns2 (later)
kubectl rollout restart deployment -n test-ns3 (again, sometime later)
7. Once we have planned the downtime and restarted the deployments as decided, confirm all the pods are now using the version 1.8 data plane proxy only:
kubectl get pods -n test-ns -l istio.io/rev=1-8
To verify that the new pods in the test-ns namespace are using the istiod canary service corresponding to the canary revision:
istioctl proxy-status | grep ${pod_name} | awk '{print $7}'
8. After upgrading both the control plane and the data plane, we can uninstall the old control plane:
istioctl x uninstall -f /tmp/iop.yaml
Points we need to clear up before the upgrade:
1. Are all the upgrade steps prepared above good to proceed with for a heavily used prod environment?
2. Is exporting the installed-state IOP enough to carry all the customised settings through the canary upgrade, or is there any chance of breaking the upgrade or missing some settings?
3. Will step 4 above create the 1.8 Istio control plane with all the customisations we already have, without breaking or missing anything?
4. After step 4, do we need any extra configuration related to the istiod service configuration? The document we followed is not clear about that.
5. For step 5 above, how can we identify all the namespaces where istio-injection is enabled, modify only those namespaces, and leave the others as they were?
6. For step 8 above, how do we ensure we are uninstalling the old control plane only? Do we have to get the binary for the old control plane (1.7 in my case) and use that binary with the same exported /tmp/iop.yaml?
7. We have no idea how to roll back from any issues that happen in between, before or after the old control plane is deleted.
1. No. You should go through the changelog and upgrade notes. See what's new, what's changed, what's deprecated, etc. Adjust your configs accordingly.
2. In theory, yes; in practice, no. See above. That's why you should always check the upgrade notes/changelog and plan accordingly. There is always a slim chance something will go wrong.
3. It should, but again, be prepared that something may break (one more time: go through the changelog/upgrade notes, this is important).
4. No.
5. You can find all namespaces with Istio injection enabled with:
kubectl get namespaces -l=istio-injection=enabled
Istio upgrade process should only modify namespaces with injection enabled (and istio-system namespace).
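If you then want to switch every such namespace to the canary revision in one go, a small loop over that selector could look like this sketch (test it somewhere safe first):

# Swap the injection label to the 1-8 canary revision on every
# namespace that currently has istio-injection=enabled.
for ns in $(kubectl get namespaces -l istio-injection=enabled -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label namespace "$ns" istio-injection- istio.io/rev=1-8
done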
6. If your old control plane does not have a revision label, you have to uninstall it using its original installation options (the old YAML file):
istioctl x uninstall -f /path/to/old/config.yaml
If it does have revision label:
istioctl x uninstall --revision <revision>
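If you are unsure which case applies, you could inspect the istiod pods' labels; revisioned control planes carry an istio.io/rev label:

kubectl get pods -n istio-system -l app=istiod --show-labels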
7. You can just uninstall the new control plane with:
istioctl x uninstall --revision=1-8
This will revert to the old control plane, assuming you have not yet uninstalled it. However, you will have to reinstall gateways for the old version manually, as the uninstall command does not revert them automatically.
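Reverting the data plane also means pointing the injection labels back at the old control plane and restarting the workloads, e.g. for the test-ns namespace used above:

kubectl label namespace test-ns istio.io/rev- istio-injection=enabled
kubectl rollout restart deployment -n test-ns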
I would strongly recommend creating a temporary test environment, recreating the existing cluster there, performing the upgrade, and adjusting the process to meet your needs. This way you will avoid catastrophic failures in your production environment.
How can I execute the helm install command and re-install resources that I have defined in templates? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?
I think your main question is:
I have some custom resources that already exist so I want to re-install them.
Which means DELETE then CREATE again.
Short answer
No... but it can be done through a workaround.
Detailed answer
Helm manages the RELEASE of the Kubernetes manifests by either:
creating: helm install
updating: helm upgrade
deleting: helm delete
However, you can recreate resources following one of these approaches:
1. Twice Consecutive Upgrade
If your chart is designed to enable/disable installation of resources with values (e.g. .Values.customResources.enabled), you can do the following:
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
So, if you are the builder of the chart, your task is to make the design functional.
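To make that design concrete, the template guarding the custom resources could look something like this sketch (the apiVersion, kind, and file name are placeholders for your own resources):

# templates/custom-resources.yaml
{{- if .Values.customResources.enabled }}
apiVersion: example.com/v1
kind: MyCustomResource
metadata:
  name: my-custom-resources
{{- end }}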
2. Using Helmfile hooks
Helmfile is the Helm of Helm.
It manages your Helm releases within a single file called helmfile.yaml.
Not only that, but it can also run LOCAL commands before or after installing or upgrading a Helm release.
Such a call, which happens before or after, is named a hook.
For your case, you will need the presync hook.
If we organize your Helm release as a Helmfile definition, it should be:
releases:
- name: <helm-release>
chart: <chart>
namespace: <namespace>
hooks:
- events: ["presync"]
showlogs: true
command: kubectl
args: [ "-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources" ]
Now you just need to run helmfile apply
I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the namespace of the release as a variable, so there's no need to repeat yourself.
You can use helm upgrade to upgrade any existing deployed chart with changes.
The upgrade arguments must be a release and a chart. The chart argument can be either: a chart reference (example/mariadb), a path to a chart directory, a packaged chart, or a fully qualified URL. For chart references, the latest version will be used unless the --version flag is set.
To override values in a chart, use either the --values flag and pass in a file, or use the --set flag and pass configuration from the command line; to force string values, use --set-string. In case a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
You can specify the --values/-f flag multiple times. Priority will be given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called 'Test', the value set in override.yaml would take precedence.
For example
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
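To make the precedence concrete, with two minimal values files like the sketch below, the deployed value of Test would be bar, because override.yaml is the right-most file on the command line:

# myvalues.yaml
Test: foo

# override.yaml
Test: bar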
An easier way I follow, especially for pre-existing jobs during helm upgrade, is to first run kubectl delete job db-migrate-job --ignore-not-found.
I am using the https://github.com/helm/charts/tree/master/stable/cert-manager chart.
It happens that I have very frequent INFO logging, and I have not yet been able to find its source. The problem is that the Google Cloud Platform Stackdriver costs keep increasing because of that high volume of logs.
Therefore I'd love to know how I can turn down INFO logging via the Helm chart for cert-manager.
I noticed that the Helm chart for cert-manager from the community charts has been deprecated. The suggested official alternative supports a config option to specify the log level since release v0.7.2; see pull request jetstack/cert-manager#1527.
So please use the official chart like:
$ helm repo add jetstack https://charts.jetstack.io
$ ## Install the cert-manager helm chart
$ helm install --name my-release --namespace cert-manager \
jetstack/cert-manager --set global.logLevel=1
I used HELM to install the Prometheus operator and kube-prometheus into my kubernetes cluster using the following commands:
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring --set rbacEnable=false
helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=false --namespace monitoring
Everything is running fine; however, I want to set up email alerts, and in order to do so I must configure the SMTP settings in the custom.ini file, according to the Grafana website. I am fairly new to Kubernetes and Helm charts, so I have no idea which command I would use to access this file or make updates to it. Is it possible to do so without having to redeploy?
Can anyone provide me with a command to update custom values?
You could pass the grafana.env value to add the SMTP-related settings:
GF_SMTP_ENABLED=true, GF_SMTP_HOST, GF_SMTP_USER and GF_SMTP_PASSWORD
should do the trick. The prometheus-operator chart relies on the upstream stable/grafana chart (although still using the 1.25 version).
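For example, something like the following helm upgrade might set those variables in place, without a full redeploy (a sketch: the grafana.env.* keys and all SMTP values below are assumptions; check the values supported by your chart version):

$ helm upgrade kube-prometheus coreos/kube-prometheus --namespace monitoring \
    --set global.rbacEnable=false \
    --set grafana.env.GF_SMTP_ENABLED=true \
    --set grafana.env.GF_SMTP_HOST=smtp.example.com:587 \
    --set grafana.env.GF_SMTP_USER=alerts@example.com \
    --set grafana.env.GF_SMTP_PASSWORD=secret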