I am trying to move our app deployment to Helm and am facing an obstacle with injecting Istio into it. We do not have namespace-wide Istio injection enabled, so we have to inject it only for specific apps.
Tried googling and nothing came up. Has anyone come across this issue?
So far, we were running a shell script directly through Ansible for injecting and deploying the app, which cannot be used with Helm.
I am not an Istio expert, but here is what I have found:
1 - Installing the Sidecar / More control: it can be helpful in this case to reuse specific Helm labels in the injection policy, for example:
policy: enabled
neverInjectSelector:
  - matchExpressions:
    - {key: openshift.io/build.name, operator: Exists}
2 - Dynamic Admission Webhooks, in order to change the default settings during deployments,
3 - Helm templating customization + annotation, post-processing (labeling) - see the sketch after this list:
annotations:
  sidecar.istio.io/inject: "true"
4 - Helm Inject Plugin,
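For option 3, a minimal sketch of what this could look like in a chart's Deployment template, assuming a hypothetical istioInject chart value (note the annotation has to go on the Pod template, not on the Deployment itself):
# templates/deployment.yaml (hypothetical excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
      annotations:
        # "istioInject" is an assumed values key, e.g. --set istioInject=true
        sidecar.istio.io/inject: {{ .Values.istioInject | default "true" | quote }}
    spec:
      containers:
        - name: app
          image: {{ .Values.image }}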
Please let me know if this helps.
I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can run kubectl edit svc and edit the labels & selector there.
It's fine; however, the declarative YAML option is suggested because kubectl edit is an error-prone approach - you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: you have a declarative YAML file handy, kept under version control if you are managing it that way.
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if the Service already exists. But as @HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or in principle you can do it with sed), and swapping the deployment would result in an extra commit. You can use a tool like Helm that supports an extra templating layer:
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt or overlay layers like Kustomize. The Service's selector: doesn't have to match its own metadata, and you could create a Service that matches both copies of the application, maybe for a canary pattern rather than a blue/green deployment.
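For example, a sketch of a Service that selects Pods from both colors simply by omitting the example.com/deployment label from its selector (the name here is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: myapp-all            # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: myapp   # matches both the blue and green Pods
  ports:
    - port: 80
      targetPort: 80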
The hazelcast members are able to communicate with each other in non-kube environments. However, the same is not happening in Kube environments. The pod is not able to resolve its own domain.
I am working with Spring Boot 2.1.8 and Hazelcast 3.11.4 (TcpIp config)
Hazelcast configuration:
Config config = new Config();
config.setInstanceName("hazelcast-instance")
      .setGroupConfig(new GroupConfig(hazelcastGroupName, hazelcastGroupPassword))
      .setProperties(hzProps)
      .setNetworkConfig(
          new NetworkConfig()
              .setPort(hazelcastNetworkPort)
              .setPortAutoIncrement(hazelcastNetworkPortAutoIncrement)
              .setJoin(
                  new JoinConfig()
                      .setMulticastConfig(new MulticastConfig().setEnabled(false))
                      .setAwsConfig(new AwsConfig().setEnabled(false))
                      .setTcpIpConfig(new TcpIpConfig().setEnabled(true).setMembers(memberList))))
      .addMapConfig(initReferenceDataMapConfig());
return config;
Members definition in Statefulset config file:
- name: HC_NETWORK_MEMBERS
  value: project-0.project-ha, project-1.project-ha
Headless service config:
---
apiVersion: v1
kind: Service
metadata:
  namespace: {{ kuber_namespace }}
  labels:
    app: project
  name: project-ha
spec:
  clusterIP: None
  selector:
    app: project
Errors:
Resolving domain name 'project-0.project-ha' to address(es): [10.42.30.215]
Cannot resolve hostname: 'project-1.project-ha'
Members {size:1, ver:1} [
Member [10.42.11.173]:5001 - d928de2c-b4ff-4f6d-a324-5487e33ca037 this
]
The other pod has a similar error.
Fun facts:
It works fine when I set the full hostname in the HC_NETWORK_MEMBERS var. For example: project-0.project-ha.namespace.svc.cluster.local, project-1.project-ha.namespace.svc.cluster.local
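For reference, that working form of the env var (using the values from above; "namespace" is a placeholder for the actual namespace) looks like this in the StatefulSet:
- name: HC_NETWORK_MEMBERS
  value: project-0.project-ha.namespace.svc.cluster.local, project-1.project-ha.namespace.svc.cluster.local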
There is a discovery plugin for Kubernetes, see here, and various "how to" guides for Kubernetes here.
If you use that, most of your problems should go away. The plugin looks up the member list from Kubernetes' DNS.
If you can, it's worth upgrading to Hazelcast 4.2, as 3.11 is not the latest. Make sure to get a matching version of the plugin.
On 4.2, discovery has auto-detection, so it will do its best to help.
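As a rough sketch of what the member configuration could look like with Kubernetes discovery on Hazelcast 4.x (declarative hazelcast.yaml form; the namespace value is an assumption, the service name is taken from the question):
# hazelcast.yaml - switches from a static TCP/IP member list to Kubernetes
# discovery; members are found via the headless service.
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: false
      kubernetes:
        enabled: true
        namespace: my-namespace     # assumption - use your actual namespace
        service-name: project-ha    # the headless service from the question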
Please check the related guides:
Hazelcast Guide: Hazelcast for Kubernetes
Hazelcast Guide: Embedded Hazelcast on Kubernetes
For embedded Hazelcast, you need to use the Hazelcast Kubernetes plugin. For client-server, you can use TCP-IP configuration on the member side and use the Hazelcast service name as the static DNS. It will be resolved automatically.
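A minimal sketch of that member-side TCP-IP variant, again in declarative YAML form and assuming the project-ha headless service from the question:
# hazelcast.yaml - TCP-IP join using the headless service name; the DNS name
# resolves to the member Pod IPs.
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - project-ha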
I was trying to do some configuration on my cluster and I found out that there's an object called NodeConfig with these fields:
apiVersion: acm.vmware.com/v1alpha1
kind: NodeConfig
spec:
  config: |
    nicNaming:
      - match:
          deviceLabel: Ethernet1
        targetName: XXXXX
      - match:
          deviceLabel: Ethernet2
        targetName: XXXXX
      - match:
          deviceLabel: Ethernet3
        targetName: XXXXX
I suppose a ConfigMap does the same thing, or is there still a difference?
NodeConfig is a Custom Resource Definition (CRD) under the Node Operator created by VMware. According to VMware's definition here, NodeConfig is used to define node configuration under VMware's cloud platform.
ConfigMap is a built-in Kubernetes object for storing the configuration your application needs.
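For contrast, a plain ConfigMap just holds arbitrary configuration data for your workloads (the names and values below are illustrative):
# Illustrative ConfigMap - plain key/value configuration that an application
# can consume as environment variables or mounted files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"
  application.properties: |
    server.port=8080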
They are two totally different things in general. A CRD is a way to extend the functionality of Kubernetes: there is a custom controller for reconciliation, in other words, for handling the CRUD logic of the resource. You can use your own controller to add features to Kubernetes. In your case, VMware uses the CRD to let you configure the nodes within the cluster.
I have an nginx ingress controller installed via GitLab managed apps.
I would like to disable HSTS for subdomains. I know I can disable it via a custom ConfigMap (https://kubernetes.github.io/ingress-nginx/user-guide/tls/).
But I don't know where to place this ConfigMap and how to name it so the GitLab ingress will pick it up.
So what I did in the end is using: https://docs.gitlab.com/ee/user/clusters/applications.html#install-using-gitlab-cicd.
So the gitlab-managed-apps are not managed via the UI but with a "cluster management project".
Now I don't have to figure out how to place that ConfigMap in my cluster (and how to name it); I can just configure the ingress controller (and everything else) via the Helm chart with a simple values.yaml.
I just cloned the https://gitlab.com/gitlab-org/cluster-integration/example-cluster-applications/ example and added:
# .gitlab/managed-apps/ingress/values.yml
controller:
  replicaCount: 1
  config:
    hsts-include-subdomains: "false"
So this is still an alpha feature but for now it works well for me :-)
I created a new chart with 2 PodPresets and 2 Deployments. When I run helm install, the Deployment (Pod) object is created first and then the PodPresets, hence the values from the PodPresets are not applied to the Pods. But when I manually create the PodPresets first and then the Deployments, the presets are applied properly. Is there a way I can specify in Helm which object should be created first?
Posting this as Community Wiki for better visibility, as the answer was provided in comments below another answer made by @Rastko.
PodPresets
A Pod Preset is an API resource for injecting additional runtime
requirements into a Pod at creation time. Using a Pod Preset allows
pod template authors to not have to explicitly provide all information
for every pod. This way, authors of pod templates consuming a specific
service do not need to know all the details about that service.
For more information, please check official docs.
Order of deploying objects in Helm
Order of deploying is hardcoded in Helm. List can be found here.
In addition, if a resource is not in the list, it will be created last.
Answer to the question from the comments
To achieve an order different from the default one, you can create two Helm charts, where the one with the Deployments is executed afterwards, with a pre-install hook making sure that the presets are already in place.
A pre-install hook executes after templates are rendered, but before any resources are created in Kubernetes.
This workaround was mentioned on a GitHub thread. Example for a Service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    "helm.sh/hook": "pre-install"
As additional information, there is a possibility to define a weight for a hook, which will help build a deterministic execution order.
annotations:
  "helm.sh/hook-weight": "5"
For more details regarding this annotation, please check this Stack Overflow question.
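Putting the two annotations together, a hypothetical pre-install PodPreset in the chart could look like this (names and values are illustrative):
# Hypothetical PodPreset registered as a pre-install hook so it exists before
# the Deployments are created; lower hook weights run first.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: my-preset
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "5"
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"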
Since you are using Helm charts and have full control of this part, why not make optional parts in your helm charts that you can activate with an external value?
This would be a much more "Helm-native" way:
{{- if eq .Values.prodSecret "enabled"}}
- name: prod_db_password
  valueFrom:
    secretKeyRef:
      name: prod-db-password   # Secret names cannot contain underscores
      key: password
{{- end}}
Then you just need to add --set prodSecret=enabled when executing your Helm chart.
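Equivalently, the flag could live in a hypothetical values file passed with -f, e.g.:
# values-prod.yaml (hypothetical) - enables the secret block in the template above
prodSecret: "enabled"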