I was trying to configure my cluster and found out that there's an object called NodeConfig with the following fields:
apiVersion: acm.vmware.com/v1alpha1
kind: NodeConfig
spec:
  config: |
    nicNaming:
      - match:
          deviceLabel: Ethernet1
        targetName: XXXXX
      - match:
          deviceLabel: Ethernet2
        targetName: XXXXX
      - match:
          deviceLabel: Ethernet3
        targetName: XXXXX
I suppose a ConfigMap does the same thing, or is there still a difference?
NodeConfig is a Custom Resource Definition (CRD) belonging to the Node Operator created by VMware. According to VMware's definition here, NodeConfig is used to define the Node configuration under VMware's cloud platform.
ConfigMap is a built-in Kubernetes object for storing the configuration your applications need.
In general, they are two totally different things. A CRD is a way to extend the functionality of Kubernetes: there is a custom controller doing the reconciliation, in other words handling the CRUD logic of the resource, and you can write your own controller to add features to Kubernetes. In your case, VMware uses the CRD to let you configure the Node within the cluster.
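For contrast, a plain ConfigMap only stores data, and nothing acts on it unless an application or controller reads it. A minimal sketch (the name and keys below are made up purely for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nic-naming-config          # hypothetical name
data:
  nicNaming.conf: |
    Ethernet1=XXXXX
    Ethernet2=XXXXX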
Related
I use deployment.yaml kind: Deployment and specify sidecar.istio.io/proxyCPU: "100m" and other sidecar annotations in .spec.template.metadata.annotations (roughly as in the sketch after these questions). The problem is that this requires me to repeat the same annotations in every application, and there are too many of them.
I understand there is a global MeshConfig in the istio-system namespace where we can set these defaults, but we are in a multi-tenant environment and objects in that namespace are not under our control.
Is there a way to specify these defaults at the namespace level, so that when sidecars (istio-proxy) get injected alongside the app in the pod, they pick up these defaults?
Why does kind: Sidecar, which is deployed for the mesh, not allow these annotations?
Is there a way to override the default MeshConfig in a single place rather than in every deployment spec?
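For reference, this is roughly what the per-Deployment annotation approach described above looks like; the deployment name, labels, and image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # sidecar resource annotation currently repeated in every Deployment
        sidecar.istio.io/proxyCPU: "100m"
    spec:
      containers:
        - name: my-app
          image: my-app:latest     # placeholder image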
I have a few external CRDs with old apiVersion applied in the cluster, and operators based on those CRDs deployed.
As stated in the official docs about Kubernetes API and feature removals in 1.22:
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. If you defined any custom resources in your cluster, those are still served after you upgrade.
Based on the quote, does it mean I could leave those apiextensions.k8s.io/v1beta1 CRDs in the cluster? Will controllers/operators continue to work normally?
The custom resources will still be served after you upgrade
Suppose we define a resource called mykind
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mykinds.grp.example.com
spec:
  group: grp.example.com
  names:
    plural: mykinds
    singular: mykind
    kind: Mykind
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: true
Then, on any cluster where this has been applied I can always define a mykind resource:
apiVersion: grp.example.com/v1beta1
kind: Mykind
metadata:
  name: mykind-instance
And this resource will still be served normally after upgrade even if the CRD for mykind was created under v1beta1.
However, anything in the controller / operator code referencing the v1beta1 CRD API won't work. For example, this could be applying the CRD itself (if your controller has permissions to do that). That's something to watch out for if your operator is managed by the Operator Lifecycle Manager. Watching for changes in the CRs, however, is unaffected by the upgrade.
So if your controller / operator isn't watching CustomResourceDefinitions then technically you can leave these CRDs on the cluster and your operator will work as normal. But you won't be able to uninstall + reinstall should you need to.
Another thing to explore, though, is whether and how this might affect your ability to bump API versions later; a sketch of the v1 form of the CRD follows below.
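For reference, a v1 equivalent of the CRD above might look something like this sketch; the permissive schema (x-kubernetes-preserve-unknown-fields) is only an assumption to keep the example short, and you would normally write a proper structural schema instead:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mykinds.grp.example.com
spec:
  group: grp.example.com
  names:
    plural: mykinds
    singular: mykind
    kind: Mykind
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # v1 requires a structural schema; preserving unknown fields is
        # only a stopgap while you define the real schema
        x-kubernetes-preserve-unknown-fields: true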
I have a configuration file as part of the Kubernetes ConfigMap data section. Whenever the content of the configuration file (ConfigMap data:) changes, there should be some trigger/listener resulting in an invocation. (As part of this invocation, I need to implement some code that restarts certain service objects.)
Is there some Kubernetes configuration available that can be used to set up such a listener for a ConfigMap?
ConfigMap sample:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-glare-configurations
data:
  glare.conf.template: |
    use_stderr = false
    bind_port = 9494
    default_api_limit = {{ .Values.configruations.default_api_limit | default 150 }}
There is no built-in support in Kubernetes for this feature at the time of writing this answer, as far as I know.
Update:
You can use one of the supported client libraries and write a custom listener.
Here is an example of how to watch for pods using the Python client library.
The section below is useful for providing rolling updates to objects like Deployments when the resources they reference, such as a ConfigMap or Secret, change.
There is a feature request in the works for this requirement.
You can have a look at Reloader, which is a custom controller built exactly for this requirement; a sketch of how it is typically wired up follows below.
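As a sketch, Reloader is usually enabled with an annotation on the workload. The annotation key is taken from Reloader's documentation, so verify it against the version you install; the deployment and ConfigMap names here are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: glare                                     # placeholder name
  annotations:
    # ask Reloader to roll this Deployment whenever the referenced ConfigMap changes
    configmap.reloader.stakater.com/reload: "glare-configurations"
spec:
  selector:
    matchLabels:
      app: glare
  template:
    metadata:
      labels:
        app: glare
    spec:
      containers:
        - name: glare
          image: glare:latest                     # placeholder image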
I want to use the same hostname, e.g. example.com, in two different namespaces with different paths, e.g. in namespace A I want example.com/clientA and in namespace B I want example.com/clientB. Any ideas on how to achieve this?
nginxinc has a Cross-Namespace Configuration feature that allows you to do exactly what you described (see the sketch after the quoted documentation below).
You can also find prepared examples there with deployments, services, etc.
The only thing you most probably won't like: nginxinc is not free.
Also look here:
Cross-namespace Configuration
You can spread the Ingress configuration for a common host across multiple Ingress resources using Mergeable Ingress resources. Such resources can belong to the same or different namespaces. This enables easier management when using a large number of paths. See the Mergeable Ingress Resources example on our GitHub.
As an alternative to Mergeable Ingress resources, you can use VirtualServer and VirtualServerRoute resources for cross-namespace configuration. See the Cross-Namespace Configuration example on our GitHub.
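A rough sketch of the Mergeable Ingress approach for the example.com/clientA and example.com/clientB case follows. The nginx.org/mergeable-ingress-type annotation comes from the nginxinc documentation, while the namespaces, service name, port, and ingress class are assumptions:

# master: owns the host and defines no paths
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-master
  namespace: default                 # assumed namespace
  annotations:
    nginx.org/mergeable-ingress-type: "master"
spec:
  ingressClassName: nginx            # assumed ingress class
  rules:
    - host: example.com
---
# minion in namespace A: contributes the /clientA path for the same host
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-minion-clienta
  namespace: namespace-a             # placeholder namespace
  annotations:
    nginx.org/mergeable-ingress-type: "minion"
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /clientA
            pathType: Prefix
            backend:
              service:
                name: client-a-svc   # placeholder service
                port:
                  number: 80         # assumed port

A second minion in namespace B would mirror the one above with /clientB and its own backend service.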
If you do not want to change your default ingress controller (nginx-ingress), another option is to define a service of type ExternalName in your default namespace that points to the full internal service name of the service in the other namespace.
Something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-svc
  name: webapp
  namespace: default
spec:
  externalName: my-svc.my-namespace.svc # <-- put your service name with namespace here
  type: ExternalName
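You can then point an Ingress in your own namespace at that ExternalName service, something like the sketch below; the path mirrors the question above, and the assumption that the target service listens on port 80 is a placeholder:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientb-ingress              # placeholder name
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /clientB
            pathType: Prefix
            backend:
              service:
                name: webapp         # the ExternalName service defined above
                port:
                  number: 80         # assumed port of the target service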
I have deployments for two HAProxy ingress controllers in two different namespaces on the same cluster, but the HAProxy ingress controller functionality is misbehaving. I just wanted to know whether I can create two HAProxy deployments in two namespaces and use them without problems.
Yes, you can.
In order to achieve this, you should use correctly configured RBAC and the --namespace-whitelist flag to point each controller at the correct namespace.
The HAProxy documentation says you can whitelist/blacklist the namespaces HAProxy watches:
--namespace-whitelist
The controller watches all namespaces, but you can specify a specific namespace to watch. You can specify this setting multiple times.
--namespace-blacklist
The controller watches all namespaces, but you can blacklist a namespace that you do not want to watch for changes. You can specify this setting multiple times.
You can customize the ingress controller Deployment resource in the file haproxy-ingress.yaml in the repository https://github.com/haproxytech/ by adding any of the following arguments under the section spec.template.spec.containers.args.
Your deployment should look like this:
spec:
  serviceAccountName: haproxy-ingress-service-account
  containers:
    - name: haproxy-ingress
      image: haproxytech/kubernetes-ingress
      args:
        - --default-ssl-certificate=default/tls-secret
        - --configmap=default/haproxy-configmap
        - --default-backend-service=haproxy-controller/ingress-default-backend
        - --namespace-whitelist=mynamespace1
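The second controller's Deployment in the other namespace would then carry its own whitelist; a minimal sketch of the relevant part (the namespace name is a placeholder):

spec:
  serviceAccountName: haproxy-ingress-service-account
  containers:
    - name: haproxy-ingress
      image: haproxytech/kubernetes-ingress
      args:
        # each controller instance only watches its own namespace
        - --namespace-whitelist=mynamespace2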