I am following this procedure to deploy Kong in my Kubernetes cluster.
The key installation command there is this:
$ kubectl create -f https://konghq.com/blog/kubernetes-ingress-api-gateway/
It works fine when I create one single Kong deployment, but it doesn't work for two deployments. What would I need to do? I changed the namespace, but realized that some of the resources are created outside of the namespace.
There is no sense in creating 2 Ingress controllers in 1 namespace. If you would like to have multiple ingress rules under 1 namespace, you are welcome to create 1 Ingress controller and multiple rules.
Consider creating 2 Ingress controllers in case you have multiple namespaces.
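A minimal sketch of the single-controller approach, assuming the default kong IngressClass and hypothetical service names, would be one Ingress carrying several rules in the same namespace:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-routes
  namespace: my-namespace          # hypothetical namespace
spec:
  ingressClassName: kong           # class served by the single Kong controller
  rules:
    - http:
        paths:
          - path: /service-a
            pathType: Prefix
            backend:
              service:
                name: service-a    # hypothetical service in the same namespace
                port:
                  number: 80
          - path: /service-b
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80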
For example, check Multiple Ingress in different namespaces
I am trying to setup 2 Ingress controllers in my k8s cluster under 2
namespaces. Reason for this setup: Need one to be public which has
route to only one service that we want to expose. Need another one to
be private and has routes to all services including the internal
services
To deep dive into your issue it would be nice to have logs, errors, etc.
In case you still DO need 2 controllers, I would recommend you check the namespace resource limits (to avoid quota issues) and then try to deploy again.
To check: Multiple kong ingress controller or just one to different environments
Related
I have an application that has 5 microservices, let's say nginx, mysql, phpmyadmin, backend-1 and backend-2.
I want to deploy this application in five different namespaces, let's say project-1, project-2, project-3, project-4 and project-5.
I would like to configure one DNS host "app.dummy.com" to access these applications in this way:
app.dummy.com/project-1/nginx
app.dummy.com/project-1/phpmyadmin
app.dummy.com/project-2/nginx
app.dummy.com/project-2/phpmyadmin
app.dummy.com/project-3/nginx
app.dummy.com/project-3/phpmyadmin
app.dummy.com/project-4/nginx
app.dummy.com/project-4/phpmyadmin
app.dummy.com/project-5/nginx
app.dummy.com/project-5/phpmyadmin
How should I configure the Ingress for nginx and phpmyadmin in each namespace? Because the service names for these services remain the same across all the namespaces.
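A rough sketch of one way to do this, assuming the NGINX ingress controller and its rewrite-target annotation, is one Ingress per namespace carrying that project's path prefixes:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-1-routes
  namespace: project-1             # repeat per namespace with its own prefix
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the /project-1/<svc> prefix
spec:
  ingressClassName: nginx
  rules:
    - host: app.dummy.com
      http:
        paths:
          - path: /project-1/nginx(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx
                port:
                  number: 80
          - path: /project-1/phpmyadmin(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: phpmyadmin
                port:
                  number: 80

Because each Ingress lives in its own namespace, the identical service names don't clash; the namespace of the Ingress decides which nginx or phpmyadmin service the rule resolves to.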
Is it possible to run multiple IngressController in the same Namespace with the same IngressClass?
I have multiple IngressController with different LoadBalancer IP Addresses and would like to continue with this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting a resource - your IngressClass - that belongs to another Helm release.
One way to work around this may be to use Helm's --dry-run option. Once you have the list of objects written into a file, remove the IngressClass, then apply that file.
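A minimal sketch of that workaround, assuming the ingress-nginx chart and the release name from the error message (helm template renders the same manifests that --dry-run would show, without installing anything):

helm template nginx-ingress-lb-02 ingress-nginx/ingress-nginx \
  -f values-lb-02.yaml > rendered.yaml     # values file name is hypothetical

# remove the IngressClass document from rendered.yaml, then apply the rest
kubectl apply -f rendered.yaml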
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik Helm chart, I know that we install IngressClasses named after the Traefik deployment we operate. The chart you're using for Nginx apparently does not implement support for that scenario, which doesn't mean it shouldn't work.
Now, answering your first question - is it possible to run multiple Ingress Controllers in the same namespace with the same IngressClass: yes.
You may have several Ingress Controllers, one that watches for Ingresses in namespace A, another in namespace B, both referencing the same class. Deploying those controllers into the same namespace is possible, although implementing NetworkPolicies and isolating your controllers into their own namespaces would help in distinguishing who's who.
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create the IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
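For instance, a values sketch for the second and later releases could look like this (everything else stays the same; this only skips re-creating the class):

controller:
  ingressClassResource:
    enabled: false                            # don't re-create the IngressClass "nginx"
    controllerValue: "k8s.io/ingress-nginx"   # still attached to the controller, as noted above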
I am trying to create an architecture where every deployment deploys with a cluster IP and the rule get automatically added to the ingress rule as a new path.
My initial thinking was to give the Deployment a ServiceAccount that has access to manage Ingress rules; before the main pod runs, an init container would fetch the Ingress YAML and add the rule, and on deletion maybe remove it as well?
But the more I think about it, the more loopholes come to mind. For example: what happens when 2 Deployments start at the same time? And things like that.
Any idea on how to tackle this would be appreciated.
My background: I am a cloud engineer trying to shift to DevOps, have beginner-intermediate level knowledge of Kubernetes.
I am trying to create an architecture where every deployment deploys with a cluster IP and the rule gets automatically added to the ingress rule as a new path.
A few options:
Init container (You figured this one out and mentioned it in your question)
Add an init container to your Deployment which will add the desired rule to your Ingress (see the sketch after this list)
Lifecycle hooks
Add a postStart lifecycle hook which will be executed once your pod is up and running and will update the Ingress rules
CronJob
Add a CronJob which will "scan" for changes and will update the Ingress again
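For the init-container option, a hypothetical fragment of the Deployment's pod template (the Ingress name, path, and service are placeholders, and the pod's ServiceAccount is assumed to have RBAC permission to get and patch Ingresses):

# Hypothetical init container that appends a path to an existing Ingress
# named "shared-ingress"; all names below are placeholders.
initContainers:
  - name: register-ingress-path
    image: bitnami/kubectl:latest
    command:
      - /bin/sh
      - -c
      - |
        kubectl patch ingress shared-ingress --type=json -p '[
          {"op": "add",
           "path": "/spec/rules/0/http/paths/-",
           "value": {"path": "/my-app",
                     "pathType": "Prefix",
                     "backend": {"service": {"name": "my-app",
                                             "port": {"number": 80}}}}}]'

Note that this still doesn't solve the race you mention: two Deployments patching the same Ingress at the same time can collide, which is one reason the CronJob option can be easier to reason about.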
Can anyone point me to a common strategy for setting up a Kubernetes cluster according to the principles of infrastructure as code and automated deployment, for different developer teams with their own Git repos and an as-yet-undefined CI/CD platform?
Let's say I am going to use Terraform to deploy a Kubernetes cluster on a hypothetical cloud service named QKS with a commonly used service, for example Apache Airflow, for which a public helm chart is available. There are two custom services (from two independent developer groups) to deploy named "apples" and "bananas".
I am struggling with the separation of responsibilities between the different code bases. Which steps in this process can best still be done manually? A lot is being written about this technology, but I cannot find any articles on this issue in particular.
This is my own proposal.
Have three git repositories:
my-infrastructure: includes the Terraform files, the Airflow Helm deployment, and the creation of two namespaces including access roles to those namespaces. CI/CD tracks changes and deploys them on QKS.
apples: code base and corresponding Helm templates. CI/CD can deploy to the apples namespace only.
bananas: code base and corresponding Helm templates. CI/CD can deploy to the bananas namespace only.
Notes:
subdivision of the cluster into namespaces is obvious;
all secrets and authorization tokens for the namespaces can be created via Terraform using the Terraform Kubernetes provider:
https://www.terraform.io/docs/providers/kubernetes/r/secret.html
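As an illustration of the per-team namespace plus access role idea (names are hypothetical; the same objects can also be produced through the Terraform kubernetes provider), the apples namespace and its deploy permissions would look roughly like this:

apiVersion: v1
kind: Namespace
metadata:
  name: apples
---
# Bind a hypothetical apples CI/CD ServiceAccount to the built-in "edit" ClusterRole,
# scoped to the apples namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: apples-deployer
  namespace: apples
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: apples-cicd
    namespace: apples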
There is an interesting Kubernetes project for this called cluster-api that lets you create, configure and manage Kubernetes clusters in a declarative fashion, in a way similar to how we manage different resources in Kubernetes itself. It defines new resource kinds such as Cluster and Machine.
e.g. You could define a cluster like this:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
Of course, you would need a starting / bootstrap Kubernetes cluster where you will deploy this resource. This project is still in the prototype stage, so use caution.
Check out the cluster-api repository on Github: https://github.com/kubernetes-sigs/cluster-api
I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployment. I am seeing two different commands which look like they do similar things.
The below command is from google code lab (URL: https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7 )
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
Another command appears in a different place, on the Kubernetes site itself (https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/)
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Now as per my understanding, both commands create LoadBalancer services from deployments and expose them to the outside world.
I don't think there would be two separate commands for the same task; there should be some difference that I am not able to understand.
Would anyone please clarify this to me?
There are cases where the expose command is not sufficient and your only practical option is to use create service.
Overall there are 4 different types of Kubernetes Services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.
The types of Kubernetes services are:
ClusterIP
NodePort
LoadBalancer
ExternalName
So for example, in the case of the NodePort type service, let's say we wanted to set a node port with value 31888:
Example 1:
In the following command there is no argument for the node port value, the expose command creates it automatically:
kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80
The only way to set the node port value is to edit the service after it has been created, using the edit command: kubectl edit service demo
Example 2:
In this example the create service nodeport is dedicated to creating the NodePort type and has arguments to enable us to control the node port value:
kubectl create service nodeport demo --tcp=8080:80 --node-port=31888
In Example 2 the node port value is set on the command line and there is no need to manually edit the value as in the case of Example 1.
Important :
The create service [service-name] command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.
To set the selector labels to target specific pods you will need to follow up the create service [service-name] with the set selector command :
kubectl set selector service [NAME] [key1]=[value1]
So for Example 2 above, if you want the service to work with a deployment whose pods are labeled myapp: hello, then this is the follow-up command needed:
kubectl set selector service demo myapp=hello
The main differences can be seen from the docs.
1.- kubectl create command
Create a resource from a file or from stdin.
JSON and YAML formats are accepted.
2.- kubectl expose command
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or
pod by name and uses the selector for that resource as the selector
for a new service on the specified port. [...]
Even though both achieve the same thing in the examples you provided, the create command is the more general one: with it you can create any resource, either from command-line arguments or from a YAML/JSON file. The expose command, however, will only create a Service resource, and it is mainly used to expose other, already existing resources.
Source: K8s Docs
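For illustration, a Service manifest roughly equivalent to the expose command from the question, which could be fed to kubectl create -f (the selector and port are assumptions here; expose copies them from the hello-world Deployment, while create -f requires them to be written out):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: hello-world      # assumed pod label; expose would copy the Deployment's selector
  ports:
    - port: 8080          # assumed port; expose takes it from --port or the container port
      targetPort: 8080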
I hope this helps a little. The key here is to understand the difference between Services and Deployments. As per this link [1], you will notice that a Deployment deals with the mortality of Pods automatically. However, if a Pod is terminated and another is spun up, how do the Pods continue to communicate when their IPs change? They use Services: "a Service is an abstraction which defines a logical set of Pods and a policy by which to access them". Additionally, it may be of interest to view this link [2], as it describes that the kubectl expose command creates a Service which in turn creates an external IP and a load balancer. As a beginner it may help to review the command language used with Kubernetes; this link [3] describes (as mentioned in another answer) that the kubectl create command is used to be more specific about the objects it creates, and with the create command you can create a larger variety of objects.
[1] Service: https://kubernetes.io/docs/concepts/services-networking/service/
[2] Deploying a containerized web application: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_6_expose_your_application_to_the_internet
[3] How to create objects: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/#how-to-create-objects
From my understanding, approach 1 (using create service) just creates the Service object, and since a label selector is not specified it does not necessarily have any underlying target pods. But in approach 2 (using expose deployment), the Service load-balances across all the pods created by the Deployment, because the required labels are attached to the Service automatically.
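One quick way to see that difference in practice is to compare the selectors and endpoints of the two services:

kubectl get services -o wide     # the SELECTOR column shows what each service targets
kubectl get endpoints            # shows which pod IPs each service actually points at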