I have an application that consists of 5 microservices, let's say nginx, mysql, phpmyadmin, backend-1 and backend-2.
I need to deploy this application in five different namespaces, let's say project-1, project-2, project-3, project-4 and project-5.
I would like to configure one DNS host, "app.dummy.com", to access these applications like this:
app.dummy.com/project-1/nginx
app.dummy.com/project-1/phpmyadmin
app.dummy.com/project-2/nginx
app.dummy.com/project-2/phpmyadmin
app.dummy.com/project-3/nginx
app.dummy.com/project-3/phpmyadmin
app.dummy.com/project-4/nginx
app.dummy.com/project-4/phpmyadmin
app.dummy.com/project-5/nginx
app.dummy.com/project-5/phpmyadmin
How should I configure the Ingress for nginx and phpmyadmin in each namespace, given that the service names stay the same across all the namespaces?
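A minimal per-namespace sketch, assuming an NGINX ingress controller and a rewrite rule that strips the /project-N prefix (the service ports and the rewrite annotation are assumptions, not taken from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: project-1          # repeat this manifest in project-2 ... project-5
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.dummy.com
    http:
      paths:
      - path: /project-1/nginx(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx         # the same service name in every namespace is fine;
            port:               # an Ingress only resolves services in its own namespace
              number: 80
      - path: /project-1/phpmyadmin(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: phpmyadmin
            port:
              number: 80        # assumed port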
Related
I have an svc running, e.g. my-svc-1, and I run a deployment that creates an svc with the same name, my-svc-1. What would happen?
A Service name must be unique within a namespace, but not across namespaces. The same uniqueness rule applies to all namespaced objects, for example Deployments, Services, Secrets, etc. So in your case, a second my-svc-1 in the same namespace will be rejected as already existing (with kubectl apply it would simply update the existing Service).
Namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces.
If you really need the same service with a different software version, you can create another namespace, named B for example, and then create a service named my-svc-1 in it.
working-with-objects-namespaces
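A minimal sketch to see this behaviour, using hypothetical lowercase namespace names (namespace names must be lowercase DNS labels):

$ kubectl create namespace ns-a
$ kubectl create namespace ns-b
$ kubectl create service clusterip my-svc-1 --tcp=80:8080 -n ns-a
$ kubectl create service clusterip my-svc-1 --tcp=80:8080 -n ns-b    # fine: different namespace
$ kubectl create service clusterip my-svc-1 --tcp=80:8080 -n ns-a    # fails: services "my-svc-1" already exists

Inside the cluster the two remain distinct, reachable as my-svc-1.ns-a.svc.cluster.local and my-svc-1.ns-b.svc.cluster.local.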
I am following this procedure to deploy konghq in my Kubernetes cluster.
The key installation command there is this:
$ kubectl create -f https://konghq.com/blog/kubernetes-ingress-api-gateway/
It works fine when I create one single konghq deployment, but it doesn't work for two deployments. What would I need to do? I changed the namespace but realized that a number of the resources are created outside of that namespace.
There is no sense in creating 2 ingress controllers under 1 namespace. If you would like to have multiple ingress rules under 1 namespace, you are welcome to create 1 ingress controller and multiple rules.
Consider creating 2 ingress controllers in case you have multiple namespaces.
For example, check Multiple Ingress in different namespaces
I am trying to set up 2 Ingress controllers in my k8s cluster under 2 namespaces. Reason for this setup: need one to be public, which has a route to only the one service that we want to expose, and need another one to be private, which has routes to all services including the internal services.
To dig deeper into your issue it would be nice to have logs, errors, etc.
In case you still DO need 2 controllers, I would recommend you adjust the namespace resource limits (to avoid issues) and then try deploying again.
To check: Multiple kong ingress controller or just one to different environments
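If you do go with two controllers, a hedged sketch of how they are kept apart (the namespaces, names and backend service here are hypothetical; --ingress-class is the container argument the Kong ingress controller takes):

# Controller deployed in the "public" namespace runs with:
#   - --ingress-class=kong-public
# Controller deployed in the "private" namespace runs with:
#   - --ingress-class=kong-private

# Each Ingress is then claimed by exactly one controller via the annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
  namespace: public
  annotations:
    kubernetes.io/ingress.class: "kong-public"
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: exposed-service   # hypothetical service
            port:
              number: 80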
I recently successfully deployed my Vue.js webapp to Cloud Run. Beforehand the webapp was deployed by a Kubernetes Deployment and Service, and I also had an Ingress running that redirected my HTTP requests to that Service. Now Cloud Run takes over the work.
Unfortunately, the new Cloud Run driven Knative "Service" does not seem to work anymore.
My Ingress is showing me the following error message:
(Where importer-controlroom is my application's name)
The error message is not comprehensible to me. I will try to provide some more information with which you may be able to help me out with this issue.
This is the current list of resources that have been created. I was especially looking at the importer-controlroom-frontend ExternalName; I think this is the Service that replaced the old one?
I used its name in my Ingress rules to map it to a domain, as you can see here:
The error message in the Ingress says:
could not find port "80" in service "dev/importer-controlroom-frontend"
However the Cloud Run revision shows that port 80 is being provided:
A friend of mine redirected me to this article: https://cloud.google.com/solutions/integrating-https-load-balancing-with-istio-and-cloud-run-for-anthos-deployed-on-gke?hl=de#handling_health_check_requests
Unfortunately I have no idea what it is talking about. It is true that we are using Istio, but I did not configure it and have a very hard time getting my head around it for this particular case.
INFO_1
Dockerfile contains:
EXPOSE 80
CMD [ "http-server", "dist", "-p 80"]
Cloud Run for Anthos apps do not work with a GKE Ingress.
Knative services are exposed through a public gateway service called istio-ingress in the gke-system namespace:
$ kubectl get svc -n gke-system
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingress LoadBalancer 10.4.10.33 35.239.55.104
Domain names etc. work very differently on Cloud Run for Anthos, so make sure to read the docs on that.
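As a hedged sketch of how to inspect this routing (the dev namespace is taken from the error message above; ksvc is the short name Knative registers for its Services):

# List the Knative Services and the URLs they are served on
$ kubectl get ksvc -n dev
# The Istio gateway service that actually fronts them
$ kubectl get svc istio-ingress -n gke-system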
I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployments. I am seeing two different commands which look like they do similar things.
The command below is from a Google codelab (URL: https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7 )
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
The other command appears in a different place, on the Kubernetes site (https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/)
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
As per my understanding, both commands create LoadBalancer services from deployments and expose them to the outside world.
I don't think there would be two separate commands for the same task, so there should be some difference that I am not able to understand.
Would anyone please clarify this for me?
There are cases where the expose command is not sufficient and your only practical option is to use create service.
Overall there are 4 different types of Kubernetes Services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.
The types of Kubernetes services are:
ClusterIP
NodePort
LoadBalancer
ExternalName
So, for example, in the case of a NodePort type service, let's say we want to set the node port to 31888:
Example 1:
In the following command there is no argument for the node port value; the expose command assigns it automatically:
kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80
The only way to set the node port value is to edit the service after it has been created, using the edit command: kubectl edit service demo
Example 2:
In this example, create service nodeport is dedicated to creating NodePort type services and has an argument that lets us control the node port value:
kubectl create service nodeport demo --tcp=8080:80 --node-port=31888
In Example 2 the node port value is set on the command line, and there is no need to manually edit it afterwards as in Example 1.
Important:
The create service [service-name] command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.
To set the selector labels to target specific pods, you will need to follow up the create service [service-name] with the set selector command:
kubectl set selector service [NAME] [key1]=[value1]
So for the Example 2 case above, if you want the service to work with a deployment whose pods are labeled myapp: hello, this is the follow-up command needed:
kubectl set selector service demo myapp=hello
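As a hedged follow-up check on the demo example above, you can confirm the selector took effect by looking at the service's endpoints and spec:

# Should now list the IPs of the pods labeled myapp=hello
$ kubectl get endpoints demo
# Shows the selector that was applied
$ kubectl get service demo -o jsonpath='{.spec.selector}'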
The main differences can be seen from the docs.
1.- kubectl create command
Create a resource from a file or from stdin.
JSON and YAML formats are accepted.
2.- kubectl expose command
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or
pod by name and uses the selector for that resource as the selector
for a new service on the specified port. [...]
Even though both achieve the same thing in the examples you provided, the create command is the more general one: with it you can create any resource, either from the command line or from a YAML/JSON file. The expose command, however, will only create a Service resource, and it's mainly used to expose other, already existing resources.
Source: K8s Docs
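A hedged illustration of that difference, reusing the commands from the question (the manifest file name is hypothetical): create builds the Service from what you specify, while expose derives it from an existing resource:

# create: you describe the Service yourself, from flags or a manifest file
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
$ kubectl create -f my-service.yaml

# expose: the Service is derived from an existing deployment's selector and ports
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service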
I hope this helps a little. The key here is to understand the difference between Services and Deployments. As per this link [1], you will notice that a Deployment deals with the mortality of Pods automatically. However, if a Pod is terminated and another is spun up, how do the Pods continue to communicate when their IPs change? They use Services: "a Service is an abstraction which defines a logical set of Pods and a policy by which to access them".
Additionally, it may be of interest to view this link [2], as it describes how the kubectl expose command creates a service which in turn creates an external IP and a load balancer. As a beginner it may also help to review the command language used with Kubernetes; this link [3] describes (as mentioned in another answer) how the kubectl create command is used to be more specific about the objects it creates, and that with the create command you can create a larger variety of objects.
[1]: Service: https://kubernetes.io/docs/concepts/services-networking/service/
[2]: Deploying a containerized web application: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_6_expose_your_application_to_the_internet
[3]: How to create objects: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/#how-to-create-objects
From my understanding, approach 1 (using create service) just creates the Service object, and since no label selector is specified it does not have any underlying target pods. But in approach 2 (using expose deployment), the Service load-balances across all the pods created by the deployment, because the Service is given the required selector labels automatically.
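To make that concrete, a hedged sketch of roughly the Service that kubectl expose deployment hello-world --type=LoadBalancer --name=my-service produces (the label and port are assumptions; expose copies whatever selector and container port the deployment actually has):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: hello-world      # copied from the deployment's pod labels (assumed value)
  ports:
  - protocol: TCP
    port: 8080            # taken from the deployment's container port (assumed value)
    targetPort: 8080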
I have two Kong instances in a k8s cluster, each with its respective database.
The Kong sandbox instance is named kong-ingress-controller and its configuration is as follows:
And I also have a Kong production instance, named kong-ingress-controller-production, whose configuration is as follows:
Even under this configuration scheme I can deploy both Kongs (sandbox and production instances) on the same port, 8001, because each Kong is located in a different pod.
In the Kong sandbox instance I have created the following Kong resources:
basic-auth and acl KongPlugins
2 KongConsumer resources with their respective KongCredentials
I have also configured the - --ingress-class=kong parameter in the Kong sandbox, and I have an Ingress resource pointing to it.
In the Kong sandbox environment all the resources previously mentioned are created and stored in its Kong database.
Not so in the Kong production environment. Let's see...
I am also creating the following in the Kong production environment:
basic-auth and acl KongPlugins
1 KongConsumer resource with its respective KongCredential
I have also configured the - --ingress-class=kong-production parameter in Kong production, and I have an Ingress resource pointing to it.
What is happening?
My two Kong sandbox and production instances are running and working, but
the Kong sandbox environment database is taking over the creation and storage of the KongConsumers and KongCredentials.
These resources are not being stored in the Kong production database; the credentials, consumers, basic-auth and ACL plugins are stored in the sandbox database ...
Only the KongPlugins are being stored in the Kong production database.
It looks like the different Kong connections got crossed at some point, or at least the Kong sandbox environment is listening for and picking up the requests meant for the Kong production environment.
A proof of what I am saying is that the Kong production controller environment even ignores the creation of the KongConsumer and its KongCredential. These are the logs related to this claim:
I0509 14:23:21.759720 6 kong.go:113] syncing global plugins
I0509 14:29:57.353944 6 store.go:371] ingress rule without annotations
I0509 14:29:57.353963 6 store.go:373] ignoring add event for plugin swaggerapi-production-basic-auth based on annotation kubernetes.io/ingress.class with value
I0509 14:29:57.395732 6 store.go:371] ingress rule without annotations
I0509 14:29:57.395756 6 store.go:373] ignoring add event for plugin swaggerapi-production-acl based on annotation kubernetes.io/ingress.class with value
I0509 14:29:57.438604 6 store.go:439] ignoring add event for consumer zcrm365dev-consumer based on annotation kubernetes.io/ingress.class with value
I0509 14:29:57.487996 6 store.go:505] ignoring add event for credential zcrm365dev-credential based on annotation kubernetes.io/ingress.class with value
I0509 14:29:57.529698 6 store.go:505] ignoring add event for credential zcrm365-prod-acl-credential based on annotation kubernetes.io/ingress.class with value
It's weird because I am specifying the --ingress-class parameter in each Kong deployment, and each one has a specific value, like this:
kong production environment --> - --ingress-class=kong-production
kong sandbox environment --> - --ingress-class=kong
And also each Ingress resource points to its specific Kong class using the kubernetes.io/ingress.class annotation, like this:
Ingress pointing to kong sandbox ---> kubernetes.io/ingress.class: "kong"
Ingress pointing to kong production ---> kubernetes.io/ingress.class: "kong-production"
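For concreteness, a hedged sketch with hypothetical names: the Kong custom resources (KongConsumer, KongCredential, KongPlugin) are filtered on the same kubernetes.io/ingress.class annotation, which is exactly what the "ignoring add event ... based on annotation kubernetes.io/ingress.class with value" log lines above are checking:

apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: zcrm365-prod-consumer            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "kong-production"
username: zcrm365-prod-consumer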
Does someone know what is happening here?
How can I redirect or at least debug this behavior?
I have been checking the logs and running port-forward operations in order to confirm the availability of both Kong instances, and also that neither of them is overriding the other, as we can see in this picture: