Create an ExternalName service pointing to a route in another project in OpenShift - kubernetes

We are using the OpenShift V3 Enterprise product.
I would like to create an ExternalName-type service called ServiceA in ProjectA that points to a route in ProjectB, and then create another route in ProjectA that points to the ServiceA service.
Is this possible to do?
Thanks!!!

You don't need to involve a route; you can use the service name directly to connect to it. The only caveat is that you need (as admin) to set up a pod network between the two projects. This is better than creating a route, because a route is also exposed outside of the OpenShift cluster and is therefore publicly accessible. You do not want that if these are internal services that should not be exposed.
For details on pod networks see:
https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-pod-network
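For completeness, the setup described in the question (an ExternalName service in ProjectA resolving to the route host exposed by ProjectB) would look roughly like the sketch below; the route hostname is an assumed placeholder, not taken from the question:

```yaml
# Hypothetical sketch only: ServiceA in ProjectA resolving to the route
# host of ProjectB. "app-projectb.apps.example.com" is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: servicea
  namespace: projecta
spec:
  type: ExternalName
  externalName: app-projectb.apps.example.com
```

Be aware that, as the answer explains, this still sends traffic through the externally exposed route; joining the pod networks and using the service DNS name directly keeps traffic inside the cluster.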

Related

Microservice structure using helm and kubernetes

We have several microservices (Node.js-based applications) which need to communicate with each other; two of them use Redis and PostgreSQL. Below are the names of my microservices. Each of them has its own SCM repository and Helm chart. The Helm version is 3.0.1. We have two environments, with one values.yaml per environment. We also have three nodes per cluster.
First of all, after an end user's action, the UI service is triggered and the request goes to the Backend. Depending on the end user's request, the Backend needs to communicate with any of the Market, Auth, and API services. In some cases the API and Market microservices need to communicate with the Auth microservice as well.
UI -->
Backend
Market --> uses PostgreSQL
Auth --> uses Redis
API
So my questions are,
What should we take care of so that the microservices can communicate with each other? Is my-svc.my-namespace.svc.cluster.local enough to give developers, or should we also specify ENV variables in each pod?
Our microservices are Node.js applications. How will developers handle this in the application source code? Do they just use the service name, if the answer to the first question is yes?
We'd like to expose our application via ingress, using one host per environment. I guess ingress should only be enabled for the UI microservice, am I correct?
What is the best way to test that each service can communicate with the others?
kubectl get svc --all-namespaces
NAMESPACE   NAME                          TYPE
database    my-postgres-postgresql-helm   ClusterIP
dev         my-ui-dev                     ClusterIP
dev         my-backend-dev                ClusterIP
dev         my-auth-dev                   ClusterIP
dev         my-api-dev                    ClusterIP
dev         my-market-dev                 ClusterIP
dev         redis-master                  ClusterIP
ingress     ingress-traefik               NodePort
Two ways to perform Service Discovery in K8S
There are two ways to perform communication (service discovery) within a Kubernetes cluster.
Environment variables
DNS
DNS is the simplest way to achieve service discovery within the cluster.
It also does not require any additional ENV variables to be set for each pod.
At its simplest, a service within the same namespace is accessible via its service name; e.g. http://my-api-dev:PORT is reachable from all pods within the namespace dev.
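As an illustration of the DNS approach, a pod in the dev namespace can be pointed at the PostgreSQL service from the kubectl output above purely through configuration; the image name, env variable name, and replica count below are assumptions for the sketch:

```yaml
# Sketch: passing the fully qualified DNS name of the Postgres service
# (which lives in the "database" namespace) to a backend pod in "dev".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-backend-dev
  template:
    metadata:
      labels:
        app: my-backend-dev
    spec:
      containers:
      - name: backend
        image: my-backend:latest   # assumed image name
        env:
        - name: DATABASE_HOST      # assumed variable; read by the Node.js app
          value: my-postgres-postgresql-helm.database.svc.cluster.local
```

Within the same namespace the short form (e.g. redis-master) is enough; the fully qualified form is only needed across namespaces.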
Standard Application Name and K8s Service Name
As a practice, you can give each application a standard name, e.g. my-ui, my-backend, my-api, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from the developer environment, with an entry in /etc/hosts such as
127.0.0.1 my-ui my-backend my-api
(The above has nothing to do with k8s; it is just a practice for letting applications reach each other by name in local environments.)
Also, on k8s, you may give the service the same name as the application. (Try to avoid suffixes like -dev in the service name to reflect the environment (dev, test, prod, etc.); use a namespace or a separate cluster for that instead.) That way, target application endpoints can be configured by service name in each application's configuration file.
Ingress is for services with external access
Ingress should only be enabled for services which require external access.
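Following that rule, only the UI service from the question would get an ingress. A minimal sketch, assuming the Traefik controller from the kubectl output and a placeholder host:

```yaml
# Sketch: ingress for the UI only; host and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ui
  namespace: dev
spec:
  rules:
  - host: dev.example.com          # placeholder per-environment host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-ui-dev        # only the UI is exposed externally
            port:
              number: 80
```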
Custom Health Check Endpoints
Also, it is good practice to have a custom health check that verifies that all the applications you depend on are running fine, which also verifies that the communication between the applications is working.
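One way to surface such a health check to Kubernetes is to wire it into a readiness probe in the container spec; the endpoint path and port below are assumptions, not something stated in the question:

```yaml
# Sketch (container-spec fragment): Kubernetes only routes traffic to the
# pod once the assumed /health/dependencies endpoint reports that all
# dependent services are reachable.
readinessProbe:
  httpGet:
    path: /health/dependencies   # assumed custom endpoint in the Node.js app
    port: 3000                   # assumed application port
  initialDelaySeconds: 10
  periodSeconds: 15
```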

What is the best practice for service discovery on the application layer in Kubernetes?

I have 3 applications.
A Gateway, ServiceA & ServiceB.
Each application sits in its own namespace. Whenever there is a push to the CI/CD server for any of the 3, all get deployed based on the branch name.
Example:
Create a new branch (feature-1) in ServiceA repo.
Make and commit some changes
The build server builds and deploys the feature-1 branch with a unique service name to the Kubernetes cluster.
The build server looks at ServiceB and the Gateway for a feature-1 branch; if not found, it defaults to develop. For the Gateway it creates feature-1 from develop and deploys that one.
The gateway then needs to know the DNS URL of the ServiceA from feature-1 in order to be able to call it.
So my question is: how do I do service discovery at the application level using Kubernetes?
I think there are two ways to achieve that.
1) Query all services from the Kubernetes master, with the API equivalent of kubectl get services --all-namespaces. You will then need to implement some logic for choosing the right service.
For this you can use, for example, a selector, targetPort, or a specific ClusterIP.
More details can be found in the documentation.
2) Put the applications built from each branch in a new namespace, and let them route within the namespace using their usual names, without requiring any application changes. More information in the documentation.
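Option 2 can be sketched as follows: every branch gets its own namespace, and the services keep their usual, unqualified names, so the Gateway's configuration never changes between branches. All names and ports here are assumptions for illustration:

```yaml
# Sketch: a per-branch namespace in which ServiceA keeps its usual name,
# so the Gateway deployed alongside it always calls http://servicea.
apiVersion: v1
kind: Namespace
metadata:
  name: feature-1        # assumed: namespace named after the branch
---
apiVersion: v1
kind: Service
metadata:
  name: servicea         # same unqualified name in every branch namespace
  namespace: feature-1
spec:
  selector:
    app: servicea
  ports:
  - port: 80
    targetPort: 8080     # assumed application port
```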

OpenShift access service in other namespace without network join

I'm new to OpenShift. I have two projects|namespaces, and in each I have a REST service. What I want is for the service in NS1 to access the service in NS2 without joining the projects' networks. We also use the SDN with the multi-tenant plugin.
I found an example of how to add external services to the cluster as if they were native. In NS1 I created an Endpoints object for the external IP of the service from NS2, but when I tried to create a Service in NS1 for this Endpoints object, it failed because there was no type tag (which wasn't in the example either).
I also tried ExternalName. For the externalName key my value was the URL of the route to the service in NS2. But it doesn't work very well: it always returns a page saying "Application is not available", even though the app/service itself works.
Services in different namespaces are not external, but local to the cluster. So you simply access the services using DNS:
for example: servicename.NS2.svc.cluster.local or simply servicename.NS2
see also https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html
Your question is not very clear and lacks information regarding your network setup and what you mean by joining projects network. What does the SDN multi-tenancy do for example?
By default, the network within the cluster is routable within the whole cluster. If you expose a service in namespace NS_B, a pod in namespace NS_A can access it like so:
Pod in namespace NS_A: curl servicename.NS_B:port
and vice versa:
Pod in namespace NS_B: curl servicename.NS_A:port
If your SDN setup makes that impossible, you can expose both services with an Ingress / route and address them from the network where you expose them (public or not).
Read the docs on those, for example:
https://kubernetes.io/docs/concepts/services-networking/ingress/
That website is a great resource for all things Kubernetes (on which OpenShift is built).
In OpenShift, a slightly different take on this is routes:
https://docs.openshift.com/container-platform/4.11/networking/routes/route-configuration.html
Basically, try to understand how the networks are set up and how these principles work.
If this does not answer your question, please make it more clear and specific.

How to achieve Hazelcast syncing in Kubernetes with different pods (app and Hazelcast instance)?

They should be able to communicate, and updates should be visible to each other; I mean, mainly syncing.
DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
// strategyConfig.addProperty("service-dns", "my-service-name.my-namespace.svc.cluster.local");
// strategyConfig.addProperty("service-dns-timeout", "300");
strategyConfig.addProperty("service-name", "my-service-name");
strategyConfig.addProperty("service-label-name", "my-service-label");
strategyConfig.addProperty("service-label-value", "true");
strategyConfig.addProperty("namespace", "my-namespace");
I have followed https://github.com/hazelcast/hazelcast-kubernetes. Using the first approach I was able to see an instance per pod (but not in one members list), but they were not communicating (if I do a CRUD operation in one Hazelcast instance, it is not reflected in the others). I want to use the DNS strategy, but was not able to create the instance at all.
Please check the followings:
1. Discovery Strategy
For Kubernetes you need to use the HazelcastKubernetesDiscoveryStrategy class. It can be defined in the XML configuration or in the code (as in your case).
2. Labels
Check that the service for your Hazelcast cluster has the labels you specified, and the same for the service name and namespace.
3. Configuration
There are two ways to configure the discovery: DNS Lookup and REST API. Each has special requirements. You mentioned DNS Lookup, but the configuration you've sent actually uses REST API.
DNS Lookup
Your Hazelcast cluster service must be a headless ClusterIP service:
spec:
  type: ClusterIP
  clusterIP: None
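A complete headless service for the DNS Lookup strategy might look like the sketch below, reusing the names from the question; the selector label is an assumption, and 5701 is Hazelcast's default port:

```yaml
# Sketch: headless service so that DNS lookups of
# my-service-name.my-namespace.svc.cluster.local return the pod IPs
# directly, which is what the DNS Lookup discovery strategy needs.
apiVersion: v1
kind: Service
metadata:
  name: my-service-name
  namespace: my-namespace
spec:
  type: ClusterIP
  clusterIP: None          # headless: no virtual IP, DNS returns pod IPs
  selector:
    app: hazelcast         # assumed label on the Hazelcast pods
  ports:
  - port: 5701             # Hazelcast's default member port
```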
REST API
You need to grant your app access to the Kubernetes API. Please check: https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/rbac.yaml
Other helpful resources
Hazelcast Kubernetes Code Sample
Hazelcast OpenShift Client app (should also work in Kubernetes)

Do we need external endpoints for orchestration microservices

I have a question about the following architecture; I could not find a clear-cut answer in the Kubernetes documentation, so maybe you can help me.
I have a service called 'OrchestrationService'; this service depends on 3 other services, 'ServiceA', 'ServiceB', and 'ServiceC', to be able to do its job.
All these services have their Docker images and are deployed to Kubernetes.
Now, 'OrchestrationService' will be the only one that has contact with the outside world, so it would definitely have an external endpoint. My question is: would 'ServiceA', 'ServiceB', 'ServiceC' need one too, or would Kubernetes make those services available to 'OrchestrationService' via kube-proxy/LoadBalancer?
Thx for answers
No, you only expose OrchestrationService publicly; services A/B/C should be cluster-internal services. You create selector services for A/B/C so that OrchestrationService can connect to them. OrchestrationService can be defined as a NodePort service with a fixed port, or you can use an ingress to route traffic to it.
No, you don't need external endpoints for ServiceA, ServiceB and ServiceC.
If these pods are running successfully, then depending on your labels you can access them from OrchestrationService by referring to them as
http://servicea/context_path
where servicea in the URL is the name defined in the Service for ServiceA.
Not as external services like a LoadBalancer, but your services A/B/C need to publish themselves as services inside the cluster so that other services, like OrchestrationService, can use them.
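Putting the answers together, a minimal sketch (all names and ports are assumptions for illustration): ServiceA stays cluster-internal as a ClusterIP service, while OrchestrationService is the only externally reachable entry point, exposed here via a NodePort:

```yaml
# Sketch: ServiceA is internal only, reachable as http://servicea inside
# the cluster; OrchestrationService is exposed on a fixed node port.
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  type: ClusterIP          # cluster-internal; no external endpoint
  selector:
    app: servicea
  ports:
  - port: 80
    targetPort: 8080       # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: orchestration-service
spec:
  type: NodePort           # the only externally reachable service
  selector:
    app: orchestration-service
  ports:
  - port: 80
    targetPort: 8080       # assumed container port
    nodePort: 30080        # assumed fixed node port
```

ServiceB and ServiceC would follow the same ClusterIP pattern as ServiceA.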