How to achieve Hazelcast syncing in Kubernetes with different pods (app and Hazelcast instance)?

They should be able to communicate, and an update in one should be visible in the other; I mainly mean syncing.
DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
// strategyConfig.addProperty("service-dns", "my-service-name.my-namespace.svc.cluster.local");
// strategyConfig.addProperty("service-dns-timeout", "300");
strategyConfig.addProperty("service-name", "my-service-name");
strategyConfig.addProperty("service-label-name", "my-service-label");
strategyConfig.addProperty("service-label-value", "true");
strategyConfig.addProperty("namespace", "my-namespace");
I have followed https://github.com/hazelcast/hazelcast-kubernetes. With the first approach I was able to see the instances (one per pod, but not in a single members list), yet they were not communicating (if I do a CRUD operation in one Hazelcast instance it is not reflected in the other). I want to use the DNS strategy, but with it I was not even able to create the instance.

Please check the following:
1. Discovery Strategy
For Kubernetes you need to use the HazelcastKubernetesDiscoveryStrategy class. It can be defined in the XML configuration or in the code (as in your case).
2. Labels
Check that the service for your Hazelcast cluster has the labels you specified. The same when it comes to the service name and namespace.
3. Configuration
There are two ways to configure the discovery: DNS Lookup and REST API. Each has special requirements. You mentioned DNS Lookup, but the configuration you've sent actually uses the REST API.
DNS Lookup
Your Hazelcast cluster service must be a headless ClusterIP service:
spec:
  type: ClusterIP
  clusterIP: None
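For illustration, a headless Service for the Hazelcast pods could look roughly like the sketch below. The name, namespace and label are taken from your configuration snippet; the selector and port are assumptions and need to match your Hazelcast deployment.
apiVersion: v1
kind: Service
metadata:
  name: my-service-name          # must match the "service-name" property
  namespace: my-namespace        # must match the "namespace" property
  labels:
    my-service-label: "true"     # must match "service-label-name" / "service-label-value"
spec:
  type: ClusterIP
  clusterIP: None                # headless, required for DNS Lookup
  selector:
    app: hazelcast               # assumed pod label; adjust to your deployment
  ports:
    - name: hazelcast
      port: 5701                 # default Hazelcast member port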
REST API
You need to grant your app access to the Kubernetes API. Please check: https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/rbac.yaml
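The linked rbac.yaml boils down to granting a read-only role to the service account the pods run under. A minimal sketch, assuming the default service account in the default namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-rbac
subjects:
  - kind: ServiceAccount
    name: default                # assumed service account used by the Hazelcast/app pods
    namespace: default
roleRef:
  kind: ClusterRole
  name: view                     # built-in read-only role, enough for endpoint discovery
  apiGroup: rbac.authorization.k8s.io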
Other helpful resources
Hazelcast Kubernetes Code Sample
Hazelcast OpenShift Client app (should also work in Kubernetes)

Related

How to add external GCP loadbalancer to kubespray cluster?

I deployed a Kubernetes cluster on Google Cloud using VMs and Kubespray.
Right now, I am looking to expose a simple Node app to an external IP using a load balancer, but assigning my external IP from gcloud to the service does not work. It stays in the Pending state when I query kubectl get services.
According to this, Kubespray does not have any load balancer mechanism included/integrated by default. How should I proceed?
Let me start off by summarizing the problem we are trying to solve here.
The problem is that you have a self-hosted Kubernetes cluster and you want to be able to create a Service of type=LoadBalancer and have k8s create an LB with an external IP for you in a fully automated way, just like it would if you used GKE (a Kubernetes-as-a-service solution).
Additionally, I have to mention that I don't know much about Kubespray, so I will only describe the steps that need to be done to make it work and leave the rest to you. If you want to make changes in the Kubespray code, that's on you.
All my tests were done with a kubeadm cluster, but it should not be very difficult to apply them to Kubespray.
I will start off by summarizing everything that has to be done in 4 steps:
tagging the instances
enabling cloud-provider functionality
IAM and service accounts
additional info
Tagging the instances
All worker node instances on GCP have to be tagged with a unique network tag, which is the name of the instance; these tags are later used to create firewall rules and target lists for the LB. So let's say you have an instance called worker-0: you need to tag that instance with the tag worker-0.
Otherwise it will result in an error (which can be found in the controller-manager logs):
Error syncing load balancer: failed to ensure load balancer: no node tags supplied and also failed to parse the given lists of hosts for tags. Abort creating firewall rule
Enabling cloud-provider functionality
K8s has to be informed that it is running in a cloud and which cloud provider that is, so that it knows how to talk to the API.
Otherwise you will see controller-manager logs informing you that it won't create an LB:
WARNING: no cloud provider provided, services of type LoadBalancer will fail
The Controller Manager is responsible for the creation of a LoadBalancer. It can be passed the flag --cloud-provider. You can manually add this flag to the controller-manager pod manifest file; or, since you are running Kubespray, you can add this flag somewhere in the Kubespray code (maybe it's already automated and just requires you to set some variable, but you need to find that out yourself).
Here is how this file looks with the flag:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --cloud-provider=gce # <----- HERE
As you can see, the value in our case is gce, which stands for Google Compute Engine. It informs k8s that it's running on GCE/GCP.
IAM and service accounts
Now that you have your provider enabled and tags covered, I will talk about IAM and permissions.
For k8s to be able to create an LB in GCE, it needs to be allowed to do so. Every GCE instance has a default service account assigned. The Controller Manager uses the instance service account, stored in the instance metadata, to access the GCP API.
For this to happen you need to set Access Scopes for the GCE instance (the master node; the one where the controller manager is running) so it can use the Cloud Engine API:
Access scopes -> Set access for each API -> compute engine=Read Write
To do this the instance has to be stopped, so stop it now. It's better to set these scopes during instance creation so that you don't need to take any unnecessary extra steps.
You also need to go to the IAM & Admin page in the GCP Console and add permissions so that the master instance's service account has the Kubernetes Engine Service Agent role assigned. This is a predefined role that has many more permissions than you probably need, but I found that everything works with this role, so I decided to use it for demonstration purposes; you probably want to follow the principle of least privilege.
Additional info
There is one more thing I need to mention. It does not impact you, but while testing I found out something interesting.
At first I created a cluster with only one node (a single master). Even though this is allowed from the k8s point of view, the controller manager would not allow me to create an LB pointing to the master node where my application was running. This leads to the conclusion that you cannot use an LB with only a master node and have to create at least one worker node.
PS
I had to figure it out the hard way: by looking at logs, changing things and looking at logs again to see if the issue got solved. I didn't find a single article or documentation page where all of this is documented in one place. If you manage to solve it for yourself, write an answer for others. Thank you.

Microservice structure using helm and kubernetes

We have several microservices (NodeJS based applications) which need to communicate with each other, and two of them use Redis and PostgreSQL. Below are the names of my microservices. Each of them has its own SCM repository and Helm chart. The Helm version is 3.0.1. We have two environments and a values.yaml per environment. We also have three nodes per cluster.
First of all, after an end user's action, the UI service triggers a request, which then goes to the Backend. Depending on the end user request, the Backend service needs to communicate with any of the services such as Market, Auth and API. In some cases the API and Market microservices need to communicate with the Auth microservice as well.
UI --> Backend
Market --> uses PostgreSQL
Auth --> uses Redis
API
So my questions are:
What should we take care of so the microservices can communicate with each other? Is my-svc.my-namespace.svc.cluster.local enough to give to developers, or should we set ENV variables in each pod as well?
Our microservices are NodeJS applications. How will developers handle this in the application source code? Do they just use the service name, if the answer to the first question is yes?
We'd like to expose our application via ingress, using one host per environment. I guess ingress should only be enabled for the UI microservice, am I correct?
What is the best way to test that each service can communicate with the others?
kubectl get svc --all-namespaces
NAMESPACE   NAME                          TYPE
database    my-postgres-postgresql-helm   ClusterIP
dev         my-ui-dev                     ClusterIP
dev         my-backend-dev                ClusterIP
dev         my-auth-dev                   ClusterIP
dev         my-api-dev                    ClusterIP
dev         my-market-dev                 ClusterIP
dev         redis-master                  ClusterIP
ingress     ingress-traefik               NodePort
Two ways to perform Service Discovery in K8S
There are two ways to perform communication (service discovery) within a Kubernetes cluster.
Environment variable
DNS
DNS is the simplest way to achieve service discovery within the cluster, and it does not require any additional ENV variable setting for each pod.
At its simplest, a service within the same namespace is accessible via its service name, e.g. http://my-api-dev:PORT is reachable from all the pods within the namespace dev.
Standard Application Name and K8s Service Name
As a practice, you can give each application a standard name, e.g. my-ui, my-backend, my-api, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from a developer environment, with an entry in /etc/hosts such as:
127.0.0.1 my-ui my-backend my-api
(The above has nothing to do with k8s; it is just a practice for letting applications address each other by name in local environments.)
Also, on k8s, you may give the Service the same name as the application. (Try to avoid suffixes like -dev in service names to reflect the environment (dev, test, prod, etc.); use a namespace or a separate cluster for that instead.) That way, target application endpoints can be configured with their service name in each application's configuration file.
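As a rough sketch of that naming practice (the selector, port and target port are assumptions; adjust them to your Helm chart):
apiVersion: v1
kind: Service
metadata:
  name: my-api                   # same as the application name, no environment suffix
  namespace: dev                 # the environment lives in the namespace instead
spec:
  type: ClusterIP
  selector:
    app: my-api                  # assumed pod label from your Helm chart
  ports:
    - port: 80
      targetPort: 3000           # assumed NodeJS container port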
Ingress is for services with external access
Ingress should only be enabled for services which require external access.
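For example, a sketch of an Ingress that exposes only the UI service, assuming a per-environment host and that the UI service listens on port 80 (both assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ui
  namespace: dev
spec:
  rules:
    - host: dev.example.com          # assumed per-environment host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-ui-dev      # only the UI service is exposed externally
                port:
                  number: 80         # assumed service port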
Custom Health Check Endpoints
Also, it is good practice to have a custom health check that verifies that all the applications a service depends on are reachable, which also verifies that the communication between applications is working.
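One way to wire such a health endpoint into Kubernetes is a readiness probe; the sketch below assumes the Backend exposes a /healthz endpoint on port 3000 that checks its downstream dependencies (the image, path and port are all assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: my-backend
  namespace: dev
spec:
  containers:
    - name: my-backend
      image: registry.example.com/my-backend:latest   # assumed image
      ports:
        - containerPort: 3000                         # assumed NodeJS port
      readinessProbe:
        httpGet:
          path: /healthz        # assumed endpoint that also checks Redis/PostgreSQL/Auth connectivity
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 15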

OpenShift access service in other namespace without network join

I'm new to OpenShift. I have two projects|namespaces. In each I have a REST service. What I want is for the service in NS1 to access the service in NS2 without joining the projects' networks. The SDN uses the multitenant plugin.
I found an example of how to add external services to the cluster as if they were native. In NS1 I created an Endpoints object for the external IP of the Service from NS2, but when I tried to create a Service in NS1 for this Endpoints object, it failed because there was no type tag (which wasn't in the example either).
I also tried ExternalName. For the externalName key my value was the URL of the route to the service in NS2. But it doesn't work well, because it always returns a page saying 'Application is not available'. Yet the app/service itself works.
Services in different namespaces are not external, but local to the cluster. So you simply access the service using DNS, including the namespace in the name:
for example: servicename.NS2.svc.cluster.local or simply servicename.NS2
see also https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html
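If you prefer the NS2 service to appear under a local name inside NS1 instead of using the namespaced DNS name directly, one option is an ExternalName Service; the names below are assumptions. Note that with the multitenant SDN plugin the traffic itself may still be blocked unless the projects' networks are joined or you go through a route, as discussed in the next answer.
apiVersion: v1
kind: Service
metadata:
  name: rest-ns2                                         # local alias inside NS1 (assumed name)
  namespace: ns1
spec:
  type: ExternalName
  externalName: my-rest-service.ns2.svc.cluster.local    # the real service in NS2 (assumed name)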
Your question is not very clear and lacks information regarding your network setup and what you mean by joining the projects' networks. What does the SDN multitenancy do, for example?
By default, the network within the cluster is routable within the whole cluster. If you expose a service in namespace NS_B, a pod in namespace NS_A can access it like so:
Pod in namespace NS_A: curl servicename.NS_B:port
and vice versa:
Pod in namespace NS_B: curl servicename.NS_A:port
If your SDN setup makes that impossible, you can expose both services with an Ingress / route and address them from the network where you expose them (public or not).
Read the docs on those, for example:
https://kubernetes.io/docs/concepts/services-networking/ingress/
That website is a great resource for all things Kubernetes (like OpenShift).
In OpenShift, a slightly different take on this is routes:
https://docs.openshift.com/container-platform/4.11/networking/routes/route-configuration.html
Basically, try to understand how the networks are set up and how these principles work.
If this does not answer your question, please make it more clear and specific.

Kubernetes StatefulSets: External DNS

Kubernetes StatefulSets create internal DNS entries with stable network IDs. The docs describe this here:
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, web-2. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.
I am experimenting with headless services, and this works great for communication between individual Pods, i.e. web-0.web.default.svc.cluster.local can connect and communicate with web-1.web.default.svc.cluster.local just fine.
Is there any way that I can configure this to work outside of the cluster network as well, where "cluster.local" is replaced with something like "clustera.com"?
I would like to give another Kubernetes cluster, let's call it clusterb.com, access to the individual services of the original cluster (clustera.com); I'm hoping it would look something like clusterb simply hitting endpoints like web-1.web.default.svc.clustera.com and web-0.web.default.svc.clustera.com.
Is this possible? I would like access to the individual services, not a load balanced endpoint.
I would suggest you test the following solutions and check whether they help you achieve your goal in your particular scenario:
The first one is for sure the easiest, and I believe you did not implement it for some reason you did not report in the question.
I am talking about Services without selectors and CNAME records for ExternalName-type Services.
ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns
Therefore, if you need to point to a service in the other cluster, you will need to register a domain name resolving to the relevant IP in that cluster.
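As a minimal sketch of the "Service without selectors" option in cluster B, assuming the pod in cluster A is reachable from cluster B at some routable address (the name, IP and port below are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: web-0-clustera            # local name used inside cluster B (assumed)
spec:
  ports:
    - port: 80                    # assumed application port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: web-0-clustera            # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.10             # an address for web-0 in cluster A that is routable from cluster B (placeholder)
    ports:
      - port: 80
For the ExternalName variant you would instead set type: ExternalName with externalName: web-0.web.default.svc.clustera.com, which assumes that name is resolvable from cluster B's DNS.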
The second solution, which I have never tested but believe can apply to your case, is to make use of a Federated Cluster, whose reason for use is, according to the documentation:
Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. For example, you can ensure that a global VIP or DNS record can be used to access backends from multiple clusters.

Frontend communication with API in Kubernetes cluster

Inside a Kubernetes cluster I am running 1 node with 2 deployments: a React front end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and the API to communicate. I know I can do that with an external-facing load balancer, but is there a way to do it using the ClusterIPs and not have an external IP for the back end?
The reason we are interested in this is that it simply adds one more layer of security. Keeping the API vnet-only, we remove one more entry point.
If it helps, we are deploying on Azure with AKS. I know it sometimes does some odd things with deployments.
Pods running in the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both Pods (frontend and API) are running in the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend:
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
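A sketch of such a ClusterIP service for the .NET Core API; the names, labels and ports are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: my-api                    # the frontend can reach it at http://my-api within the same namespace
spec:
  type: ClusterIP                 # no external IP is allocated
  selector:
    app: my-api                   # assumed pod label on the .NET Core deployment
  ports:
    - port: 80
      targetPort: 5000            # assumed Kestrel port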
The problem with this configuration is the idea that the frontend app will be trying to reach the API via the internal cluster network. But it will not. The app, running in the client's browser, cannot reach services and pods inside the cluster.
The cluster will need something like nginx or another externally exposed load balancer / ingress to allow the client-side API calls to reach the API.
You could alternatively use your front-end app as a proxy, but that is highly inadvisable!
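For instance, a single Ingress could route browser calls to /api to the backend and everything else to the frontend; the service names and ports below are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-and-api
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-api          # assumed backend service name
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-frontend     # assumed frontend service name
                port:
                  number: 80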
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server: first set up a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A service of type ClusterIP will not have an external IP, and the pods can talk to each other using ClusterIP:Port within the cluster.