We deploy our project, which is based on Spring Cloud, on K8S. Because the multi-node deployment registers the default host name with the registry, the gateway is deployed on node A and the config service on node B, and they could not reach each other through Eureka. I changed to eureka.instance.prefer-ip-address: true, but I found they can only reach each other when they run on the same host; the registered address is not the K8S ClusterIP. I want to know how services can access each other in K8S.
In version 7-201712-EA of Activiti Cloud we provided an example of services running the Netflix libraries in Kubernetes - the stable GitHub tags and Docker images are available to refer to. We approached it by creating a Kubernetes Service for each component and getting the component to register with Eureka using the k8s service name.
To make sure the component declared the correct service name to Eureka we set eureka.instance.hostname in the component, which can be set in the Deployment yaml by specifying an environment variable or using the default environment variable EUREKA_INSTANCE_HOSTNAME. We also kept things simple by using the same port for the Java app in the Pod and for the Service. Again this can be made to match by setting the port in the Pod spec and passing the SERVER_PORT environment variable to the Spring Boot app.
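As a rough sketch of what that can look like (the names, image, and port below are placeholders, not taken from the Activiti example), the Deployment passes EUREKA_INSTANCE_HOSTNAME and SERVER_PORT, and a Service with the same name exposes the same port:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: myregistry/gateway:latest   # placeholder image
        ports:
        - containerPort: 8080
        env:
        - name: EUREKA_INSTANCE_HOSTNAME   # register in Eureka with the k8s Service name
          value: gateway
        - name: SERVER_PORT                # keep the app port equal to the Service port
          value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: gateway                            # the name the component registers in Eureka
spec:
  selector:
    app: gateway
  ports:
  - port: 8080
    targetPort: 8080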
Check the spring-cloud-kubernetes project
We have a central monitoring cluster that monitors different k8s clusters (running various micro services)
Currently we've deployed Prometheus using manifests, but we plan to move to the Prometheus operator.
My question is, is service discovery possible for Prometheus in this kind of setup? Will I be able to annotate my pods?
Of course, you'll be able to do service discovery with the Prometheus operator for Kubernetes.
However, it does not work the way it does with a standalone Prometheus server and the kubernetes_sd_config configuration.
With the operator, service discovery works through a custom resource called ServiceMonitor. This resource uses a label selector to target Services with specific labels. You can find an example here, on the official GitHub page.
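A minimal ServiceMonitor sketch (the names, labels, and port below are hypothetical, not from the official example):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor            # hypothetical name
  labels:
    release: prometheus           # must match the serviceMonitorSelector of your Prometheus resource
spec:
  selector:
    matchLabels:
      app: my-app                 # targets Services carrying this label
  endpoints:
  - port: metrics                 # named port on the Service to scrape
    interval: 30s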
I deployed a kubernetes cluster on Google Cloud using VMs and Kubespray.
Right now, I am looking to expose a simple Node app on an external IP using a LoadBalancer, but assigning my external IP from gcloud to the service does not work. It stays in the Pending state when I query kubectl get services.
According to this, kubespray does not have any loadbalancer mechanism included/integrated by default. How should I proceed?
Let me start off by summarizing the problem we are trying to solve here.
The problem is that you have a self-hosted Kubernetes cluster and you want to be able to create a Service of type=LoadBalancer and have k8s create an LB for you with an external IP, in a fully automated way, just like it would if you used GKE (the Kubernetes-as-a-service solution).
Additionally, I have to mention that I don't know much about kubespray, so I will only describe all the steps that need to be done to make it work and leave the rest to you. So if you want to make changes in the kubespray code, that's on you.
I did all my tests on a kubeadm cluster, but it should not be very difficult to apply them to kubespray.
I will start off by summarizing all that has to be done in 4 steps:
tagging the instances
enabling cloud-provider functionality
IAM and service accounts
additional info
Tagging the instances
All worker node instances on GCP have to be labeled with a unique tag that is the name of the instance; these tags are later used to create firewall rules and target pools for the LB. So let's say you have an instance called worker-0; you need to tag that instance with the tag worker-0.
Otherwise it will result in an error (that can be found in controller-manager logs):
Error syncing load balancer: failed to ensure load balancer: no node tags supplied and also failed to parse the given lists of hosts for tags. Abort creating firewall rule
Enabling cloud-provider functionality
K8s has to be informed that it is running in the cloud and which cloud provider it is, so that it knows how to talk to the API.
Otherwise the controller manager logs will inform you that it won't create an LB:
WARNING: no cloud provider provided, services of type LoadBalancer will fail
The Controller Manager is responsible for the creation of a LoadBalancer. It can be passed the flag --cloud-provider. You can manually add this flag to the controller manager pod manifest file; or, like in your case since you are running kubespray, you can add this flag somewhere in the kubespray code (maybe it's already automated and just requires you to set some env variable or similar, but you need to find that out yourself).
Here is what this file looks like with the flag:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --cloud-provider=gce   # <----- HERE
As you can see, the value in our case is gce, which stands for Google Compute Engine. It informs k8s that it's running on GCE/GCP.
IAM and service accounts
Now that you have your provider enabled, and tags covered, I will talk about IAM and permissions.
For k8s to be able to create an LB in GCE, it needs to be allowed to do so. Every GCE instance has a default service account assigned. The Controller Manager uses the instance's service account, stored within the instance metadata, to access the GCP API.
For this to happen you need to set Access Scopes for the GCE instance (the master node; the one where the controller manager is running) so it can use the Compute Engine API.
Access scopes -> Set access for each API -> compute engine=Read Write
To do this the instance has to be stopped, so stop it now. It's better to set these scopes during instance creation so that you don't need to take any unnecessary steps.
You also need to go to the IAM & Admin page in the GCP Console and add permissions so that the master instance's service account has the Kubernetes Engine Service Agent role assigned. This is a predefined role that has many more permissions than you probably need, but I have found that everything works with this role, so I decided to use it for demonstration purposes; you probably want to apply the least-privilege rule though.
Additional info
There is one more thing I need to mention. It does not impact you, but while testing I found out an interesting thing.
At first I created a one-node cluster (a single master node). Even though this is allowed from the k8s point of view, the controller manager would not allow me to create an LB and point it to the master node where my application was running. The conclusion is that you cannot use an LB with only a master node; you have to create at least one worker node.
PS
I had to figure it out the hard way: by looking at logs, changing things, and looking at logs again to see if the issue got solved. I didn't find a single article/documentation page where all of this is documented in one place. If you manage to solve it for yourself, write up the answer for others. Thank you.
We have several microservices (NodeJS-based applications) which need to communicate with each other, and two of them use Redis and PostgreSQL. Below are the names of my microservices. Each of them has its own SCM repository and Helm chart. The Helm version is 3.0.1. We have two environments and two values.yaml files, one per environment. We also have three nodes per cluster.
First of all, after an end user's action the UI service is triggered, and the request then goes to the Backend. Depending on the end user's request, the Backend service needs to communicate with any of the Market, Auth, and API services. In some cases the API and Market microservices need to communicate with the Auth microservice as well.
UI -->
  Backend
    Market --> uses PostgreSQL
    Auth   --> uses Redis
    API
So my questions are:
What should we take care of so the microservices can communicate with each other? Is my-svc.my-namespace.svc.cluster.local enough to give to developers, or should we specify an ENV in each pod as well?
Our microservices are NodeJS applications. How will developers handle this in the application source code? Do they just use the service name, if the answer to the first question is yes?
We'd like to expose our application via Ingress, using one host per environment. I guess Ingress should only be enabled for the UI microservice, am I correct?
What is the best way to test that the services can communicate with each other?
kubectl get svc --all-namespaces
NAMESPACE   NAME                          TYPE
database    my-postgres-postgresql-helm   ClusterIP
dev         my-ui-dev                     ClusterIP
dev         my-backend-dev                ClusterIP
dev         my-auth-dev                   ClusterIP
dev         my-api-dev                    ClusterIP
dev         my-market-dev                 ClusterIP
dev         redis-master                  ClusterIP
ingress     ingress-traefik               NodePort
Two ways to perform Service Discovery in K8S
There are two ways to perform communication (service discovery) within a Kubernetes cluster.
Environment variable
DNS
DNS is the simplest way to achieve service discovery within the cluster.
And it does not require any additional ENV variable setting for each pod.
At its simplest, a Service within the same namespace is accessible via its service name, e.g. http://my-api-dev:PORT is accessible to all the pods within the dev namespace.
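As a sketch of how a consuming Deployment can pass those endpoints to the app (the port 3000, image, and env variable names are placeholders; the service names come from your kubectl output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      containers:
      - name: my-backend
        image: myrepo/my-backend:latest        # placeholder image
        env:
        - name: API_URL                        # hypothetical variable the NodeJS app reads
          value: http://my-api-dev:3000        # same namespace: the plain service name resolves
        - name: AUTH_URL
          value: http://my-auth-dev.dev.svc.cluster.local:3000   # fully qualified form also works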
Standard Application Name and K8s Service Name
As a practice, you can give each application a standard name, e.g. my-ui, my-backend, my-api, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from a developer environment, with an entry in /etc/hosts such as
127.0.0.1 my-ui my-backend my-api
(The above has nothing to do with k8s; it's just a practice for letting applications talk to each other by name in local environments.)
Also, on k8s, you may give the Service the same name as the application (try to avoid suffixes like -dev in the service name, which reflect the environment (dev, test, prod, etc.); use a namespace or a separate cluster for that instead). That way, target application endpoints can be configured with their service names in each application's configuration file.
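For illustration (the names and port are assumptions), the Service simply reuses the application name, with the environment expressed by the namespace:

apiVersion: v1
kind: Service
metadata:
  name: my-backend            # same name as the application, no -dev suffix
  namespace: dev              # the environment lives in the namespace instead
spec:
  selector:
    app: my-backend
  ports:
  - port: 3000                # assumed app port
    targetPort: 3000

Other services in the dev namespace can then reference http://my-backend:3000 in their configuration files.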
Ingress is for services with external access
Ingress should only be enabled for services which require external access.
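A sketch of such an Ingress for the UI service, assuming your Traefik ingress controller and a hypothetical per-environment host name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ui-ingress
  namespace: dev
spec:
  rules:
  - host: ui.dev.example.com          # hypothetical per-environment host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-ui-dev           # only the UI service is exposed externally
            port:
              number: 3000            # assumed Service port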
Custom Health Check Endpoints
Also, it is good practice to have a custom health check that verifies that all the applications it depends on are running fine, which will also verify that the communication between the applications is working.
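Such an endpoint can then be wired into the pod spec; a sketch, assuming the NodeJS app exposes a hypothetical /health/dependencies route on port 3000:

# fragment of the container spec in a Deployment
readinessProbe:
  httpGet:
    path: /health/dependencies   # hypothetical endpoint that checks Redis, PostgreSQL, and peer services
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15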
They should be able to communicate, and updates should be visible to each other; I mean mainly syncing.
DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
// strategyConfig.addProperty("service-dns", "my-service-name.my-namespace.svc.cluster.local");
// strategyConfig.addProperty("service-dns-timeout", "300");
strategyConfig.addProperty("service-name", "my-service-name");
strategyConfig.addProperty("service-label-name", "my-service-label");
strategyConfig.addProperty("service-label-value", "true");
strategyConfig.addProperty("namespace", "my-namespace");
I have followed https://github.com/hazelcast/hazelcast-kubernetes. I used the first approach and was able to see the instances (one per pod, not in one members list), but they were not communicating (if I do a CRUD operation in one Hazelcast instance, it is not reflected in the others). I want to use the DNS strategy, but I was not even able to create the instance.
Please check the following:
1. Discovery Strategy
For Kubernetes you need to use the HazelcastKubernetesDiscoveryStrategy class. It can be defined in the XML configuration or in the code (as in your case).
2. Labels
Check that the service for your Hazelcast cluster has the labels you specified. The same goes for the service name and namespace.
3. Configuration
There are two ways to configure the discovery: DNS Lookup and REST API. Each has special requirements. You mentioned DNS Lookup, but the configuration you've sent actually uses REST API.
DNS Lookup
Your Hazelcast cluster Service must be a headless ClusterIP service.
spec:
  type: ClusterIP
  clusterIP: None
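A full sketch of such a headless Service (the name, namespace, and selector are assumptions matching the configuration above; 5701 is the default Hazelcast port):

apiVersion: v1
kind: Service
metadata:
  name: my-service-name          # must match the service-dns / service-name in the Hazelcast config
  namespace: my-namespace
spec:
  type: ClusterIP
  clusterIP: None                # headless, so DNS returns the individual pod IPs
  selector:
    app: hazelcast               # assumed pod label
  ports:
  - port: 5701                   # default Hazelcast port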
REST API
You need to grant your app access to the Kubernetes API. Please check: https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/rbac.yaml
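A minimal RBAC sketch along the lines of that sample (the names are hypothetical and the exact resources in the official rbac.yaml may differ, so treat this as an assumption):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hazelcast-cluster-role          # hypothetical name
rules:
- apiGroups: [""]
  resources: ["endpoints", "pods", "services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-cluster-role-binding  # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hazelcast-cluster-role
subjects:
- kind: ServiceAccount
  name: default                         # the service account your app's pods run under
  namespace: my-namespace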
Other helpful resources
Hazelcast Kubernetes Code Sample
Hazelcast OpenShift Client app (should also work in Kubernetes)
I have two back-end deployments, a REST server and a database server, each running on specific ports. The REST server internally calls the database server.
Now how do I refer to my database server deployment from my REST server deployment so that they can communicate with each other?
First, define a Service for your DB server; that will create a sort of load balancer (internal kube integration, based on iptables in most cases). With that, you will be able to refer to it by the service name or an FQDN like mydbsvc.namespace.svc.cluster.local, which will resolve to the "Cluster IP" of that load balancer.
Then it's just a matter of regular app config to point it to your DB on mydbsvc, preferably by means of an env variable like DB_HOST=mydbsvc set in your REST API deployment manifest (pod template envs).
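A hedged sketch of the two pieces (the labels, image, and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: mydbsvc
spec:
  selector:
    app: mydb                            # must match the DB pods' labels
  ports:
  - port: 5432                           # assumed DB port
    targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-server
  template:
    metadata:
      labels:
        app: rest-server
    spec:
      containers:
      - name: rest-server
        image: myrepo/rest-server:latest   # placeholder image
        env:
        - name: DB_HOST                    # the app reads the DB host from this variable
          value: mydbsvc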
Expose your deployments as Services, for example with kubectl expose ...
Connect/allow these to communicate by creating network policies (see the sketch after this list).
The Service object (of the database) will give you a virtual (stable) IP. Depending on the type of Service, your REST code can call the DB via ClusterIP/ExternalName/ExternalIP/DNS.
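A sketch of such a NetworkPolicy, assuming the pods carry hypothetical labels app=db and app=rest and that your cluster runs a network plugin that enforces policies:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rest-to-db
spec:
  podSelector:
    matchLabels:
      app: db                 # the policy applies to the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: rest           # only the REST server pods may connect
    ports:
    - protocol: TCP
      port: 5432              # assumed DB port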