Multiple Turbine clusters for specific services - spring-cloud

I have two services (Spring Cloud applications) implementing fallbacks, let's say service1 and service2.
Now I want to have two clusters: cluster 1 contains service1 and service2, and another cluster which only contains the dashboard for service1.
Here is the configuration that doesn't work:
turbine:
  aggregator:
    clusterConfig: CLUSTER1,CLUSTER2
  appConfig: service1,service2
  ConfigPropertyBasedDiscovery:
    cluster1: service1,service2
    cluster2: service1
  clusterNameExpression: metadata.cluster
Until now I could have a default dashboard with all the services using the following configuration, but I need to have multiple clusters.
turbine:
  appConfig: service1,service2
  clusterNameExpression: "'default'"

how to expose ingress for Consul

I'm trying to add a Consul ingress to my project, and I'm using this GitHub repo as a doc for the UI and ingress: here. As you can see, unfortunately there is no ingress in the doc; there is an ingressGateways option, which is not useful because it doesn't create an Ingress inside Kubernetes (it can just expose a URL to the outside).
I have searched a lot, and there are two possible options:
1: create an extra deployment for the ingress
2: modify the Consul Helm chart to add an ingress deployment
(Unfortunately I couldn't find a proper solution for this on the Internet.)
Here is an example Docker Compose file that configures Traefik to expose an entrypoint named web listening on TCP port 8000, and integrates Traefik with Consul's service catalog for endpoint discovery.
# docker-compose.yaml
---
version: "3.8"
services:
  consul:
    image: consul:1.8.4
    ports:
      - "8500:8500/tcp"
  traefik:
    image: traefik:v2.3.1
    ports:
      - "8000:8000/tcp"
    environment:
      TRAEFIK_PROVIDERS_CONSULCATALOG_CACHE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_STALE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_ADDRESS: http://consul:8500
      TRAEFIK_PROVIDERS_CONSULCATALOG_EXPOSEDBYDEFAULT: 'false'
      TRAEFIK_ENTRYPOINTS_web: 'true'
      TRAEFIK_ENTRYPOINTS_web_ADDRESS: ":8000"
Below is a Consul service registration file which registers an application named web that is listening on port 80. The service registration includes a couple of tags that instruct Traefik to expose traffic to the service (traefik.enable=true) over the entrypoint named web and create the associated routing config for the service.
service {
  name = "web"
  port = 80
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.web.entrypoints=web",
    "traefik.http.routers.web.rule=Host(`example.com`) && PathPrefix(`/myapp`)"
  ]
}
This can be registered into Consul using the CLI (consul service register web.hcl). Traefik will then discover this via the catalog integration, and configure itself based on the routing config specified in the tags.
HTTP requests received by Traefik on port 8000 with a Host header of example.com and a path of /myapp will be routed to the web service that was registered with Consul.
Example curl command:
curl --header "Host: example.com" http://127.0.0.1:8000/myapp
This is a relatively basic example that is suitable for dev/test. You will need to define additional Traefik config parameters if you are deploying into a production Consul environment, which is typically secured by access control lists (ACLs).
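For example, when Consul ACLs are enabled, Traefik also needs a token that can read the catalog. A minimal, hedged sketch of what could be added to the traefik service in the Compose file above; the CONSUL_HTTP_TOKEN variable name is an assumption, not part of the original example:
traefik:
  environment:
    # Token Traefik uses when querying the Consul catalog (assumed to be supplied externally)
    TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_TOKEN: "${CONSUL_HTTP_TOKEN}"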
The ingressGateways config in the Helm chart is for deploying a Consul ingress gateway (powered by Envoy) for Consul service mesh. This is different from a Kubernetes Ingress.
Consul's ingress enables routing to applications running inside the service mesh, and is configured using an ingress-gateway configuration entry (or in the future using Consul CRDs). It cannot route to endpoints that exist outside the service mesh, such as Consul's API/UI endpoints.
If you need a generic ingress that can route to applications outside the mesh, I recommend using a solution such as Ambassador, Traefik, or Gloo. All three of these also support integrations with Consul for service discovery or service mesh.

Weighted routing over Kubernetes services

I have one master service and multiple slave services. The master service continuously polls a topic using a subscriber from Google Pub/Sub. The slave services are REST APIs. Once the master service receives a message, it delegates the message to a slave service. Currently I'm using a ClusterIP service in Kubernetes. Some of my requests are long running and some are pretty short.
I happen to observe that sometimes, if a short-running request arrives while a long-running request is in process, it has to wait for the long-running request to finish even though many pods are available that are not serving any traffic. I think it's due to the round-robin load balancing. I have been trying to find a solution and looked into approaches like setting up an external HTTP load balancer with ingress and an internal HTTP load balancer. But I'm really confused about the difference between these two and which one applies to my use case. Can you suggest which of the approaches would solve my use case?
TL;DR
Assuming you want 20% of the traffic to go to service x and the remaining 80% to service y: create two Ingress files, one for each of the two targets, with the same host name; the only difference is that one of them will carry the following ingress annotations: docs
nginx.ingress.kubernetes.io/canary: "true" #--> tell the controller to not create a new vhost
nginx.ingress.kubernetes.io/canary-weight: "20" #--> route here 20% of the traffic from the existing vhost
WHY & HOW TO
Weighted routing is a bit beyond what a ClusterIP can do. As you said yourself, it's time for a new player to enter the game: an ingress controller.
This is a k8s abstraction for a load balancer: a powerful server sitting in front of your app, routing the traffic between the ClusterIPs.
Install an ingress controller on a GCP cluster
Once you have it installed and running, use its canary feature to perform weighted routing. This is done using the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
  - host: echo.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
Here is the full guide.
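For completeness, the second (primary) Ingress would be almost identical: same host, no canary annotations, pointing at the Service that should receive the remaining 80% of the traffic. A rough sketch, where http-svc-main is an assumed Service name that does not appear in the original answer:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc-main          # hypothetical primary ingress, no canary annotations
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo.com             # same host as the canary ingress above
    http:
      paths:
      - backend:
          serviceName: http-svc-main   # assumed Service receiving ~80% of the traffic
          servicePort: 80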
External vs internal load balancing
(This is the relevant definition from the Google Cloud docs, but the concept is similar among other cloud providers.)
GCP's load balancers can be divided into external and internal load balancers. External load balancers distribute traffic coming from the internet to your GCP network. Internal load balancers distribute traffic within your GCP network.
https://cloud.google.com/load-balancing/docs/load-balancing-overview

Monitor Spring Boot apps using Prometheus on Kubernetes, not setting endpoints

I am trying to monitor a Spring Boot application using Prometheus on Kubernetes. Prometheus was installed using Helm, and I am using Spring Boot Actuator for health checking, auditing, metrics gathering and monitoring.
Actuator gives details about the application. For example,
http://IP:Port/actuator/health returns the output below:
{"status":"UP"}
I use the configuration file below to add the application endpoint in Prometheus.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: scp-service-creator
  namespace: sc678
  labels:
    app: scp-service-creator
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: scp-service-creator
  endpoints:
  - port: api
    path: "/actuator/prometheus"
    scheme: http
    interval: 10s
    honorLabels: true
So my problem is that even though the service is added to Prometheus, no endpoint is assigned.
What could be wrong here? I'd really appreciate your help.
Thank you.
From the Spring Boot Actuator documentation, more specifically the Endpoints part, one can read that endpoints are enabled by default (except shutdown, which is disabled), but only health and info are exposed.
This can be checked here.
You need to expose the endpoint you want manually.
The endpoint you want to use, prometheus, is not available over JMX and is not exposed over the web by default.
To change which endpoints are exposed, use the following technology-specific include and exclude properties:
Property | Default
management.endpoints.jmx.exposure.exclude |
management.endpoints.jmx.exposure.include | *
management.endpoints.web.exposure.exclude |
management.endpoints.web.exposure.include | info, health
The include property lists the IDs of the endpoints that are exposed. The exclude property lists the IDs of the endpoints that should not be exposed. The exclude property takes precedence over the include property. Both include and exclude properties can be configured with a list of endpoint IDs.
For example, to stop exposing all endpoints over JMX and only expose the health and info endpoints, use the following property:
management.endpoints.jmx.exposure.include=health,info
* can be used to select all endpoints. For example, to expose everything over HTTP except the env and beans endpoints, use the following properties:
management.endpoints.web.exposure.include=*
management.endpoints.web.exposure.exclude=env,beans
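Applied to the question above: for the ServiceMonitor to find anything at /actuator/prometheus, the prometheus endpoint has to be added to the web exposure list (and the micrometer-registry-prometheus dependency has to be on the classpath). A minimal sketch in application.yml form, equivalent to the properties shown above:
# application.yml (sketch) - expose health, info and prometheus over HTTP
management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus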

HAProxy with Kubernetes in a DR setup

We have a Kubernetes setup hosted on premises and are trying to allow clients outside of K8s to connect to services hosted in the K8s cluster.
In order to make this work using HAProxy (which runs outside K8s), we have the HAProxy backend configuration as follows:
backend vault-backend
    ...
    ...
    server k8s-worker-1 worker1:32200 check
    server k8s-worker-2 worker2:32200 check
    server k8s-worker-3 worker3:32200 check
Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).
We came across the HAProxy Ingress Controller (https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/) which sounds promising, but (we feel) it effectively adds another HAProxy layer to the mix... and thus adds another failure point.
Is there a better solution to implement this requirement?
Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).
You can explicitly configure the NodePort for your Kubernetes Service so it doesn't pick a random port and you always use the same port on your external HAProxy:
apiVersion: v1
kind: Service
metadata:
  name: <my-nodeport-service>
  labels:
    <my-label-key>: <my-label-value>
spec:
  selector:
    <my-selector-key>: <my-selector-value>
  type: NodePort
  ports:
    - port: <service-port>
      nodePort: 32200
We came across the HAProxy Ingress Controller (https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/) which sounds promising, but (we feel) it effectively adds another HAProxy layer to the mix... and thus adds another failure point.
You could run the HAProxy ingress controller inside the cluster and remove the HAProxy outside the cluster, but this really depends on what type of service you are running; the Kubernetes Ingress is a Layer 7 resource, for example. DR here would be handled by having multiple replicas of your HAProxy ingress controller.
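As an illustration only (none of these names come from the question), an Ingress routed through an in-cluster HAProxy ingress controller could look roughly like this, with vault-service, port 8200 and vault.example.com as assumed placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ingress                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: haproxy # handled by the in-cluster HAProxy ingress controller
spec:
  rules:
  - host: vault.example.com              # assumed external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vault-service          # assumed ClusterIP Service fronting Vault
            port:
              number: 8200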

Understanding subnetting in Kubernetes cluster

When using GKE, I found that all the nodes in the Kubernetes cluster must be in the same network and the same subnet. So, I wanted to understand the correct way to design the networking.
I have two services, A and B, and there is no relation between them. My plan was to use a single cluster in a single region and have two nodes for each of the services A and B, in different subnets in the same network.
However, it seems that can't be done. The other way to partition a cluster is using namespaces; however, I am already using namespaces to partition development environments.
I read about cluster federation (https://kubernetes.io/docs/concepts/cluster-administration/federation/); however, my services are small and I don't need them in multiple clusters and kept in sync.
What is the correct way to set up networking for these services? Should I just use the same network and subnet for all 4 nodes that serve the two services A and B?
You can restrict the incoming (or outgoing) traffic by making use of labels and network policies.
In this way the pods will be able to receive traffic only if it was generated by a pod belonging to the same application, or according to any other logic you want to implement.
You can follow this step-by-step tutorial that guides you through the implementation of a POC.
kubectl run hello-web --labels app=hello \
--image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
Example of a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
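With this policy applied, only pods carrying the app: foo label can reach the hello pods; all other ingress traffic to them is denied. One way to verify that is a throwaway pod with that label (a sketch; the pod name and image are assumptions, not part of the tutorial):
apiVersion: v1
kind: Pod
metadata:
  name: foo-test               # hypothetical test pod
  labels:
    app: foo                   # label allowed by the NetworkPolicy above
spec:
  containers:
  - name: curl
    image: curlimages/curl     # assumed image that ships curl
    command: ["sleep", "3600"] # keep the pod running so you can exec into it
From inside this pod, requests to the hello-web service on port 8080 should succeed, while pods without the app: foo label should be blocked.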