I have a k8s cluster in Google Cloud with frontend/api containers there.
In addition to that, I have an ElasticSearch cluster (es1, es2) in Google Cloud, but not under k8s.
I can access the api container by the name 'api' from the 'frontend' container.
I can access es1 from es2 by the name 'es1'.
What is the right way to access the es cluster by names like es1, es2 from the api/frontend containers?
Thanks!
If you want to refer to resources outside of your Kubernetes cluster as if they are inside the cluster, the simplest option is probably to assign each ElasticSearch instance a static IP, and then specify a Kubernetes Service with manual Endpoints:
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
You can add an Endpoint entry for each es instance you have. Now your internal cluster DNS will resolve and load-balance for your es instances as if they were pods in the cluster.
You create endpoints like this (note the lack of selector on the Service):
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 9376
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Now you can access the service using my-service.my-namespace.svc.cluster.local as normal.
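To get the per-node names asked about in the question (es1, es2), you can repeat this pattern once per ElasticSearch node, giving each Service/Endpoints pair the node's name. A minimal sketch, assuming the standard ElasticSearch HTTP port 9200 and a placeholder static internal IP:
kind: Service
apiVersion: v1
metadata:
  name: es1
spec:
  ports:
    - protocol: TCP
      port: 9200
      targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: es1
subsets:
  - addresses:
      - ip: 10.128.0.11   # placeholder: the static internal IP assigned to es1
    ports:
      - port: 9200
Create an identical pair named es2 with its own IP, and pods in the same namespace can then reach http://es1:9200 and http://es2:9200, which is exactly what the question asks for.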
Create the Kubernetes cluster (GKE) in the same network/subnetwork as your instances; then you can reach the internal IPs of your GCE instances from your k8s pods.
Related
In my Kubernetes cluster setup, I have a Greenplum DB cluster (one master and 8 segment nodes) with a LoadBalancer service. Please refer to the service config below.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: greenplum
    greenplum-cluster: greenplum-cluster
  name: greenplum
spec:
  clusterIP: 10.101.251.127
  clusterIPs:
  - 10.101.251.127
  externalIPs:
  - 11.4.8.141
  externalTrafficPolicy: Cluster
  healthCheckNodePort: 32572
  ports:
  - name: psql
    nodePort: 32198
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    statefulset.kubernetes.io/pod-name: master-0
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
However, a few hours after deployment, the externalTrafficPolicy value gets set to Local instead of Cluster, which makes the service inaccessible via the defined external IP. Is there any reason for this? It changes automatically even after I edit the service configuration.
Or is there any other way to access this Greenplum DB (TCP 5432) such as ingress?
Can you provide more details on your Kubernetes cluster setup? Is this bare-metal, cloud managed, kubeadm?
It would also be useful to see the logs and some traces from when you try to connect to the DB.
The only difference between externalTrafficPolicy: Cluster and Local is that Cluster balances the traffic across all matching pods in the cluster, giving a nice distribution between your pods, while Local only distributes the traffic between the pods running on the node where the request lands from the LB.
Additionally, using Local preserves the client IP of the request, while Cluster does not.
There is a nice video that explains this in detail here.
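For reference, externalTrafficPolicy is just a field on the Service spec, so if something keeps rewriting it you can reassert the desired value declaratively and re-apply the manifest. A trimmed sketch using the service name from the question:
apiVersion: v1
kind: Service
metadata:
  name: greenplum
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # Cluster: traffic spread across all nodes, client IP not preserved
  ports:
  - name: psql
    port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: master-0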
I used Cloud Foundry a lot previously; when an app is bound to a service, all the service connection info is injected into the app's environment variables. In the Kubernetes world, I think this is the same for a normal service.
In my case, I am trying to use a headless service to describe an external PostgreSQL with the service YAML below.
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "postgresql"
spec:
  clusterIP: None
  ports:
    - protocol: "TCP"
      port: 5432
      targetPort: 5432
      nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "postgresql"
subsets:
  - addresses:
      - ip: "10.29.0.123"
    ports:
      - port: 5432
After deploying the headless service to the cluster, the container does not have any environment variables for it; I guess that is because clusterIP is None.
The apps can reach it via DNS at postgresql:5432, but I wonder why Kubernetes does not inject the headless service and its endpoints into the app's environment variables, so the app could get both the IP and port from them.
Is there any way to do so?
Thanks!
Kube-proxy does not manage headless Services; a request made to such a service is simply forwarded straight to the endpoint.
Kubernetes does not really acknowledge these endpoints (cf. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
To pass the IP of your PostgreSQL DB, you will have to add an environment variable to your deployment, like this:
env:
  - name: POSTGRESQL_ADDR
    value: "10.29.0.123:5432"
I found the answer to the question. For a headless service, the service info will not show up in the pod's environment variables. If you want the service info to be available as environment variables, you need to use a service without selectors but with a cluster IP: simply remove the "clusterIP: None" line.
The client pod can then use both DNS and environment variables for external service discovery.
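For illustration, a sketch of the non-headless variant with the same names as the question; once the Service has a cluster IP, pods created after it will see environment variables derived from the service name (this naming convention is standard Kubernetes behaviour):
kind: Service
apiVersion: v1
metadata:
  name: postgresql
spec:
  # no selector and no "clusterIP: None": Kubernetes allocates a virtual cluster IP
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
# The Endpoints object from the question stays as it is. Pods created afterwards get:
#   POSTGRESQL_SERVICE_HOST=<allocated cluster IP>
#   POSTGRESQL_SERVICE_PORT=5432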
I am trying to set up access to an external MySQL database that runs as a MariaDB PXC three-node cluster.
Let's say my external database nodes have these IP addresses:
172.16.10.100
172.16.10.101
172.16.10.102
In case one of those nodes goes down, I want Kubernetes to automatically route traffic only to the two available nodes.
If I create a simple Service and Endpoints in Kubernetes (shown below), does it automatically do a failover?
#
# Service
#
kind: Service
apiVersion: v1
metadata:
  name: mariadb-service
spec:
  clusterIP: None
  sessionAffinity: None
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
---
#
# Endpoints
#
kind: Endpoints
apiVersion: v1
metadata:
  name: mariadb-service
subsets:
- addresses:
  - ip: 172.16.10.100
  - ip: 172.16.10.101
  - ip: 172.16.10.102
  ports:
  - port: 3306
    protocol: TCP
Please note this:
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
I found an option to have a LoadBalancer with an on-premises Kubernetes cluster: MetalLB.
MetalLB aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.
See the requirements and choose the installation option you prefer.
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter. MetalLB also supports requesting a specific address pool, if you want a certain kind of address but don’t care which one exactly. To request assignment from a specific pool, add the metallb.universe.tf/address-pool annotation to your service, with the name of the address pool as the annotation value. For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
And the address pool referenced by the annotation is defined in MetalLB's configuration. Note that MetalLB pools use a protocol of layer2 or bgp and plain address ranges; they do not carry TCP ports. Something like:
address-pools:
- name: production-public-ips
  protocol: layer2
  addresses:
  - 172.16.10.100-172.16.10.102
Find a full usage example here.
However, there is no health check implemented, and that is the main thing you need.
There is a good example issue on GitHub, and this thread explains why health checks are not implemented for Kubernetes CRDs.
If we stick to core Kubernetes concepts, this use case is not doable with manual Endpoints alone, and you would have to look for a custom endpoint controller:
readinessProbe: Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
Please find more information in the official documentation.
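For comparison, this is what the quoted readinessProbe looks like on an in-cluster pod; it is only a sketch with an assumed MariaDB container, and it does not help for Endpoints that point outside the cluster, which is exactly the limitation above:
containers:
- name: mariadb                # hypothetical in-cluster pod, for illustration only
  image: mariadb:10.6
  ports:
  - containerPort: 3306
  readinessProbe:              # a failing probe removes the pod's IP from matching Service endpoints
    tcpSocket:
      port: 3306
    initialDelaySeconds: 10
    periodSeconds: 5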
It is so cool that we have a LoadBalancer in Docker for Mac.
I have a question regarding ports created:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 9999
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer
This gives me (kubectl get service):
nginx LoadBalancer 10.96.128.253 localhost 9999:32455/TCP 2s
What is 32455?
Thanks
32455 is your nodePort. Kubernetes automatically assigns a unique nodePort for any service that is accessible outside of the cluster (including services of type LoadBalancer). You can also specify it yourself in the same config, as long as you pick a value inside the cluster's nodePort range (30000-32767 by default).
With regards to Docker for Mac specifically, Kubernetes is creating a service which is listening on localhost:9999. This is an "egress" that Kubernetes creates since you don't actually have a load balancer; it is essentially simulating one. Beyond the "load balancer/egress", it still behaves just like it would in production, that is, Kubernetes assigns a nodePort for the service. If you curl localhost:32455, you will likely get the same response as if you had curled localhost:9999.
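If you want a predictable port rather than the randomly assigned 32455, you can pin the nodePort yourself; a sketch based on the question's service, where 30080 is just an example value:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 9999        # the port exposed by the (simulated) load balancer, i.e. localhost:9999
    targetPort: 80    # container port inside the nginx pod
    nodePort: 30080   # example; must fall in the cluster's nodePort range (default 30000-32767)
  selector:
    run: nginx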
I use Google Container Engine with Kubernetes.
I have created an HTTPS load balancer which terminates SSL and forwards traffic to the k8s cluster nodes. The problem is I see no option to whitelist/filter incoming IP addresses. Is there one?
It sounds like you've set up a load balancer outside of Kubernetes. You may want to consider using a Kubernetes Service set to type: LoadBalancer. That type of service will give you an external IP that load balances to all of your Pods and can be easily restricted to whitelist IPs using the loadBalancerSourceRanges setting. Here is the example from the docs at https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerIP: 79.78.77.76
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
If you're using the GCE ingress controller, this is not yet possible [1]; only the nginx controller [2] accepts an IP whitelist.
[1] https://github.com/kubernetes/ingress/issues/566
[2] https://github.com/kubernetes/ingress/blob/188c64aaac17ef29400e0f143b9aed7770e32fee/controllers/nginx/configuration.md#whitelist-source-range
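With the nginx controller, the whitelist is applied as an annotation on the Ingress. A sketch assuming a modern controller release (older versions, as in [2], use the ingress.kubernetes.io/ prefix instead; the host and names are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "130.211.204.1/32,130.211.204.2/32"
spec:
  rules:
  - host: myapp.example.com          # placeholder host, for illustration
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp              # the Service from the example above
            port:
              number: 8765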