I have a Service that is backed by a StatefulSet in one Kubernetes cluster and by static IPs in another cluster.
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: statefulset-name
spec:
  ...
I want to use the same hostname to access the service in both clusters. I also need one hostname per endpoint (i.e. per pod and per static IP).
Given that I get <pod>.<statefulset>.<namespace> in the Kubernetes-native case, how can I expose my static IPs under the same hostnames?
A StatefulSet relies on a headless governing Service to give each Pod its DNS identity. Digging into the source code, I found that the pod subdomain maps to the hostname field of the addresses in the Endpoints resource.
Assuming that you have 2 replicas in the first cluster and 2 static IPs in the second one, you need to create the Service and Endpoints like this:
kind: Service
apiVersion: v1
metadata:
  name: same-as-statefulset
spec:
  clusterIP: None
  ports:
  - port: 80
    targetPort: 80
---
kind: Endpoints
apiVersion: v1
metadata:
  name: same-as-statefulset
subsets:
- addresses:
  - ip: 10.192.255.0
    hostname: same-as-pod-0
  - ip: 10.192.255.1
    hostname: same-as-pod-1
  ports:
  - port: 80
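With that in place, each address gets its own DNS record, mirroring what the StatefulSet gives you natively. As a quick verification sketch (assuming the resources live in the default namespace and you run this from a pod that has nslookup available):
nslookup same-as-pod-0.same-as-statefulset.default.svc.cluster.local
nslookup same-as-pod-1.same-as-statefulset.default.svc.cluster.local
Each name should resolve to the corresponding static IP from the Endpoints.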
Related
I have an ingress file where I am forwarding requests to pods using the service name, but I have a scenario where a few requests with path /abc* need to be forwarded to an IP-based URL, say http://10.10.1.1:8080/. How can I do this using Ingress in Kubernetes? I am using AWS EKS as my Kubernetes platform.
You can create a Service backed by manually defined Endpoints for that:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.10.1.1
  ports:
  - port: 8080
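You can then point an Ingress path at that Service like any other. A minimal sketch (the Ingress name is an assumption, and you still need an ingress controller, e.g. the AWS Load Balancer Controller or ingress-nginx, installed on your EKS cluster):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-ingress
spec:
  rules:
  - http:
      paths:
      - path: /abc
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
Requests whose path starts with /abc are forwarded to my-service, whose Endpoints resolve to 10.10.1.1:8080.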
I can't access the public IP assigned by the MetalLB load balancer
I created a Kubernetes cluster in Contabo. It has 1 master and 2 workers. Each one has its own public IP.
I set it up with kubeadm + flannel. Later I installed MetalLB for load balancing.
I used this manifest for installing nginx:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
It works, pods are running. I see the external IP address after:
kubectl get services
From each node/host I can curl that IP and port and I get nginx's:
<h1>Welcome to nginx!</h1>
So far, so good. BUT:
What I'm still missing is access to that service (nginx) from my computer.
I can try to access each node (master + 2 workers) by IP:PORT and nothing happens. The final goal is to have a domain that points to that service, but I can't tell which IP I should use.
What am I missing?
Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server, such as a reverse proxy?
I'm asking this here because all the articles/tutorials on bare metal/VPS (non-AWS, GKE, etc.) do this on a local cluster and skip this basic issue.
Thanks.
I have the very same hardware layout:
a 3-node Kubernetes cluster, here with the 3 IPs:
- 123.223.149.27
- 22.36.211.68
- 192.77.11.164
running on (different) VPS providers (joined to the running cluster via JOIN, of course).
Target: "expose" the nginx via MetalLB, so I can access my web app from outside the cluster in a browser via the IP of one of my VPSes.
Problem: I do not have a "range of IPs" I could declare for MetalLB.
Steps done:
- create one .yaml file for the load balancer, the kindservicetypeloadbalancer.yaml
- create one .yaml file for the ConfigMap, containing the IPs of the 3 nodes, the kindconfigmap.yaml
### start of the kindservicetypeloadbalancer.yaml
### for ensuring a unique name: loadbalancer name nginxloady
apiVersion: v1
kind: Service
metadata:
  name: nginxloady
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Below, the second .yaml file to be added to the cluster:
# start of the kindconfigmap.yaml
## info: the "production-public-ips" can be found
## within the annotations section of the kind: Service / type: LoadBalancer / the kindservicetypeloadbalancer.yaml
## as well... ...namespace: metallb-system & protocol: layer2
## note: as you can see, I added a /32 after each of my node IPs
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
      - 192.77.11.164/32
add the LoadBalancer:
kubectl apply -f kindservicetypeloadbalancer.yaml
add the ConfigMap:
kubectl apply -f kindconfigmap.yaml
Check the status of the pods in the metallb-system namespace ("-n"):
kubectl describe pods -n metallb-system
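Once both files are applied, you can check which IP from the pool MetalLB actually assigned to the Service (a quick check; the Service name nginxloady comes from the manifest above):
kubectl get service nginxloady
The EXTERNAL-IP column should show one of the three node IPs from the address pool; that is the IP to point your browser (or DNS record) at.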
PS: actually, it is all documented here:
https://metallb.universe.tf/installation/
and here:
https://metallb.universe.tf/usage/#requesting-specific-ips
What you are missing is a routing policy.
Your external IP addresses must belong to the same network as your nodes. Alternatively, you can add a route to your external addresses at your default gateway and use a static NAT for each address.
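As an illustration only (both addresses are placeholders, and the exact commands depend on what your gateway runs), a static NAT on a Linux-based gateway could look roughly like this:
# forward traffic arriving for the public IP to the node that should receive it
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.11
# rewrite the node's outgoing traffic so replies come from the public IP
iptables -t nat -A POSTROUTING -s 10.0.0.11 -j SNAT --to-source 203.0.113.10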
I have the following pods: hello-abc and hello-def.
I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find out where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a Service that routes to each Deployment, and assuming you have deployed both Services and Deployments into the same namespace, in most modern Kubernetes clusters you can take advantage of kube-dns and simply refer to the Service by name.
Unfortunately, if kube-dns is not configured in your cluster (although that is unlikely), you cannot refer to it by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your Services as environment variables in any container that is deployed into the same namespace.
Solution
This means that, to reach hello-def, you can do so like this:
curl http://hello-def-service:${HELLO_DEF_SERVICE_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: environment variables are only injected when a pod is created, so if the Service port changes, only pods created after the change in the same namespace will receive the new environment variables.
External Access
In addition, you can reach your service externally since you are using the NodePort type, as long as your NodePort range is accessible from outside.
This requires you to access your service at node-ip:nodePort.
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
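For example, a small sketch (10.0.0.5 is just a placeholder for one of your node IPs):
NODE_PORT=$(kubectl get svc hello-def-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://10.0.0.5:${NODE_PORT}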
Ingress
To reach your service from outside you should implement an ingress controller such as nginx-ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both containers in the same pod using the Kubernetes sidecar pattern. In that case, both containers in the pod share the same virtual network adapter and are accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
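A minimal sketch of that layout (the pod name is an assumption; images, args and ports are taken from the manifests above):
apiVersion: v1
kind: Pod
metadata:
  name: hello-combined
spec:
  containers:
  - name: hello-abc
    image: hello-abc:v0.0.1
    args: ["/hello-abc"]
    ports:
    - containerPort: 5000
  - name: hello-def
    image: hello-def:v0.0.1
    args: ["/hello-def"]
    ports:
    - containerPort: 5001
Inside this pod, hello-abc can reach hello-def at localhost:5001 and hello-def can reach hello-abc at localhost:5000.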
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
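To see the variables the kubelet injected for hello-def-service, you can inspect the environment of a hello-abc pod, e.g. (a sketch, using the deployment name from the manifests above):
kubectl exec deploy/hello-abc-deployment -- env | grep HELLO_DEF_SERVICE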
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be installed and configured before DNS records can be used in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS name hello-def-service from pods running in the same namespace as hello-abc-service.
And you should be able to reach a hello-def-service running in another namespace other_namespace via the DNS name hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you can still find the virtual IP of hello-def-service via environment variables in the hello-abc pods, as is documented here.
I really like the kubernetes Ingress schematics. I currently run ingress-nginx controllers to route traffic into my kubernetes pods.
I would like to use this to also route traffic to 'normal' machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName Service, in which you define a FQDN pointing at an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx rule.
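A rough sketch of such a rule (the host and Ingress name are assumptions, and the backend port must match whatever the external server actually listens on; you may also want to declare that port on the ExternalName Service itself):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: prod
spec:
  rules:
  - host: db.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80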
You can create a static Service and corresponding Endpoints for external services which are not in k8s, and then use that k8s Service in an Ingress to route traffic.
See also the ingress-nginx docs on enabling custom upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs:
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
- addresses:
  - ip: x.x.x.x
  - ip: x.x.x.x
  - ip: x.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
I don't think it's possible, since ingress-nginx gets pod information by watching Namespace, Service, Endpoints and Ingress resources and then redirects traffic to pods; without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs that need load balancing. ingress-nginx also has no health-check method of its own; it is up to Kubernetes' built-in mechanisms to check the health of the running pods.
I have a cluster in AWS and am using Kubernetes.
I have an app running on a machine (VM) in the same network as the cluster.
In my browser I can type http://ipaddress:port/status and I get a response.
In my pod I can ping the IP address and I get a response, but if I do wget http://ipaddress:port/status it doesn't connect.
I have tried some things but have not been able to succeed.
How do I get the pod in the cluster to be able to open this URL? What do I need to do?
You can integrate external services into Kubernetes.
endpoint.yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: external-ip-database
subsets:
- addresses:
  - ip: 192.168.0.1
  ports:
  - port: 3306
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  ports:
  - port: 1433
    targetPort: 1433
    protocol: TCP
---
# Because this service has no selector, the corresponding Endpoints
# object will not be created. You can manually map the service to
# your own specific endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: database
subsets:
- addresses:
  - ip: "192.168.1.103"
  ports:
  - port: 1433
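Once a Service and its Endpoints exist for your VM (substitute your VM's IP and HTTP port for the database values in the example above), the pod can use the Service name instead of the raw IP. A rough sketch, with <your-pod> as a placeholder:
kubectl exec -it <your-pod> -- wget -qO- http://database:1433/status
Here database and 1433 are just the Service name and port from the example; replace them with whatever name and port you mapped to your VM.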