Access to service IP from a pod - Kubernetes

I have a pod running MySQL and a service to provide access from outside, so I can connect to my database at 192.168.1.29:3306 from another machine.
But how can I connect from another pod in the same cluster (on the same node)?
This is my service description:
Name: etl-mysql
Namespace: default
Labels: run=etl-mysql
Annotations: field.cattle.io/publicEndpoints=[{"addresses":["192.168.1.20"],"port":31211,"protocol":"TCP","serviceName":"default:etl-mysql","allNodes":true}]
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"etl-mysql"},"name":"etl-mysql","namespace":"default"},"spec":{"extern...
Selector: run=etl-mysql
Type: NodePort
IP: 10.43.44.58
External IPs: 192.168.1.29
Port: etl-mysql-port 3306/TCP
TargetPort: 3306/TCP
NodePort: etl-mysql-port 31211/TCP
Endpoints: 10.42.1.87:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Kubernetes has a built-in DNS that registers services automatically, resulting in simple-to-use DNS addresses of the form {servicename}.{namespace}:{servicePort}.
If you are in the same namespace you can omit the namespace part, and if your service listens on port 80 that part can be omitted as well.
If you need further information, the following documentation will help you: DNS for Services and Pods
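For example, another pod or deployment in the default namespace could reach this MySQL service through its DNS name instead of the node or external IP. A minimal sketch, assuming a hypothetical client workload and image name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: etl-client                 # hypothetical client workload
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etl-client
  template:
    metadata:
      labels:
        app: etl-client
    spec:
      containers:
      - name: app
        image: my-etl-app:latest   # assumed image
        env:
        # Cluster DNS resolves the Service name; the full form is
        # etl-mysql.default.svc.cluster.local, the short form below also works.
        - name: DB_HOST
          value: etl-mysql.default
        - name: DB_PORT
          value: "3306"

Connections made to DB_HOST:DB_PORT from inside this pod go to the Service's cluster IP (10.43.44.58) and are forwarded to the MySQL endpoint.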

Related

How to achieve the functionality of a headless service + port translation of the FQDNs created by that headless service in Kubernetes

What's the best way to get the functionality of a headless service plus port translation of the FQDNs that are created by that headless service?
I have created this headless service where the selector is a StatefulSet with replicas=3, basically creating the headless service to get the FQDN functionality. But I also want port translation for these FQDNs: for example, when some internal microservice uses :443 it should be routed to :7443 on the pod. What's the best way to achieve that?
Name: sbijendra-service
Namespace: avictrl-aviproject-aviclusterinstaller-system
Labels: app.kubernetes.io/instance=sbijendra
Annotations: <none>
Selector: app.kubernetes.io/instance=sbijendra-sts,app.kubernetes.io/kind=sbijendra-statefulset,app.kubernetes.io/name=sbijendra
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: https 443/TCP
TargetPort: 7443/TCP
Endpoints: 100.109.187.135:7443,100.111.94.144:7443,100.127.8.17:7443
Session Affinity: None
Events: <none>
I added the port and target port in the headless service itself, but it looks like that is not going to work.
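For reference, a minimal manifest sketch of the headless Service described above, reconstructed from the describe output (treat it as a sketch of the setup in question):

apiVersion: v1
kind: Service
metadata:
  name: sbijendra-service
  namespace: avictrl-aviproject-aviclusterinstaller-system
  labels:
    app.kubernetes.io/instance: sbijendra
spec:
  clusterIP: None                  # headless: DNS returns the pod IPs directly
  selector:
    app.kubernetes.io/instance: sbijendra-sts
    app.kubernetes.io/kind: sbijendra-statefulset
    app.kubernetes.io/name: sbijendra
  ports:
  - name: https
    port: 443                      # port clients are expected to use
    targetPort: 7443               # port the pods actually listen on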

Kubernetes service responding on different port than assigned port

I've deployed a few services and found one service behaving differently from the others. I configured it to listen on port 8090 (which maps to 8443 internally), but the request only works if I send it to port 8080. Here's my YAML file for the service (stripped down to the essentials); there is also a deployment which encapsulates the service and container.
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8090
    targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc
After installing the Helm chart, when I run kubectl get svc I get the following output:
fooaccess ClusterIP None <none> 8888/TCP 119m
fooset ClusterIP None <none> 8080/TCP 119m
foobus ClusterIP None <none> 6379/TCP 119m
uisvc ClusterIP None <none> 8090/TCP 119m
However, when I ssh into one of the other running containers and issue a curl request on port 8090, I get "Connection refused". If I curl http://uisvc:8080, I get the right response. The container is running a Spring Boot application, which by default listens on 8080. The only explanation I could come up with is that somehow the port/targetPort is being ignored in this config and other pods are reaching the Spring service inside directly.
Is this behaviour correct? Why is it not listening on 8090? How should I make it work this way?
Edit: Output for kubectl describe svc uisvc
Name: uisvc
Namespace: default
Labels: app.kubernetes.io/instance=foo-rba
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rba
helm.sh/chart=rba-1
Annotations: meta.helm.sh/release-name: foo
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=uisvc
Type: ClusterIP
IP: None
Port: http 8090/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.8:8080
Session Affinity: None
Events: <none>
This is expected behavior, since you used a headless service.
Headless services are used as a service-discovery mechanism, so instead of returning a single DNS A record, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod that backs the service. So you do a simple DNS A record lookup and get the IPs of all of the pods that are part of the service.
Since a headless service doesn't create iptables rules but creates DNS records instead, you interact directly with your pod instead of going through a proxy. So if you resolve <servicename>:<port> you will get <podN_IP>:<port>, and your connection goes to the pod directly. As long as all of this is in the same namespace, you don't have to resolve it by its full DNS name.
With several pods, DNS will give you all of them in random order (or in round-robin order); the order depends on the DNS server implementation and settings.
For more reading please visit:
Services-networking/headless-services
This Stack Overflow question with a great answer explaining how headless services work
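If the 8090 → 8080 translation is actually needed, one possible fix (a sketch, not the only option) is to drop clusterIP: None so the Service gets a regular cluster IP and kube-proxy performs the port mapping:

apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
spec:
  # omitting clusterIP: None makes this a regular ClusterIP Service
  selector:
    app.kubernetes.io/component: uisvc
  ports:
  - name: http
    port: 8090        # port exposed by the Service
    targetPort: 8080  # port the Spring Boot container listens on

With this in place, curl http://uisvc:8090 from another pod would be proxied to the pod on 8080; the trade-off is that you lose the per-pod DNS A records that the headless service provides.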

EKS ELB: odd instances in the list

I've configured an application to run on 2 EC2 instances, with a Kubernetes service of type LoadBalancer for this application (Selector: app=some-app). I also have 10+ instances running in the EKS cluster. According to the service output, everything is OK:
Name: some-app
Namespace: default
Labels: app=some-app
Annotations: external-dns.alpha.kubernetes.io/hostname: some-domain
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: app=some-app
Type: LoadBalancer
IP: 172.20.206.150
LoadBalancer Ingress: internal-blablabla.eu-west-1.elb.amazonaws.com
Port: default 80/TCP
TargetPort: 80/TCP
NodePort: default 30633/TCP
Endpoints: 10.30.21.238:80,10.30.22.38:80
Port: admin 80/TCP
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But when I check the AWS console I see that all 10+ instances are included in the ELB. (If I use an Application Load Balancer, only 2 instances are present.)
Is there any configuration to remove the extra instances?
That's the default behaviour for the ELB/NLB: traffic can hit any instance, and kube-proxy will then redirect it to an instance where your pods are running.
If you're using the ALB ingress controller, then again it's standard behaviour: it only adds the instances where your pods are running, skipping the iptables mumbo jumbo ;)
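If you want the classic ELB/NLB to forward traffic only to the nodes that actually run the pods, a common approach is to set externalTrafficPolicy: Local on the Service; a sketch based on the Service shown above:

apiVersion: v1
kind: Service
metadata:
  name: some-app
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: some-app
  externalTrafficPolicy: Local    # nodes without a matching pod fail the load balancer health check
  ports:
  - name: default
    port: 80
    targetPort: 80

All instances may still be registered with the ELB, but only the nodes running the pods pass its health check; this setting also preserves the client source IP.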

Expose deployment via service type NodePort on DigitalOcean Kubernetes

I'm implementing a solution in Kubernetes for several clients, and I want to monitor my cluster with Prometheus. However, because this can scale quickly and I want to reduce costs, I will use Prometheus federation to scrape the different Kubernetes clusters, which means I need to expose my Prometheus deployment.
I already have this working with a service of type LoadBalancer exposing my Prometheus deployment, but this approach adds an extra expense to my infrastructure (a DigitalOcean load balancer).
Is it possible to do this with a service of type NodePort, exposing a port on my cluster IP, something like this:
XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090
where I can use this URL for my master Prometheus to scrape all the "slave" Prometheus instances?
I already tried, but I can't reach the port on my cluster; something is blocking it. I even deleted my firewall to ensure that nothing interferes with this implementation, but that didn't help.
This is my service:
Name: my-nodeport-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nodeport-service","namespace":"default"},"spec":{"ports":[{"na...
Selector: app=nginx
Type: NodePort
IP: 10.245.162.125
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30800/TCP
Endpoints: 10.244.2.220:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Can anybody help me please?
You can then set up the host XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090 to act as your load balancer with Nginx.
Try setting up an Nginx TCP load balancer.
Note: you will be using the Nginx stream module, and if you want to use open-source Nginx rather than Nginx Plus, you might have to compile your own Nginx with the --with-stream option.
Example config file:
events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        server dhcp-180.example.com:446;
        server dhcp-185.example.com:446;
        server dhcp-186.example.com:446;
        server dhcp-187.example.com:446;
    }

    server {
        listen 446;
        proxy_pass stream_backend;
    }
}
After running Nginx you can test it: the host lb.example.com acts as the load balancer, and because Nginx is configured to use round-robin in this example, every new connection ends up on a different host/container.
Note: the container hostname is the same as the node hostname; this is due to hostNetwork.
There are some drawbacks to this solution:
defining hostNetwork reserves the host's port(s) for all the containers running in the pod
with a single load balancer you have a single point of failure
every time a node is added to or removed from the cluster, the load balancer has to be updated
This way, one can set up a Kubernetes cluster to route ingress and egress TCP connections from/to outside of the cluster.
Useful post: load-balancer-tcp.
NodePort documentation: nodePort.
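For completeness, the NodePort Service for the Prometheus case from the question could look like the sketch below (the name, namespace, labels and the fixed nodePort value are assumptions). The federating "master" Prometheus, or the Nginx TCP load balancer above, would then target <node-ip>:30900 on any node:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport       # assumed name
  namespace: monitoring           # assumed namespace
spec:
  type: NodePort
  selector:
    app: prometheus               # assumed pod label
  ports:
  - name: web
    port: 9090
    targetPort: 9090
    nodePort: 30900               # must be in the default 30000-32767 NodePort range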

Kubernetes, Cannot access exposed services

Kubernetes version:
v1.10.3
Docker version:
17.03.2-ce
Operating system and kernel:
Centos 7
Steps to Reproduce:
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
Results:
[root@rd07 rd]# kubectl describe services example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations:
Selector: run=load-balancer-example
Type: NodePort
IP: 10.108.214.162
Port: 9090/TCP
TargetPort: 9090/TCP
NodePort: 31105/TCP
Endpoints: 192.168.1.23:9090,192.168.1.24:9090
Session Affinity: None
External Traffic Policy: Cluster
Events:
Expected:
To be able to curl the cluster IP defined in the Kubernetes service.
I'm not exactly sure which address is the so-called "public node IP", so I tried every related IP address; only when using the master IP as the "public node IP" does it show "No route to host".
I used netstat to check whether the port is being listened on.
I followed "https://github.com/rancher/rancher/issues/6139" to flush my iptables, and it did not help at all.
I tried "https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/"; "nslookup hostnames.default" is not working.
The service seems to be working perfectly fine, but it still cannot be accessed.
I'm using Calico, and Flannel was also tried.
I have tried many tutorials for exposing services, and none of them could be accessed.
I'm new to Kubernetes, so I'd appreciate it if anyone could help me.
If you are on a public cloud, you are not supposed to see the public IP address in the output of the ip a command, but the port will still be exposed on 0.0.0.0:31105.
Here is a sample file you can check your configuration against:
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: app-name
  name: bss
  namespace: default
spec:
  externalIPs:
  - 172.16.2.2
  - 172.16.2.3
  - 172.16.2.4
  externalTrafficPolicy: Cluster
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: bss
  sessionAffinity: ClientIP
  type: LoadBalancer
status:
  loadBalancer: {}
Just replace <private-ip> under externalIPs: with your own addresses and curl your public IP on your node port.
If you are deploying the application on a cloud provider, also verify in the cloud security groups/firewall that the port is open.
Hope this may help.
Thank you!
My k8s cluster has 1 master and 1 node.
The service's pod is running on the node.
So I used http://nodeip:31105, and it shows "Hello Kubernetes!".
But http://masterip:31105 is still not working; is that supposed to be the case?
I checked the listening ports, and 31105 is listened on by the master as well.