How to give service name and port in ConfigMap YAML? - kubernetes

I have a service (ClusterIP) like the following, which exposes the ports of a backend pod.
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
  - name: s-port
    port: 9780
  - name: b-port
    port: 8780
The frontend pod should be able to connect to the backend pod using the service. Should we replace the hostname with the service name to connect from the frontend pod to the backend pod?
I have to supply the service name and port through environment variables to the frontend pod's container.
The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to?
How do I reference the service given that it is created inside a namespace?
Thanks

Check out this documentation. The host and port of each active service are indeed passed into containers as environment variables by default.
As the documentation also says, it is possible (and recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from another namespace resolves to the correct service route (or just service from inside the same namespace). This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes; use the available tools if at all possible!
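To tie this back to the concrete question: a minimal sketch of a ConfigMap carrying the service's DNS name and port, consumed by the frontend container via envFrom. The key names FSIMULATOR_HOST/FSIMULATOR_PORT, the ConfigMap name, and the image are illustrative assumptions, not from the original post:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fsimulator-config   # hypothetical name
  namespace: myns
data:
  # "fsimulator" alone resolves from inside myns; the FQDN works from any namespace
  FSIMULATOR_HOST: fsimulator.myns.svc.cluster.local
  FSIMULATOR_PORT: "9780"
---
# In the frontend pod/deployment spec:
spec:
  containers:
  - name: frontend
    image: example/frontend:latest   # placeholder image
    envFrom:
    - configMapRef:
        name: fsimulator-config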

Related

Kubernetes open port to server on same subnet

I am launching a service in a Kubernetes pod that I would like to be available to servers on the same subnet only.
I have created a service with a LoadBalancer opening the desired ports. I can connect to these ports through other pods on the cluster, but I cannot connect from virtual machines I have running on the same subnet.
So far my best solution has been to assign a loadBalancerIP and restrict it with loadBalancerSourceRanges; however, this still feels too public.
The virtual machines I am attempting to connect to my service are ephemeral and have a wide range of public IPs assigned, so my loadBalancerSourceRanges feels too broad.
My understanding was that I could connect to the internal LoadBalancer cluster-ip from servers that were on that same subnet; however, this does not seem to be the case.
Is there another solution to limit this service to connections from internal IPs that I am missing?
This is all running on GKE.
Any help would be really appreciated.
I think you are partly right here, but I am not sure why you mentioned the cluster IP:
My understanding was that I could connect to the internal LoadBalancer cluster-ip from servers that were on that same subnet; however, this does not seem to be the case.
Now, if you have a deployment running on GKE and have exposed it with a service of type LoadBalancer configured as an internal LB, you will be able to access the internal LB from anywhere in the same VPC.
apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: internal-svc
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080
Once your changes are applied, check the status using:
kubectl get service internal-svc --output yaml
In the YAML output, check the last section for:
status:
  loadBalancer:
    ingress:
    - ip: 10.127.40.241
That is the actual IP you can use to connect to the service from other VMs in the subnet.
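For example, from a VM in the same VPC you could then reach the workload behind the service (assuming it speaks HTTP on the exposed port):
curl http://10.127.40.241:8080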
Doc ref
To restrict the service so it is only reachable from servers on the same subnet, you can use a Network Policy (note that your cluster must have network policy enforcement enabled; on GKE this is a cluster option).
First, create a Network Policy that specifies the source IP range allowed to reach your pods. To do this, create a YAML file containing the following:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-subnet-traffic
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: <subnet-range-cidr>
    ports:
    - protocol: TCP
      port: <port-number>
Replace the <subnet-range-cidr> and <port-number> placeholders with the relevant IP address range and port number. Once this YAML file is created, you can apply it to the cluster with the following command:
kubectl apply -f path-to-yaml-file
A word of caution on combining this with Service Accounts: Service Accounts and RBAC govern access to the Kubernetes API, not network traffic to your application, so they cannot be used to authenticate incoming requests to the service. Creating a Role or ClusterRole with access to networkpolicies resources and binding it to a Service Account only allows that account to read or manage NetworkPolicy objects through the API:
kubectl create role <role_name> --verb=get --resource=networkpolicies
kubectl create clusterrole <clusterrole_name> --verb=get --resource=networkpolicies
kubectl create rolebinding <rolebinding_name> --role=<role_name> --serviceaccount=<namespace>:<service_account_name>
kubectl create clusterrolebinding <clusterrolebinding_name> --clusterrole=<clusterrole_name> --serviceaccount=<namespace>:<service_account_name>
Binding these roles does not cause the network policy to be applied to the Service Account's pods; the NetworkPolicy above is what limits which source IPs can reach them. If you need request-level authentication on top of the IP restriction, implement it in the application itself or in a proxy in front of it.
For more info follow this documentation.

How to access an ExternalName service in Kubernetes using minikube?

I understand that a service of type ExternalName will point to a specified deployment that is not exposed externally, using the specified external name as the DNS name. I am using minikube on my local machine with the Docker driver. I created a deployment using a custom image. When I created a service of the default type (ClusterIP) and of type LoadBalancer for that deployment, I was able to access it after port-forwarding to the local IP address. This was possible for a service of type ExternalName too, but it was accessible using the IP address and not the specified external name.
According to my understanding, a service of type ExternalName should be accessed using the specified external name. But I wasn't able to do it. Can anyone say how to access an ExternalName service, and whether my understanding is correct?
This is the externalName.yaml file I used.
apiVersion: v1
kind: Service
metadata:
  name: k8s-hello-test
spec:
  selector:
    app: k8s-yaml-hello
  ports:
  - port: 3000
    targetPort: 3000
  type: ExternalName
  externalName: k8s-hello-test.com
After port-forwarding using kubectl port-forward service/k8s-hello-test 3000:3000, the deployment was accessible at http://127.0.0.1:3000.
But even after adding it to the /etc/hosts file, it cannot be accessed using http://k8s-hello-test.com.
According to my understanding service of type ExternalName should be
accessed when using the specified external name. But I wasn't able to
do it. Can anyone say how to access it using its external name?
You are wrong there: an ExternalName service is for making external connections, not for exposing a deployment. Suppose you are using a third-party geolocation API like https://findmeip.com; then you can leverage an ExternalName service.
An ExternalName Service is a special case of Service that does not
have selectors and uses DNS names instead. For more information, see
the ExternalName section later in this document.
For example
apiVersion: v1
kind: Service
metadata:
  name: geolocation-service
spec:
  type: ExternalName
  externalName: api.findmeip.com
So your application can connect to geolocation-service, which resolves (via a DNS CNAME record) to the external DNS name specified in the service.
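To verify the mapping, you can resolve the service name from inside the cluster, for example with a throwaway busybox pod (the pod name dns-test is just an illustration); the lookup should return a CNAME pointing at api.findmeip.com:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup geolocation-service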
Since an ExternalName service has no selectors, you cannot use port-forwarding with it: port-forwarding connects to a pod and forwards requests to it.
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#externalname

Kubernetes: is there any way to get headless service endpoint info in container environment vars

I used Cloud Foundry a lot previously; when an app is bound to a service, all of the service's connection info is injected into the app's environment variables. In the Kubernetes world, I think this is the same for a normal service.
In my case, I am trying to use a headless service to describe an external PostgreSQL instance, using the service YAML below.
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "postgresql"
spec:
  clusterIP: None
  ports:
  - protocol: "TCP"
    port: 5432
    targetPort: 5432
    nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "postgresql"
subsets:
- addresses:
  - ip: "10.29.0.123"
  ports:
  - port: 5432
After deploying the headless service to the cluster, the container does not have any environment variables for it; I guess that is because clusterIP is None.
The apps can reach it at postgresql:5432 via DNS, but I wonder why Kubernetes does not inject the headless service and its endpoints into the app's environment variables, so the app can get both the IP and the port from them.
Is there any way to do so?
Thanks!
kube-proxy does not manage headless services; a request made to such a service resolves directly to its endpoints.
Kubernetes does not expose these endpoints through environment variables (cf. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
To pass the IP of your PostgreSQL DB, you will have to add an environment variable to your deployment, like this:
env:
- name: POSTGRESQL_ADDR
  value: "10.29.0.123:5432"
I found the answer to my question. For a headless service, the service info is not exposed in the pod's environment variables. If you want the service info to be available as environment variables, use a service without selectors instead: simply remove the clusterIP: None line.
The client pod can then use both DNS and environment variables for external service discovery.
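For illustration: once clusterIP: None is removed, the service above is assigned a cluster IP, and Kubernetes injects Docker-links-style variables for it. A sketch of what a pod would then see (the actual IP is assigned by the cluster):
POSTGRESQL_SERVICE_HOST=<assigned-cluster-ip>
POSTGRESQL_SERVICE_PORT=5432
POSTGRESQL_PORT=tcp://<assigned-cluster-ip>:5432
POSTGRESQL_PORT_5432_TCP_ADDR=<assigned-cluster-ip>
POSTGRESQL_PORT_5432_TCP_PORT=5432
Note that these variables only appear in pods started after the service exists.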

How to get IP/address of service in k8s

I would like to create service A (a Redis instance) and service B (an application).
The application would like to use service A (Redis).
How can I automatically get the address/URL of service A inside the k8s cluster, without exposing it to the internet?
Something like:
redis://service-a-url:6379
I don't know which Kubernetes technique I should use.
So for example your redis service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    run: redis
The service is of type ClusterIP (if you do not specify a service type in the YAML file, it defaults to ClusterIP), so the service is not accessible from outside the cluster. There are more types of service; find information here: services-kubernetes.
Take a look: connecting-app-service, app-service-redis.
Kubernetes supports two modes of finding a Service - environment variables and DNS.
Kubernetes has a specific DNS cluster addon Service that automatically assigns DNS names to other Services.
Every single Service created in the cluster has its own assigned DNS name. A client Pod's DNS search list will include the Pod's own namespace and the cluster's default domain by default. This is best illustrated by example:
Assume a Service named example in the Kubernetes namespace ns. A Pod running in namespace ns can look up this service by simply doing a DNS query for example. A Pod running in namespace test can look up this service by doing a DNS query for example.ns.
Find more here: Kubernetes DNS-Based Service Discovery, dns-name-service.
You will be able to access your service within the cluster using a DNS record of the following form:
<service>.<ns>.svc.<zone>
For example: redis.default.svc.cluster.local
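The application can then carry the Redis URL built from that DNS name; a minimal sketch of the relevant part of service B's deployment (the variable name REDIS_URL and the default namespace are assumptions):
# In service B's container spec:
env:
- name: REDIS_URL
  value: "redis://redis.default.svc.cluster.local:6379"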

Assign public IP to the service which is running on Kubernetes

I have an application (let's say Integration-protocol-api), and this application wants to talk to another application, but that application is located on another network (let's call it Another-Integration-Protocol).
The problem is that on the Another-Integration-Protocol side a whitelist exists, which allows connections only from selected IP addresses.
But my Integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod.
How can I assign a public and static IP to my Kubernetes pod?
But my Integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod. How can I assign a public and static IP to my Kubernetes pod?
There are several approaches, depending on your actual setup/needs, and I'll try to give some options here:
Tie the pod to a specific node and expose that node's IP address through a service. That would be something along these lines:
# Quick deployment/pod manifest node selector (affinity is better)
...
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node-name
...

# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: svc-myservice
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - name: tcpserviceport
    protocol: TCP
    port: 8080
    targetPort: 80
  externalIPs:
  - 111.222.222.111
The pod should be in the same namespace, tied to that node via either a node selector or affinity rules, and carry the same labels as in the service's selector so the service picks it up. In this example, the IP address of the cluster node named my-node-name should be 111.222.222.111, and the service would be reachable at that IP address on port 8080.
If applicable, expose the service through an ingress and whitelist only the ingress's public IP. Depending on your namespace separation, you'll reference your pod (wherever it might run) in the ingress through the corresponding service, using either the service name (within the same namespace) or an FQDN such as the following (a minimal ingress sketch follows):
<service-name>.<namespace-name>.svc.cluster.local
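For illustration, a minimal ingress routing to the service from the first option (the host api.example.com and the NGINX ingress class are assumptions, not part of the original answer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress   # hypothetical name
spec:
  ingressClassName: nginx   # assumes an NGINX ingress controller is installed
  rules:
  - host: api.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-myservice
            port:
              number: 8080
The other side then only needs to whitelist the ingress controller's public IP, which stays stable across pod restarts.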
Here is a good overview of some methods from the Kubernetes docs, to make it more illustrative: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/
A pod's outbound requests leave the cluster with its node's IP address as the source. So you could whitelist your cluster nodes' IP addresses, and it should work.
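To find the node addresses to whitelist, you can list them with:
kubectl get nodes -o wide
The EXTERNAL-IP column shows each node's public address. Keep in mind that node IPs can change when nodes are replaced (for example during autoscaling or upgrades), so a stable egress IP solution may still be preferable.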