Resolve service IP in environment variable in Kubernetes

When I pass a service name in an environment variable in a YAML file, the value stays a literal string; it is not resolved to a real IP address.
Example:
env:
  - name: ES
    value: elasticsearch
Thanks

You should be able to use it directly and it should resolve fine:
curl $ES
If you use it inside your application, it should work as well.
Just keep in mind that Kubernetes uses its internal DNS, and that the "elasticsearch" name only works inside the same namespace. It actually resolves to:
elasticsearch.<namespace>.svc.cluster.local.
If your Elasticsearch service runs in a different namespace, make sure you use elasticsearch.<target_namespace>.
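As a sketch of what that resolution target looks like, here is a minimal helper that builds the FQDN a short Service name maps to. The helper name, the "logging" namespace, and the default cluster domain are illustrative assumptions, not part of the question:

```python
import os

# Minimal sketch: build the in-cluster FQDN that a short Service name
# (carried in an env var, as in the YAML above) ultimately resolves to.
# The namespace and the default "cluster.local" domain are assumptions.
def service_fqdn(env_var, namespace, cluster_domain="cluster.local"):
    name = os.environ[env_var]
    return f"{name}.{namespace}.svc.{cluster_domain}"

os.environ["ES"] = "elasticsearch"  # simulating the pod's env block above
print(service_fqdn("ES", "logging"))
# elasticsearch.logging.svc.cluster.local
```

Inside a pod you would normally just use the short name or this FQDN directly; the cluster DNS does the actual IP lookup at connection time.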

Related

How can I disable / ignore proxy settings from inside a kubernetes pod only for requests directed to kubernetes services?

I have set these environment variables inside my pod named main_pod.
$ env
HTTP_PROXY=http://myproxy.com
http_proxy=http://myproxy.com
I also have dynamically created pods named sub_pod-{number}, each with a Service attached to it that is also called sub_pod-{number}.
So, if I add NO_PROXY=sub_pod-1 as an environment variable in main_pod, a request to http://sub_pod-1:5000/health_check succeeds, because it is not routed through the proxy, which is fine.
But I want this to be dynamic. sub_pod-45 might spawn at runtime and sub_pod-1 might get destroyed. Is there a better way to handle this than updating NO_PROXY on every pod creation / destruction?
Is there any resource / network policy / egress rule with which I can tell the pod: if a domain name belongs to a Kubernetes Service, do not route it through the proxy server?
Or can I simply use regex or glob patterns in the NO_PROXY variable, like NO_PROXY=sub_pod-*?
Edit:
Result of nslookup:
root@tmp-shell:/# nslookup sub_pod-1
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: sub_pod-1.default.svc.cluster.local
Address: 10.43.22.139
With no_proxy=cluster.local:
The proxy is bypassed when requesting by FQDN:
res = requests.get('http://sub_pod-1.default.svc.cluster.local:5000')
The proxy is not bypassed when requesting by service name only:
res = requests.get('http://sub_pod-1:5000') # I want this to work
I would rather not ask my developers to change the application to use the FQDN.
Is there any way the cluster can identify whether a URL resolves to a Service within the network and, if so, not route the request through the proxy?
Libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax seems to vary across languages and libraries but it does seem to be universal that setting no_proxy=example.com causes anything.example.com to not be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
I suspect this means it would work to do two things:
Set an environment variable no_proxy=cluster.local; and
Make sure to use the FQDN form when calling other services, service.namespace.svc.cluster.local.
Pods have similar naming, but are in a pod.cluster.local subdomain. The cluster.local value is configurable at a cluster level and it may be different in your environment.
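That suffix behavior can be sketched like this. This is a simplified model of what most HTTP clients do with no_proxy; real libraries differ in edge cases such as ports, leading dots, and CIDR entries:

```python
# Simplified model of no_proxy suffix matching; real HTTP clients vary
# in details (ports, leading dots, CIDR blocks are handled differently).
def bypasses_proxy(host, no_proxy="cluster.local"):
    entries = [e.strip() for e in no_proxy.split(",") if e.strip()]
    return any(host == e or host.endswith("." + e) for e in entries)

print(bypasses_proxy("sub_pod-1.default.svc.cluster.local"))  # True
print(bypasses_proxy("sub_pod-1"))                            # False
```

This mirrors the nslookup result above: the bare name sub_pod-1 carries no domain suffix for the client to match against no_proxy, which is why only the FQDN form bypasses the proxy.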

Resolve service name in environment variable in Kubernetes

I'm really not sure what is going on: when I pass a service name in an environment variable in a YAML file, the value stays a literal string; it is not resolved to a real IP address.
Should this work automatically inside Kubernetes, or do I need some additional configuration so that the service name is resolved?
Example, in a new Deployment:
env:
  - name: MYSQL-SERVICE
    value: my-service-name-which-should-be-resolved (also deployed on Kubernetes as a Service)
Thanks a lot for any advice!
@MatthewLDaniel and @RyanDawson are right. The environment variable is not converted into an IP address; it simply carries the Service name, which the cluster DNS resolves when you connect.
You can find more details in DNS for Services and Pods and in Services.
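To illustrate that the variable only carries a name and resolution happens at connection time, here is a sketch. MYSQL_SERVICE stands in for the variable from the question (underscored so it is a valid shell name), and localhost stands in for a real Service name so the snippet runs outside a cluster:

```python
import os
import socket

# The env var holds only a string; the resolver turns it into an IP
# when you actually connect. Inside a pod, the cluster DNS answers;
# here "localhost" stands in for a real Service name.
os.environ.setdefault("MYSQL_SERVICE", "localhost")
host = os.environ["MYSQL_SERVICE"]
ip = socket.gethostbyname(host)
print(ip)
```

In other words, there is nothing to configure: pass the name, and let DNS resolve it at the moment your client opens the connection.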

How do I use variable substitution in Azure Service Fabric

I'm trying to emulate a Compose-style deployment via a Service Fabric service manifest, specifically for environment variables in the container. Static values work fine; what is not working (or documented) is how to pass something from the host into the container.
In Compose, the following puts the hostname variable from the container host into a container environment variable. How do I do that in a Service Fabric manifest?
environment:
- "SHELL=powershell.exe"
- "HostName=${hostname}"
It appears to be unsupported at this time, according to the referenced GitHub issue.

Discover port of other service in Kubernetes, without using the FOO_SERVICE_PORT env variable

According to the Kubernetes documentation, each container gets a set of environment variables that lets it access other services
For example, if a Service named foo exists, all containers will get the following variables in their initial environment:
FOO_SERVICE_HOST=<the host the Service is running on>
FOO_SERVICE_PORT=<the port the Service is running on>
However, it seems that in my cluster I'm not getting the expected values in those variables:
tlycken@local: k exec -ti <my-pod> ash
/app # echo $SEARCH_HOST
/app # echo $SEARCH_PORT
tcp://10.0.110.126:80
I would rather have expected to see something like
tlycken@local: k exec -ti <my-pod> ash
/app # echo $SEARCH_HOST
10.0.110.126
/app # echo $SEARCH_PORT
80
I know that the docs also say
If you are writing code that talks to a Service, don’t use these environment variables; use the DNS name of the Service instead.
but that only gives me the host name, not the port, of the service. Therefore, I wanted to set SEARCH_HOST to search in my deployment template and rely on SEARCH_PORT for the port, but when I put the service URL together from the existing environment variables it becomes http://search:tcp://10.0.110.126:80, which obviously does not work.
If I can't rely on the FOO_SERVICE_PORT variable to give me the port number, what should I do instead?
According to the part of the Kubernetes documentation quoted in the question:
For example, if a Service named foo exists, all containers will get
the following variables in their initial environment:
FOO_SERVICE_HOST=<the host the Service is running on>
FOO_SERVICE_PORT=<the port the Service is running on>
The variable name is <your_service_name>_SERVICE_PORT, so if your Service is named search, you can find its host and port using the SEARCH_SERVICE_HOST and SEARCH_SERVICE_PORT environment variables:
echo $SEARCH_SERVICE_HOST
echo $SEARCH_SERVICE_PORT
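Putting the two variables together looks like this. The values are simulated here, mirroring the output in the question:

```python
import os

# Simulate the variables Kubernetes injects for a Service named "search";
# the values mirror the question's cluster output.
os.environ["SEARCH_SERVICE_HOST"] = "10.0.110.126"
os.environ["SEARCH_SERVICE_PORT"] = "80"

url = "http://{}:{}".format(
    os.environ["SEARCH_SERVICE_HOST"], os.environ["SEARCH_SERVICE_PORT"]
)
print(url)  # http://10.0.110.126:80
```

The tcp://... value the question saw came from SEARCH_PORT, the Docker-link-style variable, not from SEARCH_SERVICE_PORT.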
If I can't rely on the FOO_SERVICE_PORT variable to give me the port number, what should I do instead?
I think the best way is to use SRV records to look up the service, because the cluster DNS provides that service-discovery feature.
Here is the official documentation about it, but in a few words, the record looks like this:
_<my-port-name>._<my-port-protocol>.<my-svc>.<my-namespace>.svc.cluster.local
So, for your service it will be something like:
_foo-port._tcp.foo.my-namespace.svc.cluster.local, where my-namespace is the namespace of the foo service and foo-port is the name of the Service port.
The address of your service can be obtained from the foo.my-namespace.svc.cluster.local record.
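As a sketch, the SRV name can be assembled like this. The helper and its defaults are illustrative, not a Kubernetes API:

```python
# Illustrative helper: assemble the SRV record name for a named Service
# port. Querying it (e.g. with a DNS library inside the cluster) returns
# the port number along with the target host.
def srv_name(port_name, protocol, service, namespace, domain="cluster.local"):
    return f"_{port_name}._{protocol}.{service}.{namespace}.svc.{domain}"

print(srv_name("foo-port", "tcp", "foo", "my-namespace"))
# _foo-port._tcp.foo.my-namespace.svc.cluster.local
```

Note that this only works for named ports; the port must be given a name in the Service spec for the SRV record to exist.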

Docker Compose v3 and link environment variables

Link environment variables have been deprecated since v2. What is the alternative for discovering the random port, then? I have a dockerized Java app that I could point at a data source via environment variables, but now I cannot. The vague suggestion that I should use the link name is not helping. Is there an alternative?
So here is the thing: --link used to create many unnecessary variables that were not required at all.
Now, when you use docker-compose, you can name your service anything you want. So if you are running MySQL, you can name it mysql or db or dbservice or anything.
In your configs you can use that service name, mysql or db or dbservice, directly. Or you can use an environment variable inside the code to pick up the service name and pass it through your docker-compose file.
You can also have aliases for the same container with different names.
About the ports: if I have an nginx image that exposes port 8080, then I know in my config that it will always be port 8080, and hence there is no need to pass it.
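A minimal sketch of what that looks like in a v3 compose file; the service and image names here are illustrative, not from the question:

```yaml
# Illustrative compose file: "app" reaches MySQL by the service name "db";
# Compose's embedded DNS resolves it on the shared default network.
version: "3"
services:
  db:
    image: mysql:8
  app:
    image: my-java-app        # hypothetical application image
    environment:
      DB_HOST: db             # the service name, resolved by Compose DNS
      DB_PORT: "3306"         # known from the image, no link variables needed
```

The application reads DB_HOST and DB_PORT like any other environment variables; no generated link variables are involved.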