Link environment variables have been deprecated since v2. What is the alternative for discovering the random port, then? I have a dockerized Java app that I used to inform about its data source via environment variables, but now I cannot. The vague suggestion that I should use the link name instead is not helping. Is there an alternative?
So here is the thing: --link used to create many unnecessary variables that were not required at all.
Now, when you use docker-compose, you can name your service anything you want. So if you are running MySQL, you can name it mysql, db, dbservice, or anything else.
In your configs you can then use that service name (mysql, db, or dbservice) directly. Or you can read the service name from an environment variable inside your code and pass it in through your docker-compose file.
You can also define aliases so the same container is reachable under different names.
About the ports: if I have an nginx image that exposes port 8080, then I know in my config that it will always be port 8080, so there is no need to pass it.
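A minimal compose sketch of the above (the image and variable names here are assumed examples, not from the original post): the app container reaches the database simply by the service name db, and the port is fixed by the image, so nothing needs to be discovered.

```yaml
version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  app:
    image: my-java-app          # assumed application image
    environment:
      DB_HOST: db               # the service name resolves on the compose network
      DB_PORT: "3306"           # MySQL's fixed container port, no discovery needed
```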
Related
When I pass a service name via an environment variable in the YAML file, that service name is still a string; it is not resolved to a real IP address.
Example:
env:
- name: ES
value: elasticsearch
Thanks
You should be able to use it directly and it should resolve fine:
curl $ES
If you use it inside your application it should also work.
Just consider that Kubernetes uses its internal DNS, and the "elasticsearch" name will only resolve inside the same namespace. In fact it resolves to:
elasticsearch.<namespace>.svc.cluster.local.
If your Elasticsearch service is running in a different namespace, make sure you use elasticsearch.<target_namespace>.
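As a quick sketch of how the short name expands into the cluster-internal FQDN ("default" is an assumed namespace here):

```shell
# Sketch: expanding the short service name into the cluster-internal FQDN.
# "elasticsearch" is the service from the question; "default" is an assumed namespace.
SERVICE=elasticsearch
NAMESPACE=default
echo "${SERVICE}.${NAMESPACE}.svc.cluster.local"
```

Inside a pod you could verify the resolution with nslookup on that full name.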
This is the documentation on External Database Environment Variables. It says,
Using an external service in your application is similar to using an internal service. Your application will be assigned environment variables for the service and the additional environment variables with the credentials described in the previous step. For example, a MySQL container receives the following environment variables:
EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=<ip_address>
EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=<port_number>
MYSQL_USERNAME=<mysql_username>
MYSQL_PASSWORD=<mysql_password>
MYSQL_DATABASE_NAME=<mysql_database>
This part is not clear: "Your application will be assigned environment variables for the service."
How should the application be configured so that the environment variables for the service are assigned? I understand that the ones defined in the DeploymentConfig will flow into the application, e.g. in Node.js as process.env.MYSQL_USERNAME, etc. What I am not clear about is how EXTERNAL_MYSQL_SERVICE_SERVICE_HOST or EXTERNAL_MYSQL_SERVICE_SERVICE_PORT will flow in.
From Step 1 of the link that you posted, if you create a Service object
oc expose deploymentconfig/<name>
This will automatically generate environment variables (https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html#automatically-added-environment-variables) for all pods in your namespace. (The environment variables may not be immediately available if the Service was added after your pods were already created; delete the pods to have the variables added on restart.)
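The naming convention behind those variables can be sketched as follows. This is only an illustration of the name transformation (the real injection is done by the platform, not by a script), using the external-mysql-service name implied by the quoted docs:

```shell
# Sketch of how a service name maps to the injected variable names:
# upper-case the name, turn dashes into underscores, append _SERVICE_HOST / _SERVICE_PORT.
SVC_NAME=external-mysql-service
PREFIX=$(echo "$SVC_NAME" | tr 'a-z-' 'A-Z_')
echo "${PREFIX}_SERVICE_HOST"
echo "${PREFIX}_SERVICE_PORT"
```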
I am trying to emulate a compose-file-style deployment via a Service Fabric service manifest, specifically for environment variables in the container. Static values work fine; what is not working (or documented) is how to pass something from the host into the container.
In compose, the following code puts the hostname variable from the container host into a container environment variable. How do I do that in a Service Fabric manifest?
environment:
- "SHELL=powershell.exe"
- "HostName=${hostname}"
It appears to be unsupported at this time, according to the referenced GitHub issue.
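For comparison, static values can be declared in ServiceManifest.xml like this (a sketch based on the Service Fabric service manifest schema; the package name and image name are assumptions):

```xml
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <ImageName>myregistry/myapp:latest</ImageName> <!-- assumed image -->
    </ContainerHost>
  </EntryPoint>
  <EnvironmentVariables>
    <EnvironmentVariable Name="SHELL" Value="powershell.exe"/>
    <!-- There is no supported syntax here for a host-derived value like ${hostname} -->
  </EnvironmentVariables>
</CodePackage>
```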
According to the Kubernetes documentation, each container gets a set of environment variables that lets it access other services
For example, if a Service named foo exists, all containers will get the following variables in their initial environment:
FOO_SERVICE_HOST=<the host the Service is running on>
FOO_SERVICE_PORT=<the port the Service is running on>
However, it seems that in my cluster I'm not getting the expected values in those variables:
tlycken#local: k exec -ti <my-pod> ash
/app # echo $SEARCH_HOST
/app # echo $SEARCH_PORT
tcp://10.0.110.126:80
I would rather have expected to see something like
tlycken#local: k exec -ti <my-pod> ash
/app # echo $SEARCH_HOST
10.0.110.126
/app # echo $SEARCH_PORT
80
I know that the docs also say
If you are writing code that talks to a Service, don’t use these environment variables; use the DNS name of the Service instead.
but that only gives me the host name, not the port, of the service. Therefore, I wanted to set SEARCH_HOST to search in my deployment template and rely on SEARCH_PORT for the port. But when I put the service URL together from the existing environment variables, it becomes http://search:tcp://10.0.110.126:80, which obviously does not work.
If I can't rely on the FOO_SERVICE_PORT variable to give me the port number, what should I do instead?
According to a part from kubernetes documentation posted in the question:
For example, if a Service named foo exists, all containers will get
the following variables in their initial environment:
FOO_SERVICE_HOST=<the host the Service is running on>
FOO_SERVICE_PORT=<the port the Service is running on>
The variable name is <your_service_name>_SERVICE_PORT, so if your service is named search, you can find its host and port values in the SEARCH_SERVICE_HOST and SEARCH_SERVICE_PORT environment variables:
echo $SEARCH_SERVICE_HOST
echo $SEARCH_SERVICE_PORT
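With those two variables, assembling the URL is straightforward; a sketch with stand-in values (inside a real pod, Kubernetes sets these for you):

```shell
# Stand-in values; in a pod these are injected by Kubernetes for the "search" Service
SEARCH_SERVICE_HOST=10.0.110.126
SEARCH_SERVICE_PORT=80
SEARCH_URL="http://${SEARCH_SERVICE_HOST}:${SEARCH_SERVICE_PORT}"
echo "$SEARCH_URL"
```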
If I can't rely on the FOO_SERVICE_PORT variable to give me the port number, what should I do instead?
I think the best way is to use SRV records to resolve information about the service, because the cluster's DNS provides exactly that service-discovery feature.
Here is the official documentation about it, but in a few words, the record looks like this:
_<my-port-name>._<my-port-protocol>.<my-svc>.<my-namespace>.svc.cluster.local
So, for your service it will be like:
_foo-port._tcp.foo.my-namespace.svc.cluster.local, where my-namespace is the namespace of the foo service.
The address of your service can be obtained from the foo.my-namespace.svc.cluster.local record.
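Putting the pieces together, the SRV name can be composed like this ("foo-port" and "my-namespace" are assumed example values); a client would then query it with an SRV lookup:

```shell
# Sketch: composing the SRV record name for a TCP port named "foo-port"
PORT_NAME=foo-port
PROTOCOL=tcp
SVC=foo
NAMESPACE=my-namespace
SRV_NAME="_${PORT_NAME}._${PROTOCOL}.${SVC}.${NAMESPACE}.svc.cluster.local"
echo "$SRV_NAME"
# e.g. inside the cluster: dig +short SRV "$SRV_NAME"
```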
I created a MongoDB service according to the Kubernetes tutorial.
Now my question is how do I gain access to the database itself, with a client like Robomongo or similar clients? Just for making backups or exploring what data have been entered.
The mongo-pod and service only have an internal endpoint, and a single mount.
Is there any way to safely access this instance with no public endpoint?
Internally URI is mongo:27***
You can use kubectl port-forward mypod 27017:27017 and then just connect your mongodb client to localhost:27017.
If you want to stop, just hit Ctrl+C on the same cmd window to stop the process.
The Kubernetes command-line tool provides this functionality, as #ainlolcat stated:
kubectl get pods
Retrieves the pod names currently running and with:
kubectl exec -i mongo-controller-* bash
you get a basic bash, which lets you execute
mongo
to get into the database and create dumps, and so on. The bash is very basic and has no features like completion. I have not found a better shell, but it does the job.
When you create a service in Kubernetes you give it a name, say for example "mymongo". After the service is created,
the DNS service of Kubernetes (on by default) will ensure that any pod can discover this service simply by its name, so you can set your URI like:
uri: mongodb://**mymongo**:27017/mong
In addition, the service IP and port will be set as environment variables in the running pod:
MYMONGO_SERVICE_HOST
MYMONGO_SERVICE_PORT
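So the same URI can alternatively be assembled from those injected variables; a sketch with stand-in values (in a real pod, Kubernetes sets them for the "mymongo" Service):

```shell
# Stand-in values; inside a pod these come from the "mymongo" Service
MYMONGO_SERVICE_HOST=10.0.0.5
MYMONGO_SERVICE_PORT=27017
MONGO_URI="mongodb://${MYMONGO_SERVICE_HOST}:${MYMONGO_SERVICE_PORT}/mong"
echo "$MONGO_URI"
```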
In fact I have written a blog post that shows a step-by-step example of an app with a Node.js web server and Mongo, which may explain this further:
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
feedback welcome!
The answer from #grchallenge is correct, but the syntax is deprecated as of 2021.
All newcomers please use:
kubectl exec mongo-pod-name -i -- bash