I'm really not sure what's going on: when I pass a service name in an environment variable in a YAML file, that service name stays a string; it's not resolved into a real IP address.
Should this work automatically inside Kubernetes, or do I need some extra configuration for the service name to be resolved?
Example from a new Deployment:
env:
- name: MYSQL-SERVICE
  value: my-service-name-which-should-be-resolved   # also deployed on Kubernetes as a Service
Thanks a lot for any advice!
@MatthewLDaniel and @RyanDawson are right. In this case, the environment variable is not converted into an IP address; you should use the Service name and let cluster DNS resolve it.
You can find more details in DNS for Services and Pods and in Services.
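For illustration, a minimal sketch (the Service name my-service and the mysql client invocation are assumptions, not from the question): the variable carries only a DNS name, and the resolution to a ClusterIP happens inside the container when the application connects.

containers:
- name: app
  image: mysql:8                 # placeholder image, used here only for the client binary
  env:
  - name: MYSQL_SERVICE
    value: my-service            # a Service name, stored as a plain string
  # The shell expands $MYSQL_SERVICE to "my-service"; cluster DNS resolves that
  # name to the Service's ClusterIP only when the connection is opened.
  command: ["sh", "-c", "mysql -h \"$MYSQL_SERVICE\" -P 3306"]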
I have set these environment variables inside my pod named main_pod.
$ env
HTTP_PROXY=http://myproxy.com
http_proxy=http://myproxy.com
I also have other, dynamically created pods matching the pattern sub_pod-{number}, each with a Service attached to it that is also called sub_pod-{number}.
So if I add the NO_PROXY=sub_pod-1 environment variable in main_pod, a request to the URL http://sub_pod-1:5000/health_check will run successfully, as it won't be directed through the proxy, which is fine.
But I want this process to be dynamic. sub_pod-45 might spawn at runtime and sub_pod-1 might get destroyed. Is there any better way to handle this than updating NO_PROXY on every pod creation/destruction?
Is there any resource / network policy / egress rule with which I can tell the pod that if a domain name belongs to a Kubernetes service, it should not be routed through the proxy server?
Or can I simply use regex or glob patterns in the NO_PROXY env variable, like NO_PROXY=sub_pod-*?
Edit:
Result of nslookup:
root@tmp-shell:/# nslookup sub_pod-1
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: sub_pod-1.default.svc.cluster.local
Address: 10.43.22.139
When no_proxy=cluster.local,
The proxy was bypassed when requesting with the FQDN:
res = requests.get('http://sub_pod-1.default.svc.cluster.local:5000')
The proxy was not bypassed when requesting with the service name only:
res = requests.get('http://sub_pod-1:5000') # I want this to work
I would rather not ask my developers to change the application to use the FQDN.
Is there any way the cluster can identify whether a URL resolves to a service present within the network and, if so, not route the request through the proxy?
Libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax varies across languages and libraries, but it does seem to be universal that setting no_proxy=example.com causes anything.example.com not to be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
I suspect this means it would work to do two things:
Set an environment variable no_proxy=cluster.local; and
Make sure to use the FQDN form when calling other services, service.namespace.svc.cluster.local.
Pods have similar naming, but are in a pod.cluster.local subdomain. The cluster.local value is configurable at a cluster level and it may be different in your environment.
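Putting those two pieces together, a minimal sketch of the pod-level environment (the proxy URL is taken from the question; duplicating both casings is an assumption, since libraries disagree on which one they read):

env:
- name: http_proxy
  value: http://myproxy.com
- name: no_proxy
  # Suffix match: any host under cluster.local bypasses the proxy,
  # e.g. sub_pod-1.default.svc.cluster.local.
  value: cluster.local
- name: NO_PROXY
  value: cluster.local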
I have this ConfigMap where I construct an app-config.json file that I pass into Angular. This file is how I get environment variables into Angular, as they must be served.
Below is how I thought passing variables into the JSON would work in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-settings
data:
  app-config.json: |-
    {
      "keycloakUrl": "http://${minikube ip}:${keycloak_port}/auth",
      "realm": "eshc",
      "clientId": "eshc-frontend",
      "backendApi": "http://localhost:${backend_port}"
    }
The problem is that these are not evaluated. I want to pass in Kubernetes service aliases and the output of the minikube ip command, as in the example above. Could someone point me in the right direction as to how I might do this?
Thanks in advance!
Kubernetes doesn't provide this facility in the API.
You can do this at deploy time with helm or kubectl's kustomization features.
Depending on your use case, this can also be done at runtime in a container entry point before the app starts up, or in a Kubernetes-specific init container. Avoid the init container unless you are working with shared file systems or with the Kubernetes API to apply these changes.
From your example it looks like everything should be available at deploy time, except maybe the minikube IP. For that you should be able to use the magic DNS name host.minikube.internal.
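As a concrete illustration of the entry-point approach, here is a minimal sketch (the image name, the file paths, and the availability of envsubst and http-server inside the image are all assumptions): the ConfigMap is mounted as a template and rendered with environment values before the server starts.

containers:
- name: frontend
  image: my-frontend:latest          # hypothetical image that ships envsubst and http-server
  env:
  - name: backend_port
    value: "8080"                    # hypothetical value, set per environment at deploy time
  - name: keycloak_port
    value: "8443"                    # hypothetical value
  command: ["sh", "-c"]
  # envsubst replaces ${backend_port} and ${keycloak_port} with the values above,
  # then the web server serves the rendered file.
  args:
  - >-
    envsubst < /config/app-config.json > /app/dist/app-config.json &&
    http-server /app/dist -p 80
  volumeMounts:
  - name: config-template
    mountPath: /config
volumes:
- name: config-template
  configMap:
    name: frontend-settings          # the ConfigMap from the question

Note that ${minikube ip} is a command, not a variable, so envsubst will leave it untouched; per the note above, the template should reference host.minikube.internal instead.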
I recently successfully deployed my Vue.js web app to Cloud Run. Previously the web app was deployed with a Kubernetes Deployment and Service. I also had an Ingress running that redirected my HTTP requests to that Service. Now Cloud Run takes over that work.
Unfortunately the new Cloud Run-driven Knative "Service" does not seem to work anymore.
My Ingress is showing me the following error message (where importer-controlroom is my application's name).
The error message is not comprehensible to me. I'll try to provide some more information with which you may be able to help me with this issue.
This is the current list of resources that have been created. I was especially looking at the importer-controlroom-frontend ExternalName. I somewhat think this is the Service that replaced the old one?
I used its name in my Ingress rules to map it to a domain, as you can see here:
The error message in the Ingress says:
could not find port "80" in service "dev/importer-controlroom-frontend"
However the Cloud Run revision shows that port 80 is being provided:
A friend of mine redirected me to this article: https://cloud.google.com/solutions/integrating-https-load-balancing-with-istio-and-cloud-run-for-anthos-deployed-on-gke?hl=de#handling_health_check_requests
Unfortunately I have no idea what it is talking about. We are indeed using Istio, but I did not configure it and have a very hard time getting my head around it for this particular case.
INFO_1
Dockerfile contains:
EXPOSE 80
CMD [ "http-server", "dist", "-p", "80" ]
Cloud Run for Anthos apps do not work with a GKE Ingress.
Knative services are exposed through a public gateway service called istio-ingress in the gke-system namespace:
$ kubectl get svc -n gke-system
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingress LoadBalancer 10.4.10.33 35.239.55.104
Domain names, etc., work very differently on Cloud Run for Anthos, so make sure to read the docs on that.
When I pass a service name in an environment variable in a YAML file, that service name is still a string; it's not being resolved into a real IP address.
Example:
env:
- name: ES
  value: elasticsearch
Thanks
You should be able to use it directly and it should resolve fine:
curl $ES
If you use it inside your application it should also work.
Just consider that Kubernetes uses its internal DNS, and that the "elasticsearch" name will only work inside the same namespace. In fact it will resolve to:
elasticsearch.<namespace>.svc.cluster.local.
If your Elasticsearch Service is running in a different namespace, make sure you use elasticsearch.<target_namespace>.
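For example, a minimal sketch assuming the Elasticsearch Service lives in a hypothetical logging namespace:

env:
- name: ES
  # The fully qualified form resolves from any namespace in the cluster.
  value: elasticsearch.logging.svc.cluster.local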
As we know, Kubernetes supports two primary modes of finding a Service, environment variables and DNS; can we disable the first way and only use DNS?
As shown in this PR, this feature will land in Kubernetes v1.13. From the PR (as docs are not available yet) I expect it to be the field enableServiceLinks in the pod spec, with true as the default.
Edit: It has been a while, and the PR finally landed. enableServiceLinks was added as an optional boolean to the Kubernetes PodSpec.
For the record: using DNS to discover service endpoints is the recommended approach. The Docker link behavior, from which the environment variables originate, has long been deprecated.
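A minimal sketch of a pod that opts out of the link-style variables (the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: no-service-links
spec:
  enableServiceLinks: false    # suppress the docker-link-style Service env vars (v1.13+)
  containers:
  - name: app
    image: nginx               # placeholder image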
Per the Kubernetes v1.8 source, it's impossible to disable service discovery via environment variables.
Only a Service that meets either of these conditions is exposed through env vars:
- a Service in the same namespace as the pod;
- the kubernetes Service in the default namespace.
Even so, these environment variables can be overwritten by env and envFrom defined in the pod template, as in the sketch below.
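For example, assuming a Service named redis in the pod's namespace (a hypothetical name), the injected variable can be masked with an explicit entry:

env:
- name: REDIS_SERVICE_HOST
  # An env entry declared in the pod template takes precedence
  # over the auto-injected service-link variable of the same name.
  value: ""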
I'm wondering what your scenario is; maybe we can figure out some workaround.