I have an application running on my own server in a Kubernetes cluster. This application is supposed to work as a gateway and has a LoadBalancer service, which exposes it to "the world". Now I'd like to connect this application with other applications running within the very same Kubernetes cluster, so they can exchange HTTP requests with each other.
So let's say my gateway app runs on port 9000 and the app I'd like to call runs on port 9001. When I run curl my_cluster_ip:9001 I get a response. However, I never know what the cluster IP will be, so I can't hard-code it into my gateway app.
The use case: the user types url_of_my_server:9000 into the web browser -> this calls the gateway -> the gateway sends an HTTP request to the other app running in the cluster on port 9001 -> the response goes back to the gateway -> and back to the user.
Where does the magic have to happen, and how can I easily make these two apps talk to each other, while only one is exposed to "the world" and the other is accessible only from within the cluster?
You can expose your app on port 9001 as a service (let's say myservice).
When you do that, myservice.<namespace>.svc.cluster.local will resolve to the IP address of your app. More info on DNS here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
You can then access your app from within the Kubernetes cluster as:
http://myservice.<namespace>.svc.cluster.local:9001
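For illustration, here is a minimal sketch of the gateway-side call, assuming the gateway happens to be written in C#/.NET (the question doesn't say which stack it uses) and that the service is named myservice in the default namespace; both names are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class GatewayExample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The cluster DNS name resolves to the service's cluster IP,
            // so the gateway never needs to know the IP itself.
            var response = await client.GetStringAsync(
                "http://myservice.default.svc.cluster.local:9001/");
            Console.WriteLine(response);
        }
    }
}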
You have a couple of options for internal service discovery:
You can use the cluster-internal DNS service to find the other application, as detailed in the answer by bits.
If both the proxy and the app run in the same namespace, there are environment variables that expose the service IP and ports (see the sketch after this list). Because those variables are only injected when the proxy's pod starts, you may have to restart the proxy if you remove and re-add the other application, as the ports may change.
You can run both apps as two different containers in the same pod; this ensures they get scheduled on the same host and lets them talk to each other over localhost.
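As a rough sketch of the environment-variable option, assuming the other app is exposed as a service named myservice (a placeholder; Kubernetes derives the variable prefix from the service name, uppercased with dashes turned into underscores) and that the proxy happens to be a C#/.NET app:

using System;

class ServiceEnvExample
{
    static void Main()
    {
        // Injected by Kubernetes into pods created after the service exists.
        var host = Environment.GetEnvironmentVariable("MYSERVICE_SERVICE_HOST");
        var port = Environment.GetEnvironmentVariable("MYSERVICE_SERVICE_PORT");
        Console.WriteLine($"http://{host}:{port}/");
    }
}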
Also note that support for your HTTP proxy setup already exists in Kubernetes; take a look at Ingress and Ingress Controllers.
I have a query which is basically a request for clarification regarding Routes in OpenShift Origin.
I managed to set up OpenShift Origin version 1.4.0-rc1 on CentOS hosted in a local VMware installation. I am also able to pull and set up the image for nginx, and the pod status shows Running. I can access nginx on the service endpoint as well. Now, as per the documentation, if I want to access this nginx instance from outside the host system I need to create a Route, which I also did.
The confusion is about the Create Route screen in the OpenShift Web Console: it either generates a hostname or allows me to enter one. I tried both options; the generated hostname is a long subdomain-style hostname, and it doesn't work. What I mean is that I'm not able to access this hostname from anywhere on the network, including from the hosting OS itself.
To summarize, the service endpoint, which looks like 172.x.x.x, works on the local machine that is hosting OpenShift, but the generated/entered hostname for the route doesn't work from anywhere.
Please clarify the idea behind the route concept, and how one can access a service from outside the host machine (but on the same network).
As stated in the documentation:
An OpenShift Origin route exposes a service at a host name, like www.example.com, so that external clients can reach it by name. DNS resolution for a host name is handled separately from routing; your administrator may have configured a cloud domain that will always correctly resolve to the OpenShift Origin router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.
It is important to notice the difference between a "route" and the "router". The OpenShift router (mentioned above) listens for all requests to OpenShift-deployed applications and has to be deployed beforehand in order for routes to work.
https://docs.openshift.org/latest/architecture/core_concepts/routes.html
So once you have the router deployed and working, all routes that you create in OpenShift should resolve to wherever that OpenShift router is listening. For example, you can configure your DNS with a wildcard (this is a dnsmasq wildcard example):
address=/.yourdomain.com/107.117.239.50
This way, all your "routes" to services will look like this:
service1.yourdomain.com
service2.yourdomain.com
...
Hope this helps
I want to share port 80 (HTTP) between two different web applications (a web API and a website) inside a Service Fabric cluster; the applications must have two different host names:
mywebapi.com and mywebsite.com
If I run the apps outside of Service Fabric (as console apps), everything works fine.
The first console app:
var _webHost = new Microsoft.AspNetCore.Hosting.WebHostBuilder()
    .UseWebListener()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .UseUrls("http://myWebApi.com/")
    .Build();
The second console app:
var _webHost = new Microsoft.AspNetCore.Hosting.WebHostBuilder()
    .UseWebListener()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .UseUrls("http://myWebSite.com/")
    .Build();
But if I run the apps inside a local Service Fabric cluster, I get:
HTTP Error 503. The service is unavailable.
I've set up the correct ACLs with netsh and a SetupEntryPoint (no Access Denied on open).
According to Microsoft's http.sys guide, an explicit host name is allowed.
Make sure you remove any HTTP Endpoint configurations for port 80 in your ServiceManifest.xml, otherwise Service Fabric will override your domain-specific ACLs. See here for info: host multiple public sites on service fabric
Why not just publish both on non-80 ports and use the default load balancer to remap them?
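A minimal sketch of what that suggestion could look like on the app side, assuming the load balancer (configured elsewhere, and not shown here) maps external traffic to these internal ports; the port numbers are placeholders, and how the load balancer would then distinguish the two host names is not covered:

// Web API console app: bind to any host name on an internal, non-80 port.
var webApiHost = new Microsoft.AspNetCore.Hosting.WebHostBuilder()
    .UseWebListener()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .UseUrls("http://+:8080/")   // "+" is the http.sys strong wildcard
    .Build();

// Website console app: the same idea on a second internal port.
var webSiteHost = new Microsoft.AspNetCore.Hosting.WebHostBuilder()
    .UseWebListener()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .UseUrls("http://+:8081/")
    .Build();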
My Setup
I have some services that register with Eureka. This registration info is used by Zuul to route requests to my services. Most of these services run on a port like 9999 or 8080. Each service is on its own EC2 instance, and I have Nginx routing requests from port 80 to the server's port, so that I can keep my Security Group rules simple.
My Problem
When my service registers with Eureka, it gets registered with ${server.port}, which ends up being 8080 or 9999, etc. When Zuul attempts to route to {ec2host}:8080, it gets blocked by my Security Group rules. Based on the documentation, it looks like I should be able to specify a host and port with eureka.instance.hostname and eureka.instance.nonSecurePort. Whether I use those properties or not, my service registers with its specific port.
Is there a way to get the Eureka client to register my service with port 80, instead of the server's port?
With a standard Kubernetes deployment on Google Container Engine, including services configured with the Kubernetes LoadBalancer settings (which create network load balancers), is it possible to access the user's (or referring) IP address in an application? In the case of PHP, checking the common headers in the $_SERVER superglobal only yields the server and internal network addresses.
Not yet. Services go through kube-proxy, which answers the client connection and proxies through to the backend (your PHP server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.
Work has been done, and a tracking issue is still open to switch over to an iptables-only proxy. That would allow your PHP server to get the actual client IP.
Can a Node.js application running on Bluemix make outbound HTTP requests? What address does the receiving end see? There is a proxy on the other end that blocks traffic from unknown servers, so we need to declare the origin IP. What is it for Bluemix?
Any application running on IBM Bluemix can make outgoing HTTP requests (or any other outgoing TCP/UDP request).
Outgoing requests will come from the IP address of the DEA running the container with this application instance. If you have multiple instances, requests can come from any of these instances.
For details on the environment variables exposing these parameters, see this page:
http://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html
Yes, a Node.js application running on Bluemix can make outgoing requests.
The receiving end will see the IP address of the Bluemix gateway rather than the IP address of the DEA running the container. You can work out what the IP address is by doing an nslookup of your app's URL, but the IP address(es) used are not currently documented, so they could change.