Two servers behind a load balancer must not be called directly from the public internet

I am facing an issue in my lab. My lab infrastructure is on DigitalOcean. I have two servers (Droplets) and one load balancer. The two servers are webservers. I can reach the website through the load balancer, but I want it to be reachable only through the load balancer, not directly on the webservers.
How can I manage this?

You can:
- drop the public IP of the servers,
- block requests via a firewall if they are not coming from your load balancer (a sketch follows this list), or
- have the process that actually serves the requests listen only on the server's private IP rather than its public IP.
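For the firewall option, here is a minimal sketch using ufw on Ubuntu, assuming the load balancer reaches the Droplets over the private network at 10.10.0.2 (a made-up address; substitute your load balancer's private IP):

    # Allow web traffic only from the load balancer's (assumed) private address
    sudo ufw allow from 10.10.0.2 to any port 80 proto tcp
    sudo ufw allow from 10.10.0.2 to any port 443 proto tcp
    # Deny the same ports from everywhere else (the allow rules above match first)
    sudo ufw deny 80/tcp
    sudo ufw deny 443/tcp
    sudo ufw enable

On DigitalOcean you can get the same effect with a Cloud Firewall that accepts inbound HTTP/HTTPS only from the load balancer, which keeps the rules out of the Droplets entirely.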

How do I simulate a VPN connection to Google Cloud?

So I have GCP and Kubernetes set up, with a web app (Apache OFBiz) running on pods in a GKE cluster. A domain points to the web app, so it is accessible from anywhere on the internet. Since this is a school project, we want to limit access to the web app to the internal network on GCP; we want to simulate a VPN connection. I have a VPN gateway set up, but I have no idea what to do on an arbitrary computer to simulate a connection to the internal network on GCP. Do I need something else to make this work? What are the steps on the host to connect to GCP? And finally, how do I limit access to the web app so that only people on the internal network can reach it?
When I want to test a VPN, I simply create a new VPC in my project and connect the two VPCs with Cloud VPN. Then, in the new VPC, you can create VMs that simulate computers on the other side of the VPN, and thus simulate what you want.
To set up a VPN on GCP you can use Cloud VPN with static or dynamic routing. You will need to configure a remote peer at the location from which you want to access your GCP resources, so it can establish the connection towards the Cloud VPN gateway on the GCP end.
This means you may need a router on your premises that supports creating VPN tunnels, or a host that acts as a router and establishes the connection to Cloud VPN using VPN software (strongSwan, for example).
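Purely as an illustration of what such a host-as-router setup involves (every address, subnet, and name below is a hypothetical placeholder), a strongSwan ipsec.conf entry might look roughly like this:

    # /etc/ipsec.conf -- hypothetical tunnel to a Cloud VPN gateway
    conn gcp-tunnel
        keyexchange=ikev2          # Cloud VPN supports IKEv2
        authby=psk                 # pre-shared key, kept in /etc/ipsec.secrets
        left=%defaultroute         # this host's public interface
        leftsubnet=192.168.1.0/24  # assumed on-premises range
        right=203.0.113.10         # placeholder Cloud VPN gateway public IP
        rightsubnet=10.128.0.0/20  # assumed VPC subnet range
        auto=start

The pre-shared key and the two subnet ranges must match what you configure on the Cloud VPN tunnel on the GCP side.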
You can block external access to the resources in your VPC network by using GCP firewall rules, allowing only specific ports or source IP ranges as needed.
Another option, although it is neither a VPN nor encrypted traffic, is to allow ingress traffic only from the public IP from which you want to reach your VPC. This is less secure and only works if you have a static public IP on your premises; a sketch follows.
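A minimal sketch of such a rule with gcloud (the rule name, network, port, and source address are all assumptions; adjust to your setup):

    # Allow HTTP into the VPC only from an assumed on-premises address
    gcloud compute firewall-rules create allow-office-only \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=198.51.100.7/32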
Since you said this is a school project, I would recommend asking your teacher for more direct advice. That said, you can't "simulate" a VPN but you can set up an IPSec client on your laptop or whatever and actually connect to it. Unfortunately Google doesn't appear to have any documentation on this so I'm guessing they presume you already know IPSec well enough to write a connection config yourself.
Using kubectl port-forward might be an easier solution.
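For a school project that only needs the app to stay off the public internet, this can be as simple as (the service name ofbiz and the ports are assumptions):

    # Forward local port 8080 to port 80 of the (assumed) ofbiz service
    kubectl port-forward service/ofbiz 8080:80
    # Then browse to http://localhost:8080

This requires only cluster credentials on the client machine and no public exposure of the app at all.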

Allow load-balanced instances to connect to a PostgreSQL server on a single Compute Engine instance

I am looking for the GCP networking best practice that lets auto-scaled instances connect to a PostgreSQL server installed on a separate instance.
So far I have tried whitelisting the load balancer's IP in the firewall and in the PostgreSQL config file, but that failed.
Any help or pointers are highly appreciated.
The load balancer doesn't process the traffic itself; it just receives requests on its frontend address(es) and hands them to an instance group.
The instances in that group handle the HTTP requests and are the ones that connect to the database instance.
The load balancer's job is to dynamically distribute the requests arriving on the same frontend address (autoscaling can even create additional instances to handle them).
--
So first you should make it work with a regular instance, configure it, and save it as an instance template. Then you can create an instance group from that template and put it behind the load balancer; a sketch of those steps follows.
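A minimal sketch with gcloud (all names and the zone are assumptions):

    # Create a template from an already-configured image (assumed name web-image)
    gcloud compute instance-templates create web-template \
        --image=web-image --machine-type=e2-small
    # Create a managed instance group from the template
    gcloud compute instance-groups managed create web-group \
        --template=web-template --size=2 --zone=us-central1-a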
EDIT - Extended the answer from my comment
"I don't think your problem is related to Google cloud platform now. If you have a known IP address for the PostgreSQL server (connect using an internal network IP address so it doesn't change), then make sure your auto-balanced instances are in the same internal network, use db's internal IP and connect to it."

HAProxy dynamic configuration

Here is my setup:
we have one external load balancer (AWS) attached to the root domain mydomain.com
the external load balancer forwards traffic to an HAProxy instance, and HAProxy has to forward it further to one of two internal load balancers
we have 2 internal load balancers: the first points to the latest version of our app, the second to the version before it
each of our clients can have 1 to many subdomains, like sub1.mydomain.com, sub2., sub3.
some subdomains should be routed to the old version, some to the new, so the same client can have both old and new subdomains
Basically:
sub1.mydomain.com -> latest-load-balancer
sub2.mydomain.com -> older-load-balancer
The problem is how to set up this routing: we can't stop/start HAProxy for each new subdomain, and there could be more than 10k such subdomains in the future.
HAProxy can use maps to decide which backend (internal load balancer) to use, based on the domain; see the sketch below.
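A minimal configuration sketch, with all file paths and backend names made up for illustration:

    # /etc/haproxy/haproxy.cfg (fragment)
    frontend fe_main
        bind *:80
        # Look up the Host header in the map; fall back to the latest version
        use_backend %[req.hdr(host),lower,map(/etc/haproxy/subdomains.map,be_latest)]

    backend be_latest
        server lb_new internal-lb-new.example.internal:80

    backend be_older
        server lb_old internal-lb-old.example.internal:80

with a map file that simply pairs each subdomain with a backend name:

    # /etc/haproxy/subdomains.map
    sub1.mydomain.com be_latest
    sub2.mydomain.com be_older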
HAProxy can be reloaded instead of restarted when the map file changes.
If you do not wish even to reload, you can pass map commands to the UNIX stats socket, changing the map in real time, as shown below.
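A hedged example using socat (the socket path is an assumption, and the socket must be declared in the config with stats socket ... level admin):

    # Add a new subdomain mapping at runtime, no reload needed
    echo "add map /etc/haproxy/subdomains.map sub3.mydomain.com be_older" | \
        socat stdio /var/run/haproxy.sock

Entries added through the socket live only in memory, so also append them to the map file if they should survive the next reload.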

GCE + K8S - Accessing the referring IP address

With a standard Kubernetes deployment on Google Container Engine, including services configured with the Kubernetes LoadBalancer type (which creates network load balancers), is it possible to access the user's (or referring) IP address in an application? In the case of PHP, checking the common headers in the $_SERVER superglobal only yields the server and internal network addresses.
Not yet. Services go through kube-proxy, which answers the client connection and proxies through to the backend (your PHP server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.
Work has been done, and a tracking issue is still open to switch over to an iptables-only proxy. That would allow your PHP server to get the actual client IP.

Is a server farm abstracted on both sides?

I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server's IP), making it possible to add a simple rule to the firewall. However, if the application is moved to a server farm, I do not know which machine in the farm will make a request to the FTP server.
Just like the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it invokes, so that regardless of which machine in the farm makes the request to the FTP server, the FTP server always sees the same IP?
Are all server farms implemented the same way, or does this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Compute Cloud (EC2).
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies their maintenance. You can also send all traffic through a load balancer or proxy. That's essentially how any cloud/cluster/farm service works:
many client ips <-> load balancer <-> many servers
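On the FTP server's firewall, a minimal iptables sketch of the range-based rule, assuming the farm's instances (or their NAT gateway) fall in the hypothetical range 10.0.0.0/24:

    # Accept FTP control connections only from the (assumed) web farm range
    iptables -A INPUT -p tcp --dport 21 -s 10.0.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 21 -j DROP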