1. How much effect does a data center's location have on server response time?
2. How secure is the network between servers and clients, say in terms of sniffing? Are any specific security measures taken?
The location of the data center affects the latency between it and the client. Ideally, you want to choose a data center in a region/zone closest to the majority of the clients it will be serving. That said, requests from anywhere in the world will hit the nearest edge network device and will then travel over Google's backbone. This provides low global latency, but latency is still lower when the data center is closer.
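To get a feel for this effect, you can time a TCP handshake against endpoints in different regions; the handshake roughly corresponds to one network round trip. A minimal Python sketch, assuming you substitute hostnames of your own deployments (the ones below are hypothetical):

    import socket
    import time

    # Hypothetical per-region endpoints; replace with your own deployments.
    ENDPOINTS = {
        "us-central1": "us.example.com",
        "europe-west1": "eu.example.com",
        "asia-east1": "asia.example.com",
    }

    for region, host in ENDPOINTS.items():
        start = time.monotonic()
        try:
            # The TCP handshake time approximates one network round trip.
            with socket.create_connection((host, 443), timeout=5):
                elapsed_ms = (time.monotonic() - start) * 1000
            print(f"{region}: {elapsed_ms:.1f} ms")
        except OSError as exc:
            print(f"{region}: unreachable ({exc})")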
All traffic between GCP resources is secure since it does not leave the Google network. Traffic between an external client and a GCP resource is not secured by default, since that traffic has to travel over the internet. There are multiple ways to secure it, such as Cloud VPN or TLS/SSL certificates on applications or load balancers.
Keep in mind that IaaS resources give you almost full control, so it is up to the user to secure what is deployed.
I made a game with Unity. Now I am making it online with DarkRift2. DarkRift gave me a server, and I need to put it on a rented cloud server. I want it to be reachable from the whole world. Should I put the server in multiple locations to cover the whole world, which I don't want to manage, or if I put it on one rented cloud server, will it automatically be reachable from the whole world? Also, there are companies that rent servers; how can I tell whether a company automatically covers the whole world?
When you rent cloud resources you can choose in which data center they are hosted, at least with the big cloud providers. In the case of Amazon Web Services, for example, they have regions basically all around the world.
Certainly you can configure your VM to be accessible from anywhere in the world, but you have to take latency into account. For example, if you host a server in us-east-1 (N. Virginia), you will experience latency on the order of at least 200ms from eastern Europe.
Another factor is whether a country's politics will allow it.
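As a starting point, the big providers let you enumerate their regions programmatically. A quick sketch with boto3, assuming AWS credentials are already configured:

    import boto3

    # List every AWS region available to this account.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    print(sorted(regions))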
I have come across a need to serve application users based on their geo-location.
One possibility I could think of is to have the application installed on multiple k8s clusters hosted in different regions and then load-balance the traffic based on the geo-location of the users.
While exploring this idea, I came across several articles on "Kubernetes Cluster Federation" (e.g. https://kubernetes.io/blog/2016/10/globally-distributed-services-kubernetes-cluster-federation/). But it seems this functionality has been retired, as mentioned in https://github.com/kubernetes-retired/federation.
Does someone know:
Is there any alternative to "Kubernetes Cluster Federation"?
Are there any other solutions to address the need of serving users based on their geo-location?
If we leave the application part aside, is there any way to store the data in the same geo-location?
Thanks!
https://github.com/kubernetes-sigs/kubefed is a successor to "Kubernetes Cluster Federation", though I am not sure what its current state is. If you want to deploy a global load balancer, I suggest having a look at https://www.k8gb.io/ .
...k8s clusters hosted in different regions and then load-balance the traffic based on the geo-location of the users
If you determine the user location simply by their network location, you can use a DNS geolocation routing capability such as Route 53's to reach the nearest services. In this context, k8s federation is not required.
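For illustration, a geolocation record in Route 53 can be created with boto3; the hosted zone ID, domain, and backend IP below are hypothetical placeholders:

    import boto3

    route53 = boto3.client("route53")

    # Hypothetical hosted zone ID, domain, and backend IP.
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000EXAMPLE",
        ChangeBatch={
            "Comment": "Send European users to the EU cluster",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "eu-cluster",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }],
        },
    )

You would add one record set per cluster, plus a default record (GeoLocation with CountryCode "*") for users who match no rule.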
If we leave the application part aside, is there any way to store the data in the same geo-location?
Apart from global-scale database solutions such as Aurora or Spanner, your application can point to a centralized database that resides in one of the regions, if the increased latency is acceptable.
I read this article about the API Gateway pattern. I realize that API Gateways typically serve as reverse proxies, but this forces a bottleneck situation. If all requests to an application's public services go through a single gateway, or even a single load balancer across multiple replicas of a gateway (perhaps a hardware load balancer which can handle large amounts of bandwidth more easily than an API gateway), then that single access point is the bottleneck.
I also understand that it is a wide bottleneck, as it simply has to deliver messages as a proxy; the gateways and load balancers themselves are not responsible for any processing or querying. However, imagining a very large application with many users, one would require extremely powerful hardware to not notice the massive bandwidth traveling over the gateway or load balancer, given that every request to every microservice exposed by the gateway travels through that single access point.
If the API gateway instead simply redirected the client to publicly exposed microservices (sort of like a custom DNS lookup), the hardware requirements would be much lower. This is because the messages traveling to and from the API Gateway would be very small, the requests consisting only of a microservice name, and the responses consisting only of the associated public IP address.
I recognize that this pattern would involve greater latency due to increased external requests. It would also be more difficult to secure, as every microservice is publicly exposed, rather than providing authentication at a single entrypoint. However, it would allow for bandwidth to be distributed much more evenly, and provide a much wider bottleneck, thus making the application much more scalable. Is this a valid strategy?
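For concreteness, here is a minimal sketch of the lookup-style gateway the question describes; the service registry and addresses are made up for illustration:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical registry of publicly exposed microservices.
    REGISTRY = {
        "orders": "203.0.113.21:8443",
        "billing": "203.0.113.22:8443",
    }

    class LookupHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expect paths like /resolve/orders
            name = self.path.rsplit("/", 1)[-1]
            addr = REGISTRY.get(name)
            if addr is None:
                self.send_response(404)
                self.end_headers()
                return
            body = json.dumps({"service": name, "address": addr}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), LookupHandler).serve_forever()

The responses really are tiny, which is the bandwidth argument in a nutshell; the answers below explain what this approach costs you.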
A DNS/public-IP based approach is not good from a lot of perspectives:
Higher attack surface area, as you have too many exposed points and each needs to be protected
The number of public IPs needed is higher
You may need more DNS settings, with subdomains or domains for these APIs
A lot of the time your APIs will run on a root path, but you may want to expose them on a folder path like example.com/service1, which requires you to use some gateway anyway
Handling SSL certificates for all these public exposures
Securing a focused set of nodes vs. securing every publicly exposed service becomes a very difficult task
While theoretically it is possible to redirect clients directly to the nodes, there are a few pitfalls.
Security, certificate, and DNS management have been covered in Tarun's answer.
Issues with High Availability
DNS resolvers cache domain-to-IP mappings fairly aggressively because these seldom change. If we use DNS to expose multiple instances of services publicly and one of the servers goes down, or we're doing a deployment, resolvers will continue routing requests to the nodes that are down for a fair amount of time. We have no control over external DNS resolvers and their caching policies.
Using reverse proxies, we can take unhealthy nodes out of rotation based on health checks and avoid hitting them.
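As a rough illustration of what a reverse proxy's health checking does, here is a sketch that probes each backend and keeps only the live ones in rotation; the backend addresses and the /healthz path are assumptions:

    import requests

    # Hypothetical backend pool.
    BACKENDS = ["http://10.0.1.10:8080", "http://10.0.2.10:8080"]

    def live_backends():
        live = []
        for backend in BACKENDS:
            try:
                # Probe a health endpoint; only healthy nodes stay in rotation.
                resp = requests.get(f"{backend}/healthz", timeout=2)
                if resp.status_code == 200:
                    live.append(backend)
            except requests.RequestException:
                pass  # Down or unreachable; route around it.
        return live

    print(live_backends())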
I am using HAProxy with two different nodes on different machines, geographically scattered:
Load-balancer-one having dns = http1.example.com
Load-balancer-two having dns = http2.example.com
The service is listening on the main site's DNS name, with the original hostname, via HAProxy.
My question is: how do I maintain a static URL? That is, it must not show the back-end servers' domains or IPs; I want to show only the original hostname.
The simplest method is to set up a round-robin DNS entry that returns the IP addresses of both servers.
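You can verify the round-robin entry by querying the A records directly. A small check using dnspython, against a hypothetical hostname:

    import dns.resolver

    # Both load balancer IPs should appear in the answer set.
    answers = dns.resolver.resolve("www.example.com", "A")
    for record in answers:
        print(record.address)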
You likely however want to use a GSLB (global server load balancing) solution that can remove failed load balancers from responses based on a health check. If you are in multiple data centers, some GSLB solutions can route users to the most performant location for them.
F5 and NetScaler have hardware GSLB solutions. Dyn, Akamai, UltraDNS and others offer GSLB as a service. AWS's Route 53 has a weighted round-robin solution, and has since added health checking and latency/geolocation-based routing.
This is my case:
I have 6 servers across the US and Europe. All servers are behind a load balancer. When you visit the website (www.example.com), it points at the load balancer's IP address, and from there you are redirected to one of the servers. Currently, if you visit the website from Germany, for example, you are transferred randomly to one of the servers. You could be sent to the Germany server or to the server in San Francisco.
I am looking for a way to redirect users to the nearest server based on their location, but without changing the URL. So I am NOT looking to have many URLs such as www.example.com, www.example.co.uk, www.example.dk, etc.
I am looking for something like a CDN, where you retrieve your files from the nearest server (?), so I can get rid of the load balancer, because if it crashes, the website does not respond (?)
For example:
If you are from the UK, redirect to IP 53.235.xx.xxx
If you are from the west US, redirect to IP ....
If you are from southern Europe, redirect to IP ..., etc.
DNSMadeEasy offers a feature similar to this, but they charge a 600 dollar upfront price, and for a startup that doesn't know whether the feature will work as expected, with no trial version, we cannot afford it: http://www.dnsmadeeasy.com/enterprise-dns/global-traffic-director/
What is another way of doing this?
Also, another question on the current setup: even with 6 servers all connected to the load balancer, if the load balancer has lag issues, it takes everything down with it, right? And if by any chance it goes down, the website does not respond. So what is the best way to eliminate that downtime, so that if one server IP address does not respond, traffic moves to the next (as a load balancer would do, but load balancers can have issues themselves)?
It would help to know what type of application servers you're talking about, i.e. J2EE (like JBoss/Tomcat), IIS, etc.
You can use a hardware or software load balancer with sticky IP and define ranges of IPs to stick to different application servers. Each country's ISPs should have their own blocks of IPs.
There's a list at the website below.
http://www.nirsoft.net/countryip/
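As a sketch of what sticky-IP-by-country routing looks like in practice, here is a lookup using MaxMind's geoip2 library; the database file, country table, and backend IPs are all assumptions:

    import geoip2.database

    # Hypothetical country-to-backend mapping.
    COUNTRY_TO_BACKEND = {
        "GB": "53.235.0.10",    # UK users
        "US": "198.51.100.10",  # US users
        "DE": "203.0.113.10",   # central-European users
    }

    # Requires a local copy of MaxMind's free GeoLite2 country database.
    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    def backend_for(client_ip, default="198.51.100.10"):
        country = reader.country(client_ip).country.iso_code
        return COUNTRY_TO_BACKEND.get(country, default)

    print(backend_for("81.2.69.160"))  # MaxMind's documented UK test IP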
Here's also a really, really good article on load balancing in general, with many high availability / persistence issues addressed. That should answer your second question, on the single point of failure at your load balancer; there are many different techniques to provide both high availability and load distribution. A lot depends on what kind of application you run and whether you require persistent sessions. Load balancing by sticky IP, if persistence isn't required and your LB does health checks properly, can provide high availability with easy failover. The downside is that load isn't evenly distributed, but it seems you're looking for distribution based on proximity, not on load.
http://1wt.eu/articles/2006_lb/index.html