Kubernetes DNS names, Certificates, LB Design Question [closed] - kubernetes

I would like to ask what strategy to use in Kubernetes for Ingress controllers, DNS names, certificates, and apps. I am not asking for technical details, but about the modeling. I have searched for recommendations on this and come up short.
Q1: Use one OR multiple Load Balancers? When would you spin up a new LB - is it based on security, traffic, or something else?
Q2: Let's say I have 3 Business Units and each of them has 2 Apps. What is the best way to go about DNS names and certificates?
Use separate certs and DNS names for every App ==> 6 certs, 6 DNS names (bu1app1.company.com, bu1app2.company.com, bu2app1.company.com, bu2app2.company.com, ...)
Use a cert per BU (a DNS name for each BU but not each App) and path-based routing for the Apps under that BU ==> 3 certs, 3 DNS names (bu1.company.com/app1, bu1.company.com/app2, bu2.company.com/app1, bu2.company.com/app2, ...)
Use a single cert for all BUs (a single DNS name for all) and path-based routing for every BU and App ==> 1 cert, 1 DNS name (k8s.company.com/bu1app1, k8s.company.com/bu1app2, k8s.company.com/bu2app1, k8s.company.com/bu2app2, ...)
Any advice is appreciated.
Jake.

Q1: Use one OR multiple Load Balancers? When would you spin up a new LB - is it based on security, traffic, or something else?
It depends on your application architecture, the number of locations you are serving, and so on. There are multiple types of load balancers available: internal, external, global, and network and application load balancers. If your application is very big, then instead of having a single entry point serve the entire application, you can split the application into groups based on functionality and configure a load balancer for each group, sized according to the traffic it receives.
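For example, here is a minimal sketch (Python) of the "one balancer per functional group" idea; the group names, backend addresses, and round-robin policy are illustrative assumptions, not anything from your cluster:

```python
import itertools

# Hypothetical backend pools, one per functional group.
POOLS = {
    "checkout": itertools.cycle(["10.0.1.10:8080", "10.0.1.11:8080"]),
    "catalog":  itertools.cycle(["10.0.2.10:8080", "10.0.2.11:8080", "10.0.2.12:8080"]),
}

def pick_backend(path: str) -> str:
    """Route a request to the next backend in its functional group's pool."""
    group = path.strip("/").split("/", 1)[0]   # "/checkout/cart" -> "checkout"
    pool = POOLS.get(group)
    if pool is None:
        raise LookupError(f"no pool configured for group {group!r}")
    return next(pool)                          # round-robin within the group

print(pick_backend("/checkout/cart"))  # 10.0.1.10:8080
print(pick_backend("/checkout/cart"))  # 10.0.1.11:8080
```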
Q2: Let's say I have 3 Business Units and each of them has 2 Apps. What is the best way to go about DNS names and certificates?
Again, it depends on the type of business each of your units is handling. If all three units are doing the same business but targeting different audiences, then you can have a single DNS name and certificate. If your three units are doing distinct business, it is usually better to go with distinct DNS names in order to reach the targeted consumers.
E.g., if yours is a food delivery application and you have two units, one which serves all customers and one which serves only B2B customers, you can have a single domain name and one subdomain, namely:
fooddelivery.com &
b2b.fooddelivery.com
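To make the per-BU option from the question concrete, here is a hedged sketch (Python) of the routing table that host-plus-path Ingress rules would express; the hostnames and service names are placeholders:

```python
# Option 2 from the question: one hostname (and cert) per BU,
# path-based routing per app within that BU.
ROUTES = {
    ("bu1.company.com", "/app1"): "bu1-app1-service",
    ("bu1.company.com", "/app2"): "bu1-app2-service",
    ("bu2.company.com", "/app1"): "bu2-app1-service",
}

def resolve(host: str, path: str) -> str:
    """Find the backend service for a (host, path) pair by prefix match."""
    for (h, prefix), service in ROUTES.items():
        if host == h and path.startswith(prefix):
            return service
    raise LookupError(f"no route for {host}{path}")

print(resolve("bu1.company.com", "/app2/login"))  # bu1-app2-service
```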

Related

How to provide multiple services through a cloud gateway?

Assume I'm working on a multiplayer online game. Each group of players may start an instance of the game to play. Take League Of Legends as an example.
At any moment in time, many game matches are being served at the same time. My question is about the architecture for this case. Here is my suggestion:
Assume we have a cloud with a gateway. Any game instance requires a game server behind this gateway to serve the game. For different clients outside the cloud to access different game servers in the cloud, the gateway may differentiate between connections according to ports. It is as if we had one machine with many processes, each listening on a different port.
Is this the best we can get?
Is there another way for the gateway to differentiate connections and forward them to different game instances?
Notice that these are socket connections NOT HTTP requests to an API gateway.
EDIT 1: This question is not about Load Balancing
The keyword is ports. Will each match be served on a different port? Or is there another way to serve multiple services on the same host (host = IP)?
Elaboration: I'm using a client-server model for each match instance, so multiple clients may connect to the same match server to participate in the same match. Each match needs to be served by a match server.
The limitation I have in mind is: for one host (= one IP) to serve multiple services, it needs to provide them on different ports. Match 1 runs on port 1234, so clients participating in match 1 will connect to and communicate with the match server on port 1234.
EDIT 2: Scalability is the target
My match server does not calculate and maintain the worlds of many matches; it maintains the world of one match. This is why each match needs another instance of the match server. It is not scalable to have all clients, communicating about different matches, connect to one process and be processed by one process.
My idea is to serve the world of each match with a different process. This would require each process to listen on a different port.
Example: any client will start a TCP connection with a server listening on port A. Is there a way to serve multiple MatchServers on the same port A (so that more simultaneous MatchServers won't result in more ports)?
Is there a better scalable way to serve the different worlds of multiple matches?
Short answer: you probably shouldn't use a proxy gateway to handle user connections unless you are absolutely sure there's no other way - you would be severely limiting your ability to scale.
Long answer:
What you've described is just a load balancing problem. You can find plenty of solutions for your given constraints via Google.
For League Of Legends it can be quite simple: using some health check, find the server with the lowest load and stick the current game to that server (much like sticky sessions) - until the game is finished, any computation for that particular game happens there. You could use any kind of caching mechanism on the gateway side to store the game-to-server relation for subsequent requests.
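A minimal sketch of that assignment logic (Python; the server names and load numbers are invented for illustration):

```python
servers = {"gs-1": 12, "gs-2": 3, "gs-3": 7}  # server -> current game count
game_to_server = {}                           # gateway-side "sticky" cache

def assign(game_id: int) -> str:
    """Pin a game to the least-loaded server on first sight, then reuse it."""
    if game_id in game_to_server:             # subsequent requests: cached
        return game_to_server[game_id]
    server = min(servers, key=servers.get)    # health-check stand-in: lowest load
    servers[server] += 1
    game_to_server[game_id] = server
    return server

print(assign(18283))  # gs-2 (lowest load)
print(assign(18283))  # gs-2 again - sticky for the game's lifetime
```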
Another, slightly more complicated example could be storing statistics for a particular game - this is usually solved via sharding, which is a common consequence of distributed computing. It could be solved this way: use some kind of hashing function (for example, modulo) with the game ID as the parameter to calculate the server number. For example, 18283 mod 15 = 13 for game ID 18283 and 15 available shards - so the 13th server should store/serve this data.
The main problem here would be "rebalancing" - adding/removing a shard from the cluster, for example.
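In code, the modulo scheme is one line; the sketch below (Python) also shows why rebalancing hurts - changing the shard count remaps the same key:

```python
NUM_SHARDS = 15

def shard_for(game_id: int) -> int:
    """Map a game ID to a shard index with a plain modulo hash."""
    return game_id % NUM_SHARDS

print(shard_for(18283))  # 13, matching the example above

# The rebalancing pain in one line: with 16 shards instead of 15,
# the very same game ID lands on a different shard.
print(18283 % 16)        # 11
```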
Those are just two examples; you can find more of them using appropriate keywords. Just keep in mind that all of this is a subset of the problems of distributed computing.

Can Let's Encrypt distribute multiple certificates for a single domain name?

Can Let's Encrypt issue multiple certificates for a single domain name? I mean, with all of them valid at the same time.
Yes. Check out the rate limit documentation:
https://letsencrypt.org/docs/rate-limits/
If you need to do so for testing, the staging API rate limit is much higher.
In fact, I’m working on a home cloud system and we’re building a kind of “inside out” cloud where the devices use Greenlock and Telebit so that they each have their own certificate and connection rather than being behind a load balancer - exactly the kind of thing we couldn’t reasonably do without Let’s Encrypt.
Also, if you've got an application where you're sharing a domain among many hosts, be sure to get your shared domains listed in the Public Suffix List, both for security and so you don't hit rate limits.

What are the advantages and disadvantages of site mirroring [closed]

Question 1:
When sites are mirrored, the content of their respective servers is synchronized (possibly automatically, for live mirrors, or manually). Is this true? Are all servers 'equal', or does a main server exist which then sends its changes to the other 'child servers'? So all changes have to happen on the main server, and child servers are not allowed to make changes?
Question 2:
Expected advantages:
Global advantage: when a site that is originally hosted in the US is mirrored to a server in London, Europeans will benefit. They will get better response times, and because the downloaders are split between the American and European servers, their download speeds can be higher.
Security: When one server crashes or is hacked, the other server can continue to operate normally.
Expected disadvantages:
If live mirroring is not used, some users will have to wait for renewed content.
More servers equals higher upkeep costs.
What other items can be added to these lists?
When sites are mirrored, the content of their respective servers is synchronized. Is this true?
Yes, mirror sites should always be synchronized with their masters, even if, for several reasons (e.g. update propagation times, network failures, etc.), they may not be.
There are several ways to achieve this; for example, a simple method could be running an rsync command in a cron job. A better solution is the "push mirroring" technique used by the Debian and Ubuntu Linux distributions.
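As an illustration, a hedged sketch of the cron-driven pull sync (Python wrapping rsync; the master URL and local path are placeholders):

```python
import subprocess

# A minimal sketch of the rsync approach above. In practice this would be
# the body of a cron job (e.g. run hourly), not a long-running script.
def sync_mirror(master="rsync://master.example.com/site",
                local="/var/www/mirror"):
    # -a preserves ownership/permissions/timestamps; --delete removes files
    # that were removed on the master, keeping the mirror an exact replica.
    subprocess.run(["rsync", "-a", "--delete", master, local], check=True)

sync_mirror()
```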
Are all servers 'equal', or does a main server exist which then sends its changes to the other 'child servers'?
No, not all servers are equal; generally the content provider updates one or more master servers, which, in turn, provide the updated content to the other mirrors.
For example, in the Fedora infrastructure there are master servers, tier-1 servers (fastest mirrors) and tier-2 servers.
So all changes have to happen on the main server, and child servers are not allowed to make changes?
Yes, in a mirrored context the content must be updated only on the master servers (one or more).
Expected advantages
Maybe the most comprehensive list of reasons for mirroring can be found on Wikipedia:
To preserve a website or page, especially when it is closed or is about to be closed.
To allow faster downloads for users at a specific geographical location.
To counteract censorship and promote freedom of information.
To provide access to otherwise unavailable information.
To preserve historic content.
To balance load.
To counterbalance a sudden, temporary increase in traffic.
To increase a site's ranking in a search engine.
To serve as a method of circumventing firewalls.
Expected disadvantages
Cost: you have to buy additional servers and spend time to operate them.
Inconsistency: one or more mirrors may not be synchronized with the master (and this can happen not only with manual sync, but also with live sync).
As a further reference, since mirroring is a simple form of a Web Distributed System, you could also be interested in this reading.
Also, for files that are popular to download, a mirror helps reduce network traffic, ensures better availability of the website or files, and enables the site or files to arrive more quickly for users close to the mirror site. Mirroring is the practice of creating and maintaining mirror sites.
A mirror site is an exact replica of the original site and is usually updated frequently to ensure that it reflects the content of the original site. Mirror sites are used to make access faster when the original site may be geographically distant (for example, a much-used Web site in Germany may arrange to have a mirror site in the United States). In some cases, the original site (for example, on a small university server) may not have a high-speed connection to the Internet and may arrange for a mirror site at a larger site with higher-speed connection and perhaps closer proximity to a large audience.
In addition to mirroring websites, you can also mirror files that can be downloaded from an FTP (File Transfer Protocol) server. Netscape, Microsoft, Sun Microsystems, and other companies have mirror sites from which you can download their browser software.
Mirroring could be considered a static form of content delivery.

How to set the IP for a VM from outside [closed]

I need to set the IP from outside the virtual machine.
Currently we use a DHCP server to bind a static IP to each VM's MAC address.
But as the number of VMs grows larger and larger, that is not easy to administer.
I want to provide one interface for the clients to set the IP of a VM when creating it.
By now, I know I can mount the VM disk and configure the network settings before creating the VM.
There is one problem with that: the VM disk type may vary, and disks may have totally different partition structures, possibly including LVM, etc. Besides this, I don't know whether it is possible to configure the IP for a Windows operating system with this method.
I don't know how virtual-machine products like VMware do this.
Edit: If those virtual-machine products don't provide an interface for the client to set the IP of a VM, then how do they manage their IPs? We have many, many VMs, and we assign an IP to each of them; the client just uses it. Clients are not authorized to set the IP from within the OS - even if they set one, it won't make any sense, since they won't be able to connect to the internet.
I think there must be an approach for this.
Thanks, any help is appreciated.
First of all, VMware does not provide a way to set the IP for the guest from its interface - at least not a general way. If you really want to modify the guest filesystem, have a look at libguestfs, which provides tools and an API to modify guest images.
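For example, a hedged sketch of the libguestfs route using its Python bindings; the disk path and the Debian-style /etc/network/interfaces content are assumptions, and Windows guests would need a different mechanism entirely:

```python
import guestfs

def set_static_ip(disk_image: str, ip: str, gateway: str) -> None:
    """Open a guest disk image offline and write a static network config."""
    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts(disk_image, readonly=0)
    g.launch()
    root = g.inspect_os()[0]                  # locate the guest's root filesystem
    mounts = g.inspect_get_mountpoints(root)  # copes with LVM and odd layouts
    for mountpoint in sorted(mounts, key=len):
        g.mount(mounts[mountpoint], mountpoint)
    g.write("/etc/network/interfaces",
            f"auto eth0\niface eth0 inet static\n"
            f"    address {ip}\n    gateway {gateway}\n")
    g.shutdown()
    g.close()

# Hypothetical image path, for illustration only.
set_static_ip("/var/lib/libvirt/images/vm1.img", "192.168.1.50", "192.168.1.1")
```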
You may also want to have a look at Foreman's smart proxy to manage/control your DHCP server via a REST API. If you use The Foreman directly, it will allow you to manage the IP addresses via a web UI.

Redirect users to the nearest server based on their location without changing the URL

This is my case:
I have 6 servers across the US and Europe. All servers are behind a load balancer. When you visit the website (www.example.com), it points at the load balancer's IP address, and from there you are redirected to one of the servers. Currently, if you visit the website from Germany, for example, you are transferred randomly to one of the servers: you could be sent to the German server or to the server in San Francisco.
I am looking for a way to redirect users to the nearest server based on their location, but without changing the URL. So I am NOT looking to have many URLs such as www.example.com, www.example.co.uk, www.example.dk, etc.
I am looking for something like a CDN, where you retrieve your files from the nearest server, so I can get rid of the load balancer - because if it crashes, the website does not respond.
For example:
If you are from the UK, redirect to IP 53.235.xx.xxx
If you are from the western US, redirect to IP ....
If you are from southern Europe, redirect to IP ..., etc.
DNSMadeEasy offers a feature similar to this, but they charge a 600-dollar upfront price; for a startup that doesn't know whether the feature will work as expected, and with no trial version available, we cannot afford it: http://www.dnsmadeeasy.com/enterprise-dns/global-traffic-director/
What is another way of doing this?
Also, another question about the current setup: even with 6 servers all connected to the load balancer, if the load balancer has lag issues, it takes everything down with it, right? And if by any chance it goes down, the website does not respond. So what is the best way to eliminate that downtime, so that if one server IP address does not respond, we move to the next (as a load balancer would do, but load balancers can have issues themselves)?
It would help to know what type of application servers you're talking about, e.g. J2EE (like JBoss/Tomcat), IIS, etc.
You can use a hardware or software load balancer with sticky IP and define ranges of IPs that stick to different application servers. Each country's ISPs should have their own blocks of IPs.
There's a list at the website below.
http://www.nirsoft.net/countryip/
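For illustration, a rough sketch (Python) of the sticky-IP idea using country IP blocks like those in the list above; the CIDR ranges and server names are invented placeholders:

```python
import ipaddress

# Map client address ranges (e.g. from a per-country IP list) to the
# nearest application server. Ranges and hostnames are made up.
RANGES = [
    (ipaddress.ip_network("81.0.0.0/8"),    "uk-server.example.com"),
    (ipaddress.ip_network("53.235.0.0/16"), "us-west-server.example.com"),
]
DEFAULT = "us-east-server.example.com"

def server_for(client_ip: str) -> str:
    """Return the server assigned to this client's IP range."""
    addr = ipaddress.ip_address(client_ip)
    for network, server in RANGES:
        if addr in network:
            return server
    return DEFAULT

print(server_for("81.2.69.160"))  # uk-server.example.com
```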
Here's also a really good article on load balancing in general, with many high-availability and persistence issues addressed. That should answer your second question about the single point of failure at your load balancer; there are many different techniques to provide both high availability and load distribution. A lot depends on what kind of application you run and whether you require persistent sessions. Load balancing by sticky IP, if persistence isn't required and your LB does health checks properly, can provide high availability with easy failover. The downside is that load isn't evenly distributed, but it seems you're looking for distribution based on proximity, not on load.
http://1wt.eu/articles/2006_lb/index.html
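On the failover part of the question, the "move to the next server" behaviour can be sketched like this (Python; the server IPs are placeholders) - a real load balancer's health checks do essentially the same thing, just continuously and out of band:

```python
import socket

SERVERS = ["203.0.113.10", "203.0.113.20", "203.0.113.30"]

def first_healthy(port: int = 80, timeout: float = 2.0) -> str:
    """Return the first server that accepts a TCP connection within timeout."""
    for ip in SERVERS:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return ip              # health check passed
        except OSError:
            continue                   # dead or lagging - try the next one
    raise ConnectionError("no server responded")
```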