How does MOSS Front End (IIS) load balancing work?

I would like to know how MOSS front-end load balancing works, just an overview or a link to a site that contains this type of information.
In other words, I have 2 front-end servers in the farm; how does MOSS distribute the workload?

Sorry to disappoint, but I've just been informed that MOSS does not do any load balancing on its own; you need to set this up yourself outside of MOSS.
The MOSS front-end servers in the farm only sync IIS content between each other - that part is provided by MOSS.

MOSS lives on Windows Server 2003 or 2008. You can enable the NLB service within the OS on the web front ends. I don't recall exactly which OS editions support that, but certainly Enterprise and Datacenter...

All server versions support NLB (network load balancing). There are really three ways to accomplish load balancing.
You can use DNS round robin to point users to different WFEs by handing out different IP addresses for the same FQDN. This is the five-minute load-balancing solution.
The second solution is to use the Windows version of network load balancing (NLB). This is the more robust option, as it takes into account the actual load on the WFEs: if one WFE is processing a large number of requests, traffic will go to the other box. This solution also accommodates failover if one box goes down; the DNS solution does not.
The third solution is to use a dedicated load balancer in front of your WFEs, such as a Cisco or F5 appliance. This is the solution for farms with many WFEs.
The next question is how you know whether load balancing is actually occurring. I wrote a web part for SharePoint that you can add to any page that tells you which server is serving the page. If your load balancing is working, you should see the server name change as you make repeated requests to the same page.
You can get the webpart here: Sharepoint Server Info Web Part
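The web part itself is .NET, but if you just want the idea behind the check, here is a rough, hypothetical sketch of the same thing as a standalone Go endpoint (not the author's web part; the port and path are made up). It simply reports the machine name that served the request, so the name should rotate as the load balancer spreads your requests across the WFEs.

package main

import (
    "fmt"
    "net/http"
    "os"
)

// whoami reports which machine handled the request, so you can watch the
// server name change (or not) while refreshing through the load balancer.
func whoami(w http.ResponseWriter, r *http.Request) {
    host, err := os.Hostname()
    if err != nil {
        http.Error(w, "cannot determine hostname", http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "served by: %s\n", host)
}

func main() {
    http.HandleFunc("/whoami", whoami)
    // Port 8080 is an arbitrary choice for this sketch.
    http.ListenAndServe(":8080", nil)
}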

Related

Deploy a WebApp and always keep it running

I have a web application spread over multiple servers, and the incoming traffic is handled by HAProxy in order to balance the load. When we deploy, we do it at night, because there are far fewer users and therefore less impact on the service. To deploy we use the following strategy:
we shut down half of the servers
we deploy to the servers that are shut down
we bring those servers back up
we repeat the same procedure on the other half of the servers
The problem is that whenever I shut down the servers, we drop users' connections. Is there a better strategy for doing this? How could I improve this, avoid service interruptions, and maybe even be able to deploy during the day?
I hope I was clear. Thanks
I strongly suggest using health checks for the servers.
Using HAProxy as an API Gateway, Part 3 [Health Checks]
You should have a URL ("/health") which HAProxy can use to health-check each backend server, and add option redispatch to the config.
Now when you want to do maintenance on a backend server, just "remove" the "/health" URL (make it stop answering successfully) and HAProxy automatically routes users to the other available servers.
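As a rough illustration (not from the linked article), here is a minimal Go sketch of such a backend health endpoint, with a switch you can flip to make the check fail before maintenance; the /drain endpoint, names, and port are assumptions, not anything HAProxy requires. On the HAProxy side you would point option httpchk at /health.

package main

import (
    "net/http"
    "sync/atomic"
)

// draining is flipped to 1 when we want HAProxy to stop sending us traffic.
var draining int32

// health is the URL HAProxy polls; a non-2xx answer fails the health check,
// so new requests get routed to the other servers.
func health(w http.ResponseWriter, r *http.Request) {
    if atomic.LoadInt32(&draining) == 1 {
        http.Error(w, "draining", http.StatusServiceUnavailable)
        return
    }
    w.Write([]byte("ok"))
}

// drain is a hypothetical admin-only endpoint hit just before maintenance.
func drain(w http.ResponseWriter, r *http.Request) {
    atomic.StoreInt32(&draining, 1)
    w.Write([]byte("health check disabled; server will drop out of rotation"))
}

func main() {
    http.HandleFunc("/health", health)
    http.HandleFunc("/drain", drain)
    http.ListenAndServe(":8080", nil)
}

In a real deployment the drain endpoint should of course not be reachable from the outside; it is only there to show the idea of taking a server out of rotation without killing its existing connections.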

Is it possible to run a Golang REST web app on an internal (private) IIS server?

I would like to create a web service with Go that runs either on IIS (7, 8, or 10) or under Tomcat 7.0. We have multiple environments, each with multiple servers, all running Windows Server 2008 R2, 2012, or 2016. All servers are private (10.x). My goal is to add some REST services to a COTS product that uses both IIS and Tomcat. I'd prefer to avoid gluing nginx or something else onto either server at the risk of impairing the COTS product or having their tech support people not answer the phone. Again, the servers are only accessible via corporate VPN and are not public internet-facing.
Which server would offer the easiest path to get something working -- Tomcat or IIS?
That's not really specific to Go, but there are at least two solutions I can think of:
Reverse proxying of HTTP requests.
Write a plain Go server serving requests via HTTP.
Maybe turn it into a proper Windows service using golang.org/x/sys/windows/svc.
Deploy it.
If it's to be hosted on the same machine that runs IIS, then it's fine to make it listen on localhost only. Note that it will need a dedicated TCP port to listen on, and you'll need to make your server configurable in this regard (see the sketch after these steps).
Set up reverse proxying in your IIS so that it forwards requests coming to whatever (part of an) URL you want to the Go server.
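For illustration, a minimal sketch of such a localhost-only Go server with the listening port made configurable (the flag name, default port, and URL path are assumptions, not anything IIS requires):

package main

import (
    "flag"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // The port must be configurable because IIS needs to know where to forward to.
    port := flag.Int("port", 8080, "TCP port to listen on (localhost only)")
    flag.Parse()

    http.HandleFunc("/api/hello", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from the Go backend")
    })

    // Bind to 127.0.0.1 so the server is reachable only from IIS on the same box.
    addr := fmt.Sprintf("127.0.0.1:%d", *port)
    log.Printf("listening on %s", addr)
    log.Fatal(http.ListenAndServe(addr, nil))
}

On the IIS side, the reverse proxying is typically done with the URL Rewrite and Application Request Routing (ARR) modules, pointed at that localhost port.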
Use FastCGI.
Go supports serving requests over the FastCGI protocol by means of its standard library, and IIS supports FastCGI workers.
So it's possible to (re-)write your Go server to use FastCGI instead of HTTP and then hook it into IIS via this protocol.
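On the Go side this is a small change; here is a minimal sketch using the standard net/http/fcgi package over a local TCP listener (the port is an arbitrary assumption, and hooking IIS's FastCGI module up to it is a separate configuration exercise, possibly via the named-pipe route discussed below):

package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "net/http/fcgi"
)

func main() {
    // A plain TCP listener on localhost; port 9000 is an arbitrary choice.
    l, err := net.Listen("tcp", "127.0.0.1:9000")
    if err != nil {
        log.Fatal(err)
    }

    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello over FastCGI")
    })

    // fcgi.Serve speaks the FastCGI protocol to whatever front end connects
    // to the listener, instead of plain HTTP.
    log.Fatal(fcgi.Serve(l, handler))
}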
The pros and cons of these solutions, as I view them, are:
Serving over plain HTTP is more familiar to a developer and makes the server more "portable", in the sense that it will be easier to change its deployment scheme if/when you need it, right up to making it available to the Internet directly.
Conversely, with FastCGI, you'll always need a FastCGI host as middleware.
There were rumors that Go's HTTP code is more fine-tuned for performance than its FastCGI code. Still, this will only be a concern under reasonably hard-core loads.
One possible upside of FastCGI over HTTP is that it can be served over pipes rather than TCP. For instance, you might serve it over named pipes, since they are supported by IIS's FastCGI module and there exist third-party packages for Go implementing support for them (including one from Microsoft).
The upside of this is that pipes are believed to incur less overhead for data transfer (basically it's just shoveling bytes between in-kernel buffers belonging to two processes instead of pushing them through the whole TCP/IP stack), and using pipes frees you from having to dedicate a TCP port to the Go server.
Still, I have no personal experience with this kind of setup and its performance trade-offs.
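For completeness, a hypothetical sketch of the named-pipe variant, assuming the github.com/Microsoft/go-winio package for the pipe listener; the pipe name is made up, it would have to match whatever the IIS FastCGI configuration expects, and this only builds on Windows:

package main

import (
    "fmt"
    "log"
    "net/http"
    "net/http/fcgi"

    "github.com/Microsoft/go-winio"
)

func main() {
    // The pipe name below is only an example for this sketch.
    l, err := winio.ListenPipe(`\\.\pipe\my-go-fcgi`, nil)
    if err != nil {
        log.Fatal(err)
    }

    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello over a named pipe")
    })

    // Same FastCGI serving loop as before, just over a pipe listener
    // instead of a TCP socket.
    log.Fatal(fcgi.Serve(l, handler))
}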

Connect SCOM 2016 Gateway Server to a Load balancer

Evening Everyone,
I have been doing some research to see if a SCOM gateway server can be configured to work with management servers behind a load balancer. In the reading and examples I see, everyone points the gateway server directly to a management server.
If anyone has done this, I would like to know what issues you faced.
What are you trying to achieve?
The SCOM management group is already managing performance and workload in the resource pool for you... adding an NLB in the middle between the GW and MS seems very odd.
Please be aware that if you do eventually use NLB, you need a physical NIC so that you will be able to use the agent deploy feature.

Load balancing servers

I have a Windows COM+ server connected to several SQL databases. The user sessions are stored in memory on the server. An MFC Windows client connects to the server. The traffic is starting to get too high for just one server to handle, so I would like to add one more. I plan to just redirect all new users to the new server like so:
my-server -- old users --> my-server1
my-server -- new users --> my-server2
but then I thought there might be some load balancing framework out there that might work better. What is the best way to solve the problem? What are the pros and cons of using a premade load balancer vs. redirecting users?
I would recommend using HAProxy for this. It supports both HTTP and plain TCP:
http://haproxy.1wt.eu/
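To make the "plain TCP" part concrete, here is a toy Go sketch of what a TCP-level balancer does, round-robin forwarding connections to two backends; the addresses are placeholders, and this is only an illustration of the idea, not how HAProxy itself works internally.

package main

import (
    "io"
    "log"
    "net"
)

// backends are placeholder addresses standing in for my-server1 and my-server2.
var backends = []string{"my-server1:5000", "my-server2:5000"}

func main() {
    l, err := net.Listen("tcp", ":5000")
    if err != nil {
        log.Fatal(err)
    }
    next := 0
    for {
        client, err := l.Accept()
        if err != nil {
            log.Print(err)
            continue
        }
        // Pick the next backend in round-robin order.
        backend := backends[next%len(backends)]
        next++
        go forward(client, backend)
    }
}

// forward opens a connection to one backend and shovels bytes both ways.
func forward(client net.Conn, backend string) {
    defer client.Close()
    server, err := net.Dial("tcp", backend)
    if err != nil {
        log.Printf("backend %s unreachable: %v", backend, err)
        return
    }
    defer server.Close()
    go io.Copy(server, client)
    io.Copy(client, server)
}

Note that because your user sessions live in the server's memory, a real setup would need session stickiness (which HAProxy supports) rather than plain round robin; otherwise a user's next connection may land on a server that doesn't know them.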

EC2: can I host an http server there?

Does anyone have experience deploying GWT apps to EC2?
If I were to install Tomcat or Apache on an EC2 instance, could I have users connect directly to a URL pointing there?
Would that be cost effective, or would Java hosting services be best?
Is there any downside to hosting the edge HTTP server on a regular hosting service and having that direct requests to EC2? Is performance ever an issue here?
Other answers are correct, but I just wanted to share the fact that we are developing a product that is 100% EC2/S3-based and also has a pure GWT front end.
We use maven2 for builds and the excellent gwt-maven plugin. This makes it easy to produce a WAR package of our web application as output. We use Jetty but Tomcat would work just as well.
We have pound (an HTTP accelerator/load balancer) running on the VM listening for HTTP and HTTPS, which then forwards requests to lighttpd (static) or Jetty (app). This also simplifies SSL certificates because pound handles SSL. I've found Java servers have always been a pain to configure with SSL certs.
Yes, you can host pretty much whatever you want, as you effectively have a dedicated Linux machine at your command.
As I last recall, the basic rate for an EC2 instance, on their "low end box" worked out to around $75/month, so you can use that as a benchmark against other vendors. That also assumed that the machine is up 24x7 (since you pay for it by the hour).
The major downside of an EC2 instance is simply that it can "go away" at any time, and when it does, any data written to your instance will "go away" as well.
That means you need to set it up so that you can readily restart the server, but you also need to offload any data that you generate and wish to keep (either to one of Amazon's other services, like S3, or to some other external service). That will incur some extra costs depending on volume.
Finally, you will also be billed for any traffic to the service.
The thing to compare it against is another "virtual server" from some other vendor. There are a lot of interesting things that can be done with EC2, but it may well be easier to go with a dedicated virtual hosting service if you're just using a single machine.
Others have given good answers. I would have to add that you need to spend programmer time getting to know EC2's quirks and addressing them (e.g. with EBS). It's not completely trivial, and though it is useful knowledge to have and may be worth it for that reason alone, if you want to get up and running quickly with just a few servers, you should probably look at other hosted options.
On the other hand, if you plan to scale up massively enough (eventually hosting many servers on EC2) then I would highly recommend it. You have to architect a few things, but you need to do that anyways. The flexibility of on-demand computing, and the generally low price, makes this a killer platform once you reach a certain scale of operation.
You definitely can host an HTTP server in EC2, but you need to take into consideration the following:
As mentioned before, the cost can be much higher than alternative hosting solutions.
Your instance (the machine you've started in EC2) can go down unexpectedly. There is no guarantee from Amazon of 24x7 availability. This means that the data you've stored in local storage will be lost, and when you start a new instance, it will get a new IP.
To successfully host a server in EC2, you therefore need to employ some other services from Amazon. You need an Elastic IP so that you can circumvent the new-IP-address problem. You can also use Elastic Block Storage (EBS). This is a service that lets you mount a disk in your machine that will not go away when your instance is lost.