Increasing web server performance - REST

What configuration should I usually set to increase web server performance (for example, Apache Tomcat)? I need my web server to process a lot of requests at the same time.

Something like Facebook's HipHop.
Or should I just use a cache, e.g. Redis?

In order to improve web server performance, there are a few steps to follow:
Implement a high-performance reverse proxy, like nginx.
Let nginx handle all static content delivery.
Handle SSL/TLS negotiation in nginx, because traffic encryption uses CPU time and memory.
Do your content compression (gzip) in nginx.
In summary, avoid giving your web server any tasks other than handling dynamic content and backend processing.
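A minimal nginx sketch of those points, assuming Tomcat listens on 127.0.0.1:8080 and that the hostname, certificate paths and static root shown here match your setup (all assumptions to adapt):

server {
    listen 443 ssl;
    server_name example.com;

    # Terminate SSL/TLS here so the app server never pays for it
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # Compress responses in nginx instead of in the app server
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    # Serve static assets straight from disk
    location /static/ {
        root /var/www/app;
        expires 30d;
    }

    # Everything else is proxied to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}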
P.S.: You can also take a look at Varnish.

Related

How is 'watch=true' implemented on the kube-apiserver?

When watching Kubernetes resources for changes, what exactly is happening under the hood? Does the HTTP connection suddenly change to a WebSocket (wss) connection?
To solve a problem of too many requests to the kube-apiserver, I am rewriting some code towards what I think is more of an operator pattern.
In our multi-tenant microservice architecture all services use the same library to look up connection details to tenant-specific DBs. The connection details are saved in secrets within the same namespace as the application. Every tenant DB has its own secret.
So on every call all secrets with the correct label are read and parsed for the necessary DB connection details. We have around 400 services/pods...
My idea: instead of reading all secrets on every call, create a cache and update it every time a relevant secret changes, via a watcher.
My concern: am I just replacing the HTTP requests with equally expensive WebSockets? As I understand it, I will now have an open WebSocket connection for every service/pod, which is still 400 open connections.
Would it be better to have one proxy service watch the secrets (kube-apiserver requests) and have all other services query that service for connection details (intranet requests, unrelated to the kube-apiserver)?
From the sources:
// ServeHTTP serves a series of encoded events via HTTP with Transfer-Encoding: chunked
// or over a websocket connection.
Which protocol is used (chunked HTTP or WebSocket) pretty much depends on the client; both have their cost, which you'll have to compare against your current request frequency.
You may be better off with a proxy cache that either watches or polls at regular intervals, but that depends a lot on your application.
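For the cache-plus-watcher idea, client-go's shared informers already implement that pattern: one watch per process keeps a local cache in sync, and reads never touch the kube-apiserver after the initial list. A minimal sketch; the namespace "tenants" and label "app=tenant-db" are assumptions:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config; each pod authenticates with its service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One watch per pod: the informer lists once, then keeps a local
	// cache in sync via the watch; lookups below never hit the apiserver.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset,
		10*time.Minute, // periodic resync as a safety net
		informers.WithNamespace("tenants"), // assumed namespace
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "app=tenant-db" // assumed label
		}),
	)
	secrets := factory.Core().V1().Secrets()

	secrets.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			s := newObj.(*corev1.Secret)
			// Re-parse the DB connection details for this tenant here.
			fmt.Println("secret updated:", s.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Lookups are served from the in-memory cache.
	all, err := secrets.Lister().Secrets("tenants").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Println("cached tenant secrets:", len(all))
	select {} // keep watching
}

Whether the 400 informer watches or a single caching proxy service is cheaper still comes down to the comparison above; the informer at least removes the per-call LIST traffic.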

HAProxy & Consul-template : retry request when scaling down

I'm working on a microservice architecture based on Docker, Registrator, Consul and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so: is there a way to configure HAProxy to retry a failed request against another endpoint if a container disappears?
To be precise: I'm using layer-7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old but it's still applicable based on more recent discussions. If a request is larger than tune.bufsize (default is around 16KB iirc) then HAProxy hasn't even retained the entire request in memory at the point an error occurs.
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests which, by definition, should be idempotent (a GET request must be repeatable without consequence, otherwise it should not have been designed to use GET -- it should have been POST or another verb) there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your solution would be for your back-ends to be declared unhealthy, perhaps by deliberately failing their HTTP health checks for a draining period before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown. Alternatively, you can put the backend into maintenance mode through HAProxy's stats/admin UI or socket, preventing new requests from being initiated while allowing running requests to drain.
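A sketch of that health-check approach in the consul-template backend above, assuming the application serves a static /health.txt that your shutdown tooling deletes before stopping the container (path and timings are assumptions):

backend hello-backend
    balance roundrobin
    mode http
    # Check a static file that gets deleted before shutdown; after two
    # failed checks ("fall 2") HAProxy stops sending new requests here.
    option httpchk GET /health.txt
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check inter 2s fall 2 rise 2
    {{end}}

Combined with a short delay between deleting the file and stopping the container, in-flight requests get a chance to finish before the instance disappears.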

Is Citrix NetScaler restricted solely to servers running Citrix?

I understand that Citrix NetScaler usually sits in front of Citrix servers. Does it also sit in front of non-Citrix servers?
>Does it also sit in front of non-Citrix servers?
Yes. It is a full-blown load balancer, or, to use the newer, fancier term, an "Application Delivery Controller".
It will do all the typical work:
distributing requests to backends
monitoring backends (using several included service monitors)
arranging persistence to backends
offloading authentication at the frontend and authenticating to the backend
offloading SSL/TLS from the backend
And also:
SSL-VPN gateway
Web cache
Web front-end optimization (compression, JavaScript minification, sharding, etc.)
Web application firewall
There are several editions, and only the most expensive one gives you all the features. Also, SSL-VPN is licensed per concurrent user.
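As a rough illustration of the plain load-balancing role in front of arbitrary, non-Citrix web servers, a minimal NetScaler CLI sketch (all names and addresses here are made up):

add service web1 10.0.0.11 HTTP 80
add service web2 10.0.0.12 HTTP 80
add lb vserver web_vip HTTP 203.0.113.10 80
bind lb vserver web_vip web1
bind lb vserver web_vip web2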
It can be used in front of any other servers for various purposes.
It depends on how it is configured: you can use it for Layer 4 load balancing (at Layer 4, a load balancer sees network information such as application ports and protocol, TCP/UDP), as a reverse proxy, for storage, and so on.

SSL by RESTful API or by reverse proxy?

I'm building a RESTful API which is accessible only over TLS. Where should the SSL connection be implemented?
By the RESTful API itself; my API is written in Go, which handles SSL easily.
By an SSL reverse proxy; here I'm using nginx.
I would prefer the 2nd approach, because nginx handles caching and static delivery better.
Should I then implement my API HTTP-only? In my opinion the system is secure, as long as the nginx reverse proxy serves SSL only and my API is exposed only to nginx.
I'm not sure if there is a 3rd approach, where I keep my API SSL-only and nginx passes all requests through transparently.
TL;DR: I would choose between the 2nd and 3rd options, depending on the scenario. If you want to publish the API on the Internet, never opt for the first one.
The most secure option is the third: implement your API to allow SSL connections only, and publish it to the Internet through a reverse proxy anyway.
The pros: communication with your API is secure even for internal connections, which protects you from internal attackers. The cons: the extra processing load on your server to manage SSL, which can impact performance.
Anyway, you should weigh cost against benefit; that is why it depends on the scenario. For example, if your API will be accessed not by internal users but only by internal services, and the load on the server is heavy, you can consider the 2nd approach: plain HTTP for internal communications and SSL termination at the proxy for the Internet.
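A minimal nginx sketch of the 3rd option, assuming the Go API itself serves HTTPS on 127.0.0.1:8443 and an internal CA signs its certificate (hostname, ports and paths are all assumptions):

server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location / {
        # Re-encrypt to the upstream so the nginx-to-API hop stays TLS
        proxy_pass https://127.0.0.1:8443;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/certs/internal-ca.pem;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

For the 2nd option, the same block with proxy_pass http://127.0.0.1:8080; and no proxy_ssl_* lines terminates TLS at nginx only.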

SSL offloading / redirecting specific URLs using HAproxy?

I have a working setup using a hardware load balancer that controls redirection in such a fashion that all requests to http://example.com/login/* are redirected (using HTTP 302) to https://example.com/login/* and all requests that are NOT for /login are inversely redirected from HTTPS to HTTP.
This allows me to wrap the login functions and user/password exchange in SSL, but otherwise avoid slowing connections with encryption; it also solves some mixed-content warnings for embedded content in some browsers.
The load balancer, however, is end-of-life, and I am looking for a replacement solution, preferably in software.
I think HAProxy is going to be able to serve as my load balancing solution, but I have only been able to find configuration examples and documentation for redirecting everything from HTTP to HTTPS, or vice versa.
Is it possible to do what I am proposing using HAproxy or should I look for a different solution?
I realize I will need to use the development version of HAproxy to support SSL at all.
I would suggest you do not use a DEV build for your production environment.
To answer your question, I would assume you're going to use HAProxy version 1.4:
Is it possible to do what I am proposing using HAProxy or should I look for a different solution?
Yes, it is possible, but you have to use another piece of software to handle the HTTPS traffic. Stunnel has proven to be good at this. So the setup is going to be:
HAProxy 1.4
# Redirect http://../login to https://../login
frontend HTTPSRedirect
    bind 1.2.3.4:80
    default_backend AppServers
    redirect prefix https://www.domain.com/login if { path_beg -i /login }

# Handler for requests coming from Stunnel4.
frontend HTTPReceiver
    bind 5.6.7.8:80
    default_backend AppServers
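    # Sketch (untested assumption): to also send non-/login traffic back to
    # plain HTTP, as the question asks, a matching redirect could go here:
    # redirect prefix http://www.domain.com if !{ path_beg -i /login }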
Stunnel4
[https]
accept = 443
; HAProxy IP
connect = 5.6.7.8:80
; stunnel needs a server certificate in accept mode (assumed path)
cert = /etc/stunnel/stunnel.pem