Should I connect directly to CouchDB's socket and pass HTTP requests or use node.js as a proxy? - iphone

First, here's my original question that spawned all of this.
I'm using Appcelerator Titanium to develop an iPhone app (eventually Android too). I'm connecting to CouchDB's port directly by using Titanium's Titanium.Network.TCPSocket object. I believe it utilizes the Apple SDK's CFSocket/NSStream class.
Once connected, I simply write:
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n\r\n'
directly to the socket. It keeps it open "forever" and returns JSON data whenever the db is updated and matches the filter and change request. Cool.
I'm wondering, is it ok to connect directly to CouchDB's socket like this, or would I be better off opening the socket to node.js instead, and maybe using this CouchDB node.js module to handle the CouchDB proxy through node.js?
My main concern is performance. I just don't have enough experience with CouchDB to know if hitting its socket and passing faux HTTP requests directly is good practice or not. Looking for experience and opinions on any ramifications or alternate suggestions.

It's me again. :-)
CouchDB inherits super concurrency handling from Erlang, the language it was written in. Erlang uses lightweight processes and message passing between those processes to achieve excellent performance under high concurrent load. It will take advantage of all CPU cores, too.
Node.js runs a single process and basically only does one thing at a time within that process. Its event-based, non-blocking IO approach does allow it to multitask while it waits for chunks of IO, but it still only does one thing at a time.
Both should easily handle tens of thousands of connections, but I would expect CouchDB to handle concurrency better (and with less effort on your part) than Node. And keep in mind that Node adds some latency if you put it in front of CouchDB. That may only be noticeable if you have them on different machines, though.
Writing directly to Couch via TCPSocket is a-ok as long as you write a well-formed HTTP request that follows the spec. (You're not passing a faux request... that's a real HTTP request you're sending, just like any other.)
Note: HTTP 1.1 does require you to include a Host header in the request, so you'll need to correct your code to reflect that, OR, to keep things simple, just use HTTP 1.0, which doesn't require it. (I'm curious why you're not using Titanium.Network.HTTPClient. Does it only give you the response body after the request finishes or something?)
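For what it's worth, here is a rough sketch of the corrected request in plain Python (the app itself uses Titanium's TCPSocket; the hostname here is made up, and 5984 is just CouchDB's default port):

    # Illustration only: the same continuous _changes request, but well-formed HTTP/1.1.
    import socket

    request = (
        "GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n"
        "Host: couch.example.com:5984\r\n"   # the header HTTP/1.1 requires
        "\r\n"
    )

    sock = socket.create_connection(("couch.example.com", 5984))
    sock.sendall(request.encode("ascii"))

    # The continuous feed never "ends"; CouchDB streams a JSON line per matching change.
    # What we print here is raw bytes, so it includes the response headers and chunk framing.
    while True:
        data = sock.recv(4096)
        if not data:
            break          # the server closed the connection
        print(data.decode("utf-8", errors="replace"), end="")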
Anyway, CouchDB can totally handle direct connections and--unless you put a lot of effort into your Node proxy--it's probably going to give users a better experience when you have 100k of them playing the game at once.
EDIT: If you use Node, write an actual HTTP proxy. That will run a lot faster than using the module you provided and be simpler to implement. (Rather than defining your own API that then makes requests to Couch, you can just pass certain requests on to CouchDB and block others, say, for security reasons.)
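To make the idea concrete, here is a very rough pass-through sketch. The answer suggests doing this in Node; this version is Python purely for illustration, and the listen port, whitelist, and paths are all assumptions:

    # Rough sketch of "pass some requests through to CouchDB, block others".
    import http.client
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    COUCH = ("127.0.0.1", 5984)              # CouchDB's default port, assumed local
    ALLOWED_PREFIXES = ("/mydb/",)           # whatever paths you decide clients may reach

    class CouchProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            if not self.path.startswith(ALLOWED_PREFIXES):
                self.send_error(403, "Blocked by proxy")   # e.g. for security reasons
                return
            upstream = http.client.HTTPConnection(*COUCH)
            upstream.request("GET", self.path)
            resp = upstream.getresponse()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.getheader("Content-Type", "application/json"))
            self.end_headers()
            # Stream the body through chunk by chunk so a continuous _changes feed also works.
            while True:
                chunk = resp.read(8192)
                if not chunk:
                    break
                self.wfile.write(chunk)

    if __name__ == "__main__":
        ThreadingHTTPServer(("", 8080), CouchProxy).serve_forever()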
Also take a look at how "multinode" works:
http://www.sitepen.com/blog/2010/07/14/multi-node-concurrent-nodejs-http-server/

Related

Increase speed of API calls osx

I want to make my API calls as fast as possible using swift. I know that using Alamofire helps with speed but is there anything faster than using Alamofire? I am creating trading software so every millisecond makes a huge difference. These are "post" and "get" requests I am making to execute the trades. Right now there is a lot of variability in the speed of the calls. I know there are platforms like neumob that speed up applications so I wanted to know if there were any concepts like that which I can apply to my application. I am developing it using swift and it will be run on OS X.
I am also using a websocket to get order book data. To connect to the websocket, I am using Starscream. If there is a better way to connect to the socket I would love to know that as well.
If milliseconds matter, you shouldn't be using HTTP. Or even TCP. AFAIK, most trading applications use stream connections of some kind, usually transmitting protobufs instead of JSON, so events come in as fast as they're sent over the wire. Barring that, using URLSession directly may be a few instructions faster than Alamofire, which wraps URLSession, but I doubt it would make a noticeable difference. As far as an HTTP connection goes, URLSession is pretty damn fast, as that's what Safari and the rest of the system use.
Your program is very likely I/O bound, not CPU bound, so the main bottleneck will be your internet connection and the data that it transmits.
If you cannot change the communication protocol because you do not control the API server, the only thing you can do is to run your app in a data center that is geographically close to the API server. As long as the service requires you to use HTTP and web sockets, you can't go much faster than NSURLSession, Alamofire or Starscream without years of optimization.
If you can control the API server, you could switch to plain TCP or even UDP. Then you could come up with a custom communication protocol that uses small binary messages.
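As a purely illustrative sketch of what a small binary message over plain TCP might look like (the field layout, host, and port are invented, not any real trading API):

    # A tiny fixed-size binary order message: 17 bytes vs. hundreds for HTTP headers plus JSON.
    import socket
    import struct

    ORDER_FMT = "!IcId"   # network byte order: order id (uint32), side ('B'/'S'), quantity (uint32), price (double)

    def send_order(order_id: int, side: bytes, qty: int, price: float) -> None:
        payload = struct.pack(ORDER_FMT, order_id, side, qty, price)
        with socket.create_connection(("trading.example.com", 9000)) as sock:
            sock.sendall(payload)

    send_order(42, b"B", 100, 101.25)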
Of course, you have to profile first and actually find out which parts of your code are slow. There may be other code that takes a few milliseconds to run.

WebSocket/REST: Client connections?

I understand the main principles behind both. However, I have a question which I can't answer.
Benchmarks show that WebSockets can serve more messages as this website shows: http://blog.arungupta.me/rest-vs-websocket-comparison-benchmarks/
This makes sense, as it states that the connections do not have to be closed and reopened, and the HTTP headers etc. do not have to be resent.
My question is, what if the connections are always coming from different clients (and perhaps some from the same client)? From what I understand, the benchmark assumes the same clients are reconnecting, which would make sense for keeping a constant connection.
If a user only makes a request every minute or so, would it not be beneficial for the communication to run over REST instead of WebSockets, since the server frees up sockets and can handle a larger crowd, so to speak?
To fix the issue with REST you would scale vertically, while WebSockets would scale horizontally?
Does this make sense, or am I out of it?
This is my experience so far; I am happy to discuss my conclusions about using WebSockets in big applications that use CQRS:
Real Time Apps
Are you creating a financial application, game, chat or whatever kind of application that needs low latency, frequent, bidirectional communication? Go with WebSockets:
Well supported.
Standard.
You can use either a publisher/subscriber model or a request/response model (by creating a correlationId for each request and subscribing once to it; see the sketch after this list).
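As a rough sketch of the correlationId idea, assuming a hypothetical server that echoes the correlationId back with its reply, and using the third-party Python websockets package (pip install websockets) purely for illustration:

    # Request/response over a WebSocket by tagging each request with a correlationId.
    import asyncio
    import json
    import uuid

    import websockets

    async def request(ws, payload: dict) -> dict:
        correlation_id = str(uuid.uuid4())
        await ws.send(json.dumps({"correlationId": correlation_id, **payload}))
        # Wait until the reply carrying our correlationId arrives; other messages
        # (pub/sub pushes, replies to other requests) are ignored here for brevity.
        while True:
            msg = json.loads(await ws.recv())
            if msg.get("correlationId") == correlation_id:
                return msg

    async def main():
        async with websockets.connect("wss://example.com/ws") as ws:
            reply = await request(ws, {"type": "search", "term": "foo"})
            print(reply)

    asyncio.run(main())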
Small size apps
Do you need push communication and/or pub/sub in your client and your application is not too big? Go with WebSockets. Probably there is no point in complicating things further.
Regular Apps with some degree of high load expected
If you do not need to send commands very fast, and you expect to do far more reads than writes, you should expose a REST API to perform CRUD (create, read, update, delete), especially C_UD (the writes).
Not all devices prefer WebSockets. For example, mobile devices may prefer to use REST, since maintaining a WebSocket connection may prevent the device from saving battery.
You expect an outcome, even if it is a timeout. Even though you can do request/response in WebSockets using a correlationId, the response is still not guaranteed. When you send a command to the system, you need to know if the system has accepted it. Yes, you can implement your own logic and achieve the same effect, but what I mean is that an HTTP request already has the semantics you need to send a command.
Does your application send commands very often? You should strive for chunky communication rather than chatty, so you should probably batch those change requests.
You should then expose a WebSocket endpoint to subscribe to specific topics and to perform low-latency query-response, like filling autocomplete boxes, checking for unique items (e.g. usernames), or any kind of search in your read model. Also use it to get notified when a change request (write) has actually been processed and completed.
What I am doing in a pet project is to place the WebSocket endpoint in the read model; on connection, the server gives a connectionID to the client via the WebSocket. When the client performs an operation via REST, it includes an optional parameter that indicates "when done, notify me through this connectionID". The REST server returns a response saying whether the command was placed correctly on a service bus. A queue consumer processes the command, and when done (well or wrong), if the command included a notification request, another message is placed in a "web notification queue" indicating the outcome of the command and the connectionID to be notified. The read model is subscribed to this queue, gets the messages and forwards them to the appropriate WebSocket connection.
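A client-side sketch of that flow might look something like the following; every endpoint and field name here is hypothetical, and the Python websockets package plus urllib merely stand in for whatever client stack you actually use:

    # 1. Get a connectionID over the WebSocket, 2. send the command over REST with
    #    that ID attached, 3. receive the real outcome as a push on the WebSocket.
    import asyncio
    import json
    import urllib.request

    import websockets

    async def main():
        async with websockets.connect("wss://read-model.example.com/ws") as ws:
            hello = json.loads(await ws.recv())          # assume the server sends {"connectionId": "..."}
            connection_id = hello["connectionId"]

            body = json.dumps({"action": "renameItem", "newName": "foo",
                               "notifyConnectionId": connection_id}).encode()
            req = urllib.request.Request("https://write-model.example.com/commands",
                                         data=body, headers={"Content-Type": "application/json"})
            ack = urllib.request.urlopen(req)            # only tells us the command reached the bus
            print("accepted:", ack.status)

            outcome = json.loads(await ws.recv())        # the read model pushes the actual outcome later
            print("command outcome:", outcome)

    asyncio.run(main())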
However, if your REST API is going to be consumed by non-browser clients, you may want to offer a way to check on the completion of a command using the async REST approach: https://www.adayinthelifeof.nl/2011/06/02/asynchronous-operations-in-rest/
I know it is quite appealing to have a low-latency up channel available for sending commands, but if you do, your overall architecture gets messed up. For example, if you are using a CQRS architecture, where does your WebSocket endpoint go? In the read model or in the write model?
If you place it in the read model, then you have easy access to your read DB to answer fast search queries, but then you have to couple in the logic to process commands somehow, with the read model being responsible for sending the commands to the write model and notifying the client if it is unable to do so.
If you place it in the write model, then placing commands is easy, but then you need access to your read model and read DB if you want to answer search queries through the WebSocket.
By considering WebSockets part of your read model and leaving command processing to the REST interface, you keep your loose coupling between your read model and your write model.

Implement server-push with GWTP

I have a project using GWTP (which involves MVP separation, Gin and Dispatch), and now I'm in the situation where changes on the server need to be pushed to specific clients.
I've been reading the gwt-comet and gwteventservice documentation. It seems the first doesn't work with RPC and the second encapsulates RPC, and I don't know how to fit that into my current command pattern from GWTP. Ideas?
I have been using gwt-comet (http://code.google.com/p/gwt-comet/). It's a native comet implementation that works pretty well, much like RPC; you can send Strings or your GWT-serialized objects as well. And the best thing is that you don't need to do much to make it work.
I used the "Server Push in GWT" approach described here http://code.google.com/p/google-web-toolkit-incubator/wiki/ServerPushFAQ - it seemed to work fairly well for a small project.
This is really a servlet problem, not a GWT or GWTP problem.
So there are a few approaches to doing this; the most stable (in my opinion) is to have a long or blocking poll servlet. This is basically a servlet that is polled by the client and holds the connection open for some period of time if there is no message to 'push' to the client; if too much time passes (this is to get around HTTP timeouts), a heartbeat of some kind is returned. Either way, when the servlet request returns, the client just makes another request. This is the most portable and stable way to my mind, since it uses only the core servlet API, doesn't suffer from network issues, and the blocking portion allows you to have the poll 'park' at the server for some period of time, which reduces total request load while still allowing very quick return of new information to the client when there is some available.
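The shape of a blocking poll is roughly the following (the answer is talking about Java servlets; this Python sketch with made-up ports and timeouts only illustrates the park-then-heartbeat idea, with the client re-requesting as soon as each response returns):

    import json
    import queue
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    messages = queue.Queue()      # filled elsewhere whenever there is something to push
    POLL_TIMEOUT = 25             # park each request this long before sending a heartbeat

    class PollHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            try:
                payload = {"type": "message", "data": messages.get(timeout=POLL_TIMEOUT)}
            except queue.Empty:
                payload = {"type": "heartbeat"}   # keeps intermediaries from timing the request out
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    ThreadingHTTPServer(("", 8081), PollHandler).serve_forever()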
The next way to achieve this is via WebSockets. This is great once you get it working, and in my opinion it is the way of the future without question. I think this is a good one to work with since this will be, in my opinion, a paradigm shift in web applications once it catches a head of steam, so we all need to be up to speed. Basically, you have a JavaScript 'socket' open via port 80 (this is one of the best features, since you don't have to open any firewall holes) and can communicate in two directions across that socket.
Comet can also work, but it will generally lock you down to one server type, which may be alright for your application. Caveat here!!!! I have only done very small tests with comet; it was flaky when I set it up, and was not as steady as the blocking-poll solution.
Now the neatest one in my opinion, but also very limited due to network constraints, probably to single-domain intranet applications, is to use an applet-based push. This setup (which could be done with UDP or a straight socket; I did all web just to keep it conceptually simpler) takes the applet, uses it to spin up a Jetty server instance on the client, and then has the page publish the client's Jetty 'endpoint' to the server. At this point, the client can contact the server using its servlets, and the server can contact the client at the servlet(s) exposed on the Jetty server. This is true push; it's neato, but there are network nightmares.
So of all the above, I use long polling, keep my eye on web sockets since they are the future in my mind, and really like the applet based version, although it's quite restricted in use due to the network resolution limitations.
Once you have this decided, from GWTP you would just have actions or JSNI bridge methods as needed to connect to your server and receive responses. I won't go into this, since this is really a core servlet/http/javascript question more than a GWT or GWTP centric question.
I hope that helps!

How should I implement bi-directional networking between an iPhone application and an Objective-C server-side application?

I'm looking for advice on the best way to implement some kind of bi-directional communication between a "server-side" application, written in Objective-C and running on a mac, and a client application running on an iPhone.
To cut a long story short, I'm adapting an existing library for use in a client-server environment. The library (which runs on the server) is basically a search engine which provides periodic results, and additionally can provide updates for any of those results at a later date. In an ideal world therefore I would be able to achieve the following with my hypothetical networking solution:
Start queries on the server.
Have the server "push" results to the client as they arrive.
Have the server "push" updates to individual results to the client as they arrive.
If I was writing this client to run on another Mac, I might well look at using Distributed Objects to mask the fact that the server was actually running remotely, but DO is not available on an iPhone.
If I was writing a more generic client-server application I would probably look at using HTTP to provide some kind of RESTful interface to searches, but this solution does not lend itself well to asynchronous updates, and additionally what I am proposing does not fit well with the "stateless" tenet of REST: I would have to model my protocol so that I "created" a search resource whose state I could subsequently query, and I would have to poll for updates to it.
One suggestion someone made was to make use of something like BLIP to provide me with a two-way pipe between the client and the server and implement my own "proxy" type objects for the server-side resources that knew how to fetch data from the server and additionally were addressable so that the server could push updates to them. Whilst BLIP provides the low-level messaging framework needed to communicate bi-directionally it still leaves me with a few questions:
How will I manage the lifetime of the objects on the server? I can have a message type that "creates" a search object, but when should that object be destroyed?
How well will this perform on an iPhone: if I have a persistent connection to the server, will it drain the battery too fast? This question is also pertinent in the HTTP world: most async updates are done using a COMET-type hack, which again requires a persistent connection.
So right now I'm still completely unsure what the best way to go is: I've done a lot of searching and reading but have not settled on any solution. I'm asking here on SO because I'm sure that there are many of you out there who have already solved this problem.
How have you gone about achieving real-time bidirectional networking between the iPhone and an Objective-C server-side app?

Some fundamental but important questions about web development?

I've developed some web-based applications till now using PHP, Python and Java. But some fundamental but very important questions are still beyond my knowledge, so I made this post to get help and clarification from you guys.
Say I use some programming language as my backend language(PHP/Python/.Net/Java, etc), and I deploy my application with a web server(apache/lighttpd/nginx/IIS, etc). And suppose at time T, one of my page got 100 simultaneous requests from different users. So my questions are:
How does my web server handle 100 such simultaneous requests? Will the web server generate one process/thread for each request? (If yes, a process or a thread?)
What does the interpreter of the backend language do? How will it handle the request and generate the proper HTML? Will the interpreter generate a process/thread for each request? (If yes, a process or a thread?)
If the interpreter generates a process/thread for each request, what about these processes (threads)? Will they share some code space? Will they communicate with each other? How are global variables in the backend code handled? Or are they independent processes (threads)? How long does a process/thread live? Will they be destroyed when the request has been handled and the response returned?
Suppose the web server can only support 100 simultaneous requests, but now it gets 1000 simultaneous requests. How does it handle that situation? Will it put them in a queue and handle each request when the server is available? Or some other approach?
I read some articles about Comet these days, and I found that a long connection may be a good way to handle the real-time, multi-user use case. So what about long connections? Are they a feature of some specific web servers, or are they available on every web server? Does a long connection require a long-lived interpreter process?
EDIT:
Recently I read some articles about CGI and FastCGI, which tell me that FastCGI should be a typical approach to handling requests.
the protocol multiplexes a single transport connection between several independent FastCGI requests. This supports applications that are able to process concurrent requests using event-driven or multi-threaded programming techniques.
This is quoted from the FastCGI spec, which mentions a connection that can handle several requests and can be implemented with multi-threading. I'm wondering whether this connection can be treated as a process which generates several threads, one for each request. If this is true, I become more confused about how to handle shared resources across those threads.
P.S. Thanks to Thomas for the advice of splitting the post into several posts, but I think the questions are related and it's better to group them together.
Thanks to S.Lott for the great answer, but some answers to each question are too brief or not covered at all.
Thanks for everyone's answers, which bring me closer to the truth.
Update, Spring 2018:
I wrote this response in 2010 and since then, a whole lot of things have changed in the world of a web backend developer. Namely, the advent of the "cloud", which has turned services such as one-click load balancers and autoscaling into commodities, has made the actual mechanics of scaling your application much easier to get started with.
That said, what I wrote in this article in 2010 still mostly holds true today, and understanding the mechanics behind how your web server and language hosting environment actually works and how to tune it can save you considerable amounts of money in hosting costs. For that reason, I have left the article as originally written below for anyone who is starting to get elbows deep in tuning their stack.
1. Depends on the webserver (and sometimes configuration of such). A description of various models:
Apache with mpm_prefork (default on unix): Process per request. To minimize startup time, Apache keeps a pool of idle processes waiting to handle new requests (which you configure the size of). When a new request comes in, the master process delegates it to an available worker, otherwise spawns up a new one. If 100 requests came in, unless you had 100 idle workers, some forking would need to be done to handle the load. If the number of idle processes exceeds the MaxSpare value, some will be reaped after finishing requests until there are only so many idle processes.
Apache with mpm_event, mpm_worker, mpm_winnt: Thread per request. Similarly, apache keeps a pool of idle threads in most situations, also configurable. (A small detail, but functionally the same: mpm_worker runs several processes, each of which is multi-threaded).
Nginx/Lighttpd: These are lightweight event-based servers which use select()/epoll()/poll() to multiplex a number of sockets without needing multiple threads or processes. Through very careful coding and use of non-blocking APIs, they can scale to thousands of simultaneous requests on commodity hardware, provided available bandwidth and correctly configured file-descriptor limits. The caveat is that implementing traditional embedded scripting languages is almost impossible within the server context; doing so would negate most of the benefits. Both support FastCGI, however, for external scripting languages.
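To illustrate what "event-based" means here, a toy single-process server that multiplexes all of its connections through the OS readiness API; this is only a sketch of the model, nothing like the real nginx internals, and the port and response are made up:

    import selectors
    import socket

    sel = selectors.DefaultSelector()          # picks epoll/kqueue/select for the platform

    listener = socket.socket()
    listener.bind(("", 8090))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select():            # wait until any registered socket is ready
            if key.fileobj is listener:
                conn, _addr = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                conn = key.fileobj
                data = conn.recv(4096)         # socket reported ready, so this won't stall
                if data:
                    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
                else:
                    sel.unregister(conn)       # peer closed the connection
                    conn.close()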
2. Depends on the language, or in some languages, on which deployment model you use. Some server configurations only allow certain deployment models.
Apache mod_php, mod_perl, mod_python: These modules run a separate interpreter for each apache worker. Most of these cannot work with mpm_worker very well (due to various issues with threadsafety in client code), thus they are mostly limited to forking models. That means that for each apache process, you have a php/perl/python interpreter running inside. This severely increases memory footprint: if a given apache worker would normally take about 4MB of memory on your system, one with PHP may take 15mb and one with Python may take 20-40MB for an average application. Some of this will be shared memory between processes, but in general, these models are very difficult to scale very large.
Apache (supported configurations), Lighttpd, CGI: This is mostly a dying-off method of hosting. The issue with CGI is that not only do you fork a new process for handling requests, you do so for -every- request, not just when you need to increase load. With the dynamic languages of today having a rather large startup time, this creates not only a lot of work for your webserver, but significantly increases page load time. A small perl script might be fine to run as CGI, but a large python, ruby, or java application is rather unwieldy. In the case of Java, you might be waiting a second or more just for app startup, only to have to do it all again on the next request.
All web servers, FastCGI/SCGI/AJP: This is the 'external' hosting model of running dynamic languages. There are a whole list of interesting variations, but the gist is that your application listens on some sort of socket, and the web server handles an HTTP request, then sends it via another protocol to the socket, only for dynamic pages (static pages are usually handled directly by the webserver).
This confers many advantages, because you need fewer dynamic workers than you need connection-handling capacity. If for every 100 requests, half are for static files such as images, CSS, etc., and furthermore most dynamic requests are short, you might get by with 20 dynamic workers handling 100 simultaneous clients. That is, since a given webserver keep-alive connection is normally idle 80% of the time, your dynamic interpreters can be handling requests from other clients. This is much better than the mod_php/python/perl approach, where when your user is loading a CSS file or not loading anything at all, your interpreter sits there using memory and not doing any work.
Apache mod_wsgi: This specifically applies to hosting python, but it takes some of the advantages of webserver-hosted apps (easy configuration) and external hosting (process multiplexing). When you run it in daemon mode, mod_wsgi only delegates requests to your daemon workers when needed, and thus 4 daemons might be able to handle 100 simultaneous users (depends on your site and its workload)
Phusion Passenger: Passenger is an apache hosting system that is mostly for hosting ruby apps, and like mod_wsgi provides advantages of both external and webserver-managed hosting.
3. Again, I will split the question based on hosting models for where this is applicable.
mod_php, mod_python, mod_perl: Only the C libraries of your application will generally be shared at all between apache workers. This is because apache forks first, then loads up your dynamic code (which due to subtleties, is mostly not able to use shared pages). Interpreters do not communicate with each other within this model. No global variables are generally shared. In the case of mod_python, you can have globals stay between requests within a process, but not across processes. This can lead to some very weird behaviours (browsers rarely keep the same connection forever, and most open several to a given website) so be very careful with how you use globals. Use something like memcached or a database or files for things like session storage and other cache bits that need to be shared.
FastCGI/SCGI/AJP/Proxied HTTP: Because your application is essentially a server in and of itself, this depends on the language the server is written in (usually the same language as your code, but not always) and various factors. For example, most Java deployment use a thread-per-request. Python and its "flup" FastCGI library can run in either prefork or threaded mode, but since Python and its GIL are limiting, you will likely get the best performance from prefork.
mod_wsgi/passenger: mod_wsgi in daemon mode can be configured in how it handles things, but I would recommend you give it a fixed number of processes. You want to keep your Python code in memory, spun up and ready to go. This is the best approach to keeping latency predictable and low.
In almost all models mentioned above, the lifetime of a process/thread is longer than a single request. Most setups follow some variation on the apache model: Keep some spare workers around, spawn up more when needed, reap when there are too many, based on a few configurable limits. Most of these setups -do not- destroy a process after a request, though some may clear out the application code (such as in the case of PHP fastcgi).
4. If you say "the web server can only handle 100 requests" it depends on whether you mean the actual webserver itself or the dynamic portion of the webserver. There is also a difference between actual and functional limits.
In the case of Apache for example, you will configure a maximum number of workers (connections). If this number of connections was 100 and was reached, no more connections will be accepted by apache until someone disconnects. With keep-alive enabled, those 100 connections may stay open for a long time, much longer than a single request, and those other 900 people waiting on requests will probably time out.
If you do have limits high enough, you can accept all those users. Even with the most lightweight apache however, the cost is about 2-3mb per worker, so with apache alone you might be talking 3gb+ of memory just to handle the connections, not to mention other possibly limited OS resources like process ids, file descriptors, and buffers, and this is before considering your application code.
For lighttpd/Nginx, they can handle a large number of connections (thousands) in a tiny memory footprint, often just a few megs per thousand connections (depends on factors like buffers and how async IO apis are set up). If we go on the assumption most your connections are keep-alive and 80% (or more) idle, this is very good, as you are not wasting dynamic process time or a whole lot of memory.
In any external hosted model (mod_wsgi/fastcgi/ajp/proxied http), say you only have 10 workers and 1000 users make a request, your webserver will queue up the requests to your dynamic workers. This is ideal: if your requests return quickly you can keep handling a much larger user load without needing more workers. Usually the premium is memory or DB connections, and by queueing you can serve a lot more users with the same resources, rather than denying some users.
Be careful: say you have one page which builds a report or does a search and takes several seconds, and a whole lot of users tie up workers with this: someone wanting to load your front page may be queued for a few seconds while all those long-running requests complete. Alternatives are using a separate pool of workers to handle URLs to your reporting app section, or doing reporting separately (like in a background job) and then polling its completion later. Lots of options there, but require you to put some thought into your application.
5. Most people using apache who need to handle a lot of simultaneous users turn keep-alive off, because of its high memory footprint. Or they run Apache with keep-alive turned on and a short keep-alive time limit, say 10 seconds (so you can get your front page and images/CSS in a single page load). If you truly need to scale to 1000 connections or more and want keep-alive, you will want to look at Nginx/lighttpd and other lightweight event-based servers.
It might be noted that if you do want apache (for configuration ease of use, or the need to host certain setups), you can put Nginx in front of apache, using HTTP proxying. This will allow Nginx to handle keep-alive connections (and, preferably, static files) and apache to handle only the grunt work. Nginx also happens to be better than apache at writing logfiles, interestingly. For a production deployment, we have been very happy with nginx in front of apache (with mod_wsgi in this instance). The apache does not do any access logging, nor does it handle static files, allowing us to disable a large number of the modules inside apache to keep its footprint small.
I've mostly answered this already, but no, if you have a long connection it doesn't have to have any bearing on how long the interpreter runs (as long as you are using an externally hosted application, which by now should be clearly vastly superior). So if you want to use comet and a long keep-alive (which is usually a good thing, if you can handle it), consider nginx.
Bonus FastCGI Question You mention that fastcgi can multiplex within a single connection. This is supported by the protocol indeed (I believe the concept is known as "channels"), so that in theory a single socket can handle lots of connections. However, it is not a required feature of fastcgi implementors, and in actuality I do not believe there is a single server which uses this. Most fastcgi responders don't use this feature either, because implementing this is very difficult. Most webservers will make only one request across a given fastcgi socket at a time, then make the next across that socket. So you often just have one fastcgi socket per process/thread.
Whether your fastcgi application uses processing or threading (and whether you implement it via a "master" process accepting connections and delegating or just lots of processes each doing their own thing) is up to you; and varies based on capabilities of your programming language and OS too. In most cases, whatever is the default the library uses should be fine, but be prepared to do some benchmarking and tuning of parameters.
As to shared state, I recommend you pretend that any traditional uses of in-process shared state do not exist: even if they may work now, you may have to split your dynamic workers across multiple machines later. For state like shopping carts, the DB may be the best option; session-login info can be kept in secure cookies; and for temporary state, something akin to memcached is pretty neat. The less you rely on features that share data (the "shared-nothing" approach), the bigger you can scale in the future.
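A small sketch of the shared-nothing idea, using the classic python-memcached client (pip install python-memcached); the keys, TTL and the fake expensive query are all made up:

    import json
    import time
    import memcache

    cache = memcache.Client(["127.0.0.1:11211"])

    def build_report(report_id: str) -> dict:
        # Stand-in for an expensive DB query or rendering step.
        time.sleep(2)
        return {"id": report_id, "built_at": time.time()}

    def get_report(report_id: str) -> dict:
        # Any worker on any machine gets the same answer; nothing lives in process globals,
        # so adding machines later does not change the code.
        cached = cache.get("report:" + report_id)
        if cached is not None:
            return json.loads(cached)
        report = build_report(report_id)
        cache.set("report:" + report_id, json.dumps(report), time=300)  # cache for 5 minutes
        return report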
Postscript: I have written and deployed a whole lot of dynamic applications across the whole scope of setups above: all of the webservers listed above, and everything in the range of PHP/Python/Ruby/Java. I have extensively tested (using both benchmarking and real-world observation) the methods, and the results are sometimes surprising: less is often more. Once you've moved away from hosting your code in the webserver process, you can often get away with a very small number of FastCGI/Mongrel/mod_wsgi/etc. workers. It depends on how much time your application spends in the DB, but it's very often the case that more processes than 2x the number of CPUs will not actually gain you anything.
How does my web server handle 100 such simultaneous requests? Does the web server generate one process/thread for each request? (If yes, a process or a thread?)
It varies. Apache has both threads and processes for handling requests. Apache starts several concurrent processes, each one of which can run any number of concurrent threads. You must configure Apache to control how this actually plays out for each request.
What does the interpreter of the backend language do? How will it handle the request and generate the proper HTML? Will the interpreter generate a process/thread for each request? (If yes, a process or a thread?)
This varies with your Apache configuration and your language. For Python one typical approach is to have daemon processes running in the background. Each Apache process owns a daemon process. This is done with the mod_wsgi module. It can be configured to work several different ways.
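For reference, the kind of code mod_wsgi hands requests to is just a WSGI callable like the following; each daemon process keeps this module loaded between requests:

    # Minimal WSGI application; mod_wsgi (or any WSGI server) calls `application`
    # once per request inside a long-lived worker process.
    def application(environ, start_response):
        body = b"Hello from a long-lived worker process\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]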
If the interpreter generates a process/thread for each request, what about these processes (threads)? Will they share some code space? Will they communicate with each other? How are global variables in the backend code handled? Or are they independent processes (threads)? How long does a process/thread live? Will they be destroyed when the request has been handled and the response returned?
Threads share the same code. By definition.
Processes will share the same code because that's the way Apache works.
They do not -- intentionally -- communicate with each other. Your code doesn't have a way to easily determine what else is going on. This is by design. You can't tell which process you're running in, and can't tell what other threads are running in this process space.
The processes are long-running. They are not (and should not be) created dynamically. You configure Apache to fork several concurrent copies of itself when it starts, to avoid the overhead of process creation.
Thread creation has much less overhead. How Apache handles threads internally doesn't much matter. You can, however, think of Apache as starting a thread per request.
Suppose the web server can only support 100 simultaneous requests, but now it gets 1000 simultaneous requests. How does it handle that situation? Will it put them in a queue and handle each request when the server is available? Or some other approach?
This is the "scalability" question. In short -- how will performance degrade as the load increases. The general answer is that the server gets slower. For some load level (let's say 100 concurrent requests) there are enough processes available that they all run respectably fast. At some load level (say 101 concurrent requests) it starts to get slower. At some other load level (who knows how many requests) it gets so slow you're unhappy with the speed.
There is an internal queue (as part of the way TCP/IP works, generally) but there's no governor that limits the workload to 100 concurrent requests. If you get more requests, more threads are created (not more processes) and things run more slowly.
To begin with, requiring detailed answers to all your points is a bit much, IMHO.
Anyway, a few short answers about your questions:
#1
It depends on the architecture of the server. Apache is a multi-process and, optionally, multi-threaded server. There is a master process which listens on the network port and manages a pool of worker processes (where, in the case of the "worker" mpm, each worker process has multiple threads). When a request comes in, it is forwarded to one of the idle workers. The master manages the size of the worker pool by launching and terminating workers depending on the load and the configuration settings.
Now, lighttpd and nginx are different; they are so-called event-based architectures, where multiple network connections are multiplexed onto one or more worker processes/threads by using OS support for event multiplexing such as the classic select()/poll() in POSIX, or more scalable but unfortunately OS-specific mechanisms such as epoll in Linux. The advantage of this is that each additional network connection needs only maybe a few hundred bytes of memory, allowing these servers to keep open tens of thousands of connections, which would generally be prohibitive for a request-per-process/thread architecture such as apache. However, these event-based servers can still use multiple processes or threads in order to utilize multiple CPU cores, and also in order to execute blocking system calls in parallel, such as normal POSIX file I/O.
For more info, see the somewhat dated C10k page by Dan Kegel.
#2
Again, it depends. For classic CGI, a new process is launched for every request. For mod_php or mod_python with apache, the interpreter is embedded into the apache processes, and hence there is no need to launch a new process or thread. However, this also means that each apache process requires quite a lot of memory, and in combination with the issues I explained above for #1, limits scalability.
In order to avoid this, it's possible to have a separate pool of heavyweight processes running the interpreters, with the frontend web servers proxying to the backends when dynamic content needs to be generated. This is essentially the approach taken by FastCGI and mod_wsgi (although they use custom protocols and not HTTP, so perhaps technically it's not proxying). This is also typically the approach chosen when using the event-based servers, as the code for generating the dynamic content is seldom re-entrant, which it would need to be in order to work properly in an event-based environment. The same goes for multi-threaded approaches if the dynamic content code is not thread-safe; one can have, say, a frontend apache server with the threaded worker mpm proxying to backend apache servers running PHP code with the single-threaded prefork mpm.
#3
Depending on at which level you're asking, they will share some memory via the OS caching mechanism, yes. But generally, from a programmer perspective, they are independent. Note that this independence is not per se a bad thing, as it enables straightforward horizontal scaling to multiple machines. But alas, some amount of communication is often necessary. One simple approach is to communicate via the database, assuming that one is needed for other reasons, as it usually is. Another approach is to use some dedicated distributed memory caching system such as memcached.
#4
Depends. They might be queued, or the server might reply with some suitable error code, such as HTTP 503, or the server might just refuse the connection in the first place. Typically, all of the above can occur depending on how loaded the server is.
#5
The viability of this approach depends on the server architecture (see my answer to #1). For an event-based server, keeping connections open is no big issue, but for apache it certainly is due to the large amount of memory required for every connection. And yes, this certainly requires a long-running interpreter process, but as described above, except for classic CGI this is pretty much granted.
Web servers are a multi-threaded environment; apart from application-scoped variables, a user request doesn't interact with other threads.
So:
Yes, a new thread will be created for every user
Yes, HTML will be processed for every request
You'll need to use application scoped variables
If you get more requests than you can deal with, they will be put in a queue. If they get served before the configured timeout period, the user will get a response; otherwise, a "server busy"-style error.
Comet isn't specific to any server/language. You can achieve the same result by querying your server every n seconds, without dealing with other nasty threading issues.
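A trivial illustration of that polling alternative (the URL, query parameter and payload are hypothetical):

    # Ask the server for updates every n seconds instead of holding a connection open.
    import json
    import time
    import urllib.request

    POLL_INTERVAL = 5  # seconds

    while True:
        with urllib.request.urlopen("https://example.com/updates?since=latest") as resp:
            updates = json.loads(resp.read())
        for update in updates:
            print("got update:", update)
        time.sleep(POLL_INTERVAL)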