It seems that both service and server refer to some web-based application. But is there any precise definition of the two terms?
A server offers one or more services. Server is also a more technical term, whereas service is more a term of the problem domain.
You also need to distinguish between:
Server as hardware (see post from Dan D)
Server as software (e.g. Apache HTTP Server)
You can find more elaborate definitions on Wikipedia:
Service
Server
This is regardless of client-server or P2P models.
A server provides services to one or more clients, and a server (hardware) is a computer. A server (hardware) can be anything from a home computer to a big server rack with a lot of processor power.
From the computer's point of view, a server (software) is just a set of services which is available to clients on the network.
Some well-known services are web server, mail server, and FTP server. Notice they are called xxx-server because such programs consist of a client part and a server part. The suffix is mainly to distinguish whether we are talking about the client or the server.
So at what moment do we call something a server? We do it when a computer shares some service/content on the network which is accessible by clients; in other words, when we run a server as defined above for software.
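To make that concrete, here is a tiny sketch in Go (purely illustrative; the port and the greeting are arbitrary choices of mine). The program becomes a server the moment it starts listening on the network and answering whichever clients connect:

    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    // A minimal "server": it listens on a TCP port and offers one tiny
    // service (a greeting) to any client that connects.
    func main() {
        ln, err := net.Listen("tcp", ":4000") // port 4000 is an arbitrary example
        if err != nil {
            panic(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                name, _ := bufio.NewReader(c).ReadString('\n')
                fmt.Fprintf(c, "hello, %s", name) // the "service" being offered
            }(conn)
        }
    }

A client is then simply any program that connects to that port and sends a request; in the P2P case described next, the same program plays both roles.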
Regarding the P2P model: everyone is both a client and a server, hence the term servent. The above applies to the server part of a P2P network; just remember that it can also be a client.
Further reading:
Client-Server model
P2P
A server is a piece of hardware, or a virtual machine.
A service is a process that provides functionality, normally over the network, and runs on a server.
However, "server" can also refer to a web server, which is actually a service itself, though it behaves like a server in that it hosts services.
I think those are reasonable working definitions.
A simpler way to define both, besides the definition of the server as a piece of hardware: a server, in the software sense, is a service that serves data. In other words, you interact with a server by sending a request, and you should get a response back. It "serves" data.
A service does not need interaction; it is pretty much just a process that keeps running, doing the same thing. A server is a service, because it is basically a process that keeps waiting for a request to come in so that it can return a response.
"A service is a component that performs operations in the background without a user interface."
~ Android Developers
Services don't just run on servers
Shell services
Services can run from the shell. Unix refers to them as daemons (pronounced "demons"), and Windows refers to them as services.
Client-side services
Services can run client-side. Mozilla (and other browsers) support Web Workers, which run in a background thread. Client-side frameworks, like Angular, support services as well.
I would like to create a web service with GoLang that runs either on IIS (7, 8 or 10) or under Tomcat 7.0. We have multiple environments, each with multiple servers, all being Windows 2008 R2, 2012 or 2016. All servers are private (10.x). My goal is to add some REST services to a COTS product that uses both IIS and Tomcat. I'd prefer to avoid gluing nginx or something else onto either server at the risk of impairing the COTS product or having their tech support people not answer the phone. Again .. the servers are only accessible via corporate VPN and are not public internet-facing.
Which server would offer the easiest path to get something working -- Tomcat or IIS?
That's not really about Go, but there are at least two solutions I can think of:
Reverse proxying of HTTP requests.
Write a plain Go server serving requests via HTTP.
Maybe turn it into a proper Windows service using golang.org/x/sys/windows/svc.
Deploy it.
If it's to be hosted on the same machine which runs IIS, then it's fine to make it listen on localhost only. Note that it will need a dedicated TCP port to listen on, and you'll need to make your server configurable in this regard.
Set up reverse proxying in your IIS so that it forwards requests coming to whatever (part of an) URL you want to the Go server.
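For illustration, a minimal sketch of such a server (the /api/ping route and the default port are placeholders I made up). It binds to localhost only and takes its port from a flag, so the deployment stays configurable as described above:

    package main

    import (
        "flag"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The port must be configurable, since IIS has to be told where to forward requests.
        port := flag.Int("port", 8080, "TCP port to listen on (localhost only)")
        flag.Parse()

        mux := http.NewServeMux()
        mux.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "pong")
        })

        // Bind to localhost so the server is only reachable through the IIS reverse proxy.
        addr := fmt.Sprintf("127.0.0.1:%d", *port)
        log.Printf("listening on %s", addr)
        log.Fatal(http.ListenAndServe(addr, mux))
    }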
Use FastCGI.
Go supports serving requests over the FastCGI protocol by means of its standard library, and IIS supports FastCGI workers. So it's possible to (re-)write your Go server to use FastCGI instead of HTTP and then hook it into IIS via this protocol.
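A rough sketch of that variant, using the standard library's net/http/fcgi package (the listen address and route are again placeholders of my choosing; how you point IIS's FastCGI module at it depends on your setup):

    package main

    import (
        "net"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        // Listen on a local TCP port; IIS's FastCGI module would be configured to talk to it.
        ln, err := net.Listen("tcp", "127.0.0.1:9000")
        if err != nil {
            panic(err)
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello from Go over FastCGI"))
        })

        // fcgi.Serve speaks the FastCGI protocol on the listener instead of plain HTTP.
        if err := fcgi.Serve(ln, mux); err != nil {
            panic(err)
        }
    }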
The pros and cons of these solutions—as I view them—are:
Serving over plain HTTP is more familiar to developers and makes the server more "portable", in the sense that it will be easier to change its deployment scheme if/when you need to, right up to making it available to the Internet directly.
Conversely, with FastCGI, you'll always need a FastCGI host as a "middleware".
There have been rumors that Go's HTTP code is more fine-tuned in terms of performance than its FastCGI code. Still, this will only be a concern under reasonably hard-core loads.
One possible upside of FastCGI over HTTP is that it can be served over pipes rather than TCP. For instance, you might get it served over named pipes, since that's supported by IIS's FastCGI module and there exist third-party packages for Go implementing support for them (even including one from Microsoft®).
The upside of this is that pipes are believed to incur less overhead for data transfer (basically it's just shoveling bytes between in-kernel buffers belonging to two processes instead of pushing them through the whole TCP/IP stack), and using pipes frees you from the need to dedicate a TCP port to the Go server.
Still, I have no personal experience with this kind of setup and its performance trade-offs.
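That said, if you did want to try the named-pipe route, the shape of the code would be roughly as below. I'm assuming the third-party github.com/Microsoft/go-winio package here, and the pipe name is made up, so treat it as an untested, Windows-only sketch rather than a recipe:

    package main

    import (
        "net/http"
        "net/http/fcgi"

        winio "github.com/Microsoft/go-winio" // assumed third-party named-pipe package
    )

    func main() {
        // The pipe name is an arbitrary example; it must match what the IIS FastCGI
        // module is configured to connect to.
        ln, err := winio.ListenPipe(`\\.\pipe\my-go-fcgi`, nil)
        if err != nil {
            panic(err)
        }

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("served over a named pipe"))
        })

        // Same FastCGI serving loop as with TCP, just with a pipe listener instead.
        if err := fcgi.Serve(ln, handler); err != nil {
            panic(err)
        }
    }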
I'm building a software agent that runs on a server; this agent acts as a server manager, i.e. starting/stopping Docker containers, monitoring, etc.
This server will host/serve many services; these services are programs running in Docker containers, one program/service per container.
There may be many servers, and they aren't necessarily high-performance machines; they range from small VMs to high-performance computers. Right now, I assume that every service uses HTTP to serve requests.
The feature I want to implement in this software agent is tracking the number of clients that are currently connecting (requesting) to the server, either for the whole server (e.g. server A is processing 500 requests) or per specific program (e.g. program A is processing 100 requests, program B is processing 200 requests).
I want to know this number because I want to do workload balancing across servers that host the same service.
The following are the ideas I have:
Implementing a load balancer/reverse proxy inside this agent (I would use this load balancer: https://github.com/nwoodthorpe/Load-Balancer-Golang). This may be my last choice because I think it will use quite a lot of resources just for load balancing.
Letting the service programs running on the server tell the agent whenever they start and finish processing a request. I would simply implement a UDP socket server in the agent that listens for datagrams carrying a unique request ID (actually, anything that helps me distinguish the specific request being processed) and a status saying whether it is being processed or has finished processing.
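To make the second idea concrete, this is roughly the agent side I have in mind (the datagram format and port are nothing final): it listens for "start"/"done" datagrams and keeps a per-service count of requests currently in flight, which the balancing logic could then read:

    package main

    import (
        "log"
        "net"
        "strings"
        "sync"
    )

    func main() {
        addr, err := net.ResolveUDPAddr("udp", ":9999") // arbitrary port for status datagrams
        if err != nil {
            log.Fatal(err)
        }
        conn, err := net.ListenUDP("udp", addr)
        if err != nil {
            log.Fatal(err)
        }

        var mu sync.Mutex
        inFlight := map[string]int{} // service name -> requests currently being processed

        buf := make([]byte, 1024)
        for {
            n, _, err := conn.ReadFromUDP(buf)
            if err != nil {
                continue
            }
            // Assumed datagram format: "<service> <start|done> <request-id>"
            parts := strings.Fields(string(buf[:n]))
            if len(parts) < 2 {
                continue
            }
            mu.Lock()
            switch parts[1] {
            case "start":
                inFlight[parts[0]]++
            case "done":
                inFlight[parts[0]]--
            }
            mu.Unlock()
        }
    }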
So, I would like to ask for suggestions on the above approaches: which one is better, and how should I implement it? Is there any better approach to do this?
I am going to write a service to manipulate a database, so that all Insert/Update/Delete/Select operations will be executed via this service.
However, I only know socket-based services (a web service is a kind of socket service, because it uses the network layer).
What concerns me is the performance of socket services, because they need to go through the network layer: the OS has to run packets through the network stack and then pass them to my program, which may add overhead.
So my question is: is there any non-socket service that works on both Windows and Linux?
Update, 19 January 2012:
I found the solution here: http://en.wikipedia.org/wiki/Inter-process_communication
Is this over the network, or on same box?
If over the network, sockets are fine; WCF and web services are all fine (this is how SQL Server, Oracle and everything else work...).
If local, same box, you can use a shared memory approach, and avoid the network completely.
FWIW, shared memory totally works on Windows. See the CreateFileMapping function in the Win32 SDK. In .NET, you can use .NET Remoting with shared memory as the transport. There are many ways to do this on Windows.
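Just to illustrate the shared-memory idea, here is a rough, Linux-only sketch in Go, mapping a file under /dev/shm with MAP_SHARED so two processes can see the same bytes (the file name is arbitrary; on Windows the equivalent goes through the file-mapping APIs):

    //go:build linux

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // Open (or create) the file that both processes will map.
        f, err := os.OpenFile("/dev/shm/demo-ipc", os.O_RDWR|os.O_CREATE, 0o600)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        const size = 4096
        if err := f.Truncate(size); err != nil {
            panic(err)
        }

        // MAP_SHARED makes writes visible to every process that maps the same file.
        data, err := syscall.Mmap(int(f.Fd()), 0, size,
            syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
        if err != nil {
            panic(err)
        }
        defer syscall.Munmap(data)

        msg := "hello via shared memory"
        copy(data, msg)
        fmt.Println(string(data[:len(msg)]))
    }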
I need to develop a server side application that opens sockets and manages communication with multiple clients. Previous answers have told me this is possible using a single script file, which loops forever.
Is this possible using only a PHP/Perl/Python hosting service? Or would I need a VPS or shell access?
Any help is appreciated since I've never worked with sockets before. Thanks for your time.
Cheap Perl/PHP hosting services don't want you running your own long-running processes.
This means you will need a VPS (which obviously includes a shell account, since you can do anything you want on your private server). A few VPS providers might block the outgoing IRC port, but I think that is rare.
Linode and Slicehost/Rackspace are just two examples of very well-run VPS providers, and I guarantee you can run your own socket application on them.
It would make your host very unhappy since their CPU time is valuable! If you use shared hosting, your host might just kick you out for such a solution! (Read your contract for the fine details.)
I think it could be possible, but it depends on the setup of your host, plus the permissions your host grants you. And most will be unhappy about anything that runs forever. (They prefer to see just short, simple applications.)
Usually the service's firewall will block any unexpected ports, or if they are not doing it now, they will start doing it after they figure out what you are doing and decide they don't like it.
I would say no, because it involves too many security problems.
Does anyone have experience deploying GWT apps to EC2?
If I were to install Tomcat or Apache on an EC2 instance, could I have users connect directly to a URL pointing there?
Would that be cost effective, or would java hosting services be best?
Is there any downside to hosting the edge HTTP server on a regular hosting service and having that direct requests to EC2? Is performance ever an issue here?
Other answers are correct, but I just wanted to share the fact that we are developing a product that is 100% EC2/S3 based and also has a pure GWT front end.
We use maven2 for builds and the excellent gwt-maven plugin. This makes it easy to produce a WAR package of our web application as output. We use Jetty but Tomcat would work just as well.
We have Pound (an HTTP accelerator/load balancer) running on the VM listening for HTTP & HTTPS, which then forwards requests to lighttpd (static) or Jetty (app). This also simplifies SSL certificates because Pound handles SSL. I've found Java servers have always been a pain to configure with SSL certs.
Yes, you can host pretty much whatever you want, as you effectively have a dedicated Linux machine at your command.
As I last recall, the basic rate for an EC2 instance on their "low-end box" worked out to around $75/month, so you can use that as a benchmark against other vendors. That also assumes that the machine is up 24x7 (since you pay for it by the hour).
The major downside of an EC2 instance is simply that it can "go away" at any time, and when it does, any data written to your instance will "go away" as well.
That means you need to set it up so that you can readily restart the server, but you also need to offload any data that you generate and wish to keep (either to one of Amazon's other services, like S3, or to some other external service). That will incur some extra costs depending on volume.
Finally, you will also be billed for any traffic to the service.
The thing to compare it against is another "Virtual Server" from some other vendor. There are a lot of interesting things that can be done with EC2, but it may well be easier to go with a dedicated virtual hosting service if you're just using a single machine.
Others have given good answers. I would have to add that you need to spend programmer time getting to know EC2's quirks and addressing them (e.g. with EBS). It's not completely trivial, and though it is useful knowledge to have and may be worth it for that reason alone, if you want to get up and running quickly with just a few servers, you should probably look at other hosted options.
On the other hand, if you plan to scale up massively enough (eventually hosting many servers on EC2) then I would highly recommend it. You have to architect a few things, but you need to do that anyways. The flexibility of on-demand computing, and the generally low price, makes this a killer platform once you reach a certain scale of operation.
You definitely can host an HTTP server in EC2, but you need to take the following into consideration:
As mentioned before, the cost can be much higher than alternative hosting solutions.
Your instance (the machine you've started in EC2) can go off unexpectedly. There is no guarantee from Amazon of 24x7 availability. This means that the data you've stored in local storage will be lost, and when you start a new instance, it will get a new IP.
To successfully host a server in EC2, you therefore need to employ some other services from Amazon. You need an Elastic IP, so that you can circumvent the new-IP-address problem. You can also use Elastic Block Store (EBS). This is a service that will allow you to mount a disk in your machine that will not go away when your instance is lost.