golang grpc socket tuning

I have a Go client application talking to a server via gRPC. I noticed that while the application is running, the number of sockets accumulated on the client keeps climbing until it reaches around 9000, at which point I pause the client. However, even after there is no more traffic between the client and the server, the socket count stays at that level, even after 8 hours.
Is there any way to tune gRPC's socket usage, such as closing sockets after a timeout? Is using streaming another way to limit the number of sockets being opened?
Thanks for any help.

I'd start by making sure that your client application cleans up unused connections (grpc.ClientConn) by calling the Close() method on them.
Also, since I don't know exactly what your application does, I'll go ahead and suggest reusing connections for multiple RPCs (you're probably already doing this).
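For example, a minimal sketch of both points (the target address is a placeholder; grpc.WithInsecure is just to keep the example short):

    package main

    import (
        "log"

        "google.golang.org/grpc"
    )

    func main() {
        // Dial once and share this connection across all RPCs and stubs.
        conn, err := grpc.Dial("server.example.com:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        // Close the connection when it is no longer needed; otherwise its
        // underlying socket(s) stay open.
        defer conn.Close()

        // ... create client stubs from conn and issue RPCs here ...
    }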
And to answer your question about setting a timeout or deadline on connections:
1. You shouldn't have to do this. Feel free to open up an issue on https://github.com/grpc/grpc-go about whatever gRPC shortcoming is forcing you to take this route.
2. But if you must know, you can use a custom dialer (https://github.com/grpc/grpc-go/blob/13975c070286c7371aa3a8b3c230e90d7bf029fc/clientconn.go#L333) and set a deadline on the net.Conn that you return from it.
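For illustration, a sketch against the WithDialer option from that vintage of grpc-go (the address and the 10-minute deadline are arbitrary examples, not recommendations):

    package main

    import (
        "log"
        "net"
        "time"

        "google.golang.org/grpc"
    )

    func main() {
        conn, err := grpc.Dial("server.example.com:50051",
            grpc.WithInsecure(),
            grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
                c, err := net.DialTimeout("tcp", addr, timeout)
                if err != nil {
                    return nil, err
                }
                // Reads/writes after this deadline fail, which tears the
                // connection down. Arbitrary value, for illustration only.
                c.SetDeadline(time.Now().Add(10 * time.Minute))
                return c, nil
            }))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
    }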
Best,
Mak

Related

multiple clients with server handling

I am just starting to learn sockets and client/servers. I am not clear on the following concept. Assume non-blocking sockets.
Assume I have a server application and 1000 clients trying to talk to it; I think that is very realistic. Assume the clients and server talk via sockets.
1- Does this mean that with every client there is a separate socket connection? (Do we have 1000 sockets, or one socket with 1000 connections?)
2- Does every socket connection belong to a separate thread? If yes, how can we limit the number of threads, as it can get out of control?
Assuming you're using TCP, then every connection is over a separate socket. The operating system allocates them using file descriptors.
When using a protocol like UDP, this need not be the case, and won't be unless you write the code to make it happen.
Threading? It depends on how you build the server. You don't need threads to be a part of a server at all and you can (obviously) have multiple threads with just a single connection. One common way of doing things, however, is to hand the socket returned by accept() to a new thread, yes.
If you don't have an interest in threads--for example, if the server only performs very quick tasks and creating a thread is just wasting time--you can use select() to poll the sockets and determine which ones need attention. Some servers use a combination of threading and polling to try to maximize throughput.
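To make the hand-off pattern concrete, here is a sketch in Go, where a cheap goroutine plays the role of the thread (the port and the echo behavior are arbitrary):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // One listening socket; each accepted connection is its own socket.
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go handle(conn) // hand the accepted socket to a new goroutine
        }
    }

    func handle(conn net.Conn) {
        defer conn.Close()
        buf := make([]byte, 4096)
        for {
            n, err := conn.Read(buf)
            if err != nil {
                return // client closed the connection, or a read error
            }
            conn.Write(buf[:n]) // echo the data back
        }
    }

To cap concurrency (question 2), the usual trick is a buffered channel used as a semaphore around the handler.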

how do I write my own production web server?

I am making a Unix SSL server/client. So far I have implemented FD_SET with select() to handle all connections concurrently in one master server process. However, because of __FD_SETSIZE the number of clients can only be 1024. I need to increase the number of clients and the efficiency of the server. Changing __FD_SETSIZE has potential problems (apparently?), so I am stuck.
So far the network includes: errno.h detection, signal detection -> atomic handling, fd_set -> select(), successful stream socket based communication.
I would really appreciate it if someone could tell me what I should do. Do I fork() after 1024 clients (which presents its own problems, if it's even doable)? Do I implement threads to handle each client request, or just client data, or both?
What is the best network architecture in your opinion? Keep in mind it's a stream-socket-based connection that is meant to handle as much punishment as possible and allow as many clients as possible to connect to the server.
Don't write your own production web server.
There are too many open source servers out there all written by people who know more about high connectivity and SSL than you do. They also have the advantage of being tested to a degree that you'd never be able to accomplish with your homebrew server.

Send data to socket on iPhone

I don't know very much about sockets at all. What I am wondering is how the sockets work.
1) Firewalls often block ports. Is it plausible that if I request a socket and it is behind a blocked port, anything that is sent or received will get intercepted and destroyed? Is this an issue?
2) On the server side, how do you keep the connection alive to send data back through? For example, if I were building a chat app, could I open a socket connection, keep it alive, and have the server basically push new data through the stream instead of the apps having to poll every so often?
3) If the app goes into the background, Apple says they will keep the socket alive as long as it's needed (if it is VoIP). Does that mean I could still send data through to the app and have it handled? If my app is registered for location updates and is already executing in the background, will the socket stay open?
Are there any server languages that make this simple? I am currently using Python and Django for a simple HTTP server. Are there any tutorials on setting up a server that can keep connections alive? I really don't know much about this, so what I'm asking may not make any sense, but some direction would be greatly appreciated.
1) I haven't found an issue with firewalls at all.
2) I used a Twisted server alongside my web server to implement an event-driven socket server, and it works great.
3) The sockets will stay open as long as you are executing in the background and the delegate methods are called to handle stream events.
There is an excellent tutorial here:
http://www.raywenderlich.com/3932/how-to-create-a-socket-based-iphone-app-and-server
that goes over the twisted framework and how to create a chat app with sockets. I found that immensely helpful.
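For completeness, the push model in point 2 doesn't require Twisted; here is a rough Go sketch of a server that keeps client connections open and broadcasts to all of them (the hub type, port, and framing are all made up for illustration):

    package main

    import (
        "log"
        "net"
        "sync"
    )

    // hub tracks open client connections so the server can push to them.
    type hub struct {
        mu    sync.Mutex
        conns map[net.Conn]bool
    }

    func (h *hub) add(c net.Conn)    { h.mu.Lock(); h.conns[c] = true; h.mu.Unlock() }
    func (h *hub) remove(c net.Conn) { h.mu.Lock(); delete(h.conns, c); h.mu.Unlock() }

    // push writes a message to every connected client; connections stay
    // open, so clients never have to poll.
    func (h *hub) push(msg []byte) {
        h.mu.Lock()
        defer h.mu.Unlock()
        for c := range h.conns {
            c.Write(msg)
        }
    }

    func main() {
        h := &hub{conns: make(map[net.Conn]bool)}
        ln, err := net.Listen("tcp", ":5555")
        if err != nil {
            log.Fatal(err)
        }
        for {
            c, err := ln.Accept()
            if err != nil {
                continue
            }
            h.add(c)
            go func(c net.Conn) {
                defer func() { h.remove(c); c.Close() }()
                buf := make([]byte, 1024)
                for {
                    n, err := c.Read(buf)
                    if err != nil {
                        return
                    }
                    h.push(buf[:n]) // chat-style: broadcast what one client sends
                }
            }(c)
        }
    }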

TCP connection management

I asked this question on the Go mailing list, but I think it is general enough to get a better response from SO.
When working with the Java/.NET platforms, I never had to manage database connections manually, as the drivers handled it. Now, trying to connect to a NoSQL db with only very basic driver support, it is my responsibility to manage the connection. The driver lets me connect, close, and reconnect to a TCP port, but I'm not sure how I should manage it (see the link). Do I have to create a new connection for each db request? Can I use third-party connection pooling libraries?
Thanks.
I don't know enough about MongoDB to answer this question directly, but do you know how MongoDB handles requests over TCP? For example, one problem with a single TCP connection is that the db will handle each request serially, potentially causing high latency even though the server itself isn't saturated and could handle a higher capacity.
Are the machines all running on a local network? If so, the cost of opening a new connection won't be too high, and might even be insignificant from a performance perspective regardless.
My two cents: Do one TCP connection per request and just profile it and see what happens. It is very easy to add pooling later if you're DoSing yourself, but it may never be a problem. That'll work right now, and you won't have to mess around with a third party library that may cause more problems than it solves.
Also, TCP programming is really easy. Don't be intimidated by it; detecting a closed socket and reconnecting, synchronously or asynchronously, is simple.
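For instance, a deliberately naive Go sketch of detect-and-reconnect (a single retry; real code would want backoff):

    package main

    import "net"

    // send writes payload on conn, dialing a fresh connection if conn is
    // nil or the old socket has died since its last use.
    func send(conn net.Conn, addr string, payload []byte) (net.Conn, error) {
        if conn != nil {
            if _, err := conn.Write(payload); err == nil {
                return conn, nil
            }
            conn.Close() // the server went away; fall through and redial
        }
        c, err := net.Dial("tcp", addr)
        if err != nil {
            return nil, err
        }
        if _, err := c.Write(payload); err != nil {
            c.Close()
            return nil, err
        }
        return c, nil
    }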
Most MongoDB drivers (clients) will create and use a connection pool when connecting to the server. Each socket (connection) can do one operation at a time at the server; because of how data is read off the socket, you can issue many requests and the server will just get them one after another and return data as each one completes.
There is a Go MongoDB driver, but it doesn't seem to do connection pooling: http://github.com/mikejs/gomongo
In addition to the answers here: if you find you do need to do some kind of connection pooling redis.go is a decent example of a database driver that pools connections. Specifically, look at the Client.popCon and Client.pushCon methods in the source.
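If you end up rolling your own, the pop/push shape is easy to sketch in Go with a buffered channel; this is an illustration of the pattern, not the redis.go code itself (names and the pool size are arbitrary):

    package main

    import "net"

    // pool hands out pre-dialed connections and takes them back.
    type pool struct {
        addr  string
        conns chan net.Conn
    }

    func newPool(addr string, size int) *pool {
        return &pool{addr: addr, conns: make(chan net.Conn, size)}
    }

    // pop returns a pooled connection, dialing a fresh one if the pool
    // is empty.
    func (p *pool) pop() (net.Conn, error) {
        select {
        case c := <-p.conns:
            return c, nil
        default:
            return net.Dial("tcp", p.addr)
        }
    }

    // push returns a connection to the pool, closing it if the pool is
    // already full.
    func (p *pool) push(c net.Conn) {
        select {
        case p.conns <- c:
        default:
            c.Close()
        }
    }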

How to hand-over a TCP listening socket with minimal downtime?

While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too.
Some background:
I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script.
My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally.
Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production.
The question:
I'm looking for a way to do a neat hand-over of the TCP listening socket, from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections.
Is there some proven method of doing this using BSD-sockets? (Bonus points for an EventMachine solution.)
Are there perhaps open-source libraries out there implementing this, that I can use as is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)
There are a couple of ways to do this with no downtime, with appropriate modifications to the server program.
One is to implement a restart capability in the server itself, for example upon receipt of a certain signal or other message. The program would then exec its new version, passing it the file descriptor number of the listening socket e.g. as an argument. This socket would have the FD_CLOEXEC flag clear (the default) so that it would be inherited. Since the other sockets will continue to be serviced by the original process and should not be passed on to the new process, the flag should be set on those e.g. using fcntl(). After forking and execing the new process, the original process can go ahead and close the listening socket without any interruption to the service, since the new process is now listening on that socket.
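In Go, for example, that exec-and-inherit hand-over can be sketched like this (the flag name is made up; error handling is abbreviated):

    package main

    import (
        "net"
        "os"
        "os/exec"
    )

    // respawn re-executes the current binary, handing it the listening
    // socket. Entries in ExtraFiles are inherited by the child (their
    // close-on-exec flag is not set) and start at fd 3.
    func respawn(ln *net.TCPListener) error {
        f, err := ln.File() // a dup of the listener's descriptor
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command(os.Args[0], "--inherit-listener")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        cmd.ExtraFiles = []*os.File{f} // the child sees this as fd 3
        return cmd.Start()
    }

    // In the new process, rebuild the listener from fd 3 instead of
    // calling net.Listen again.
    func inheritedListener() (net.Listener, error) {
        return net.FileListener(os.NewFile(3, "inherited-listener"))
    }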
An alternative method, if you do not want the old server to have to fork and exec the new server itself, would be to use a Unix-domain socket to communicate between the old and new server process. A new server process could check for such a socket in a well-known location in the file system when it is starting. If present, the new server would connect to this socket and request that the old server transfer its listening socket as ancillary data using SCM_RIGHTS. An example of this is given at the end of cmsg(3).
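The same SCM_RIGHTS transfer looks roughly like this in Go (the one-byte payload and the names are placeholders):

    package main

    import (
        "net"
        "os"
        "syscall"
    )

    // sendListener passes the listening socket's descriptor to another
    // process over a Unix-domain socket as SCM_RIGHTS ancillary data.
    func sendListener(c *net.UnixConn, f *os.File) error {
        rights := syscall.UnixRights(int(f.Fd()))
        _, _, err := c.WriteMsgUnix([]byte{'L'}, rights, nil)
        return err
    }

    // recvListener receives the descriptor in the new server process and
    // rebuilds a net.Listener from it.
    func recvListener(c *net.UnixConn) (net.Listener, error) {
        buf := make([]byte, 1)
        oob := make([]byte, syscall.CmsgSpace(4)) // room for one 32-bit fd
        _, oobn, _, _, err := c.ReadMsgUnix(buf, oob)
        if err != nil {
            return nil, err
        }
        msgs, err := syscall.ParseSocketControlMessage(oob[:oobn])
        if err != nil {
            return nil, err
        }
        fds, err := syscall.ParseUnixRights(&msgs[0])
        if err != nil {
            return nil, err
        }
        return net.FileListener(os.NewFile(uintptr(fds[0]), "listener"))
    }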
Jean-Paul Calderone wrote a detailed presentation in 2004 on a holistic solution to your problem using Twisted, including socket migration and other issues.