I am trying to write server-side code for sending push notifications for my applications. As per Apple's recommendation, I plan to retain the connection and send push notifications as required.
Apple also allows opening and retaining multiple parallel connections for sending push notifications.
"You may establish multiple, parallel connections to the same gateway or to multiple gateway instances."
For this purpose I would like to maintain a connection pool.
My question is: what is the limit on the size of such a pool, i.e. how many persistent connections to APNS can I maintain?
Thanks in advance for any help.
Don't know if you're going to get a precise answer to this one. As large and dynamic a system as APNS is, it behooves Apple to be ambiguous about such a number; it gives them liberty to change it at will. I found a similar vagueness here.
From this discussion, it appears a rule of thumb is 15 connections max.
One suggestion is to have an open-ended pool where new connections can be created until they start being refused. Just an idea.
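If you go that route, a minimal Go sketch of such an open-ended pool might look like the following. Everything here is an assumption for illustration: the legacy binary gateway address, the empty tls.Config (your APNS client certificate would go there), and the policy of simply growing until a dial is refused.

package main

import (
	"crypto/tls"
	"log"
)

// gatewayAddr is the legacy binary APNS gateway; adjust for your environment.
const gatewayAddr = "gateway.push.apple.com:2195"

type apnsPool struct {
	conns []*tls.Conn
}

// grow attempts to add one more connection to the pool. When the gateway
// starts refusing connections, it logs the ceiling reached and reports
// failure instead of erroring hard.
func (p *apnsPool) grow(cfg *tls.Config) bool {
	c, err := tls.Dial("tcp", gatewayAddr, cfg)
	if err != nil {
		log.Printf("dial refused after %d connections: %v", len(p.conns), err)
		return false
	}
	p.conns = append(p.conns, c)
	return true
}

func main() {
	pool := &apnsPool{}
	cfg := &tls.Config{} // assumption: your APNS client certificate goes here
	for pool.grow(cfg) {
		// keep growing until the gateway refuses a new connection
	}
	log.Printf("pool settled at %d connections", len(pool.conns))
}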
I agree with @paislee; I don't think you'll get a precise number. I'm opening over 20 distinct connections simultaneously and they are OK.
To help with your testing, use TcpView, which lets you see every open connection.
Regards
I was thinking about the concept of a software port, which helps distinguish which internet packets belong to which application, and I was wondering what similar construct an app itself uses to distinguish between two different connections it might be attempting. For example, if an app X with asynchronous execution initiates a connection on line 10 of its source code and then starts another one on line 11 while the first connection is still ongoing, what does it use to keep track of these connections?
I believe that, depending on the connection type, there must be an identifier for it.
Take socket connections, for example: assuming a client is using socket.io to initiate multiple connections with other processes, every connection has a unique id.
const socket = io('https://server.example'); // hypothetical endpoint
socket.on('connect', () => {
  console.log(socket.id); // e.g. 'G5p5...' (unique per connection)
});
This might be a little abstract, but the same thing happens even at a lower level, as the sketch below shows.
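At the OS level, the construct the app holds is the socket handle (a file descriptor on Unix), and on the wire each connection is identified by its (source IP, source port, destination IP, destination port) tuple. A quick Go sketch to illustrate; example.com is just a placeholder host:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Two connections to the same server: the app holds two distinct
	// socket handles (file descriptors), and the OS tells them apart
	// by the local port in each connection's 4-tuple.
	c1, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer c1.Close()

	c2, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer c2.Close()

	fmt.Println(c1.LocalAddr()) // e.g. 192.0.2.10:53211
	fmt.Println(c2.LocalAddr()) // e.g. 192.0.2.10:53212
}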
I have a golang client application talking to a server via gRPC. I noticed that while the application is running, the number of sockets accumulated on the client keeps climbing to around 9000, at which point I pause the client. However, even after there is no more traffic between the client and the server, the number of sockets stays at that level, even after 8 hours.
Is there any way to tune gRPC's socket usage, such as closing sockets after a timeout? Is using streaming another way to limit the number of sockets being opened?
Thanks for any help.
I'd start by making sure that your client application cleans up unused connections (grpc.ClientConn) by calling the Close() method on them.
Also, since I don't know exactly what your application does, I'll suggest reusing connections for multiple RPCs (you're probably already doing this).
And to answer your question about setting a timeout deadline on connections:
1. You shouldn't have to do this. Feel free to open up an issue on https://github.com/grpc/grpc-go about whatever gRPC shortcoming is forcing you to take this route.
2. But if you must know, you can use a custom dialer (https://github.com/grpc/grpc-go/blob/13975c070286c7371aa3a8b3c230e90d7bf029fc/clientconn.go#L333) and set a deadline on the net.Conn that you return from it, as in the sketch below.
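A minimal sketch of that dialer approach, using the current grpc-go API (grpc.WithContextDialer); the address, the one-hour deadline, and the insecure credentials are all placeholder assumptions:

package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// "localhost:50051" is a placeholder address.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			c, err := (&net.Dialer{}).DialContext(ctx, "tcp", addr)
			if err != nil {
				return nil, err
			}
			// Assumption: you really do want the socket to die after a
			// fixed period. Any I/O after the deadline fails, which
			// tears the connection down.
			_ = c.SetDeadline(time.Now().Add(time.Hour))
			return c, nil
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	// Reuse this one ClientConn for all RPCs, and Close it when the
	// application is done so the underlying sockets are released.
	defer conn.Close()
}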
Best,
Mak
I'm a newbie with RabbitMQ (and programming), so sorry in advance if this is obvious. I am creating a pool to share between threads that are working on a queue, but I'm not sure if the pool should hold connections or channels.
I know I need channels to do the actual work, but is there a performance benefit to having one channel per connection (in terms of more throughput from the queue)? Or am I better off using a single connection per application and pooling many channels?
note: because I'm pooling the resources the initial cost is not a factor, as I know connections are more expensive than channels. I'm more interested in throughput.
I found this on the RabbitMQ website; it is near the bottom, so I have quoted the relevant part below.
The tl;dr version is that you should have 1 connection per application and 1 channel per thread.
Connections

AMQP connections are typically long-lived. AMQP is an application level protocol that uses TCP for reliable delivery. AMQP connections use authentication and can be protected using TLS (SSL). When an application no longer needs to be connected to an AMQP broker, it should gracefully close the AMQP connection instead of abruptly closing the underlying TCP connection.

Channels

Some applications need multiple connections to an AMQP broker. However, it is undesirable to keep many TCP connections open at the same time because doing so consumes system resources and makes it more difficult to configure firewalls. AMQP 0-9-1 connections are multiplexed with channels that can be thought of as "lightweight connections that share a single TCP connection".

For applications that use multiple threads/processes for processing, it is very common to open a new channel per thread/process and not share channels between them.

Communication on a particular channel is completely separate from communication on another channel, therefore every AMQP method also carries a channel number that clients use to figure out which channel the method is for (and thus, which event handler needs to be invoked, for example).
It is advised to use one channel per thread. Channels are thread-safe, so you could have multiple threads sending through one channel, but for your application I would suggest sticking with one channel per thread.
Additionally it is advised to only have 1 consumer per channel.
These are only guidelines so you will have to do some testing to see what works best for you.
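To make the tl;dr concrete, here is a rough Go sketch of the one-connection, channel-per-worker layout, using the amqp091-go client. The broker URL, the queue name "work", and the worker count are assumptions for illustration:

package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// One long-lived connection per application.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	const workers = 4 // arbitrary worker count for this sketch
	for i := 0; i < workers; i++ {
		go func() {
			// One channel per goroutine; channels are cheap compared
			// to connections and are not shared between workers.
			ch, err := conn.Channel()
			if err != nil {
				log.Println(err)
				return
			}
			defer ch.Close()

			// "work" is an assumed queue name; one consumer per channel.
			msgs, err := ch.Consume("work", "", false, false, false, false, nil)
			if err != nil {
				log.Println(err)
				return
			}
			for m := range msgs {
				// ... process m.Body ...
				m.Ack(false)
			}
		}()
	}
	select {} // block forever; a real app would handle shutdown
}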
This thread has some insights here and here.
Despite all these guidelines, this post suggests that having multiple connections will most likely not affect performance, though it is not clear whether it is talking about the client side or the server (RabbitMQ) side; the one caveat is that more connections will of course use more system resources. If that is not a problem and you want more throughput, it may indeed be better to have multiple connections, as this post suggests multiple connections will allow you more throughput. The reason seems to be that even with multiple channels, only one message goes through the connection at a time, so a large message will block the whole connection, and many unimportant messages on one channel may block an important message on the same connection but a different channel.

Again, resources are an issue. If you are using up all the bandwidth with one connection, then adding a second connection will give no performance increase over having two channels on the one connection. Each connection also uses more memory, CPU, and file handles; that may well not be a concern, but could become an issue when scaling.
In addition to the accepted answer:
If you have a cluster of RabbitMQ nodes with either a load-balancer in front, or a short-lived DNS (making it possible to connect to a different rabbit node each time), then a single, long-lived connection would mean that one application node works exclusively with a single RabbitMQ node. This may lead to one RabbitMQ node being more heavily utilized than the others.
The other concern mentioned above is that publishing and consuming are blocking operations, which leads to messages queueing up. Having more connections helps ensure that (1) the processing time of each message doesn't block other messages and (2) big messages don't block other messages.
That's why it's worth considering a small connection pool (bearing in mind the resource concerns raised above).
The "one channel per thread" might be a safe assumption (I say might as I have not made any research by myself and I have no reason to doubt the documentation :) ) but beware that there is a case where this breaks:
If you you use RPC with RabbitMQ Direct reply-to then you cannot reuse the same channel to consume for another RPC request. I asked for details about that in the google user group and the answer I got from Michael Klishin (who seems to be actively involved in RabbitMQ development) was that
Direct Reply to is not meant to be used with channel sharing either way.
I've emailed Pivotal to update their documentation to explain how amq.rabbitmq.reply-to works under the hood, and I'm still waiting for an answer (or an update).
So if you want to stick to "one channel per thread", beware that it does not work well with Direct reply-to.
I asked this question on the Go mailing list, but I think it is general enough to get a better response on SO.
When working with the Java/.NET platforms, I never had to manage database connections manually, as the drivers handled it. Now, trying to connect to a NoSQL DB with very basic driver support, it is my responsibility to manage the connection. The driver lets me connect, close, and reconnect to a TCP port, but I'm not sure how I should manage it (see the link). Do I have to create a new connection for each DB request? Can I use third-party connection pooling libraries?
Thanks.
I don't know enough about MongoDB to answer this question directly, but do you know how MongoDB handles requests over TCP? For example, one problem with a single TCP connection can be that the DB handles each request serially, causing high latency even though the server isn't actually saturated and could handle more concurrent work.
Are the machines all running on a local network? If so, the cost of opening a new connection won't be too high, and might even be insignificant from a performance perspective regardless.
My two cents: Do one TCP connection per request and just profile it and see what happens. It is very easy to add pooling later if you're DoSing yourself, but it may never be a problem. That'll work right now, and you won't have to mess around with a third party library that may cause more problems than it solves.
Also, TCP programming is really easy, so don't be intimidated by it: detecting a closed socket and reconnecting, synchronously or asynchronously, is simple (see the sketch below).
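To back that up, a minimal Go sketch; the address is a placeholder, and the three-attempt backoff is an arbitrary choice:

package main

import (
	"io"
	"log"
	"net"
	"time"
)

// dialWithRetry reconnects with a simple linear backoff.
func dialWithRetry(addr string) (net.Conn, error) {
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		var c net.Conn
		if c, err = net.Dial("tcp", addr); err == nil {
			return c, nil
		}
		time.Sleep(time.Duration(attempt) * 100 * time.Millisecond)
	}
	return nil, err
}

func main() {
	conn, err := dialWithRetry("db.example:27017") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 4096)
	if _, err := conn.Read(buf); err == io.EOF {
		// The peer closed the socket; that's the detection. Reconnect.
		conn.Close()
		if conn, err = dialWithRetry("db.example:27017"); err != nil {
			log.Fatal(err)
		}
	}
}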
Most MongoDB drivers (clients) will create and use a connection pool when connecting to the server. Each socket (connection) can perform one operation at a time at the server; because of how data is read off the socket, you can issue many requests, and the server will just process them one after another and return data as each one completes.
There is a Go mongo db driver but it doesn't seem to do connection pooling. http://github.com/mikejs/gomongo
In addition to the answers here: if you find you do need to do some kind of connection pooling redis.go is a decent example of a database driver that pools connections. Specifically, look at the Client.popCon and Client.pushCon methods in the source.
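For reference, the core of that pattern fits in a few lines of Go, using a buffered channel as the idle list. This Pool type and the address are hypothetical, not the redis.go API:

package main

import (
	"log"
	"net"
)

// Pool is a minimal sketch: a buffered channel holds idle connections.
type Pool struct {
	addr string
	idle chan net.Conn
}

func NewPool(addr string, size int) *Pool {
	return &Pool{addr: addr, idle: make(chan net.Conn, size)}
}

// Get reuses an idle connection when one is available, otherwise dials.
func (p *Pool) Get() (net.Conn, error) {
	select {
	case c := <-p.idle:
		return c, nil
	default:
		return net.Dial("tcp", p.addr)
	}
}

// Put returns a connection to the pool, or closes it if the pool is full.
func (p *Pool) Put(c net.Conn) {
	select {
	case p.idle <- c:
	default:
		c.Close()
	}
}

func main() {
	p := NewPool("localhost:27017", 10) // placeholder address and size
	c, err := p.Get()
	if err != nil {
		log.Fatal(err)
	}
	// ... issue a request on c ...
	p.Put(c)
}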
Is there a way to reuse SSL socket connections on the iPhone? I'm seeing an extra 3-4 seconds of overhead from SSL handshaking. I'm currently using NSURLConnection for the API calls, and each one takes 4-5 seconds on WiFi. Any suggestions would be greatly appreciated.
Are you asking how to "reuse" sockets for the same specific address and port? Or for different URLs?
If the former, just don't close the socket until you're absolutely sure you don't need it anymore.
If the latter, there's nothing you can do about that. The SSL certificate verification process is likely where you're getting the overhead from.
You'll need to add more context to your question if you want a more specific answer.
You might want to establish an SSL connection and keep reusing it, rather than making a new connection each time. There is definitely overhead in SSL connections as well as in handshaking. You can't get rid of the encryption overhead, but the handshaking cost can be reduced by using NSStreams and keeping the connection open as you use it.
I have posted code and instructions on how to do it here:
NSStream SSL on used socket