SFTP synchronization in PhpStorm 5 is too slow - deployment

I'm testing the new PhpStorm 5 on Ubuntu 12.04. Synchronization via SFTP works, but it is too slow. I have a feeling that PhpStorm synchronizes over a single connection only and doesn't open additional concurrent SFTP connections. When I synchronize folders with Krusader's sync plugin, it uses 10 concurrent connections and is much faster. Any ideas how to force PhpStorm's deployment synchronization to use more connections?

It's not possible to use more connections, at least not until this request is addressed. The request is about FTP, but if/when it's implemented, it could easily be extended to SFTP and other transports.
There is a setting to limit the number of concurrent connections, but it appears that only one connection is used during synchronization.

Related

Socket connections closing when manually deploying

We built a chat module in our project using socket.io. With load balancing and manual deploys, if socket connections are switched to different servers, they get disconnected and some of the messaging events are not processed. I solved the load-balancing part with the socket.io-redis library: it acts as a gateway and relays events between servers through Redis.
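A rough sketch of that Redis-backed adapter idea is shown below, using the python-socketio package's RedisManager rather than the Node socket.io-redis adapter the project actually uses; the Redis URL and the chat_message event name are assumptions made for the example.

import socketio

# Every server instance behind the load balancer points at the same Redis,
# which relays emitted events between instances so a message reaches clients
# that are connected to a different server.
mgr = socketio.RedisManager('redis://localhost:6379/0')   # assumed Redis URL
sio = socketio.Server(client_manager=mgr)
app = socketio.WSGIApp(sio)

@sio.on('chat_message')                                    # assumed event name
def chat_message(sid, data):
    # This emit is published through Redis, so clients attached to other
    # load-balanced instances receive it as well.
    sio.emit('chat_message', data)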
Another problem is that when I deploy manually, the server processes get new PIDs and the socket.io connections are immediately dropped on the client; afterwards the client reports that it is connected even though it is not.
Do you think that using a tool such as Travis CI would solve the problems with the manual deploy process?
Another question: if a system that scales out to 3 servers under load balancing then scales back down to 2, the socket connections will be closed again; what approach would be needed to solve this? I thought about separating the socket.io service from the monolithic application, keeping it on a single server, and scaling that server vertically when the load increases.
We are using AWS Elastic Beanstalk, which performs the load balancing automatically.

ADO.NET background pool validation

In Java, application servers like JBoss EAP have the option to periodically validate the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering whether there is any similar functionality that can be used with Microsoft SQL Server.
I ended up finding this post by Redgate that describes some of the validation that happens when connections are taken from the pool:
If the connection has died because a router has decided that it no longer wants to forward your packets and no other routers like you either then there is no way to know this unless you try to send some data and don’t get a response.
If you create a connection and a connection pool is created and connections are put into the pool and not used, the longer they are in there, the bigger the chance of something bad happening to it.
When you go to use a connection there is nothing to warn you that a router has stopped forwarding your packets until you go to use it; so until you use it, you do not know that there is a problem.
This was an issue with connection pooling that was fixed in the first .NET 4 reliability update (see issue 14, which vaguely describes this) with a feature called “Connection Pool Resiliency”. The update meant that when a connection is about to be taken from the pool, it is checked for TCP validity and only returned if it is in a good state.
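The resiliency check described in that quote is automatic in ADO.NET once the update is installed, so there is nothing to turn on in code. For comparison, some other data-access stacks expose the same check-on-checkout idea explicitly; here is a minimal sketch using Python's SQLAlchemy against SQL Server, where the DSN, credentials, and pool sizes are assumptions.

from sqlalchemy import create_engine, text

# pool_pre_ping issues a lightweight test query whenever a pooled connection
# is handed out and transparently replaces it if the connection turns out to
# be dead; this is the same idea as validating a connection on checkout.
engine = create_engine(
    "mssql+pyodbc://app_user:secret@MySqlServerDsn",  # hypothetical ODBC DSN
    pool_pre_ping=True,    # validate each connection as it leaves the pool
    pool_recycle=1800,     # also retire connections older than 30 minutes
    pool_size=5,
)

with engine.connect() as conn:
    # If the pooled connection had gone stale (e.g. a router stopped
    # forwarding its packets), the pre-ping detects that here and a fresh
    # connection is opened instead.
    print(conn.execute(text("SELECT 1")).scalar())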

HornetQ local connection never timing out

My application, running in a standalone JBoss environment, relies on HornetQ (v2.2.5.Final) middleware to exchange messages between parts of the application locally, not over the network.
The default TTL (time-to-live) value for the connection is 60000 ms. I am thinking of changing it to -1 since, operationally, I intend to keep sending messages through this connection from time to time (at intervals not known in advance). That would also prevent issues like JMS queue connection failures.
The question is: what are the drawbacks of never timing out a connection on the server side in this context? Is that a good choice? If not, is there a strategy better suited to this situation?
The latest versions of HornetQ automatically disable connection checking for in-VM connections, so there shouldn't be any issues if you configure this manually.

How do you know if PGBouncer is working?

I've set up PGBouncer and configured it to connect to my Postgres DB, and it all connects just fine; however, I'm not sure whether it's actually working.
I have a PHP script that runs as a daemon and picks up beanstalk jobs. The problem is that for every distinct user/action on the system it opens a new connection to Postgres and then leaves that connection idling, because the daemon never stops running, so the connection is never terminated. (A quick fix was to reset the connection at the end of the script loop, but that will be inefficient with many connects.)
Anyway, this eventually caused Postgres to run out of connections and lock up...
So PGBouncer seems like the answer.
But now when I run it, I see the same DB connection multiple times when I do ps ax | grep postgres.
Isn't PGBouncer supposed to keep only one connection open to the DB and route all traffic through it, opening a new connection only when that one is busy?
At present I have 3 connections for one database (my access control system) and 2 for the other (my client-specific data).
To me it feels like, if I roll out these changes, I will just face the same problem: the connections will get eaten up again because they're not being released.
I hope that explains enough for someone to offer any advice.
It sounds like an important step here is to fix the scripts so they release their connections when they're done. Once you do that, PgBouncer will help reduce the connection setup/teardown overhead, but in its default session-pooling mode it won't give you the ability to maintain more client connections than you could otherwise have connections to Pg.
However, you can also use PgBouncer in transaction-pooling mode. When used for transaction pooling, PgBouncer keeps a pool of idle backend connections and only assigns one when a client does a BEGIN; the connection is returned to the pool after the client does a COMMIT or ROLLBACK. That means that in transaction-pooling mode you can have a large number of open connections to PgBouncer; they don't each need a corresponding connection to a PostgreSQL backend, as long as most of them are idle at any point in time and don't have a transaction open.
The only real downside of transaction pooling mode is that it breaks applications that expect to SET session-level variables, keep prepared statements across transactions, etc. Applications may need modification, e.g. to use SET LOCAL.
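As a minimal client-side sketch of transaction pooling, assuming PgBouncer listens on its default port 6432 with pool_mode = transaction set in pgbouncer.ini; the database name, credentials, and table are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",
    port=6432,               # PgBouncer's default port, not Postgres's 5432
    dbname="access_control", # placeholder database name
    user="app_user",
    password="secret",
)

with conn:                    # runs the block inside BEGIN ... COMMIT/ROLLBACK
    with conn.cursor() as cur:
        # A plain session-level SET would leak across pooled clients in this
        # mode; SET LOCAL only lasts until the end of this transaction.
        cur.execute("SET LOCAL statement_timeout = '5s'")
        cur.execute("SELECT count(*) FROM users")   # placeholder table
        print(cur.fetchone()[0])

conn.close()  # a server backend is only held while a transaction is open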

Missed connections using select() on many non-blocking connecting TCP sockets on Windows XP

I have a small portable tool which connects to around 150 servers at diverse locations to get a quick status check from them. It is important to get the status for all servers back to the user relatively quickly, so the tool connects to the servers in parallel using non-blocking connect and uses select() to determine when each socket is ready. The use of select() is fairly straightforward, and the tool is fairly mature now and works well on Linux. It runs on Windows XP, but connections to the vast majority of the servers do not complete. The tool staggers the calls to connect to avoid creating what looks like a SYN flood, connecting to one server about every 100 ms. I also have a check in place to ensure FD_SETSIZE is not exceeded. I have anecdotal evidence from someone else that the behaviour is better on later Windows versions, but have not been able to verify this.
I have used WinDump to verify that the SYN packets are being sent and I can see ACK packets coming back, but select() keeps returning zero, and the code simply can't connect to most of the servers that do exist and that I can connect to just fine with the same code on Linux.
Has anyone seen and/or solved similar issues with many non-blocking connects and select() on Windows XP?
After another day or so of digging I seem to have found the answer. On Windows XP SP2 there is a system-wide limit of 10 concurrently connecting sockets. If 10 or more half-open connections exist, a System event is logged noting that the limit has been reached, and new connecting sockets are silently throttled. The System event ID is 4226.
I fixed my code by adding a version check for Windows XP and throttling to fewer than 10 simultaneous connection attempts on those systems. So far I have no reports of other versions being affected.
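For reference, the shape of that workaround is sketched below in Python rather than the original C; the host list, port, and timeout are assumptions. The idea is to keep fewer than 10 non-blocking connects in flight at once so the XP SP2 half-open limit (event ID 4226) is never hit, and to use select() to reap attempts as they complete.

import select
import socket

SERVERS = [("server%d.example.com" % i, 443) for i in range(150)]  # placeholders
MAX_IN_FLIGHT = 9          # stay below XP SP2's limit of 10 half-open connections

pending = list(SERVERS)
in_flight = {}             # socket -> (host, port)
results = {}

while pending or in_flight:
    # Start new non-blocking connects, but never more than MAX_IN_FLIGHT at once.
    while pending and len(in_flight) < MAX_IN_FLIGHT:
        host, port = pending.pop()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        s.connect_ex((host, port))            # returns immediately
        in_flight[s] = (host, port)

    # A connecting socket is reported writable when its connect completes;
    # failed connects show up as writable (with SO_ERROR set) or in the except set.
    _, writable, errored = select.select([], list(in_flight), list(in_flight), 1.0)
    for s in set(writable) | set(errored):
        host, port = in_flight.pop(s)
        err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
        results[(host, port)] = "up" if err == 0 else "down (errno %d)" % err
        s.close()

for target, status in sorted(results.items()):
    print(target, status)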