How to analyze OVH server download speed and improve it - server

I have a dedicated 1 Gbps server with OVH, and a CDN from Akamai.
When I download a 1.1 MB file via Akamai, the total request time is 319 ms.
When I download the same 1.1 MB file directly from my OVH server, the total request time is 1.43 s.
When I analyze them in Chrome:
Akamai request:
Request sent: 83 µs
Waiting for server response: 183.86 ms
Content download: 134.58 ms
OVH dedicated server:
Request sent: 68 µs
Waiting for server response: 92.25 ms
Content download: 1.34 s
The dedicated server comes with a 1 Gbps connection, and I'm not using even half of this bandwidth.
It looks like OVH should be faster, because my request-sent and waiting-for-server-response times are lower, but the content download is much slower.
Why is that? How can I change it? What are the parameters that influence the download speed?
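One way to pin down where the time goes is to measure the two phases outside the browser. Below is a minimal Java (11+) sketch with a placeholder URL; it separates the time to first byte (roughly Chrome's "waiting for server response") from the body-transfer time and computes the effective throughput. As a rule of thumb, per-connection throughput is capped at roughly the TCP window size divided by the round-trip time, which is why a nearby CDN edge can download far faster than a distant origin even when both have 1 Gbps links.

    import java.io.InputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DownloadTimer {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at the same 1.1 MB file on each origin.
            URI uri = URI.create("https://example.com/file-1.1mb.bin");
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();

            long start = System.nanoTime();
            // ofInputStream() returns as soon as the headers arrive, which
            // lets us split "waiting for server response" from the download.
            HttpResponse<InputStream> response =
                    client.send(request, HttpResponse.BodyHandlers.ofInputStream());
            long firstByte = System.nanoTime();

            long bytes = 0;
            byte[] buffer = new byte[8192];
            try (InputStream in = response.body()) {
                int n;
                while ((n = in.read(buffer)) != -1) bytes += n;
            }
            long done = System.nanoTime();

            System.out.printf("Waiting for response: %.1f ms%n", (firstByte - start) / 1e6);
            System.out.printf("Content download: %.1f ms (%d bytes, %.2f Mbit/s)%n",
                    (done - firstByte) / 1e6, bytes,
                    bytes * 8 / ((done - firstByte) / 1e9) / 1e6);
        }
    }

Running this from the same client against both the Akamai URL and the OVH URL shows whether the gap is throughput-bound; if it is, the usual levers are the client-to-server round-trip time (which the CDN shortens) and the server's TCP settings such as send buffer sizes and the initial congestion window.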

Related

Socket - Java proxy: packets separated when sent to client

I have a client-server configuration with one connection. The server cannot process the requests it receives in parallel; it processes them in series. To overcome this problem, we developed a proxy server (installed between client and server) to receive a request ==> open a connection with the server ==> send the request to the server ==> send the response to the client ==> close the connection.
The problem we have is this: the response is sent divided into two parts. We did a tcpdump on the port and we see that the response is split into two parts, one with a length of 1 and the second with a length of 33.
We don't know if it's a configuration issue on the server or on the network.
Can someone help us?
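There is no answer shown here, but one thing worth noting: TCP is a byte stream with no message boundaries, so a response written with two separate write() calls (say, a 1-byte status followed by a 33-byte body) can legitimately arrive as two segments of length 1 and 33, and the receiver has to tolerate any split. A minimal Java sketch under that assumption (the names and the 1 + 33 framing are hypothetical): buffer the parts into a single flush on the sending side, and read an exact byte count on the receiving side.

    import java.io.BufferedOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    public class ProxyResponseFraming {
        // Sending side: write both parts through one buffered stream and
        // flush once, so two direct socket writes don't force two segments.
        static void sendResponse(Socket socket, byte status, byte[] payload) throws Exception {
            DataOutputStream out =
                    new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
            out.writeByte(status);   // the 1-byte part
            out.write(payload);      // the 33-byte part
            out.flush();             // single flush: usually one segment, never guaranteed
        }

        // Receiving side: never assume one read() returns one message;
        // read until the expected number of bytes has arrived.
        static byte[] readResponse(Socket socket, int expectedLength) throws Exception {
            DataInputStream in = new DataInputStream(socket.getInputStream());
            byte status = in.readByte();
            byte[] payload = new byte[expectedLength];
            in.readFully(payload);   // loops internally across segment boundaries
            return payload;
        }
    }

Even with the buffered single flush, the client must still read by length rather than by segment, since the network may re-split the data anywhere.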

Drools workbench (business-central) requests timing out

I have installed Business Central along with Keycloak authentication, using MySQL as the database for storing Keycloak's data. The Business Central workbench and the Keycloak server are behind Nginx.
While working on the workbench, some of the requests time out with a 504 error code. The whole Business Central UI freezes and the user is not able to do anything after that.
The URLs that error out with a 504 look like: https://{host}:{port}/business-central/out.43601-24741.erraiBus?z=105&clientId=43601-24741
Other details about the setup are as below:
Java: 1.8.0_242
Business Central version: 7.34.Final
Keycloak version: 9.0.0
MySQL: 8
Java options for Business Central: -Xms1024M -Xmx2048M -XX:MaxPermSize=2048M -XX:MaxHeapSize=2048M
Note: all of this setup is on a 4 GB EC2 instance.
Any help on this issue would be appreciated.
EDIT: I have checked access_log.log, and it looks like the server takes more than 45 seconds to process the request. Here is a log line:
"POST /business-central/in.93979-28827.erraiBus?z=15&clientId=93979-28827&wait=1 HTTP/1.1" 200 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36" 45001 45.001
EDIT 2: Here is a sample of the request data that is sent:
[{"CommandType":"CDIEvent","BeanType":"org.kie.workbench.common.screens.library.api.ProjectCountUpdate","BeanReference":{"^EncodedType":"org.kie.workbench.common.screens.library.api.ProjectCountUpdate","^ObjectID":"1","count":1,"space":{"^EncodedType":"org.uberfire.spaces.Space","^ObjectID":"2","name":"Fraud_Team"}},"FromClient":"1","ToSubject":"cdi.event:Dispatcher"},{"ToSubject":"org.kie.workbench.common.screens.library.api.LibraryService:RPC","CommandType":"getAllUsers:","Qualifiers":{"^EncodedType":"java.util.ArrayList","^ObjectID":"1","^Value":[]},"MethodParms":{"^EncodedType":"java.util.Arrays$ArrayList","^ObjectID":"2","^Value":[]},"ReplyTo":"org.kie.workbench.common.screens.library.api.LibraryService:RPC.getAllUsers::94:RespondTo:RPC","ErrorTo":"org.kie.workbench.common.screens.library.api.LibraryService:RPC.getAllUsers::94:Errors:RPC"}]
The URL hit is: business-central/in.59966-45867.erraiBus?z=56&clientId=59966-45867&wait=1
It took more than a minute to process.
Problem Description
I had this same problem on 7.38.0. The problem, I believe, is that Errai keeps rolling 45-second requests open between the client and server to ensure communication stays open. For me, Nginx had a default socket timeout of 30s, which meant it returned a 504 gateway timeout for these requests when in actuality they weren't "stuck". This would only happen if you didn't do anything within Business Central for 30 seconds, as otherwise the request would close and a new one would take over. I feel like Errai should really be able to recover from such a scenario, but anyway.
Solution
For me, I updated the socket timeout of my Nginx server to 60s, so that the 45s requests didn't get timed out by Nginx. I believe this is equivalent to the proxy_read_timeout config in Nginx.
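For reference, the relevant part of an Nginx server block might look like the sketch below; the location path and upstream address are assumptions for illustration, not taken from the question.

    location /business-central/ {
        proxy_pass http://127.0.0.1:8080;  # assumed upstream for the workbench
        # Errai holds long-poll requests open for up to 45 s, so the proxy
        # must wait longer than that before declaring the upstream dead.
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }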
If you can't touch your Nginx config, it seems there may also be a way to turn off the server-to-client communication, as outlined here: https://docs.jboss.org/errai/4.0.0.Beta3/errai/reference/html_single/#sid-59146643_BusLifecycle-TurningServerCommunicationOnandOff. I didn't test this as I didn't need to, but it may be an option.

Magento 1.9 - How to solve GoDaddy Hosting Timeout Error?

I have a VPS server with GoDaddy, configured with 8 GB RAM and a 4-core CPU.
Frequently I am getting the following error:
Note: I am using OroCRM. In my Apache status, many URLs with the request index.php/api/v2_soap/index/?wsdl=1 are sending replies. How do I set a resource limit?

TCP handshake for HTTP Response?

Since HTTP is an application-layer protocol using TCP, if I request to download a big file via HTTP, here is what happens:
My HTTP request is going to be fragmented into TCP packets, and TCP is going to do a 3-way handshake and send my request packets to the server. My question is: is the response from the server (the file) going to pass through the old TCP connection, or does the server initiate another transport-layer connection with my browser, with another 3-way handshake, in order to send me the file?
The file transfer will use the existing connection. That will, however, make the connection busy until the file is transferred.
So if the user clicks on a link while the file is being downloaded, the connection is busy. The web browser will therefore have to open an additional connection to be able to request the clicked URL.
In HTTP/1.1, existing connections will be reused if idle (idle connections are closed after a period of time has passed).
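To make the reuse visible, here is a minimal Java sketch (hypothetical host) that sends two HTTP/1.1 requests over one socket; both responses come back on the same connection, with a single 3-way handshake at the start.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class PersistentConnectionDemo {
        public static void main(String[] args) throws Exception {
            // One TCP connection, and therefore one 3-way handshake, for both requests.
            try (Socket socket = new Socket("example.com", 80)) {
                OutputStream out = socket.getOutputStream();
                InputStream in = socket.getInputStream();
                String request = "HEAD / HTTP/1.1\r\n"
                        + "Host: example.com\r\n"
                        + "Connection: keep-alive\r\n\r\n";
                for (int i = 1; i <= 2; i++) {
                    out.write(request.getBytes(StandardCharsets.US_ASCII));
                    out.flush();
                    // HEAD responses have no body, so a blank line ends the response.
                    StringBuilder response = new StringBuilder();
                    int c;
                    while ((c = in.read()) != -1) {
                        response.append((char) c);
                        if (response.length() >= 4
                                && response.substring(response.length() - 4).equals("\r\n\r\n")) {
                            break;
                        }
                    }
                    System.out.println("Response " + i + ": "
                            + response.toString().lines().findFirst().orElse(""));
                }
            }
        }
    }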

Azure VM TCP idle timeout

I have a problem with setting up an FTP server on an Azure VM.
In normal use the server runs great. Problems come with large file transfers over a passive FTP connection.
Setup
The FTP server software is FileZilla Server.
The Azure VM endpoint, Windows Firewall, and FileZilla are configured to use ports 10000-10009 for passive connections.
The client is a 3rd-party device.
Problem
On large file transfers with a duration over 4 minutes, the connection gets an idle timeout.
I found a Microsoft blog entry where it is written:
"When FTP is transferring large files, the elapsed time for transfer may exceed 4 minutes, especially if the VM size is A0. Any time the file transfer exceeds 4 minutes, the Azure SLB will time out the idle TCP/21 connection, which causes issues with cleanly finishing up the FTP transfer once all the data has been transferred. [..] Basically, FTP uses TCP/21 to set everything up and begin the transfer of data. The transfer of data happens on another port. The TCP/21 connection goes idle for the duration of the transfer on the other port. When the transfer is complete, FTP tries to send data on the TCP/21 connection to finish up the transfer, but the SLB sends a TCP reset instead."
Now... for my 3rd-party client it is not possible to set it up to send TCP keepalives to avoid the idle timeout.
Question
How can I tell the Azure VM not to close idle TCP connections after 4 minutes?
I don't even understand why this happens, because it violates the TCP specifications (RFC 5382 makes this especially clear: it's 2h 4min in the normal case). In other words, if Azure drops idle connections this early, it cannot be used for long FTP transfers.
Please help!
Regards
Steffen
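As an aside: for clients that can be modified (unlike the asker's third-party device), the usual counter-measure is to enable TCP keepalive on the control socket so the load balancer never sees the connection as idle. In Java, for example, it is a single call; a sketch:

    import java.net.Socket;

    public class ControlSocketKeepAlive {
        static void keepControlConnectionAlive(Socket controlSocket) throws Exception {
            // Ask the OS to send periodic keepalive probes while the control
            // connection sits idle during a long data transfer. The probe
            // interval is an OS-level setting (e.g. tcp_keepalive_time on
            // Linux, often 2 hours by default), so it may need lowering to
            // beat a 4-minute load-balancer timeout.
            controlSocket.setKeepAlive(true);
        }
    }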
I found two solutions!
1. Increase the endpoint idle timeout
It is possible to set the idle timeout of VM endpoints to up to 30 minutes. The PowerShell command to do this is:
Get-AzureVM -ServiceName "MyService" -Name "MyVM" | Set-AzureEndpoint -Name "MyEndpoint" -IdleTimeoutInMinutes 30 | Update-AzureVM
More information here.
2. Create an ILIP (instance-level IP)
You can create an ILIP to bypass the VM endpoint layer. The PowerShell command to do this is:
Get-AzureVM -ServiceName "MyService" -Name "MyVM" | Set-AzurePublicIP -PublicIPName "MyNewEndpoint" | Update-AzureVM
More information here.
I'm using the latest version of FileZilla (3.14.1), and you can set FileZilla to send keep-alive packets; I would recommend you try that first, rather than attempting to alter the default Azure load-balancer timeouts. However, the load-balancer timeouts are user-configurable (i.e. under your control), and details can be found here: https://azure.microsoft.com/en-us/documentation/articles/load-balancer-tcp-idle-timeout/
To set keep-alive commands on in FileZilla:
• Open the FileZilla "Edit" menu and select "Settings." On a Mac, open the "FileZilla" menu and choose "Preferences."
• Select the "FTP" page in the "Connection" section of the Settings dialog box. Look for the "FTP Keep-Alive" section of the page.
• Activate the "Send FTP keep-alive commands" box in the "FTP Keep-Alive" section. This sends commands between FileZilla and the FTP server at short intervals, resetting the time-out function and preventing the server from closing the connection.
Hope that helps.