How to keep server response time low under high traffic?

I have a HostGator basic dedicated server plan. At peak times the response time of my website's REST API becomes very high: requests do not complete in time, the loader runs for a long time, and eventually I get a 500 status code (Internal Server Error), meaning the server cannot process the request for an unknown reason.
When I hit the same API at night, it responds immediately.
The server runs Apache and the web services are built in core PHP.

Related

Rate Limiting by Request in HAProxy

My goal is to limit the number of requests (not connections) to the API per backend server, so that no single server gets loaded with too many requests.
I could do this in middleware on each server, but the problem is that if a server gets stuck and cannot act on a request, the request will sit in a wait state and that will impact the client.
So I want to do this with HAProxy: based on the number of requests on each server, HAProxy should send the request to the next available node.
I have read the HAProxy documentation.
But it offers connection-based limiting on each server, or a total request rate limiter on the frontend. The problem with the frontend limiter is that if the number of servers increases, the number of allowed requests should increase too, and it does not limit requests on each server; it applies to the service as a whole.
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
Any help will be appreciated
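For what it's worth, here is a minimal haproxy.cfg sketch of the two built-in mechanisms discussed above; server names, addresses and thresholds are placeholders, not values from the question. The frontend stick-table rate limit (as in the linked blog) caps the service as a whole, while per-server maxconn with leastconn balancing is connection-oriented, as the question notes, but it does keep excess requests queued in HAProxy instead of piling onto a stuck backend.

    # Sketch only; names, addresses and thresholds are placeholders.
    frontend api_front
        bind :80
        mode http
        # Frontend request-rate limit: applies to the whole service, as noted above.
        stick-table type ip size 100k expire 30s store http_req_rate(10s)
        http-request track-sc0 src
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        default_backend api_back

    backend api_back
        mode http
        balance leastconn
        timeout queue 5s
        # Per-server control is connection-based: at most 50 in-flight requests
        # per server; the rest wait in HAProxy's queue rather than on a loaded node.
        server app1 192.0.2.11:8080 maxconn 50 check
        server app2 192.0.2.12:8080 maxconn 50 check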

GET REST API fails at client side

Suppose a client calls the server using a GET API. Is it possible that the server sends a response but the client misses that response?
If yes, how do I handle such a situation? I want to make sure that the client receives the data. For now I am using a second REST call by the client as an ack of the first.
It is certainly possible. For example, if you send a request to a REST API and your internet connection dies just when the answer is supposed to arrive, it is quite possible that the server received your request, handled it successfully, and even sent the response, but your computer never received it. It could equally be an issue on an intermediate server responsible for transmitting the request. The solution to this kind of issue is to use a timeout and, if a request times out, resend it until it no longer times out.
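As a rough illustration of that timeout-and-retry approach (safe here only because GET is idempotent), a small Java sketch; the URL, timeouts and retry count are placeholders:

    // Sketch of timeout-and-retry for an idempotent GET; URL and limits are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class RetryingGet {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/resource"))
                    .timeout(Duration.ofSeconds(10))   // give up on a single attempt after 10 s
                    .GET()
                    .build();

            for (int attempt = 1; attempt <= 5; attempt++) {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Got " + response.statusCode() + ": " + response.body());
                    return;                            // success: stop retrying
                } catch (java.io.IOException e) {      // timeout or dropped connection
                    System.err.println("Attempt " + attempt + " failed: " + e);
                    Thread.sleep(1000L * attempt);     // simple backoff before retrying
                }
            }
            System.err.println("Giving up after 5 attempts");
        }
    }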

Very first HTTP request for a particular REST API takes a lot of time but subsequent requests are quick

I'm calling a REST API through an HTTP client from my machine (Fiddler and a C# app). The very first request takes around 30 seconds to get a response, but subsequent requests to the same API are very quick (around 250 ms).
The REST API is hosted in a different environment.
I'm using Fiddler and a C# app as clients for the requests, and both behave the same way.
The same REST API, when requested from a different machine (in the same domain, not the one hosting the API), has no problem and is very quick. So it's only my machine.
However, there is no issue when requesting the REST API from any browser (IE, Chrome, Firefox) or from REST Console.
I tried toggling the 'Reuse server connections' option in Fiddler (ON by default) and found that disabling it makes all requests slow (around 30 seconds).
The 'Reuse client connections' option in Fiddler has no impact.
Could you please suggest what the problem is here, why requests are taking so much time to get a response, and how it can be solved?
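Since the delay disappears once server connections are reused, the time is most likely spent setting up a new connection on that machine (slow DNS resolution and proxy auto-detection are common culprits) rather than in the API itself. As a rough, client-independent probe, the Java sketch below times each phase separately when run from the affected machine; the host, port and path are placeholders:

    // Rough probe to see where the ~30 s goes: DNS lookup vs. TCP connect vs. HTTP response.
    // Host, port and path are placeholders for the real API endpoint.
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.URL;

    public class FirstRequestProbe {
        public static void main(String[] args) throws Exception {
            String host = "api.example.com";
            int port = 80;

            long t0 = System.currentTimeMillis();
            InetAddress address = InetAddress.getByName(host);               // DNS resolution
            long t1 = System.currentTimeMillis();
            System.out.println("DNS lookup:  " + (t1 - t0) + " ms (" + address.getHostAddress() + ")");

            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(address, port), 30000); // TCP handshake
            }
            long t2 = System.currentTimeMillis();
            System.out.println("TCP connect: " + (t2 - t1) + " ms");

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://" + host + ":" + port + "/api/resource").openConnection();
            conn.setConnectTimeout(30000);
            conn.setReadTimeout(30000);
            try (InputStream in = conn.getInputStream()) {                   // full request/response
                while (in.read() != -1) { /* drain the body */ }
            }
            long t3 = System.currentTimeMillis();
            System.out.println("HTTP GET:    " + (t3 - t2) + " ms, status " + conn.getResponseCode());
        }
    }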

Programmatically Call REST URL

Summary
Is there a way to programmatically call REST URLs set up in JBoss via RESTEasy so that the programmatic method call actually drills down through the REST processor to find and execute the correct endpoint?
Background
We have an application with ~20 different REST endpoints, and we have set the application up to receive data from other federated peers. To cut down on cross-network HTTP requests, the peer site sends a batch of requests to the server, and the receiving server needs to act on each URL it receives. Example data flow:
Server B --> [Batch of requests sent via HTTP POST] --> Server A breaks the list down into individual URLs --> [Begin Processing]
The individual URLs are REST URLs that the receiving server is familiar with.
Possible Solutions
1. Have the receiving server read through the URLs it receives and call the management beans directly.
The downside here is that we have to write additional processing code to decode the URL strings that are received.
The upside to this approach is that there is no ambiguity as to what happens.
2. Have the receiving server execute the URL on itself.
The receiving server could rewrite the URL as http://127.0.0.1:8080/rest/... and make an HTTP request to itself.
The downside here is that the receiving server could end up making a lot of HTTP requests against itself (it's already somewhat busy processing "real" requests from the outside world).
3. Preferred: Have the receiving server access the main RESTEasy bean somehow and feed it the request.
This is sort of a combination of 1 and 2, without the manual processing of option 1 or the HTTP requests involved in option 2.
Technology Stack
JBoss 6.0.0 AS (2010 release) / Java 6
RESTEasy
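For the preferred option 3, RESTEasy can dispatch a request in-VM through its own routing using its mock request/response classes together with the deployed Dispatcher. A minimal sketch follows, assuming the standard servlet bootstrap, which stores the Dispatcher as a ServletContext attribute (worth verifying for your JBoss 6 deployment); the class name and example URL are illustrative, not from the original question.

    // Sketch: in-VM dispatch of a REST URL through RESTEasy's own routing.
    // Assumes the servlet bootstrap has registered the Dispatcher in the ServletContext.
    import javax.servlet.ServletContext;
    import org.jboss.resteasy.core.Dispatcher;
    import org.jboss.resteasy.mock.MockHttpRequest;
    import org.jboss.resteasy.mock.MockHttpResponse;

    public class LocalRestDispatcher {

        private final Dispatcher dispatcher;

        public LocalRestDispatcher(ServletContext servletContext) {
            // The servlet bootstrap registers the Dispatcher under its class name.
            this.dispatcher = (Dispatcher) servletContext.getAttribute(Dispatcher.class.getName());
        }

        /** Routes a relative REST URL (e.g. "/rest/orders/42") through RESTEasy without HTTP. */
        public String invokeLocally(String relativeUrl) throws Exception {
            MockHttpRequest request = MockHttpRequest.get(relativeUrl);
            MockHttpResponse response = new MockHttpResponse();
            dispatcher.invoke(request, response);   // RESTEasy finds and calls the matching endpoint
            return response.getContentAsString();   // body produced by the endpoint
        }
    }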

RESTful way of implementing connection control

I am implementing a software update server as a REST web service. A client is designed to get its upgrade "instructions" (not the file itself) with a GET request to the resource
/clients/{clientId}/upgrades?completed=false
Clients are designed to poll the resource at a 30-minute interval. The resource returns status code 404 when no upgrade is available and returns the upgrade instructions when one is. When a client's upgrade is completed, the client reports to the server with a PUT request to
/clients/{clientId}/upgrades/{upgradeId}
with some status change.
Now, server-side control of upgrade connections is needed, i.e., a limit on the maximum number of simultaneous upgrades.
I could add an "upgrading" status field to the upgrade resources, change the indicator when /clients/{clientId}/upgrades is accessed, and count the upgrades with upgrading==true to find the number of connections, then return status code 404 to the client if the connection limit is exceeded. However, that does break the stateless principle of a REST web service.
Any idea is welcome. Thanks in advance.
You could require that a client make a successful PUT to the resource with a value requesting to start the upgrade, such as a status of "upgrading". Every time your server gets one of those values, it checks the current total of clients it has approved. If there are slots left, it returns success, which allows the client to proceed.
When the clients send their completion PUT requests, you can decrement the counter.
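A minimal sketch of that counting approach as a JAX-RS resource, using a semaphore as the counter. The class name, limit and the 503/Retry-After response are my assumptions (the question itself mentions returning 404); they are not from the original post.

    // Sketch of the answer's counting approach as a JAX-RS resource.
    // Class name, limit and status codes are illustrative assumptions.
    import java.util.concurrent.Semaphore;
    import javax.ws.rs.*;
    import javax.ws.rs.core.Response;

    @Path("/clients/{clientId}/upgrades/{upgradeId}")
    public class UpgradeStatusResource {

        // Server-wide cap on simultaneous upgrades (the value 10 is a placeholder).
        private static final Semaphore SLOTS = new Semaphore(10);

        @PUT
        @Consumes("text/plain")
        public Response updateStatus(@PathParam("clientId") String clientId,
                                     @PathParam("upgradeId") String upgradeId,
                                     String status) {
            if ("upgrading".equals(status)) {
                // Client asks to start: grant a slot only if one is free.
                if (SLOTS.tryAcquire()) {
                    return Response.ok("approved").build();
                }
                // No slot free: tell the client to retry at its next polling interval.
                return Response.status(503).header("Retry-After", "1800").build();
            }
            if ("completed".equals(status)) {
                // Completion PUT releases the slot (decrements the counter).
                // A real implementation would track which clients actually hold a slot.
                SLOTS.release();
                return Response.noContent().build();
            }
            return Response.status(400).build();
        }
    }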