Delphi REST client/server (WebBroker) + database + simultaneous client requests

I'm new to REST development and I'm creating a simple REST API to serve database values to clients. I have used the "Delphi Web Server Application" project wizard (the one that uses TIdHTTPWebBrokerBridge and a WebModule where you create the different 'Actions'). It works fine and I can make requests from clients.
The server WebModule contains an FDConnection and some FDQuery components to run the database (MySQL) queries, and each Action executes a specific query with specific parameters obtained through the request params.
The client app uses TRESTClient, TRESTRequest and TRESTResponse components to send/receive the data.
For example:
the client requests some values for a specific user from the server, sending "user = user1" and "passwd = ***" as request params.
the server executes the query "select * from xxx where user = user1 and passwd = ..." and sends the response to the client.
Every query is "user-specific".
OK, it works, but now I have some misgivings due to my ignorance of how REST/WebBroker works.
What if thousands of requests are made at the same time? Could the server return incorrect data because the FDQuery cursor is positioned on another record?
Or does WebBroker create the query anew for each request with no problem?
Is it better to create the FDQuery at runtime for each request and destroy it after request completion?
I made a simple test yesterday, running three instances of the client application and sending 300 requests to the server simultaneously (100 from each client), and it worked: all clients received correct data. But I don't know whether that is enough of a guarantee.
Is this (Delphi Web Server Application) the correct way to create the server? What are the differences from DataSnap?
Any advice?

In the DataSnap architecture (there are several flavours, but they all share a common architecture), the server creates one copy of the ServerMethodsUnit for each client connection, provided the ServerClass LifeCycle is set to Session. Therefore, each client can execute a server method and have the result returned to it independently of whatever any other client may be requesting.
In your case, each ServerMethodsUnit will have its own FDConnection, FDQuery and so on; whether you place design-time components there or instantiate them at runtime, the consequences are the same.
The limit here will be the hardware that the DataSnap/WebBroker application is running on (network bandwidth, RAM, hard drive speed, etc.).
DataSnap (REST, DBX, standalone, ISAPI, Apache, Linux) is, in my opinion, a sound basis for client/server development.
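To make the cursor question from the original post concrete, the safe pattern is that every request works on its own query/result object, never on one shared across requests. A minimal JavaScript sketch of the idea, where the in-memory `table` and the `handleRequest` function are hypothetical stand-ins for the MySQL table and the WebBroker action:

```javascript
// Hypothetical in-memory stand-in for the MySQL table.
const table = [
  { user: 'user1', balance: 10 },
  { user: 'user2', balance: 20 },
];

// Per-request pattern: each request builds its own fresh result set,
// so an interleaved request for another user can never move "our" cursor.
function handleRequest(user) {
  return table.filter(row => row.user === user); // fresh result per call
}
```

The key property is that `handleRequest` holds no state between calls; a shared, stateful query object reused by concurrent requests is exactly what produces the wrong-record hazard the question worries about.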

Related

consuming REST server methods from BizTalk Server using WCF-WebHttp adapter

I'm using VS 2019 and BTS 2020 Developer Edition. I need to implement a scenario in which BizTalk sits between the client and the REST server (implemented in ASP.NET Core), and the client sends requests to BizTalk just as it would normally send them to the REST server. The aim is to practice the BizTalk WCF-WebHttp adapters (for both receive and send). My idea is to handle all the API requests and methods in a single receive location, send port and orchestration. How can I achieve this? The reason I'm using an orchestration is to map and otherwise process the messages later.
Is this idea wrong? Should we instead create individual send ports/receive locations for every API method?
Is there any relation between the operation name of the logical port in the orchestration and the operation name in the WCF-WebHttp adapter URL mapping (<Operation Name="SomeName" ... />)? (so that one single orchestration can handle all methods)
How should the desired orchestration be designed? I have tried the 'Decide' shape (adding rules like msg_input(BTS.Operation) == "SomeName") to separate the different requests identified by URL mapping in the receive location, and I was successful in this step, but is that the correct way? However, I don't have any idea how to design the shapes that correctly start the orchestration, and I don't know how to send requests from the rule branches to the send port within the orchestration.
I would also appreciate hearing any other suggestions for solving this problem from a different perspective.

Play WebSocket client for load testing another play websocket server app

We have an existing Play server app to which mobile clients talk via WebSockets (two-way communication). Now, as part of load testing, we need to simulate hundreds of client requests to the server.
I was thinking of writing a separate, faceless Play client app and somehow, in a loop, making hundreds of requests to the server app. Given that I am new to WebSockets, does this approach sound reasonable?
Also, what is the best way to write a faceless WebSocket client that makes WebSocket requests to a WebSocket server?
If you want to properly validate the performance of your application, it is very important to:
- simulate the behavior of real users by establishing real WebSocket connections
- reproduce a realistic end-user journey on the application using the WebSocket channel
It's important to generate the proper user workflow (the actions a user performs when receiving a WebSocket message). For example, in a betting application users interact with the application depending on the messages received by the browser.
To generate a realistic load test, I would recommend using real load-testing software that supports WebSocket. It will let you generate different kinds of users, with different kinds of network conditions, different kinds of browsers, etc.
Which framework does your application use? Depending on the framework, I could recommend the proper tool for your needs.
You have to distinguish between hundreds of clients and hundreds of requests from the same client.
When you have hundreds of clients, the requests can come in at the same time.
When you only have one client, requests will mostly come in sequentially (depending on whether you use one or multiple threads).
When you only have one client, you can perfectly well send requests in a loop. What you actually measure then is the processing latency of the server.
When you want to simulate multiple clients, things get a bit more difficult. If you simulate them from one machine, the requests are pipelined through the network card and hence are not really sent in parallel. You are also limited by the bandwidth of the machine. Suppose the server has a 1 Gb connection and your test machine has a 1 Gb connection; then you can never overload the server's bandwidth. If your clients are supposed to have a limited bandwidth of, say, 50 Mb, then you can run 20 clients (not taking into account the serialisation that happens through the network card).
In theory, you should use as many machines as the number of clients you want to test. In reality, you would use a number of machines each running a limited number of clients.
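The bandwidth arithmetic in the answer above can be written down directly. A trivial sketch (the numbers are the ones from the text, not measurements, and protocol overhead is ignored):

```javascript
// How many clients at a given per-client bandwidth can a server link
// sustain at full speed? Ignores protocol overhead and NIC serialisation.
function maxFullSpeedClients(serverMbps, clientMbps) {
  return Math.floor(serverMbps / clientMbps);
}
```

With a 1 Gb/s (1000 Mb/s) server link and 50 Mb/s clients this gives the 20 full-speed clients mentioned above; anything beyond that on one test machine is no longer a faithful simulation of independent clients.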
Regarding a headless test application, you could use a headless browser testing framework like PhantomJS.
I have written a simple WebSocket client using Node.js and the ws package.
If the server is up and ready to accept connections, you can fire the requests as written below:

const WebSocket = require('ws')

const url = 'ws://localhost:9000/ws'
const connection = new WebSocket(url)

connection.onopen = () => {
  for (let i = 0; i < 100; i++) {
    connection.send('hello')
  }
}

connection.onmessage = (event) => {
  console.log(event.data)
}

connection.onerror = (error) => {
  console.log(`WebSocket error: ${error}`)
}

How to handle timeouts in a REST Client when calling methods with side-effect

Let's say we have a REST client with some UI that lists items it GETs from the server. The server also exposes some REST methods to manipulate the items (POST / PUT).
Now the user triggers one of those calls that are supposed to change the data on the server side. The UI will reflect the server state change, if the call was successful.
But what are good strategies to handle the situation when the server is not available?
What is a reasonable timeout length (especially in a 3G / cloud setup)?
How do you handle the timeout in the client, considering the fact that the client can't tell whether the operation succeeded or not?
Are there any common patterns to solve that, other than a complete client termination (and subsequent restart)?
This will be application specific. You need to decide what makes the most sense in your usage case.
Perhaps start with a timeout similar to that of the default PHP session, 24 minutes, and adjust as necessary based on testing.
Do you have server and client mixed up here? If so, the server cannot tell whether the client has timed out other than by reaching the end of a session. The client can always query the server for a progress update.
This one is a little general to provide an answer for.
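One widely used pattern for the "client can't tell whether the operation succeeded" problem from the question is to attach a client-generated idempotency key to the request and retry with exponential backoff: if the server deduplicates on the key, retrying an operation that actually succeeded is harmless. A sketch of the backoff schedule only (the base and cap values are illustrative assumptions, not recommendations):

```javascript
// Exponential backoff schedule with a cap: 500 ms, 1 s, 2 s, 4 s, ...
// The same idempotency key would be resent on every attempt so the server
// can recognise and deduplicate retries of an already-applied operation.
function backoffDelays(attempts, baseMs = 500, capMs = 10000) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(capMs, baseMs * 2 ** i));
  }
  return delays;
}
```

This avoids the "complete client termination" extreme: the client keeps retrying the same logical operation until it gets a definitive answer, and the server guarantees the side effect is applied at most once.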

How often does RESTful client pull server data

I have a RESTful web-service application that I developed using the NetBeans IDE. The application uses a MySQL server as its back end. What I am wondering now is how often a client application that uses my RESTful application should refresh to reflect data changes on the server.
Are there any default pull intervals that clients get from the RESTful application? Does the framework (JAX-RS) do something about it, or is that my business to take care of?
Thanks in advance
@Abraham
There are no such rules. The only thing you can use to implement this properly is HTTP's caching capabilities. The service must include control information stating how long the representation of a particular resource can be cached, whether it must be revalidated, whether it should never be cached, and so on.
On the client application side of things, each client may decide its own way of keeping itself in sync with the service. It can be done by storing data locally and serving the end user from that local cache, etc. The service cannot (and shouldn't) know how clients are implemented; the only thing the service can do is include caching information in its response messages, as I mentioned above.
It is your responsibility to schedule the client to call the service again and again. You can set a timeout interval, but there is no built-in pull interval.
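Following the caching suggestion above, a client can derive its poll interval from the service's own Cache-Control header instead of hard-coding one. A small sketch (the 30-second fallback is an arbitrary assumption for when the service sends no caching information):

```javascript
// Pick the next poll interval from a Cache-Control header such as
// "public, max-age=60"; fall back to a default when the service sends none.
function pollIntervalMs(cacheControl, fallbackMs = 30000) {
  const match = /max-age=(\d+)/.exec(cacheControl || '');
  return match ? Number(match[1]) * 1000 : fallbackMs;
}
```

This keeps the refresh policy where it belongs: the service states how long a representation stays fresh, and each client schedules itself accordingly.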

How do they make real time data live on a web page?

How do they do this? I would like to have web pages with data fields that change in real time as a person views the page. Here is an example.
How do they do this? jQuery? PHP?
I need to connect my field data to a MySQL database.
There are two approaches:
Polling
The client requests data on a regular basis. This uses network and server resources even when there is no new data, and the data is never quite 'live'. Extremely easy to implement, but not scalable.
Push
Server sends data to the client, so client can simply wait for it to arrive instead of checking regularly.
This can be achieved with a socket connection (since you are talking about web pages, this doesn't really apply unless you are using Flash, because support for sockets in the browser is currently immature) - or by using the technique known as 'comet'.
Neither socket connections nor comet are particularly scalable if the server end is implemented naively.
To serve live data on a large scale (without buying a boatload of hardware) you will need server software that does not use a thread for each client.
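The resource cost of polling mentioned above is easy to quantify: a fixed poll interval issues the same number of requests whether or not anything has changed, while push sends one message per change. A trivial sketch of that arithmetic:

```javascript
// Requests per hour issued by one client polling at a fixed interval,
// regardless of how often the underlying data actually changes.
function pollingRequestsPerHour(intervalSeconds) {
  return Math.floor(3600 / intervalSeconds);
}
```

Polling every 5 seconds, for example, costs 720 requests per hour per client even when the data never changes, which is why the comparison above calls polling easy but not scalable.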
I did it with a JavaScript timer whose interval is set in milliseconds; each time the timer fired, it executed a function that queried the server via Ajax and returned a value (possibly in JSON format), which you then use to update your field. I polled every 5 seconds and it works perfectly. In ASP.NET I think this is called the Ajax Timer control.
There are two things needed to do this:
Code that runs in the browser to fetch the latest data. This could be JavaScript or something running in a plugin such as Silverlight or Flash. It will need to periodically request updated content from the server.
Which leads to a need for...
Code that runs on the server to retrieve and return the latest data (from the database). This could be created with any server-side scripting language.