Can a progressive web app do server-side calculation when the internet is off - progressive-web-apps

I am new to PWAs and have searched for the answer, but was not successful.
I know that a PWA can handle all database transactions while the internet is off by using IndexedDB.
For consistency I need to do some of the calculations on the server side, but I wonder whether, by using a PWA, I can handle those calculations locally too.
Any advice would be appreciated.

You can avoid code duplication if you use Node.js. In that case you can reuse the same function on the server and the client side.
For example, you can write a file sum.js like this:
const sum = (a, b) => a + b;
module.exports = sum;
After that you can import it for use on the front end (assuming a bundler or transpiler handles the CommonJS module):
import sum from './sum';
console.log(sum(1, 2));
Or create a Node.js file handleSum.js on the back end:
const sum = require('./sum');
console.log(sum(1, 2));
and you can execute handleSum.js like this: node handleSum.js

I think it would be best in terms of architecture design to have the calculations done either server-side or client-side, but not both. For maintainability, consider the DRY principle.
So in your case, consider the need for the calculations both locally and server-side. Why does it need to be on either of those sides? If the server performs the calculations and then stores the result, you can use the background sync principles to delay sending the data and have the result calculated and stored when needed.
When the client immediately requires the result of the calculation, consider doing the calculations only locally and sending the result to the server with background sync when a connection is available again; a minimal sketch of this follows below.
If you absolutely do need the calculations on both the client and the server, consider an architecture where the module that performs the calculations can be used both locally and server-side. This is possible when both the server and the client use JavaScript, i.e. Node.js on the server. Then you can import this module server-side and download and cache the module client-side.
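As a rough illustration of the background-sync approach above (a sketch only: the 'sync-results' tag, the /api/results endpoint, and the storeInIndexedDB/readPendingResults helpers are made up for this example, and the Background Sync API is not available in every browser):

// In the page: do the calculation locally, persist it, and request a sync.
async function saveResult(result) {
  await storeInIndexedDB(result); // assumed helper that writes to IndexedDB
  const registration = await navigator.serviceWorker.ready;
  await registration.sync.register('sync-results');
}

// In the service worker: when connectivity returns, push the stored results to the server.
self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-results') {
    event.waitUntil(
      readPendingResults() // assumed helper that reads the queued results from IndexedDB
        .then((results) => fetch('/api/results', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(results),
        }))
    );
  }
});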

Related

Jmeter recorded script is not added data on frontend side

I recorded my JMeter script on server X and made it dynamic, then ran the same script on server Y. It fetches all the data via the post-processors and does not give any error, but the data is not added on the frontend. How can I solve this, and is there any reason behind it? (The website is the same; I only changed the server for testing.)
Expected: data should be added on the frontend, e.g. a lead created on server Y (it is created successfully on server X).
Actual: data is not added on server Y.
Most probably you need to correlate your script, as it is not doing what it is supposed to be doing.
You can run your test with 1 virtual user and 1 iteration configured in the Thread Group and inspect request and response details using the View Results Tree listener.
My expectation is that you are either not getting logged in (you have added an HTTP Cookie Manager to your Test Plan, haven't you?) or are failing to provide valid dynamic parameters. Modern web applications widely use dynamic parameters, for example for client-side state tracking or for CSRF protection.
You can easily detect dynamic parameters by recording the same scenario one more time and comparing the generated scripts. All the values which differ need to be correlated, that is, extracted from the previous response using a suitable Post-Processor and stored into a JMeter Variable. Once that is done you will need to replace the recorded hard-coded value with the aforementioned JMeter Variable.
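As an illustration (all names and values here are made up), suppose the recorded request contains a hard-coded parameter csrf_token=a1b2c3. You would add a Regular Expression Extractor as a child of the request that returned that token, configured roughly like:
Reference Name: csrfToken
Regular Expression: name="csrf_token" value="(.+?)"
Template: $1$
Match No.: 1
and then replace the recorded a1b2c3 in the failing request with ${csrfToken}.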
Check out How to Handle Correlation in JMeter article for comprehensive information with examples.

Using GraphQL strictly as a query language

I think that my problem is a common one, and I'm weighing the costs and benefits of GraphQL as a solution.
I work on a product whose data is stored by a monolithic CRUD-based REST API. We have components of our application that expose a search interface for data, and of course we need some kind of server-side support for handling requests for that data. This could include sorting, filtering, choosing fields, etc. There are, of course, more traditional ways of providing these functions in a REST context, like query parameter add-ons for endpoints, but it would be cool to try out GraphQL in this context to build a foundation for expanding its use for querying a bit.
GraphQL exposes a really nice query language for searching on data, and ultimately allows me to tailor the language of search specifically to my domain. However, I'm not sure if there is a great way to leverage the IDL without managing a separate server altogether.
Take the following Java Jersey API Proof-of-Concept example:
@GET
@Path("/api/v1/search")
public Response search(QueryIDL query) throws IOException {
    final SchemaParser schemaParser = new SchemaParser();
    TypeDefinitionRegistry typeDefinitionRegistry = // load schema
    RuntimeWiring runtimeWiring = // wire up data-fetching classes
    SchemaGenerator schemaGenerator = new SchemaGenerator();
    GraphQLSchema graphQLSchema =
        schemaGenerator.makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);
    GraphQL build = GraphQL.newGraphQL(graphQLSchema).build();
    ExecutionResult executionResult = build.execute(query.toString());
    return Response.ok(executionResult.getData()).build();
}
I am just planning to take a request body into my Jersey server that looks exactly like the request that would be sent to a GraphQL server. I'm then leveraging some library support to interpret and execute the request for data.
Without really thinking too much about everything that could go wrong, it looks like a client would be able to use this API similar to the way they would use a GraphQL server, except that I don't need to necessarily manage a separate server just to facilitate my search requirements.
Does it seem valuable, or silly, to use the GraphQL IDL in an endpoint-based context like this?
Apart from not needing to rebuild the schema or the GraphQL instance on each request (there are cases where you may want to rebuild the GraphQL instance, but yours is not one of them), this is pretty much the canonical way of using it.
It is rather uncommon to keep a separate server for GraphQL, and it usually gets introduced exactly the way you described - as just another endpoint next to your usual REST endpoints. So your usage is legit - not silly at all :)
Btw, I'm not sure what QueryIDL would be... the query is just a string. No need for a special class.
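For illustration, a minimal sketch of that reuse with graphql-java and Jersey; loadTypeDefinitions() and buildRuntimeWiring() are hypothetical stand-ins for the schema loading and wiring elided in the question:

import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.TypeDefinitionRegistry;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/api/v1/search")
public class SearchResource {

    // Built once when the class is loaded and reused for every request.
    private static final GraphQL GRAPHQL = buildGraphQL();

    private static GraphQL buildGraphQL() {
        TypeDefinitionRegistry typeDefinitionRegistry = loadTypeDefinitions(); // load schema
        RuntimeWiring runtimeWiring = buildRuntimeWiring();                    // wire up data-fetching classes
        GraphQLSchema schema = new SchemaGenerator()
                .makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);
        return GraphQL.newGraphQL(schema).build();
    }

    @GET
    public Response search(String query) {
        // The request body is the raw GraphQL query string, no wrapper class needed.
        ExecutionResult executionResult = GRAPHQL.execute(query);
        return Response.ok(executionResult.getData()).build();
    }

    // Hypothetical helpers standing in for the parts elided in the question.
    private static TypeDefinitionRegistry loadTypeDefinitions() { throw new UnsupportedOperationException(); }
    private static RuntimeWiring buildRuntimeWiring() { throw new UnsupportedOperationException(); }
}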

Rest Security Ensure Resource Delete

Background: I'm a new developer fresh out of college, at a company that uses an RPC architectural style for a lot of its internal services. They also seem to change which tool they use behind the scenes pretty frequently, so the tight coupling between the client and server implementations in RPC is problematic. I was tasked with rewriting one of the services, and I feel a RESTful API would be a good match because the backing technology can only deal with files anyway, but I have a few questions.
My understanding of REST so far is that you break operations up as much as possible and shift the focus to resources, so the client and the server together make a state machine, with the server mainly handling the transitions through hypermedia.
Example: say you have a service that takes a file and splits it in two byte-wise. I would design the sequence for this like so:
the client POSTs the file they want split,
the server splits the file,
the server writes both result pieces to a temp folder,
the server returns that the client should GET the pieces, along with both files' URIs,
1. the client sends a GET for a piece,
2. the server returns the piece and that the client should DELETE the URI,
3. the client sends a DELETE for the URI,
and steps 2 and 3 are done for both pieces.
My question is: how do you ensure that the pieces get deleted at the end?
A client could just not follow step 3.
If you combine steps 2 and 3, a malicious (or negligent) client could just stop after step 1.
But if you combine them all, isn't that just RPC over HTTP?
If the 2 pieces in question are inseparable, then they are in fact just properties of a single resource.
And yes, if a POST/PUT must be followed by a DELETE, then you're probably just trying to shoehorn RPC into a REST-style architecture.
There's no real definition of what "REST" actually is, but the one thing certain about it is that it MUST be stateless; i.e. every separate request must be self-sufficient - it cannot depend on a previous request, and cannot mandate subsequent requests.
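To illustrate the single-resource idea (paths and field names here are purely illustrative): the client POSTs the file once and gets both pieces back in a single stateless response, so there is nothing left on the server that a client could forget to DELETE:

POST /splits
Content-Type: application/octet-stream

<file bytes>

HTTP/1.1 200 OK
Content-Type: application/json

{
  "firstHalf": "<base64 of the first piece>",
  "secondHalf": "<base64 of the second piece>"
}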

is this a bad habit when using a database call?

I'm using Tornado for its simplicity, together with PyMongo. Because I always hear about asynchronous calls being used to serve lots of clients, I was wondering what an asynchronous call to a database really is. Take this example:
Suppose a page where a user has 4 areas to search in, so the result will be 4 results:
A = calls the database to search for an element a.
B = calls the database to search for an element b.
C = calls the database to search for an element c.
D = calls the database to search for an element d.
Then render a page where the user will see the results (a, b, c, d).
So, will this be a killer for the server, since it must wait for all 4 requests to finish? Or does it serve the first result and then wait, even if the database calls are blocking, collecting all the results into a bucket to be served to the client? Or must the split of the 4 operations be done with an asynchronous database library (like Motor or AsyncMongo)?
Every call to PyMongo will block Tornado's IOLoop and prevent further processing of any client HTTP request until the PyMongo method completes.
http://api.mongodb.org/python/current/faq.html#does-pymongo-support-asynchronous-frameworks-like-gevent-tornado-or-twisted
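For comparison, a minimal sketch of the non-blocking version using Motor (one of the async drivers mentioned in the question); the database name, collection, queries and template are invented for the example:

import tornado.web
from motor.motor_tornado import MotorClient

db = MotorClient()["mydb"]  # assumed database name

class SearchHandler(tornado.web.RequestHandler):
    async def get(self):
        # Each await yields control back to the IOLoop, so other clients
        # can be served while MongoDB is working on these queries.
        a = await db.items.find_one({"area": "a"})
        b = await db.items.find_one({"area": "b"})
        c = await db.items.find_one({"area": "c"})
        d = await db.items.find_one({"area": "d"})
        self.render("results.html", results=(a, b, c, d))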

MSMQ querying for a specific message

I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows service) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how I can retrieve the result from the OUTPUT queue when a client requests it... because MSMQ does not allow random access to its messages, and the only solution would be to iterate through all the messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your OUTPUT queue for your message by using:
var mq = new MessageQueue(outputQueueName);
mq.PeekById(yourId);
Receiving by Id:
mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, ie. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive the same message, you can pick the specific message with ReceiveByCorrelationId as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid()\\Number
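A rough sketch of that idea (the queue path and the "\1" suffix are illustrative; MSMQ requires the CorrelationId to be in the Guid\Number format, which is why the GUID alone is not enough):

using System;
using System.Messaging;

// The GUID the WCF service handed back to the client for this request.
string requestGuid = Guid.NewGuid().ToString();

var outputQueue = new MessageQueue(@".\private$\output");

// Worker side: tag the result message with a CorrelationId derived from the request GUID.
var result = new Message("processed payload")
{
    CorrelationId = requestGuid + "\\1"
};
outputQueue.Send(result);

// Client side: pull exactly that message from the OUTPUT queue (waiting up to 5 seconds).
Message reply = outputQueue.ReceiveByCorrelationId(requestGuid + "\\1", TimeSpan.FromSeconds(5));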