Are batched requests sent over the Neo4j REST API executed in parallel?

If I use the Neo4j REST batch endpoint, are the requests in the same batch executed in parallel? I suspect not, because how else would one request be able to refer to another in the same batch? But I haven't been able to find any documentation that clearly states one way or the other, and I am trying to make a recommendation to someone else about the performance of REST batches vs. transactional Cypher.

They are not executed in parallel but sequentially.
However, the results are streamed back if you use the header X-Stream: true.
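For context, here is a minimal sketch of what posting a batch to the legacy batch endpoint might look like, assuming a local Neo4j server; the node properties are made up, and the {0}/{1} placeholders illustrate why sequential execution matters (later jobs refer to the results of earlier ones):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Neo4jBatchExample {
    public static void main(String[] args) throws Exception {
        // Three operations in one batch: the third refers to the nodes created by the
        // first two via the {0}/{1} placeholders, which only works if jobs run in order.
        String batch = "["
                + "{\"method\":\"POST\",\"to\":\"/node\",\"body\":{\"name\":\"Alice\"},\"id\":0},"
                + "{\"method\":\"POST\",\"to\":\"/node\",\"body\":{\"name\":\"Bob\"},\"id\":1},"
                + "{\"method\":\"POST\",\"to\":\"{0}/relationships\","
                + "\"body\":{\"to\":\"{1}\",\"type\":\"KNOWS\"},\"id\":2}"
                + "]";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7474/db/data/batch")) // legacy batch endpoint
                .header("Content-Type", "application/json")
                .header("X-Stream", "true") // stream results back as they are produced
                .POST(HttpRequest.BodyPublishers.ofString(batch))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```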

Related

Are MongoDB Realm Functions scalable?

I am pretty new to MongoDB.
I am in a scenario where it is possible for a system to invoke functions simultaneously many times.
I have gone through the MongoDB Atlas Functions documentation and didn't find anything that addresses scalability or concurrency issues.
Can a single function be invoked multiple times in parallel?
For example: if three different requests try to invoke the same function, will all three requests be handled one by one or in parallel?
You can call the functions concurrently, provided the workload adheres to App Services' limit of 5,000 concurrent requests. So, to address your point: if 3 different services try to invoke the same function at the same time, they will be handled in parallel.
Additionally, you can use HTTPS Endpoints to expose a Function and trigger it through an endpoint call.
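As a rough sketch of invoking the same Function concurrently through an HTTPS Endpoint (the endpoint URL, api-key header and request body below are placeholders, not real App Services values):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentFunctionCalls {
    public static void main(String[] args) throws Exception {
        // Hypothetical HTTPS Endpoint that fronts the Atlas Function.
        String endpoint = "https://data.example.com/app/myapp/endpoint/myFunction";
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(3);

        Callable<Integer> call = () -> {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(endpoint))
                    .header("api-key", "<YOUR_API_KEY>") // placeholder credential
                    .POST(HttpRequest.BodyPublishers.ofString("{\"deviceId\": 42}"))
                    .build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode();
        };

        // Submit three invocations at once; each is served independently by App Services.
        List<Future<Integer>> results = pool.invokeAll(List.of(call, call, call));
        for (Future<Integer> f : results) {
            System.out.println("status = " + f.get());
        }
        pool.shutdown();
    }
}
```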

JMeter vs ReadyAPI SOAP API Performance Testing

Scenario: Read data from a CSV file with an unknown number of records. Use the data to create a SOAP XML message and POST it. Continue to do this until all the records have been read.
Problem: I used ReadyAPI to perform these actions and was able to achieve the intended TPS at the server with whatever I provided in the VUs option. I tried with 150 VUs and observed a constant load of 150 requests at the server. But when I try to do the same in JMeter, I was not able to achieve more than 70 TPS, and the load isn't evenly distributed either, no matter how many threads I use. I am using a Thread Group, CSV Data Set Config, User Defined Parameters to create a unique request ID, and a JSR223 PreProcessor with a Groovy script as a child of the HTTP Request to remove empty XML tags.
I read some posts where it was mentioned that JMeter throughput will stagnate based on the server's response capability. But that's not the case for me, since I can generate 150 TPS with ReadyAPI. The annual licensing and renewal costs associated with ReadyAPI are the reason I am looking for a solution with JMeter.
Not only does the server need to be able to respond fast enough; JMeter must be able to send requests fast enough as well.
The default JMeter configuration is suitable for test development, debugging and creating some (rather limited) load; you need to properly tune your JMeter instance in order to fully utilize your machine's resources, so make sure to follow:
JMeter Best Practices
9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
If your single machine is not powerful enough to generate the required load, it's possible to run JMeter in distributed mode using as many load generators as needed in order to create the necessary number of virtual users/requests per second.
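For example, once the plan itself is tuned, a single-machine non-GUI run and a distributed run that drives two remote load generators look roughly like this (the test plan file and host names are placeholders):
jmeter -n -t testplan.jmx -l results.jtl
jmeter -n -t testplan.jmx -R loadgen1,loadgen2 -l results.jtl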

How to perform multiple HTTP DELETE operation on same Resource with different IDs in JMeter?

I have a question regarding **writing a test for the HTTP DELETE method in JMeter using the Concurrency Thread Group**. I want to measure **how many DELETEs** it can perform in a certain amount of time for a certain number of users (i.e. threads) sending concurrent HTTP DELETE requests.
Concurrency Thread Group parameters are:
Target Concurrency: 50 (Threads)
RampUp Time: 10 secs
RampUp Steps Count: 5
Hold Target Rate Time (sec): 5
Threads Iterations Limit: infinite
The thing is that HTTP DELETE is an idempotent operation, i.e. invoking it repeatedly on the same resource (i.e. record in the database) doesn't make much sense. How can I achieve deletion of multiple EXISTING records in the database by passing the entity's ID in the URL? E.g.:
http://localhost:8080/api/authors/{id}
...where ID is being incremented for each User (i.e. Thread)?
My question is how I can automate deletion of multiple EXISTING rows in the database (Postgres 11.8)... should I write some sort of script, or is there another, easier way to achieve that?
But again, I guess it will probably perform the same thing multiple times on the same resource ID (e.g. HTTP DELETE will be invoked more than once on http://localhost:8080/api/authors/5).
Any help/advice is greatly appreciated.
P.S. I'm doing this to performance test my SpringBoot, Vert.X and Dropwizard RESTful Web service apps.
UPDATE1:
Sorry, I didn't fully specify the reason for writing these test use cases for my web service apps, which communicate with a Postgres DB. The MAIN reason why I'm actually doing this testing is to test the PERFORMANCE of blocking and NON-blocking web server implementations for the mentioned frameworks (SpringBoot, Dropwizard and Vert.X). The web servers are:
Blocking implementations:
1.1. Apache Tomcat (SpringBoot)
1.2. Jetty (Dropwizard)
Non-blocking: Vert.X (uses its own implementation based on Netty)
If I am using JMeter's JDBC Request in my Test Plan, won't that actually slow down test execution?
The easiest way is to use either the Counter config element or the __counter() function in order to generate an incrementing number on each API hit:
More information: How to Use a Counter in a JMeter Test
Also, the list of IDs can be obtained from the Postgres database via a JDBC Request sampler and iterated using a ForEach Controller.
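For instance, with a Counter config element (Reference Name: authorId, Start: 1, Increment: 1) the request path from the question becomes:
http://localhost:8080/api/authors/${authorId}
or, using the __counter() function directly (FALSE makes one counter shared by all threads, so each request hits a different ID):
http://localhost:8080/api/authors/${__counter(FALSE,)}
And if the IDs must come from existing rows, the JDBC Request sampler can run a query such as SELECT id FROM authors into a variable, which a ForEach Controller then feeds into the path one ID per iteration; the table and column names here are only illustrative, taken from the question's URL.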

Can Spring Batch be used as a job framework for non-batch jobs (regular jobs)?

Is it possible to use Spring Batch as a regular job framework?
I want to create a device service (microservice) that has the responsibility
to get events and trigger jobs on devices. The devices are remote, so it will take time for a job to complete, but it is not a batch job (not periodically running or partitioning a large data set).
I am wondering whether Spring Batch can still be used as a job framework, or if it is only for batch processing. If the answer is no, what job frameworks (besides writing your own) are well known?
Job Description:
I need to execute, against a specific device, a job that will contain several steps. Each step will communicate with the device and wait for the device to confirm that it executed the command given to it.
I need retry, recovery and scheduling features (I thought of combining Spring Batch with Quartz).
Regarding read-process-write: I basically get a command request for a device, do a few DB reads and then start long waiting periods that all need to pass in order for the job/task to be successful.
Also, I can choose (and justify) the relevant IMDG/DB. Concurrency is outside the scope (it will be handled outside the job mechanism). An alternative that came to mind was Akka actors (a job for a device would create child actors as steps).
As far as I know, periodic runs and partitioning of large data sets are not primary requirements for using Spring Batch.
Spring Batch is basically a read-process-write framework where reading and processing happen item by item and writing happens in chunks (for chunk-oriented processing).
So you can use Spring Batch if your job logic fits into the read-process-write paradigm; the rest seems secondary to me.
Also, with Spring Batch you should evaluate the Job Repository: Spring Batch needs a database (either in memory or on disk) to store job metadata, and this is not optional.
I think you should explain a bit more about why you need a job framework and what kind of logic you are running that makes you call it a job, and I will revise my answer accordingly.
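To make the read-process-write fit concrete, here is a minimal sketch of how such a device job could be wired with the Spring Batch 4 builder factories; DeviceCommand, DeviceResult and DeviceTimeoutException are hypothetical placeholder types, and the reader/processor/writer bodies are stubs standing in for your actual device communication:

```java
import java.util.List;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class DeviceJobConfig {

    // Placeholder domain types for illustration only.
    public static class DeviceCommand { public String deviceId; public String command; }
    public static class DeviceResult  { public String deviceId; public boolean confirmed; }
    public static class DeviceTimeoutException extends RuntimeException { }

    @Bean
    public Job deviceJob(JobBuilderFactory jobs, Step sendCommandStep) {
        return jobs.get("deviceJob")
                .start(sendCommandStep)   // further steps can be chained with .next(...)
                .build();
    }

    @Bean
    public Step sendCommandStep(StepBuilderFactory steps) {
        return steps.get("sendCommandStep")
                .<DeviceCommand, DeviceResult>chunk(1)                        // one command per transaction
                .reader(new ListItemReader<>(List.of(new DeviceCommand())))   // stub: commands from DB/event
                .processor(command -> {
                    // Stub: send the command to the device and wait for its confirmation.
                    DeviceResult result = new DeviceResult();
                    result.deviceId = command.deviceId;
                    result.confirmed = true;
                    return result;
                })
                .writer(results -> { /* stub: persist or report the confirmations */ })
                .faultTolerant()
                .retry(DeviceTimeoutException.class)   // retry support comes out of the box
                .retryLimit(3)
                .build();
    }
}
```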

How to design a Spring Batch job to avoid a long queue of requests and restart a failed job

I am writing a project that will be generating reports. It would read all requests from a database by making a REST call; based on the type of request, it will make a REST call to an endpoint; after getting the response, it will save the response in an object and save it back to the database by making a call to an endpoint.
I am using Spring Batch to handle the batch work. So far what I came up with is a single job (reader, processor, writer) that will do the whole thing. I am not sure if this is the correct design, considering:
I do not want to queue up requests if some request is taking a long time to get a response back. [not sure yet]
I do not want to hold up saving responses until all the responses are received. [using commit-interval will help]
If the job crashes for some reason, how can I restart the job? [maybe using batch-admin will help, but what are my other options]
With chunk-oriented processing, the Reader, Processor and Writer are executed in order until the Reader has nothing left to return.
If you can read one item at a time, process it and send it back to the endpoint that handles the persistence, this approach is handy.
If you must read ALL the information at once, the reader will get one big collection with all the items and pass it to the processor. The processor will process all the items and send the result to the writer. You cannot send just a few items to the writer, so you would have to do the persistence directly from the processor, and that would go against the design.
So, as I understand this, you have two options:
Design a reader that can read one item at a time. Use the chunk-oriented processing that you already started to read one item, process it and send it back for persistence. Have a look at how other readers are implemented (like JdbcCursorItemReader).
Create a tasklet that reads the whole collection of items, processes it and sends the items back. You can break this into different tasklets.
commit-interval only controls after how many items the transaction is committed, so it will not help you, as all the processing and persistence is done by calling REST services.
I have figured out a design and I think it will work fine.
As for the questions that I asked, the following are the answers:
Using asynchronous processors will help avoid queuing:
http://docs.spring.io/spring-batch/trunk/reference/html/springBatchIntegration.html#asynchronous-processors
Using commit-interval will solve it.
This thread has the answer: Spring batch: Restart a job and then start next job automatically.
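For reference, here is a minimal sketch of the asynchronous-processor wiring described in the linked section (it needs the spring-batch-integration module); Request and Report are hypothetical placeholder types, and restCallProcessor/restSaveWriter stand in for the delegates that actually call the REST services:

```java
import java.util.concurrent.Future;

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.integration.async.AsyncItemProcessor;
import org.springframework.batch.integration.async.AsyncItemWriter;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AsyncReportStepConfig {

    // Placeholder domain types for illustration only.
    public static class Request { }
    public static class Report  { }

    @Bean
    public AsyncItemProcessor<Request, Report> asyncProcessor(ItemProcessor<Request, Report> restCallProcessor) {
        // Each item is processed on its own thread, so one slow REST response
        // does not queue up the items behind it.
        AsyncItemProcessor<Request, Report> processor = new AsyncItemProcessor<>();
        processor.setDelegate(restCallProcessor);
        processor.setTaskExecutor(new SimpleAsyncTaskExecutor("report-"));
        return processor;
    }

    @Bean
    public AsyncItemWriter<Report> asyncWriter(ItemWriter<Report> restSaveWriter) {
        // Unwraps the Futures produced by the AsyncItemProcessor before writing.
        AsyncItemWriter<Report> writer = new AsyncItemWriter<>();
        writer.setDelegate(restSaveWriter);
        return writer;
    }

    @Bean
    public Step reportStep(StepBuilderFactory steps,
                           ItemReader<Request> requestReader,
                           AsyncItemProcessor<Request, Report> asyncProcessor,
                           AsyncItemWriter<Report> asyncWriter) {
        return steps.get("reportStep")
                .<Request, Future<Report>>chunk(10)   // commit-interval of 10
                .reader(requestReader)
                .processor(asyncProcessor)
                .writer(asyncWriter)
                .build();
    }
}
```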