Say I have 5 jobs that want to access a single method that reads this big file and puts it into an RDD. Instead of reading this file multiple times (because all 5 jobs will call the same method), there's this "mother" class that checks whether a job has already called the method.
Assuming that these 5 jobs are executed in sequence, you can read the file and cache it with <RDD>.cache(...) in the first job itself; all the remaining jobs can then check whether the file already exists in the cache and just use it, otherwise read it again.
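A rough sketch of that pattern in PySpark, assuming all five jobs run as actions inside the same Spark application (the function name load_big_file, the module-level holder, and the input path are made up for illustration):

from pyspark import SparkContext

sc = SparkContext(appName="shared-input-example")

_big_rdd = None  # module-level holder shared by the jobs in this driver


def load_big_file(path):
    """Read the file once, cache it, and hand the same RDD to every caller."""
    global _big_rdd
    if _big_rdd is None:
        _big_rdd = sc.textFile(path).cache()  # kept in memory after the first action touches it
    return _big_rdd


# The five "jobs" (actions) reuse the cached RDD instead of re-reading the file.
rdd = load_big_file("hdfs:///data/big_input.txt")  # hypothetical path
print(rdd.count())                                        # first action reads the file and fills the cache
print(rdd.filter(lambda line: "ERROR" in line).count())   # served from the cache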
For more info, refer to the RDD API.
In Azure Data Factory I am using a Lookup activity to get a list of files to download, then passing it to a ForEach where a data flow processes each file.
I do not have 'Sequential' mode turned on, so I would assume the data flows should run in parallel. However, their runtimes are not the same; there is an almost constant gap between them (the first data flow ran 4 minutes, the second 6, the third 8, and so on). It seems as if the second data flow waits for the first one to finish and then uses its cluster to process the file.
Is that intended behavior? I have a TTL set on the cluster, but that did not help much. If it is intended, what is a workaround? I am currently working on creating a list of files first and using that instead of a ForEach, but I am not sure whether I will see an increase in efficiency.
I have not been able to solve the issue of the parallel data flows not executing in parallel; however, I have managed to change the solution in a way that increases performance.
What was there before: a Lookup activity that would get a list of files to process, passed on to a ForEach loop with a data flow activity.
What I am testing now: a data flow activity that gets the list of files and saves it to a text file in ADLS, then another data flow activity (the one previously inside the ForEach loop) with its source changed to use "List of files" and point to that list.
The result was an increase in efficiency (using the same cluster, 40 files took around 80 minutes with ForEach and only 2-3 minutes with the list of files); however, debugging is not easy now that everything is in one data flow.
You can overwrite the list-of-files file on each run, or use dynamic expressions and name the file after the pipeline ID or something similar.
In my locustfile I defined test_on_start and test_on_stop events to read a file needed for the test and to write detailed statistics to a CSV at the end of the test. When running in distributed mode, these events occur on the master, not the workers. I am assembling a list of detailed stats for each task in a task sequence and, when the test stops, writing them to a CSV file. I found a Stack Overflow question which references a setup and teardown. I added these to my class User(HttpUser): but they appear not to be executed.
How can I mimic these events when the test is running on a worker in distributed mode?
Is there a better way?
I am already using the User on_start and on_stop methods. My on_start calls a function to select a random user from a list that was created when the @events.test_start.add_listener handler fired, which only happens on the master and not on the workers, so the workers don't have any user login data.
It seems counterproductive to open the file, read it, select a user at random, and close it every time the User on_start method is called. on_start also sets up the iteration list [], which is where I store the times per task.
When the task sequence is done, meaning the last task has executed, I do a self.interrupt(), which runs on_stop; that is where I take the iteration times and put them into a second list, which is later written out using the csv module. Maybe it would be better to just write the data to the CSV during on_stop.
The setup/teardown for individual Users have been removed (because they were confusing: they ran on the first instance of that User class, and people who set properties on that instance were very confused that later instances didn't get them). Tbh, I wish they had just been replaced by class methods...
The User still has on_start/on_stop methods though, and if you combine them with a flag it may be able to do what you want. Something like this:
class MyUser(HttpUser):
    stopped = False
    ...

    def on_stop(self):
        if not MyUser.stopped:
            MyUser.stopped = True
            # write your csv
            # this doesn't guarantee that all your Users are finished though.
https://docs.locust.io/en/stable/writing-a-locustfile.html#on-start-and-on-stop-methods
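The same class-level flag idea can cover the on_start half of the question too, i.e. loading the user file once per worker process instead of on every User start. A rough sketch only (the users.csv file name, the MyUser.users cache, and the placeholder task are assumptions, not anything Locust prescribes):

import csv
import random

from locust import HttpUser, task


class MyUser(HttpUser):
    host = "http://localhost"  # hypothetical target
    users = None               # class-level cache, filled once per worker process
    stopped = False

    def on_start(self):
        # Read the credentials file only the first time a User starts on this worker.
        if MyUser.users is None:
            with open("users.csv") as f:  # hypothetical input file
                MyUser.users = list(csv.DictReader(f))
        self.login_user = random.choice(MyUser.users)

    def on_stop(self):
        if not MyUser.stopped:
            MyUser.stopped = True
            # write your csv here

    @task
    def browse(self):
        self.client.get("/")  # placeholder task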
I have a requirement where one job prepares a file, and another job, which runs once a day, sends the file to an external system and then deletes or moves it from that location. When this second job tries to delete or move the file, it can't access it.
I tried setting writable to true when the file is created, running the jobs at separate times (one job at a time), and adding "delete" as a step to the same job. Nothing worked.
I am using file.delete(). Also tried Files.deleteIfExists().
I suspect the first job is not assigning the proper permissions, but I don't know a way to set permissions in Spring Batch.
Are these jobs run by the same user? i.e. Same user and permissions?
Also, what is the actual error message? Does it say permission denied? If so, it is likely an OS restriction, not a Spring Batch/Java limitation.
An easier solution would be to add a step to the first job that sends the files as part of that job, and drop the job that only transfers the files.
Answering my own question 😀. Hope it helps someone.
The issue was that the last ItemWriter was holding the resources because I was using a composite writer. When using a composite writer, the beforeStep and afterStep methods are "hidden"; you have to call them explicitly. I took the approach of writing a custom writer which explicitly calls writer.close().
Adding an afterStep method and calling super.close() should also work, though I have not tried that out.
My requirement is:
Parallel Job1 -- I extract data from a table and get a row count.
Parallel Job2 should be triggered in the sequencer only when the row count from the source query in Job1 is greater than 0.
I want to achieve this without creating any intermediate file in Job1.
So basically what you want to do is take information from a data stream (of your Job1) and use it in the "above" sequence as a parameter.
In your case you want to decide at the sequence level whether or not to run the subsequent jobs (i.e. only if more than 0 rows are returned).
Two options for that:
Job1 writes the information to a file which is a value file of a parameter set. These files are stored in a fixed directory. The parameter from the value file can then be used in your sequence to decide on the further processing. Details for parameter sets can be found here.
You could use a server job for Job1 and set a user status (BASIC function DSSetUserStatus) in a transformer. This is also passed back to the sequence and can be referenced in subsequent stages of the sequence. See the documentation, but you will find plenty of other information on this topic on the internet as well.
There are more solutions to this problem, or let us call it a challenge. Another way would be a script called at the sequence level which queries the database and avoids Job1 altogether...
We have a Spring Batch job that reads a file (FlatFileItemReader), processes it, and writes data to a queue (JmsItemWriter).
We have another job that reads the queue (JmsItemReader) and writes a file (FlatFileItemWriter). It's an asynchronous process (in between the execution of the two jobs there is a manual process that must be performed).
The flat file content doesn't have a line identifier, and we use a multi-threaded approach when reading the file ("throttle-limit"). So the queued messages do not keep the order they had in the flat file.
The problem is that we need to generate an output file respecting the original order: line 33 of the incoming file should be line 33 of the outgoing file (it will have the contents of the original line, plus some data).
Does Spring Batch provide a "native" way to order the output, respecting the original read order? I say "native" because one solution we thought of is to create an additional step just to add a line number to the file and use it at the end, but I was wondering whether this reinvents the wheel...
We are using SB 3.0.3
TIA,
Bob
The use case you are describing requires maintaining order across multiple jobs, which is not supported. In theory (though not guaranteed), a single, single-threaded step would retain the order of the input file.
Since you are reading in a multithreaded manner, there really isn't a good way to guarantee the order of the items as they are being read. The best you could do is synchronize the read method and add an id to the items as they are being read. If the bottleneck you're attempting to address with multithreading is in the processor or writer, this may not be a bad option.
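To sketch that idea outside of Spring Batch (plain Python threads rather than an ItemReader, purely to illustrate the technique; every name below is made up): synchronize the read, stamp each item with a sequence number as it is read, and sort on that number before writing.

import threading


class NumberingReader:
    """Wraps an item source; hands out (sequence_id, item) pairs under a lock."""

    def __init__(self, source):
        self._source = iter(source)
        self._lock = threading.Lock()
        self._next_id = 0

    def read(self):
        with self._lock:  # synchronized read
            try:
                item = next(self._source)
            except StopIteration:
                return None  # end of input
            record = (self._next_id, item)
            self._next_id += 1
            return record


# Worker threads may process records in any order; the sequence id is what
# restores the original file order when the output is produced.
reader = NumberingReader(["line a", "line b", "line c", "line d"])
results = []
results_lock = threading.Lock()


def worker():
    while True:
        record = reader.read()
        if record is None:
            break
        seq, item = record
        with results_lock:
            results.append((seq, item.upper()))  # stand-in for the real processing


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for seq, item in sorted(results):  # write in the original order
    print(seq, item)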