I have a configuration class that has:
@Configuration
@ComponentScan(basePackages = { "xxxxxxxx", "xxxxxxxx" })
@EnableBatchProcessing
@Import(DataSourceConfig.class)
@EnableRetry
public class BatchConfiguration extends DefaultBatchConfigurer {
and another method in a separate class that has:
@Override
@Retryable(ValidationException.class)
public FlowExecutionStatus decide(JobExecution jobExec, StepExecution stepExec) {
    boolean passed = validationStatus.getValidationStatus();
    if (!passed) {
        LOG.info("******Batch job validation FAILED******");
        throw new ValidationException("Batch job validation FAILED");
    }
    return FlowExecutionStatus.COMPLETED;
}
At the very least it should retry and print this 3 times instead of once, since the validation does not pass. All the annotations are imported successfully; it's just not doing what I would expect:
******Batch job validation FAILED******
All I get is the stack trace for the single ValidationException:
Caused by: org.springframework.batch.item.validator.ValidationException: Batch job validation FAILED
at com.xxxxx.xxxxx.decision.ValidationFlowDecider.decide(ValidationFlowDecider.java:31)
The decide method is part of the JobExecutionDecider contract and will be driven by the flow you defined for your job. So if you want to "retry" some logic in your job flow, you should define it in the flow definition itself, not using an annotated method (with Spring Retry or any other library).
A typical usage of a decider is shown with a code example in the reference documentation here: Programmatic Flow Decisions.
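For reference, a minimal sketch in the spirit of that documentation (the bean and step names here are placeholders, not from the question) routes on the decider's FlowExecutionStatus inside the job definition itself; in this pattern the decider would return, say, FlowExecutionStatus.FAILED rather than throw an exception:
@Bean
public Job job(JobBuilderFactory jobBuilderFactory, JobExecutionDecider decider,
               Step validationStep, Step nextStep, Step failureHandlingStep) {
    return jobBuilderFactory.get("job")
            .start(validationStep)
            .next(decider)                        // the JobExecutionDecider runs here
            .on("FAILED").to(failureHandlingStep) // route on the status the decider returned
            .from(decider)
            .on("COMPLETED").to(nextStep)
            .end()
            .build();
}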
My goal is to create a pipeline that invokes a back-end (Cloud hosted) service a maximum number of times per second ... how can I achieve that?
Back story: Imagine a back-end service that is invoked with a single input and returns a single output. This service has quotas associated with it that permit a maximum number of requests per second (let's say 10 requests per second). Now imagine an unbounded source PCollection where I wish to transform the elements in the input by passing them through my back-end service. I can envisage a ParDo invoking the back-end service once for each element in the input PCollection. However, this doesn't perform any kind of flow control against the back-end.
I could imagine my DoFn logic testing the response from the back-end and retrying till it succeeds, but this doesn't feel right. If I have 100 workers, then I seem to be burning a lot of resources and putting a load on the back-end. What I think I want to do is throttle the calls to the back-end from the pipeline.
Good Day, kolban. In addition to Bruno Volpato's helpful RampupThrottlingFn example, I've seen a combination of the following. Please do not hesitate at all to let me know how I can update the example with more clarity.
PeriodicImpulse - emits an Instant at a fixed specified interval.
Fix the number of workers with the maxNumWorkers and numWorkers options (please see Dataflow Pipeline Options), if using the Dataflow runner.
Use the Beam Metrics API to monitor the actual resource request count over time and set alerts. When using Dataflow, the Beam Metrics API automatically connects to Cloud Monitoring as Custom metrics.
The following shows abbreviated code starting from the whole pipeline, followed by some details as needed to provide clarity. It assumes a target of 10 workers, using Dataflow with the arguments --maxNumWorkers=10 and --numWorkers=10, and a goal to limit the resource requests among all workers to 10 requests per second. This translates to 1 request per second per worker.
PeriodicImpulse limits the Request creation to 1 per second
public class MyPipeline {

    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(/* Usually with options */);
        PCollection<Response> responses = pipeline
                .apply("PeriodicImpulse",
                        PeriodicImpulse
                                .create()
                                .withInterval(Duration.standardSeconds(1L)))
                .apply("Build Requests", ParDo.of(new RequestFn()))
                .apply(ResourceTransform.create());
    }
}
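As an aside, the Pipeline.create(/* Usually with options */) placeholder above is where the worker settings mentioned earlier would be wired in. A minimal sketch of the first lines of main (my assumption, equivalent to passing --numWorkers=10 --maxNumWorkers=10 on the command line) when using the Dataflow runner:
// Requires org.apache.beam.runners.dataflow.options.DataflowPipelineOptions
// and org.apache.beam.sdk.options.PipelineOptionsFactory.
DataflowPipelineOptions options = PipelineOptionsFactory
        .fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);
options.setNumWorkers(10);     // initial worker count
options.setMaxNumWorkers(10);  // cap autoscaling so the 1 request/sec/worker budget holds
Pipeline pipeline = Pipeline.create(options);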
RequestFn DoFn emits Requests per Instant emitted from PeriodicImpulse
class RequestFn extends DoFn<Instant, Request> {

    @ProcessElement
    public void process(@Element Instant instant, OutputReceiver<Request> receiver) {
        receiver.output(Request.builder().build());
    }
}
ResourceTransform transforms Requests to Responses, incrementing a Counter
class ResourceTransform extends PTransform<PCollection<Request>, PCollection<Response>> {

    static ResourceTransform create() {
        return new ResourceTransform();
    }

    @Override
    public PCollection<Response> expand(PCollection<Request> input) {
        return input.apply("Consume Resource", ParDo.of(new ResourceFn()));
    }
}
class ResourceFn extends DoFn<Request, Response> {

    private Counter counter = Metrics.counter(ResourceFn.class, "some:resource");

    private transient ResourceClient client = null;

    @Setup
    public void setup() {
        client = new ResourceClient();
    }

    @ProcessElement
    public void process(@Element Request request, OutputReceiver<Response> receiver) {
        counter.inc(); // Increment the counter.
        // not showing error handling
        Response response = client.execute(request);
        receiver.output(response);
    }
}
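The "some:resource" Counter incremented above is what the Beam Metrics API point earlier refers to. As a rough sketch (my assumption, not part of the original example), the counter can also be queried back from the PipelineResult; on Dataflow the same metric additionally surfaces in Cloud Monitoring as a custom metric:
// Requires the org.apache.beam.sdk.metrics package (MetricsFilter, MetricNameFilter, ...).
PipelineResult result = pipeline.run();
MetricQueryResults metrics = result.metrics().queryMetrics(
        MetricsFilter.builder()
                .addNameFilter(MetricNameFilter.named(ResourceFn.class, "some:resource"))
                .build());
for (MetricResult<Long> counter : metrics.getCounters()) {
    System.out.println(counter.getName() + " attempted: " + counter.getAttempted());
}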
Request and Response classes
(Aside: consider creating a Schema for the request input and response output classes. The example below uses AutoValue and AutoValueSchema.)
@DefaultSchema(AutoValueSchema.class)
@AutoValue
abstract class Request {

    /* abstract Getters. */
    abstract String getId();

    static Builder builder() {
        return new AutoValue_Request.Builder();
    }

    @AutoValue.Builder
    abstract static class Builder {
        /* abstract Setters. */
        abstract Builder setId(String value);
        abstract Request build();
    }
}
@DefaultSchema(AutoValueSchema.class)
@AutoValue
abstract class Response {

    /* abstract Getters. */
    abstract String getId();

    @AutoValue.Builder
    abstract static class Builder {
        /* abstract Setters. */
        abstract Builder setId(String value);
        abstract Response build();
    }
}
I've stumbled upon a pretty twisted issue with Spring Batch recently.
Requirements are as follows:
I have two main steps:
The first one reads some data from an Oracle database, from one table, to write to another table.
The second one does some other database work, based on data handled in the first step.
From a design standpoint, the first step looks like this:
@Bean
public Step myFirstStep(JdbcCursorItemReader<Revision> readerRevisionNumber) {
    return stepBuilderFactory.get("my-first-step")
            .<Revision, Revision>chunk(1)
            .reader(readerRevisionNumber)
            .writer(compositeItemWriter())
            .listener(executionContextPromotionListener())
            .build();
}
Composite item writer:
@Bean
public CompositeItemWriter<Revision> compositeItemWriter() {
    CompositeItemWriter<Revision> writer = new CompositeItemWriter<>();
    writer.setDelegates(Arrays.asList(somewriter(), someOtherwriter(), aWriterThatIsSupposedToPassDataToAnotherStep()));
    return writer;
}
While the first two writers are not complex, my interest is focused on the third one:
aWriterThatIsSupposedToPassDataToAnotherStep()
As you might have guessed, this one is used to take some data processed earlier and promote it to my second step:
@Component
@StepScope
public class AWriterThatIsSupposedToPassDataToAnotherStep implements ItemWriter<SomeEntity> {

    private StepExecution stepExecution;

    public void write(List<? extends SomeEntity> items) {
        ExecutionContext stepContext = this.stepExecution.getExecutionContext();
        stepContext.put("revisionNumber", items.stream().findFirst().get().getSomeField());
        System.out.println("writing : " + items.stream().findFirst().get().getSomeField() + " to ExecutionContext");
    }

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }
}
Problem is: as long as this writer is part of the composite writer list (as declared above), the @BeforeStep of my last writer is never executed, which leaves me unable to transmit my information to the execution context.
When I replace the CompositeItemWriter with the single "AWriterThatIsSupposedToPassDataToAnotherStep" inside the step definition, it gets executed properly.
Does it have anything to do with some kind of declaration order?
Big thanks for any further help.
Found the solution (with some of my coworkers' help), sourced from: https://stackoverflow.com/a/39698653/1957764
You'll need to both declare the writer as part of the composite writer AND register it as a step listener to make it execute the @BeforeStep annotated method.
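Concretely, a sketch based on the step definition from the question (the promotingWriter parameter name is mine): keep the writer in the composite writer's delegate list and also pass it to the step builder's listener(...) so its @BeforeStep callback is detected:
@Bean
public Step myFirstStep(JdbcCursorItemReader<Revision> readerRevisionNumber,
                        AWriterThatIsSupposedToPassDataToAnotherStep promotingWriter) {
    return stepBuilderFactory.get("my-first-step")
            .<Revision, Revision>chunk(1)
            .reader(readerRevisionNumber)
            .writer(compositeItemWriter())   // still delegates to promotingWriter
            .listener(promotingWriter)       // registers its @BeforeStep callback
            .listener(executionContextPromotionListener())
            .build();
}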
I want to use the Spring Batch (v3.0.9) restart functionality so that when a JobInstance is restarted, the processing step reads from the last failed chunk onward. My restart works fine as long as I don't use the @StepScope annotation on my myBatisPagingItemReader bean method.
I was using @StepScope so that I can do late binding to get the job parameters in my myBatisPagingItemReader bean method: @Value("#{jobParameters['run-date']}")
If I use the @StepScope annotation on the myBatisPagingItemReader() bean method, the restart does not work, as it creates a new instance (scope=step, name=scopedTarget.myBatisPagingItemReader).
If I use @StepScope, is it possible for my myBatisPagingItemReader to set the read.count from the last failure to get the restart working?
I have explained this issue with an example below.
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    @Bean
    public Step step1(StepBuilderFactory stepBuilderFactory,
                      ItemReader<Model> myBatisPagingItemReader,
                      ItemProcessor<Model, Model> itemProcessor,
                      ItemWriter<Model> itemWriter) {
        return stepBuilderFactory.get("data-load")
                .<Model, Model>chunk(10)
                .reader(myBatisPagingItemReader)
                .processor(itemProcessor)
                .writer(itemWriter)
                .listener(itemReadListener())
                .listener(new JobParameterExecutionContextCopyListener())
                .build();
    }

    @Bean
    public Job job(JobBuilderFactory jobBuilderFactory, @Qualifier("step1") Step step1) {
        return jobBuilderFactory.get("load-job")
                .incrementer(new RunIdIncrementer())
                .start(step1)
                .listener(jobExecutionListener())
                .build();
    }

    @Bean
    @StepScope
    public ItemReader<Model> myBatisPagingItemReader(
            SqlSessionFactory sqlSessionFactory,
            @Value("#{jobParameters['run-date']}") String runDate) {
        MyBatisPagingItemReader<Model> reader = new MyBatisPagingItemReader<>();
        Map<String, Object> parameterValues = new HashMap<>();
        parameterValues.put("runDate", runDate);
        reader.setSqlSessionFactory(sqlSessionFactory);
        reader.setParameterValues(parameterValues);
        reader.setQueryId("query");
        return reader;
    }
}
Restart example when I use the @StepScope annotation on myBatisPagingItemReader(): the reader fetches 5 records and I have the chunk size (commit-interval) set to 3.
Job Instance - 01 - Job Parameter - 01/02/2019.
chunk-1:
- process record-1
- process record-2
- process record-3
- writer - writes all 3 records
- chunk-1 commit successful
chunk-2:
- process record-4
- process record-5 - throws an exception
Job completes and is set to 'FAILED' status
Now the job is restarted again using the same job parameter.
Job Instance - 01 - Job Parameter - 01/02/2019.
chunk-1:
- process record-1
- process record-2
- process record-3
- writer - writes all 3 records
- chunk-1 commit successful
chunk-2:
- process record-4
- process record-5 - throws an exception
Job completes and is set to 'FAILED' status
The @StepScope annotation on the myBatisPagingItemReader() bean method creates a new instance; see the log messages below.
Creating object in scope=step, name=scopedTarget.myBatisPagingItemReader
Registered destruction callback in scope=step, name=scopedTarget.myBatisPagingItemReader
As it is a new instance, it starts the process from the beginning instead of starting from chunk-2.
If I don't use @StepScope, it restarts from chunk-2, as the restarted job step sets MyBatisPagingItemReader.read.count=3.
The issue here is that you are returning an ItemReader instead of the fully qualified class (MyBatisPagingItemReader) or at least ItemStreamReader. When you use Spring Batch's step scope, we create a proxy to allow for late initialization. The proxy is based on the return type of the method (ItemReader in your case). The issue you are running into is that because the proxy is of ItemReader, Spring Batch does not know that your bean also implements ItemStream and it is that interface that enables restartability. By default, Spring Batch will automatically register all beans of type ItemStream for you (you can also explicitly register the beans yourself, but it's typically not needed).
To address your issue, the following should work (note the change in the return type):
@Bean
@StepScope
public MyBatisPagingItemReader<Model> myBatisPagingItemReader(
        SqlSessionFactory sqlSessionFactory,
        @Value("#{jobParameters['run-date']}") String runDate) {
    MyBatisPagingItemReader<Model> reader = new MyBatisPagingItemReader<>();
    Map<String, Object> parameterValues = new HashMap<>();
    parameterValues.put("runDate", runDate);
    reader.setSqlSessionFactory(sqlSessionFactory);
    reader.setParameterValues(parameterValues);
    reader.setQueryId("query");
    return reader;
}
This is why my recommendation is that, where possible, when using @Bean annotated methods, you should return the most concrete type possible to allow Spring to help you as much as possible.
I'm trying to execute the following SQL statement every time the database session gets refreshed. I have a Spring Boot 2.0.1.RELEASE application with JPA and a PostgreSQL database.
select set_config('SOME KEY', 'SOME VALUE', false);
As the PostgreSQL documentation states, the is_local parameter indicates that the configuration value will apply just for the current transaction (if true) or will be attached to the session, as I require (if false).
The problem is that I'm not aware of when Hibernate/Hikari refresh the DB session, so in practice the application starts failing after it has been running for a couple of minutes, as you can imagine...
My approach (which is not working yet) is to implement an EmptyInterceptor. For that, I have added a DatabaseCustomizer class to set my hibernate.session_factory.interceptor property in a way that lets Spring fill in all my @Autowired fields.
DatabaseInterceptor.class
@Component
public class DatabaseInterceptor extends EmptyInterceptor {

    @Autowired
    private ApplicationContext context;

    @Override
    public void afterTransactionBegin(Transaction tx) {
        PersistenceService pc = context.getBean(PersistenceService.class);
        try {
            pc.addPostgresConfig("SOME KEY", "SOME VALUE");
            System.out.println("Config added...");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
DatabaseCustomizer.class
@Component
public class DatabaseCustomizer implements HibernatePropertiesCustomizer {

    @Autowired
    private DatabaseInterceptor databaseInterceptor;

    @Override
    public void customize(Map<String, Object> hibernateProperties) {
        hibernateProperties.put("hibernate.session_factory.interceptor", databaseInterceptor);
    }
}
Obviously, there is a problem with this approach, because when I @Override the afterTransactionBegin method to start another transaction, I get an infinite loop.
I tried to look for something inside that Transaction tx that could help me be sure that the transaction is not being generated by my own addPostgresConfig, but there is not much in it.
Is there something else I could try to achieve this?
Thanks in advance,
I'm testing an upgrade of my Spring Cloud DataFlow services from Spring Cloud Dalston.SR4/Spring Boot 1.5.9 to Spring Cloud Edgware/Spring Boot 1.5.9. Some of my services extend source (or sink) components from the app starters. I've found this does not work with Spring Cloud Edgware.
For example, I have overridden org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration and bound my app to my overridden version. This has previously worked with Spring Cloud versions going back almost a year.
With Edgware, I get the following (whether the app is run standalone or within dataflow):
***************************
APPLICATION FAILED TO START
***************************
Description:
Field channels in org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration required a bean of type 'org.springframework.cloud.stream.messaging.Source' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.cloud.stream.messaging.Source' in your configuration.
I get the same behaviour with the 1.3.0.RELEASE and 1.2.0.RELEASE of spring-cloud-starter-stream-rabbit.
I override RabbitSourceConfiguration so I can set a header mapper on the AmqpInboundChannelAdapter, and also to perform a connectivity test prior to starting up the container.
My subclass is bound to the Spring Boot application with @EnableBinding(HeaderMapperRabbitSourceConfiguration.class). A cut-down version of my subclass is:
public class HeaderMapperRabbitSourceConfiguration extends RabbitSourceConfiguration {

    public HeaderMapperRabbitSourceConfiguration(final MyHealthCheck healthCheck,
                                                 final MyAppConfig config) {
        // ...
    }

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        final AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new NotificationHeaderMapper(config));
        return adapter;
    }

    @Bean
    @Override
    public SimpleMessageListenerContainer container() {
        if (config.performConnectivityCheckOnStartup()) {
            if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Attempting connectivity with ...");
            }
            final Health health = healthCheck.health();
            if (health.getStatus() == Status.DOWN) {
                LOGGER.error("Unable to connect .....");
                throw new UnableToLoginException("Unable to connect ...");
            } else if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Connectivity established with ...");
            }
        }
        return super.container();
    }
}
You really should never do stuff like healthCheck.health(); within a @Bean definition. The application context is not yet fully baked or started; it may, or may not, work depending on the order in which beans are created.
If you want to prevent the app from starting, add a bean that implements SmartLifecycle and put the bean in a late phase (high value) so it's started after everything else. Then put your code in start(). autoStartup must be true.
In this case, it's being run before the stream infrastructure has created the channel.
Some ordering might have changed from the earlier release but, in any case, performing activity like this in a @Bean definition is dangerous.
You just happened to be lucky before.
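As an illustration only (my sketch, not code from the answer; MyHealthCheck, Health/Status and UnableToLoginException are the types already used in the question), the connectivity check could be moved into a late-phase SmartLifecycle bean along these lines:
@Component
public class ConnectivityCheckLifecycle implements SmartLifecycle {

    private final MyHealthCheck healthCheck;
    private volatile boolean running;

    public ConnectivityCheckLifecycle(MyHealthCheck healthCheck) {
        this.healthCheck = healthCheck;
    }

    @Override
    public void start() {
        // Runs after the context (including the binding infrastructure) has started.
        Health health = healthCheck.health();
        if (health.getStatus() == Status.DOWN) {
            throw new UnableToLoginException("Unable to connect ...");
        }
        this.running = true;
    }

    @Override
    public boolean isAutoStartup() {
        return true; // must be true so start() is called automatically
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE; // late phase: started after everything else
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }
}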
EDIT
I just noticed your @EnableBinding is wrong; it should be Source.class. I can't see how that would ever have worked - that's what creates the bean for the channels field of type Source.
This works fine for me after updating stream and the binder to 1.3.0.RELEASE...
@Configuration
public class MySource extends RabbitSourceConfiguration {

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new MyMapper());
        return adapter;
    }
}
and
@SpringBootApplication
@EnableBinding(Source.class)
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
If that doesn't work, please edit the question to show your POM.