I have an Apache Beam pipeline that processes unbounded data and writes the results to MySQL. As part of this, I need to look up the username for a given user identifier, so I pass a map of user id to username into the pipeline as a side input.
Since we keep adding users, the side input needs to be updated periodically. I've gone through the side input patterns "Slowly updating global window side inputs" and "Slowly updating side input using windowing".
I lean towards the first because new users are not added all that frequently.
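For reference, the lookup in the main part of the pipeline looks roughly like this (a minimal sketch; the events collection and the KV shapes are illustrative, and userMap is the side input view I build further below):
PCollection<KV<String, String>> enriched =
    events.apply(
        "lookup-username",
        ParDo.of(
                new DoFn<KV<String, String>, KV<String, String>>() {
                  @ProcessElement
                  public void process(ProcessContext c) {
                    // Side input: map of user id -> username
                    Map<String, String> users = c.sideInput(userMap);
                    String userId = c.element().getKey();
                    String username = users.getOrDefault(userId, "unknown");
                    c.output(KV.of(username, c.element().getValue()));
                  }
                })
            .withSideInputs(userMap));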
Reading users from the database using JdbcIO:
final PCollection<KV<String, String>> userCollection =
    pipeline.apply("read-users-info", jdbcMgr.readUserInfo(userDsFn));
Reading data from MySQL:
public PTransform<PBegin, PCollection<KV<String, String>>> readUserInfo(
    SerializableFunction<Void, DataSource> dataSourceProviderFn) {
  LOG.info("reading users");
  return JdbcIO.<KV<String, String>>read()
      .withDataSourceProviderFn(dataSourceProviderFn)
      .withQuery("select id, concat(first_name, ' ', last_name) from users")
      .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of()))
      .withRowMapper(
          (JdbcIO.RowMapper<KV<String, String>>) rs -> KV.of(rs.getString(1), rs.getString(2)));
}
Updating the side input using the global window:
final PCollectionView<Map<String, String>> userMap =
    pipeline
        .apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(30)))
        .apply(Window.into(FixedWindows.of(Duration.standardSeconds(30))))
        .apply(Sum.longsGlobally().withoutDefaults())
        .apply(
            ParDo.of(
                new DoFn<Long, Map<String, String>>() {
                  @ProcessElement
                  public void process(
                      @Element Long input,
                      @Timestamp Instant timestamp,
                      OutputReceiver<PCollection<KV<String, String>>> o) {
                    o.output(userCollection);
                  }
                }))
        .apply(
            Window.<Map<String, String>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
                .discardingFiredPanes())
        .apply(View.asSingleton());
I'm sure there is an issue with o.output(userCollection); can you please help me out here?
I'm running into the following error:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: com.streaming.pipelines.ContactPipeline$1, @ProcessElement process(Long, Instant, OutputReceiver), @ProcessElement process(Long, Instant, OutputReceiver), parameter of type DoFn.OutputReceiver<PCollection<KV<String, String>>> at index 2: OutputReceiver should be parameterized by java.util.Map<java.lang.String, java.lang.String>
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.lang.IllegalArgumentException: com.streaming.pipelines.ContactPipeline$1, @ProcessElement process(Long, Instant, OutputReceiver), @ProcessElement process(Long, Instant, OutputReceiver), parameter of type DoFn.OutputReceiver<PCollection<KV<String, String>>> at index 2: OutputReceiver should be parameterized by java.util.Map<java.lang.String, java.lang.String>
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures$ErrorReporter.throwIllegalArgument(DoFnSignatures.java:2397)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures$ErrorReporter.checkArgument(DoFnSignatures.java:2403)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.analyzeExtraParameter(DoFnSignatures.java:1406)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.analyzeProcessElementMethod(DoFnSignatures.java:1230)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.parseSignature(DoFnSignatures.java:638)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.lambda$getSignature$0(DoFnSignatures.java:294)
at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.getSignature(DoFnSignatures.java:294)
at org.apache.beam.sdk.transforms.ParDo.validate(ParDo.java:614)
at org.apache.beam.sdk.transforms.ParDo.of(ParDo.java:403)
at com.streaming.pipelines.ContactPipeline.buildPipeline(ContactPipeline.java:61)
at com.streaming.pipelines.StreamingApp.main(StreamingApp.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
... 8 more
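From the message, my understanding is that the OutputReceiver has to be parameterized with the element type the DoFn actually emits (a Map<String, String>), not with a PCollection, i.e. something like the signature sketched below; but I don't see how to get the contents of userCollection into that map from inside the DoFn:
// What I believe the error is asking for (signature only; the body is the part
// I'm stuck on, since a PCollection can't be emitted as an element):
new DoFn<Long, Map<String, String>>() {
  @ProcessElement
  public void process(
      @Element Long input,
      @Timestamp Instant timestamp,
      OutputReceiver<Map<String, String>> o) {
    // build a Map<String, String> of user id -> username here and output it
  }
};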
Thanks,
Suresh
Related
I'm developing an Apache Beam pipeline to publish unbounded data to a Pub/Sub topic. Publishing is done using the Pub/Sub IO connector PubsubIO.writeMessages().
If the Pub/Sub connection fails while the pipeline is processing, I need to capture the connection failure and identify the data that was being processed at the time. However, I couldn't find a straightforward failure-handling mechanism in the Apache Beam Pub/Sub write.
When I test this with a bad Pub/Sub connection, the pipeline keeps trying to connect, throwing the following exception for a while, and if the connection remains unsuccessful the pipeline execution fails.
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:535)
... 10 more
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /127.0.0.1:58843
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I tried to catch this exception around the Pub/Sub write transform, but that is not working either.
So my question is: is there any way to capture the above exception and keep the pipeline running until the connection is successful? My Pub/Sub write code snippet is as follows:
public class PubSubWrite extends PTransform<PCollection<String>, PDone> {
private final String outputTopic;
public PubSubWrite(String outputTopic) {
this.outputTopic = outputTopic;
}
@Override
public PDone expand(PCollection<String> input) {
return input
.apply(
"convertMessagesToPubsubMessages",
MapElements.into(TypeDescriptor.of(PubsubMessage.class))
.via(
(String json) ->
new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
.apply(
"writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
}
}
There is no native API for error handling in transforms for PubsubIO, as you can see in the documentation.
I recommend opening a feature request on the issue tracker asking for an error-handling implementation in the Java library's PubsubIO connector.
Meanwhile, you could return an empty error collection, or implement the exception handling yourself.
Example for the empty error collection:
@Override
public WithFailures.Result<PDone, PubsubMessage> expand(PCollection<String> input) {
  PDone done = input
      .apply(
          "convertMessagesToPubsubMessages",
          MapElements.into(TypeDescriptor.of(PubsubMessage.class))
              .via(
                  (String json) ->
                      new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
      .apply(
          "writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
  return WithFailures.Result.of(done, EmptyErrors.in(input.getPipeline()));
}

private static class EmptyErrors extends PTransform<PBegin, PCollection<PubsubMessage>> {

  /** Creates an empty error collection in the given pipeline. */
  public static PCollection<PubsubMessage> in(Pipeline pipeline) {
    return pipeline.apply(new EmptyErrors());
  }

  @Override
  public PCollection<PubsubMessage> expand(PBegin input) {
    return input.apply(Create.empty(PubsubMessageWithAttributesCoder.of()));
  }
}
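Alternatively, if you also want to capture failures from the element-mapping step yourself (the Pub/Sub write step still exposes no failure output), MapElements supports exception handling via exceptionsInto/exceptionsVia. A minimal sketch, assuming a failure is represented as the original JSON string paired with the error message:
// Sketch: divert elements that fail JSON-to-PubsubMessage conversion into a
// failures collection instead of failing the bundle.
WithFailures.Result<PCollection<PubsubMessage>, KV<String, String>> converted =
    input.apply(
        "convertMessagesToPubsubMessages",
        MapElements.into(TypeDescriptor.of(PubsubMessage.class))
            .via(
                (String json) ->
                    new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST")))
            .exceptionsInto(
                TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.strings()))
            .exceptionsVia(ee -> KV.of(ee.element(), ee.exception().getMessage())));

converted.output().apply("writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
PCollection<KV<String, String>> conversionFailures = converted.failures();
The failures() collection can then be logged or written to a dead-letter destination.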
Usually such failures are retried by the runner. For example, the Dataflow runner will retry failures indefinitely for streaming jobs. Note that this is in addition to any local (VM-level) retries for errors that produce retriable HTTP error codes (for example 5xx). So the pipeline should continue once you fix the underlying issue. But note that your backlog might increase significantly if the pipeline is unable to process data for some time, so you might see a delay.
Long story short: I am in the middle of implementing a processor topology. The processor stores received records in the corresponding local state stores and does event-based processing as each record arrives. The related code looks like this:
@Override
public void configureBuilder(StreamsBuilder builder) {
final Map<String, String> serdeConfig =
Collections.singletonMap("schema.registry.url", processorConfig.getSchemaRegistryUrl());
final Serde<GenericRecord> valueSerde = new GenericAvroSerde();
valueSerde.configure(serdeConfig, false); // `false` because this is a value serde
final Serde<EventKey> keySerde = new SpecificAvroSerde<>();
keySerde.configure(serdeConfig, true); // `true` for record keys
Map<String, String> stateStoreConfigMap = new HashMap<>();
//stateStoreConfigMap.put(KafkaAvroSerializerConfig.VALUE_SUBJECT_NAME_STRATEGY, RecordNameStrategy.class.getName());
StoreBuilder<KeyValueStore<EventKey, GenericRecord>> aggSequenceStateStoreBuilder =
Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(processStateStore), keySerde, valueSerde)
.withLoggingEnabled(stateStoreConfigMap)
.withCachingEnabled();
final Serde<EnrichedSmcHeatData> enrichedSmcHeatDataSerde = new SpecificAvroSerde<>();
enrichedSmcHeatDataSerde.configure(serdeConfig, false); // `false` because this is a value serde
StoreBuilder<KeyValueStore<EventKey, EnrichedSmcHeatData>> enrichedSmcHeatStateStoreBuilder =
Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("enriched-smc-heat-state-store"), keySerde, enrichedSmcHeatDataSerde)
.withLoggingEnabled(stateStoreConfigMap)
.withCachingEnabled();
Topology topology = builder.build();
topology
.addSource(
PROCESS_EVENTS_SOURCE,
keySerde.deserializer(),
valueSerde.deserializer(),
processorConfig.getInputCcmProcessEvents())
.addSource(
SCHEDULED_SEQUENCES_SOURCE,
keySerde.deserializer(),
valueSerde.deserializer(),
processorConfig.getScheduledCastSequences())
.addSource(
SMC_HEAT_EVENTS_SOURCE,
keySerde.deserializer(),
valueSerde.deserializer(),
processorConfig.getInputSmcHeatEvents())
.addProcessor(
PROCESS_STATE_AGGREGATOR,
() -> new ProcessStateProcessor(processStateStore, processorConfig),
PROCESS_EVENTS_SOURCE,
SCHEDULED_SEQUENCES_SOURCE,
SMC_HEAT_EVENTS_SOURCE)
.addStateStore(aggSequenceStateStoreBuilder, PROCESS_STATE_AGGREGATOR)
.addStateStore(enrichedSmcHeatStateStoreBuilder, PROCESS_STATE_AGGREGATOR);
If there are updates for the store created by aggSequenceStateStoreBuilder, the record values are saved to the store without problems. However, if updates come in for the second store, the following error is thrown:
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema being registered is incompatible with an earlier schema for subject "ccm-process-events-processor-ccm-process-state-store-changelog-value"; error code: 409
My use case: the state processor accepts inbound records from multiple source topics and does event handling (including storing the modified values in the corresponding stores) when a record arrives from any of the source topics.
It appears that there can only be one schema registered with the schema registry for the same processor. Is that by design, am I missing something, or what alternative options do I have?
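The only direction I have found so far is the commented-out line in my configuration above, i.e. registering schemas per record name rather than per topic so the two value types don't collide on the changelog subject. A rough, unverified sketch of what that would look like on the serde config (assuming the Confluent RecordNameStrategy):
// Unverified sketch: configure the Avro value serdes with RecordNameStrategy so
// each record type registers under its own subject instead of the topic's "-value" subject.
Map<String, String> serdeConfig = new HashMap<>();
serdeConfig.put("schema.registry.url", processorConfig.getSchemaRegistryUrl());
serdeConfig.put(
    KafkaAvroSerializerConfig.VALUE_SUBJECT_NAME_STRATEGY,
    RecordNameStrategy.class.getName());

final Serde<GenericRecord> valueSerde = new GenericAvroSerde();
valueSerde.configure(serdeConfig, false);

final Serde<EnrichedSmcHeatData> enrichedSmcHeatDataSerde = new SpecificAvroSerde<>();
enrichedSmcHeatDataSerde.configure(serdeConfig, false);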
Thanks in advance!
The following ItemReader gets a list of thousands of accounts (acc).
The database that the ItemReader connects to in order to retrieve the data is Hive. I don't have permission to create any tables, only read access.
@Bean
@StepScope
public ItemReader<OmsDto> omsItemReader(@Value("#{stepExecutionContext[acc]}") List<String> accountList) {
String inParams = String.join(",", accountList.stream().map(id ->
"'"+id+"'").collect(Collectors.toList()));
String query = String.format("SELECT ..... account IN (%s)", inParams);
BeanPropertyRowMapper<OmsDto> rowMapper = new BeanPropertyRowMapper<>(OmsDto.class);
rowMapper.setPrimitivesDefaultedForNullValue(true);
JdbcCursorItemReader<OmsDto> reader = new JdbcCursorItemReader<OmsDto>();
reader.setVerifyCursorPosition(false);
reader.setDataSource(hiveDataSource());
reader.setRowMapper(rowMapper);
reader.setSql(query);
reader.open(new ExecutionContext());
return reader;
}
This is the error message that I get when using the ItemReader:
Caused by: org.springframework.batch.item.ItemStreamException: Failed to initialize the reader
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.open(AbstractItemCountingItemStreamItemReader.java:153) ~[spring-batch-infrastructure-4.2.4.RELEASE.jar:4.2.4.RELEASE]
Caused by: java.sql.SQLException: Error executing query
at com.facebook.presto.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:279) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:228) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoPreparedStatement.<init>(PrestoPreparedStatement.java:84) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoConnection.prepareStatement(PrestoConnection.java:130) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoConnection.prepareStatement(PrestoConnection.java:300) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at org.springframework.batch.item.database.JdbcCursorItemReader.openCursor(JdbcCursorItemReader.java:121) ~[spring-batch-infrastructure-4.2.4.RELEASE.jar:4.2.4.RELEASE]
... 63 common frames omitted
Caused by: java.lang.RuntimeException: Error fetching next at https://prestoanalytics-ch2-p.sys.comcast.net:6443/v1/statement/executing/20201118_131314_11079_v3w47/yf55745951e0beccc234c98f36005723457073854/0 returned an invalid response: JsonResponse{statusCode=502, statusMessage=Bad Gateway, headers={cache-control=[no-cache], content-length=[107], content-type=[text/html]}, hasValue=false} [Error: <html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
]
I was sure that the root cause was the driver, but I have tested the driver with the same SQL, this time using DriverManager, and it runs perfectly.
@Component
public class OmsItemReader implements ItemReader<OmsDto>, StepExecutionListener {
private ItemReader<OmsDto> delegate;
public OmsItemReader() {
Properties properties = new Properties();
properties.setProperty("user", "....");
properties.setProperty("password", "...");
properties.setProperty("SSL", "true");
Connection connection = null;
try {
connection = DriverManager.getConnection("jdbc:presto://.....", properties);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(
I am not sure what the difference is. Is it the driver or Spring Batch?
I am looking for a workaround. How can I retrieve thousands of accounts via IN clauses with Spring Batch?
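One workaround I have been considering (not verified, and it may not address the 502 itself) is splitting the account list into several smaller IN (...) batches instead of one huge clause, roughly like this:
// Hypothetical sketch: issue several smaller queries, each with a bounded IN list.
int batchSize = 500; // illustrative size
List<String> queries = new ArrayList<>();
for (int i = 0; i < accountList.size(); i += batchSize) {
    List<String> batch = accountList.subList(i, Math.min(i + batchSize, accountList.size()));
    String inParams = batch.stream().map(id -> "'" + id + "'").collect(Collectors.joining(","));
    queries.add(String.format("SELECT ..... account IN (%s)", inParams));
}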
Thank you
Background:
Diagram:
[State machine UML state diagram]
We have a normal state machine, as depicted in the diagram, that monitors a Spring Batch microservice (deployed in a streams source/processor/sink design) for each batch that is started.
We receive a sequence of REST calls that internally fire events per batch id on the respective batch's machine object, i.e. a new state machine object is created per batch id.
Each machine has n parallel regions (representing Spring Batch's chunks), as shown in the diagram.
The REST calls arrive in a multi-threaded environment, where two simultaneous calls for the same batchId may come in for different region ids of the BATCHPROCESSING state.
Until now we had a single node (single installation) of this state machine microservice running, but now we want to deploy it on multiple instances to receive REST calls.
For this, we want to introduce the distributed state machine. We have the configuration below in place for running the distributed state machine.
@Configuration
@EnableStateMachine
public class StateMachineUMLWayConfiguration extends StateMachineConfigurerAdapter<String, String> {
..
..
    @Override
    public void configure(StateMachineModelConfigurer<String, String> model) throws Exception {
        model
            .withModel()
            .factory(stateMachineModelFactory());
    }

    @Bean
    public StateMachineModelFactory<String, String> stateMachineModelFactory() {
        StorehubBatchUmlStateMachineModelFactory factory = null;
        try {
            factory = new StorehubBatchUmlStateMachineModelFactory(templateUMLInClasspath, stateMachineEnsemble());
        } catch (Exception e) {
            LOGGER.info("Config's State machine factory got exception :" + factory);
        }
        LOGGER.info("Config's State machine factory method Called:" + factory);
        factory.setStateMachineComponentResolver(stateMachineComponentResolver());
        return factory;
    }

    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
        config
            .withDistributed()
            .ensemble(stateMachineEnsemble());
    }

    @Bean
    public StateMachineEnsemble<String, String> stateMachineEnsemble() throws Exception {
        return new ZookeeperStateMachineEnsemble<String, String>(curatorClient(), "/batchfoo1", true, 512);
    }

    @Bean
    public CuratorFramework curatorClient() throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.builder()
            .defaultData(new byte[0])
            .retryPolicy(new ExponentialBackoffRetry(1000, 3))
            .connectString("localhost:2181")
            .build();
        client.start();
        return client;
    }
StorehubBatchUmlStateMachineModelFactory's build method:
@Override
public StateMachineModel<String, String> build(String batchChunkId) {
Model model = null;
try {
model = UmlUtils.getModel(getResourceUri(resolveResource(batchChunkId)).getPath());
} catch (IOException e) {
throw new IllegalArgumentException("Cannot build model from resource " + resource + " or location " + location, e);
}
UmlModelParser parser = new UmlModelParser(model, this);
DataHolder dataHolder = parser.parseModel();
ConfigurationData<String, String> configurationData = new ConfigurationData<String, String>( null, new SyncTaskExecutor(),
new ConcurrentTaskScheduler() , false, stateMachineEnsemble,
new ArrayList<StateMachineListener<String, String>>(), false,
null, null,
null, null, false,
null , batchChunkId, null,
null ) ;
return new DefaultStateMachineModel<String, String>(configurationData, dataHolder.getStatesData(), dataHolder.getTransitionsData());
}
Created a new custom service-interface-level method in place of DefaultStateMachineService.acquireStateMachine(machineId):
@Override
public StateMachine<String, String> acquireDistributedStateMachine(String machineId, boolean start) {
synchronized (distributedMachines) {
DistributedStateMachine<String,String> distributedStateMachine = distributedMachines.get(machineId);
StateMachine<String,String> distMachineDelegateX = null;
if (distributedStateMachine == null) {
StateMachine<String, String> machine = stateMachineFactory.getStateMachine(machineId);
distributedStateMachine = (DistributedStateMachine<String, String>) machine;
}
distributedMachines.put(machineId, distributedStateMachine);
return handleStart(distributedStateMachine, start);
}
}
Problem:
Now the problem is this: the microservice deployed on a single instance runs successfully, even when the events it receives come from a multi-threaded environment where one thread hits it with an event REST call belonging to region 1 while another thread simultaneously comes in for region 2 of the same batch. The machine moves ahead in sync, with the parallel regions processed successfully, until its last state, i.e. BATCHCOMPLETED.
We also checked on the ZooKeeper side that, in the end, the BATCHCOMPLETED state was recorded in the node's current version.
But when, besides the first instance, we deploy the same microservice app jar in another location to act as a second instance that also accepts event REST calls (say, listening on another Tomcat port, 9002), it fails somewhere in the middle, randomly. This failure happens randomly after any one of the events among the parallel regions is fired, when ensemble.setState() is called internally on the state change for that event.
It gives the following error:
o.s.s.support.AbstractStateMachine : Interceptors threw exception, skipping state change
org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.springframework.statemachine.zookeeper.ZookeeperStateMachineEnsemble.setState(ZookeeperStateMachineEnsemble.java:241) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.ensemble.DistributedStateMachine$LocalStateMachineInterceptor.preStateChange(DistributedStateMachine.java:209) ~[spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.StateMachineInterceptorList.preStateChange(StateMachineInterceptorList.java:101) ~[spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.callPreStateChangeInterceptors(AbstractStateMachine.java:859) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.switchToState(AbstractStateMachine.java:880) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.access$500(AbstractStateMachine.java:81) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine$3.transit(AbstractStateMachine.java:335) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:286) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:211) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.processTriggerQueue(DefaultStateMachineExecutor.java:449) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$200(DefaultStateMachineExecutor.java:65) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor$1.run(DefaultStateMachineExecutor.java:323) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) [spring-core-4.3.13.RELEASE.jar!/:4.3.13.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.scheduleEventQueueProcessing(DefaultStateMachineExecutor.java:352) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.execute(DefaultStateMachineExecutor.java:163) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.sendEventInternal(AbstractStateMachine.java:603) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.sendEvent(AbstractStateMachine.java:218) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.ensemble.DistributedStateMachine.sendEvent(DistributedStateMachine.java:108)
..skipping Lines....
Caused by: org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.springframework.statemachine.zookeeper.ZookeeperStateMachinePersist.write(ZookeeperStateMachinePersist.java:113) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.zookeeper.ZookeeperStateMachinePersist.write(ZookeeperStateMachinePersist.java:50) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.zookeeper.ZookeeperStateMachineEnsemble.setState(ZookeeperStateMachineEnsemble.java:235) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
... 73 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:1006) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:910) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.curator.framework.imps.CuratorTransactionImpl.doOperation(CuratorTransactionImpl.java:159)
Questions:
1. Does the configuration mentioned above need something more in order to avoid the exception above?
Both state machine microservice instances were tested in the case where they connect to the same ZooKeeper instance, i.e. the same .connectString("localhost:2181").build(), and in the case where they connect to different ZooKeeper instances (i.e. 'localhost:2181' and 'localhost:2182').
The same BadVersion exception occurs during the state machine ensemble's processing in both cases.
2. Also, if batches run in parallel, their respective machines need to be created and run in parallel on the state machine microservice end.
So here, technically, we need a new state machine per new batchId, running simultaneously.
But looking at ZookeeperStateMachineEnsemble, one znode path seems to be associated with one ensemble, since the ensemble object is instantiated once in the main config class (StateMachineUMLWayConfiguration).
So is it expected that only that singleton ensemble instance is used? Can't multiple ensembles be created at runtime, referencing different znode paths and running in parallel, to record their respective distributed state machines' states in their respective znode paths?
a. Batches running in parallel would need separate znode paths to be created. Thus, in our attempt to keep a separate znode path per batch, we need a separate ensemble to be instantiated per batch's machine. But that seems to run into a lock condition while getting a connection to the znode through the Curator client.
b. The REST call fired to trigger the event does not complete, as the machine it acquired is stuck waiting for the ensemble to connect.
Thanks in advance.
I have a SQL query that returns a DateTime as one of the objects. I am getting an error when it is being added to a JsonArray.
Stack Trace:
SEVERE: An exception occurred
java.lang.IllegalStateException: Illegal type in JsonObject: class org.joda.time.DateTime
at io.vertx.core.json.Json.checkAndCopy(Json.java:120)
at io.vertx.core.json.JsonArray.add(JsonArray.java:437)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$2.apply(AsyncSQLConnectionImpl.java:286)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$2.apply(AsyncSQLConnectionImpl.java:274)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at com.github.mauricio.async.db.general.ArrayRowData.foreach(ArrayRowData.scala:22)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.rowToJsonArray(AsyncSQLConnectionImpl.java:274)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.access$000(AsyncSQLConnectionImpl.java:46)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$1.apply(AsyncSQLConnectionImpl.java:265)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$1.apply(AsyncSQLConnectionImpl.java:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at com.github.mauricio.async.db.general.MutableResultSet.foreach(MutableResultSet.scala:27)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.rowDataSeqToJsonArray(AsyncSQLConnectionImpl.java:262)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.queryResultToResultSet(AsyncSQLConnectionImpl.java:250)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.lambda$null$10(AsyncSQLConnectionImpl.java:130)
at io.vertx.ext.asyncsql.impl.ScalaUtils$3.apply(ScalaUtils.java:81)
at io.vertx.ext.asyncsql.impl.ScalaUtils$3.apply(ScalaUtils.java:77)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.vertx.ext.asyncsql.impl.VertxEventLoopExecutionContext.lambda$execute$5(VertxEventLoopExecutionContext.java:70)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
I believe the error is occurring here:
private List<JsonArray> rowDataSeqToJsonArray(com.github.mauricio.async.db.ResultSet set) {
List<JsonArray> list = new ArrayList<>();
set.foreach(new AbstractFunction1<RowData, Void>() {
@Override
public Void apply(RowData row) {
list.add(rowToJsonArray(row));
return null;
}
});
return list;
}
My RowData looks like this:
Some(MutableResultSet(ArrayRowData(, , hphan, 2016-04-26T00:00:00.000-07:00, 1), ArrayRowData(, , hphan, 2016-04-28T00:00:00.000-07:00, 2), ArrayRowData(BXBSVA, BLUE CROSS BLUE SHIELD VIRGINIA, null, 2016-04-26T00:00:00.000-07:00, 1)))
Does anyone know how to fix this?
This should work with the upcoming release 3.3. However, note that the asynchronous client is marked as a technology preview, so there are rough edges.
Having said that, if you need more features, I'd suggest using the JDBC client instead. While not as performant, it is feature complete.
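For reference, a minimal sketch of what switching to the JDBC client could look like (connection settings are placeholders, it assumes the vertx-jdbc-client dependency, and how date/time columns are converted depends on the JDBC driver):
// Minimal sketch with placeholder configuration values.
JsonObject config = new JsonObject()
    .put("url", "jdbc:mysql://localhost:3306/mydb")   // placeholder
    .put("driver_class", "com.mysql.jdbc.Driver")     // placeholder
    .put("user", "user")
    .put("password", "secret");

JDBCClient client = JDBCClient.createShared(vertx, config);

client.getConnection(connResult -> {
  if (connResult.failed()) {
    connResult.cause().printStackTrace();
    return;
  }
  SQLConnection connection = connResult.result();
  connection.query("SELECT ...", queryResult -> {     // same query as before
    if (queryResult.succeeded()) {
      for (JsonArray row : queryResult.result().getResults()) {
        System.out.println(row);
      }
    }
    connection.close();
  });
});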