Flink state empty (reinitialized) after rerun - scala

I'm trying to connect two streams; the first one is persisted in a MapState:
RocksDB saves its data in the checkpoint folder, but after a new run the state is empty. I run the job both locally and on a Flink cluster: on the cluster I cancel the submitted job, and locally I simply rerun it.
env.setStateBackend(new RocksDBStateBackend(..))
env.enableCheckpointing(1000)
...
val productDescriptionStream: KeyedStream[ProductDescription, String] = env.addSource(..)
  .keyBy(_.id)
val productStockStream: KeyedStream[ProductStock, String] = env.addSource(..)
  .keyBy(_.id)
and
productDescriptionStream
  .connect(productStockStream)
  .process(ProductProcessor())
  .setParallelism(1)
env.execute("Product aggregator")
ProductProcessor
case class ProductProcessor() extends CoProcessFunction[ProductDescription, ProductStock, Product] {

  private[this] lazy val stateDescriptor: MapStateDescriptor[String, ProductDescription] =
    new MapStateDescriptor[String, ProductDescription](
      "productDescription",
      createTypeInformation[String],
      createTypeInformation[ProductDescription]
    )

  private[this] lazy val states: MapState[String, ProductDescription] =
    getRuntimeContext.getMapState(stateDescriptor)

  override def processElement1(value: ProductDescription,
      ctx: CoProcessFunction[ProductDescription, ProductStock, Product]#Context,
      out: Collector[Product]): Unit = {
    states.put(value.id, value)
  }

  override def processElement2(value: ProductStock,
      ctx: CoProcessFunction[ProductDescription, ProductStock, Product]#Context,
      out: Collector[Product]): Unit = {
    if (states.contains(value.id)) {
      val product = Product(
        id = value.id,
        description = Some(states.get(value.id).description),
        stock = Some(value.stock),
        updatedAt = value.updatedAt)
      out.collect(product)
    }
  }
}

Checkpoints are created by Flink for recovering from failures, not for resuming after a manual shutdown. When a job is canceled, the default behavior is for Flink to delete the checkpoints. Since the job can no longer fail, it won't need to recover.
You have several options:
(1) Configure your checkpointing to retain checkpoints when a job is cancelled:
CheckpointConfig config = env.getCheckpointConfig();
config.enableExternalizedCheckpoints(
CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
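In the question's Scala setup, a minimal sketch of the equivalent configuration would look roughly like this (the method names are the same as in the Java API):
import org.apache.flink.streaming.api.environment.CheckpointConfig

env.enableCheckpointing(1000)
env.getCheckpointConfig.enableExternalizedCheckpoints(
  CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)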
Then when you restart the job you'll need to indicate that you want it restarted from a specific checkpoint:
flink run -s <checkpoint-path> ...
Otherwise, whenever you start a job it will begin with an empty state backend.
(2) Instead of canceling the job, use stop with savepoint:
flink stop [-p targetDirectory] [-d] <jobID>
after which you'll again need to use flink run -s ... to resume from the savepoint.
Stop with a savepoint is a cleaner approach than relying on there being a recent checkpoint to fall back to.
(3) Or you could use Ververica Platform Community Edition, which raises the level of abstraction to the point where you don't have to manage these details yourself.

Related

Does flink streaming job maintain its keyed value state between job runs?

Our use case is that we want to use Flink streaming for a de-duplication job, which reads its data from a source (Kafka topic) and writes unique records into an HDFS file sink.
The Kafka topic can contain duplicate data, which can be identified using the composite key
(adserver_id, unix_timestamp of the record)
so I decided to use a Flink keyed state stream to achieve de-duplication.
val messageStream: DataStream[DCNRecord] = env.addSource(flinkKafkaConsumer)

messageStream
  .map { record =>
    val key = record.adserver_id.get + record.event_timestamp.get
    (key, record)
  }
  .keyBy(_._1)
  .flatMap(new DedupDCNRecord())
  .map(_.toString)
  .addSink(sink)

// execute the stream
env.execute(applicationName)
Here is the code for de-duplication using value state from flink.
class DedupDCNRecord extends RichFlatMapFunction[(String, DCNRecord), DCNRecord] {
  private var operatorState: ValueState[String] = _

  override def open(configuration: Configuration): Unit = {
    operatorState = getRuntimeContext.getState(
      DedupDCNRecord.descriptor
    )
  }

  @throws[Exception]
  override def flatMap(value: (String, DCNRecord), out: Collector[DCNRecord]): Unit = {
    if (operatorState.value == null) { // we haven't seen this key yet
      out.collect(value._2)
      // remember the key so that we don't emit elements with this key again
      operatorState.update(value._1)
    }
  }
}
This approach works fine as long as the streaming job is running: it maintains the set of unique keys in the valueState and performs the de-duplication.
But as soon as I cancel the job, Flink loses its state (the unique keys seen in the previous run) for the valueState, only keeps the unique keys of the current run, and lets records pass that were already processed in the previous run of the job.
Is there a way to force Flink to maintain its valueState (unique keys) seen so far?
Appreciate your help.
This requires that you capture a snapshot of the state before shutting down the job, and then restart from that snapshot:
Do a stop with savepoint to bring down your current job while taking a snapshot of its state.
Relaunch, using the savepoint as the starting point.
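Concretely, using the CLI commands from the previous answer (the savepoint directory is just an example; reuse the savepoint path that the stop command prints):
flink stop -p hdfs:///flink/savepoints <jobId>
flink run -s <savepointPath> ...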
For a step-by-step tutorial, see Upgrading & Rescaling a Job in the Flink Operations Playground. The section on Observing Failure & Recovery is also relevant here.

Stop Flink Kafka consumer task programmatically

I'm using the Kafka consumer with Flink 1.9 (in Scala 2.12) and am facing the following problem (similar to this question): the consumer should stop fetching data (and finish the task) when no new messages are received for a specific amount of time (since the stream is potentially infinite, there is no "end-of-stream" message in the topic itself).
I've tried using a ProcessFunction that calls consumer.close(), but this did not help (the consumer continues to run). Throwing an exception in the ProcessFunction kills the job completely, which is not what I want (since the job consists of several stages, which are canceled after an exception is thrown). Here is my ProcessFunction:
class TimeOutFunction( // delay after which an alert flag is thrown
    val timeOut: Long,
    consumer: FlinkKafkaConsumer[Row]
) extends ProcessFunction[Row, Row] {

  // state to remember the last timer set
  private var lastTimer: ValueState[Long] = _

  override def open(conf: Configuration): Unit = {
    // set up timer state
    val lastTimerDesc = new ValueStateDescriptor[Long]("lastTimer", classOf[Long])
    lastTimer = getRuntimeContext.getState(lastTimerDesc)
  }

  override def processElement(value: Row, ctx: ProcessFunction[Row, Row]#Context, out: Collector[Row]): Unit = {
    // get the current time and compute the timeout time
    val currentTime = ctx.timerService.currentProcessingTime
    val timeoutTime = currentTime + timeOut
    // register a timer for the timeout time
    ctx.timerService.registerProcessingTimeTimer(timeoutTime)
    // remember the timeout time
    lastTimer.update(timeoutTime)
    // pass the event through
    out.collect(value)
  }

  override def onTimer(timestamp: Long, ctx: ProcessFunction[Row, Row]#OnTimerContext, out: Collector[Row]): Unit = {
    // check if this was the last timer we registered
    if (timestamp == lastTimer.value) {
      // it was, so no data was received afterwards.
      // stop the consumer.
      consumer.close()
    }
  }
}
The isEndOfStream() method on the deserialization schema is also no good, since it requires a next element (and my case is kind of the other way around: the stream should stop when there is no next element for some time).
So, is there a way to do this (preferably without subclassing FlinkKafkaConsumer and/or using reflection)?

handling infinite loop in apache-kafka consumer

I am new to Kafka. I want to implement a Kafka message pipeline for our Play/Scala project. I created a topic into which records are inserted, and I wrote consumer code like this:
val recordBatch = new ListBuffer[CountryModel]
consumer.subscribe(Arrays.asList("session-queue"))

while (true) {
  val records = consumer.poll(1000)
  val iter = records.iterator()
  while (iter.hasNext()) {
    val record: ConsumerRecord[String, String] = iter.next()
    recordBatch += Json.parse(record.value()).as[CountryModel]
  }
  processData(recordBatch)
  // Thread.sleep(5 * 1000)
}
But after a certain time the service stops: the processor goes to 100% and after some time the machine grinds to a halt. How can I handle this infinite loop?
In a production environment I can't rely on this loop, right? I tried to sleep the thread for some time, but that is not an elegant solution.
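Not an authoritative answer, but one thing that stands out in the snippet is that recordBatch is never cleared, so every iteration reprocesses and re-accumulates all previous records; that alone can push CPU and memory up over time. A minimal sketch of the usual poll loop with a fresh batch per poll and a wakeup-based shutdown, assuming the same consumer, CountryModel, Json and processData from the question and a Kafka client new enough to accept a Duration in poll:
import java.time.Duration
import java.util.concurrent.atomic.AtomicBoolean
import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer
import org.apache.kafka.common.errors.WakeupException

val keepRunning = new AtomicBoolean(true)
// wakeup() makes a blocked poll() throw WakeupException, so the loop can exit cleanly
sys.addShutdownHook { keepRunning.set(false); consumer.wakeup() }

try {
  consumer.subscribe(java.util.Arrays.asList("session-queue"))
  while (keepRunning.get()) {
    val records = consumer.poll(Duration.ofMillis(1000))
    // build a fresh batch for every poll instead of accumulating forever
    val batch = new ListBuffer[CountryModel]
    for (record <- records.asScala)
      batch += Json.parse(record.value()).as[CountryModel]
    if (batch.nonEmpty) processData(batch)
  }
} catch {
  case _: WakeupException if !keepRunning.get() => () // expected during shutdown
} finally {
  consumer.close()
}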

Apache Spark: how to cancel job in code and kill running tasks?

I am running a Spark application (version 1.6.0) on a Hadoop cluster with Yarn (version 2.6.0) in client mode. I have a piece of code that runs a long computation, and I want to kill it if it takes too long (and then run some other function instead).
Here is an example:
val conf = new SparkConf().setAppName("TIMEOUT_TEST")
val sc = new SparkContext(conf)
val lst = List(1, 2, 3)
// setting up an infinite action
val future = sc.parallelize(lst).map { _ => while (true) {} }.collectAsync()
try {
  Await.result(future, Duration(30, TimeUnit.SECONDS))
  println("success!")
} catch {
  case _: Throwable =>
    future.cancel()
    println("timeout")
}
// sleep for 1 hour to allow inspecting the application in yarn
Thread.sleep(60 * 60 * 1000)
sc.stop()
The timeout is set to 30 seconds, but of course the computation is infinite, so awaiting the result of the future will throw an exception, which is caught; the future is then canceled and the backup function executes.
This all works perfectly well, except that the canceled job doesn't terminate completely: when looking at the web UI for the application, the job is marked as failed, but I can see there are still running tasks inside.
The same thing happens when I use SparkContext.cancelAllJobs or SparkContext.cancelJobGroup. The problem is that even though I manage to get on with my program, the running tasks of the canceled job are still hogging valuable resources (which will eventually slow me down to a near stop).
To sum things up: How do I kill a Spark job in a way that will also terminate all running tasks of that job? (as opposed to what happens now, which is stopping the job from running new tasks, but letting the currently running tasks finish)
UPDATE:
After a long time ignoring this problem, we found a messy but efficient little workaround. Instead of trying to kill the appropriate Spark Job/Stage from within the Spark application, we simply logged the stage ID of all active stages when the timeout occurred, and issued an HTTP GET request to the URL presented by the Spark Web UI used for killing said stages.
I don't know if this answers your question.
My need was to kill jobs that hang for too long (my jobs extract data from Oracle tables, but for some unknown reason the connection occasionally hangs forever).
After some study, I came to this solution:
import org.apache.spark.JobExecutionStatus
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val MAX_JOB_SECONDS = 100
val statusTracker = sc.statusTracker

val sparkListener = new SparkListener() {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    val jobId = jobStart.jobId
    // watchdog: give the job MAX_JOB_SECONDS to finish, then cancel it
    Future {
      var c = MAX_JOB_SECONDS
      var mustCancel = false
      var running = true
      while (!mustCancel && running) {
        Thread.sleep(1000)
        c = c - 1
        mustCancel = c <= 0
        running = statusTracker.getJobInfo(jobId) match {
          case Some(jobInfo) => jobInfo.status() == JobExecutionStatus.RUNNING
          case None          => false
        }
      }
      if (mustCancel) {
        sc.cancelJob(jobId)
      }
    }
  }
}

sc.addSparkListener(sparkListener)
try {
  val df = spark.sql("SELECT * FROM VERY_BIG_TABLE") // just an example of a long-running job
  println(df.count)
} catch {
  case exc: org.apache.spark.SparkException =>
    if (exc.getMessage.contains("cancelled"))
      throw new Exception("Job forcibly cancelled")
    else
      throw exc
  case ex: Throwable =>
    println(s"Another exception: $ex")
} finally {
  sc.removeSparkListener(sparkListener)
}
For the sake of future visitors: since version 2.0.3 Spark has included the task reaper, which addresses this scenario (more or less) and is a built-in solution.
Note that it can eventually kill an executor if the task is not responsive.
Moreover, some of Spark's built-in data sources have been refactored to be more responsive to cancellation.
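A sketch of enabling it, assuming the configuration is set programmatically on the SparkConf (the timeout value is only an example):
val conf = new SparkConf()
  .setAppName("TIMEOUT_TEST")
  // monitor killed/interrupted tasks until they actually exit
  .set("spark.task.reaper.enabled", "true")
  // if a killed task has not stopped after this long, kill the executor JVM
  .set("spark.task.reaper.killTimeout", "120s")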
For the 1.6.0 version, Zohar's solution is a "messy but efficient" one.
According to the documentation of setJobGroup:
"If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job's executor threads."
So the anonymous function in your map must be interruptible, like this:
val future = sc.parallelize(lst).map { _ => while (!Thread.interrupted()) {} }.collectAsync()
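For completeness, a minimal sketch of how this might be wired together with a job group (the group name "timeout-test" and the timeout handling are illustrative, assuming the same sc and lst as in the question):
import java.util.concurrent.{TimeUnit, TimeoutException}
import scala.concurrent.Await
import scala.concurrent.duration.Duration

// interruptOnCancel = true makes cancellation call Thread.interrupt() on the executor threads
sc.setJobGroup("timeout-test", "long computation with a timeout", interruptOnCancel = true)
val future = sc.parallelize(lst).map { _ => while (!Thread.interrupted()) {} }.collectAsync()
try {
  Await.result(future, Duration(30, TimeUnit.SECONDS))
  println("success!")
} catch {
  case _: TimeoutException =>
    // cancels every job in the group; because of interruptOnCancel the running tasks are interrupted
    sc.cancelJobGroup("timeout-test")
    println("timeout")
}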

scalaz-stream: combining queues based on one queue's size

In my application I have up to N consumers working in parallel and one producer. Consumers grab resources from the producer, do their work, append the results to an updateQueue and ask for more resources. The producer has some resources available initially and can generate more by applying the updates from the updateQueue. It is important that all available updates are applied before a new resource is emitted to a consumer. I've tried using the following generator, requesting updates "in bulk" whenever a consumer makes a request and setting aside new resources (which are not needed by the requesting consumer but may later be requested by other consumers) in a ticketQueue:
def updatesOrFresh: Process[Task, Seq[OptimizerResult] \/ Unit] =
  Process.await(updateQueue.size.continuous.take(1).runLast) {
    case Some(size) =>
      println(s"size: $size")
      if (size == 0)
        wye(updateQueue.dequeueAvailable, ticketQueue.dequeue)(wye.either)
      else
        updateQueue.dequeueAvailable.map(_.left[Unit])
  }.take(1) ++ Process.suspend(updatesOrFresh)
It doesn't work: the initially available resources are emitted from ticketQueue.dequeue and then it appears to block on the wye, logging:
size: 0
<<got ticket>>
size: 0
<<got ticket>>
size: 0 // it appears the updateQueue did not receive the consumer output yet, but I can live with that, it should grab an update from the wye anyway
<<blocks>>
when there were two resources available initially on the ticketQueue. However, if I change it to just
val updatesOrFresh = wye(updateQueue.dequeueAvailable, ticketQueue.dequeue)(wye.either)
It works as expected (although without the "apply updates before emitting new resource" guarantee). How can I make it work ensuring the updates are applied at the right time?
Edit: I've solved it using the following code:
val updatesOrFresh: Process[Task, Seq[OptimizerResult] \/ Unit] =
  Process.repeatEval {
    for {
      sizeOpt <- updateQueue.size.continuous.take(1).runLast
      nextOpt <-
        if (sizeOpt.getOrElse(???) == 0)
          wye(updateQueue.dequeueAvailable, ticketQueue.dequeue)(wye.either).take(1).runLast
        else
          updateQueue.dequeueAvailable.map(_.left[Unit]).take(1).runLast
    } yield nextOpt.getOrElse(???)
  }
However, the question of why the original def didn't work remains...