How do I avoid automatic reconnection when QuickFix/J fails to connect? - quickfix

Imagine a scenario where the connection IP becomes unreachable. In this case, QuickFIX/J will try to reconnect automatically every 30s or so, as configured by the ReconnectInterval parameter. How do I avoid this behavior?

Your application class should extend ApplicationExtended instead of Application. Then you can override the canLogon method; if you return false, QuickFIX/J will not try to log on. A minimal sketch follows.
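For example, a minimal sketch (assuming QuickFIX/J's quickfix.ApplicationExtended interface, which adds canLogon(SessionID) and onBeforeSessionReset(SessionID) on top of Application; the logonAllowed flag is just an illustration):

import quickfix._

class NoReconnectApplication extends ApplicationExtended {
  // Flip this flag when you no longer want QuickFIX/J to attempt a logon.
  @volatile private var logonAllowed = true
  def disableLogon(): Unit = { logonAllowed = false }

  // Returning false makes QuickFIX/J skip the logon attempt on each reconnect cycle.
  override def canLogon(sessionId: SessionID): Boolean = logonAllowed
  override def onBeforeSessionReset(sessionId: SessionID): Unit = ()

  // Remaining Application callbacks left empty for brevity.
  override def onCreate(sessionId: SessionID): Unit = ()
  override def onLogon(sessionId: SessionID): Unit = ()
  override def onLogout(sessionId: SessionID): Unit = ()
  override def toAdmin(message: Message, sessionId: SessionID): Unit = ()
  override def fromAdmin(message: Message, sessionId: SessionID): Unit = ()
  override def toApp(message: Message, sessionId: SessionID): Unit = ()
  override def fromApp(message: Message, sessionId: SessionID): Unit = ()
}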

The idea consists of retrieving the ReconnectInterval parameter from QuickFIX/J and creating a separate thread which kills the Session if and only if the Session is not yet logged on.
For this to work, our thread must obviously fire before the QuickFIX/J thread tries to reconnect. In other words: if you configured ReconnectInterval=30 ... you have to fire the aforementioned thread before that and close all initiators. This way, QuickFIX/J never actually gets to retry the connection.
import java.io.InputStream
import java.util.Locale

import scala.util.control.NonFatal

import quickfix._
import quickfix.field._

// Note: log, fireDisconnectedState and QuickfixLoggerFactory are helpers defined
// elsewhere in the original application; only the relevant callbacks are shown here.
class SimpleConnection(val configInputStream: InputStream,
                       val messageFactory: quickfix.MessageFactory)
    extends MessageCracker
    with quickfix.Application {

  private val locale        = Locale.getDefault.getCountry
  private val settings      = new SessionSettings(configInputStream)
  private val storeFactory  = new FileStoreFactory(settings)
  private val loggerFactory = new QuickfixLoggerFactory(settings)

  private var initiatorOption: Option[SocketInitiator] = None
  private var sessionOption  : Option[SessionID]       = None
  private var senderSeqOption: Option[Int]             = None
  private var targetSeqOption: Option[Int]             = None

  override def onLogout(sessionId: SessionID): Unit = {
    log.info("onLogout called for %s".format(sessionId))
    initiatorOption.foreach(initiator => initiator.stop(true))
    fireDisconnectedState // inform listeners
    initiatorOption = None
    sessionOption   = None
  }

  override def onCreate(sessionId: SessionID): Unit = {
    log.info("onCreate called for %s".format(sessionId))
    val session  = Session.lookupSession(sessionId)
    val interval = settings.getLong(session.getSessionID, "ReconnectInterval")
    if (interval <= 10)
      log.error("ReconnectInterval should be at least 10secs.")
    else {
      import java.util.concurrent.Executors
      import scala.concurrent.ExecutionContext
      val executor = ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor)
      val monitor = new Runnable {
        override def run(): Unit = {
          val sleep = (interval - 5) * 1000
          Thread.sleep(sleep)
          if (!session.isLoggedOn) {
            log.warn("Killing QuickFix/J session before reconnection.")
            onLogout(session.getSessionID)
          }
        }
      }
      executor.execute(monitor)
    }
    senderSeqOption.foreach(session.setNextSenderMsgSeqNum(_))
    targetSeqOption.foreach(session.setNextTargetMsgSeqNum(_))
    senderSeqOption = None
    targetSeqOption = None
  }
}

Related

Apache beam stops to process PubSub messages after some time

I'm trying to write a simple Apache Beam pipeline (which will run on the Dataflow runner) to do the following:
Read PubSub messages containing file paths on GCS from a subscription.
For each message, read the data contained in the file associated with the message (the files can be of a variety of formats: csv, jsonl, json, xml, ...).
Do some processing on each record.
Write back the result on GCS.
I'm using a 10-second fixed window on the messages. Since incoming files are already chunked (max size of 10MB), I decided not to use splittable DoFns to read the files, to avoid adding unnecessary complexity (especially for files that are not trivially splittable into chunks).
Here is a simplified code sample that gives the exact same problem of the full one:
package skytv.ingester

import java.io.{BufferedReader, InputStreamReader}
import java.nio.charset.StandardCharsets
import kantan.csv.rfc
import org.apache.beam.sdk.Pipeline
import org.apache.beam.sdk.io.{Compression, FileIO, FileSystems, TextIO, WriteFilesResult}
import org.apache.beam.sdk.io.gcp.pubsub.{PubsubIO, PubsubMessage}
import org.apache.beam.sdk.options.PipelineOptionsFactory
import org.apache.beam.sdk.transforms.DoFn.ProcessElement
import org.apache.beam.sdk.transforms.windowing.{BoundedWindow, FixedWindows, PaneInfo, Window}
import org.apache.beam.sdk.transforms.{Contextful, DoFn, MapElements, PTransform, ParDo, SerializableFunction, SimpleFunction, WithTimestamps}
import org.apache.beam.sdk.values.{KV, PCollection}
import org.joda.time.{Duration, Instant}
import skytv.cloudstorage.CloudStorageClient
import skytv.common.Closeable
import kantan.csv.ops._
import org.apache.beam.sdk.io.FileIO.{Sink, Write}

class FileReader extends DoFn[String, List[String]] {

  private def getFileReader(filePath: String) = {
    val cloudStorageClient = new CloudStorageClient()
    val inputStream = cloudStorageClient.getInputStream(filePath)
    val isr = new InputStreamReader(inputStream, StandardCharsets.UTF_8)
    new BufferedReader(isr)
  }

  private def getRowsIterator(fileReader: BufferedReader) = {
    fileReader
      .asUnsafeCsvReader[Seq[String]](rfc
        .withCellSeparator(',')
        .withoutHeader
        .withQuote('"'))
      .toIterator
  }

  @ProcessElement
  def processElement(c: ProcessContext): Unit = {
    val filePath = c.element()
    Closeable.tryWithResources(
      getFileReader(filePath)
    ) {
      fileReader => {
        getRowsIterator(fileReader)
          .foreach(record => c.output(record.toList))
      }
    }
  }
}

class DataWriter(tempFolder: String) extends PTransform[PCollection[List[String]], WriteFilesResult[String]] {

  private val convertRecord = Contextful.fn[List[String], String]((dr: List[String]) => {
    dr.mkString(",")
  })

  private val getSink = Contextful.fn[String, Sink[String]]((destinationKey: String) => {
    TextIO.sink()
  })

  private val getPartitioningKey = new SerializableFunction[List[String], String] {
    override def apply(input: List[String]): String = {
      input.head
    }
  }

  private val getNaming = Contextful.fn[String, Write.FileNaming]((destinationKey: String) => {
    new Write.FileNaming {
      override def getFilename(
          window: BoundedWindow,
          pane: PaneInfo,
          numShards: Int,
          shardIndex: Int,
          compression: Compression
      ): String = {
        s"$destinationKey-${window.maxTimestamp()}-${pane.getIndex}.csv"
      }
    }
  })

  override def expand(input: PCollection[List[String]]): WriteFilesResult[String] = {
    val fileWritingTransform = FileIO
      .writeDynamic[String, List[String]]()
      .by(getPartitioningKey)
      .withDestinationCoder(input.getPipeline.getCoderRegistry.getCoder(classOf[String]))
      .withTempDirectory(tempFolder)
      .via(convertRecord, getSink)
      .withNaming(getNaming)
      .withNumShards(1)

    input
      .apply("WriteToAvro", fileWritingTransform)
  }
}

object EnhancedIngesterScalaSimplified {

  private val SUBSCRIPTION_NAME = "projects/<project>/subscriptions/<subscription>"
  private val TMP_LOCATION = "gs://<path>"
  private val WINDOW_SIZE = Duration.standardSeconds(10)

  def main(args: Array[String]): Unit = {
    val options = PipelineOptionsFactory.fromArgs(args: _*).withValidation().create()
    FileSystems.setDefaultPipelineOptions(options)

    val p = Pipeline.create(options)

    val messages = p
      .apply("ReadMessages", PubsubIO.readMessagesWithAttributes.fromSubscription(SUBSCRIPTION_NAME))
//      .apply("AddProcessingTimeTimestamp", WithTimestamps.of(new SerializableFunction[PubsubMessage, Instant] {
//        override def apply(input: PubsubMessage): Instant = Instant.now()
//      }))

    val parsedMessages = messages
      .apply("ApplyWindow", Window.into[PubsubMessage](FixedWindows.of(WINDOW_SIZE)))
      .apply("ParseMessages", MapElements.via(new SimpleFunction[PubsubMessage, String]() {
        override def apply(msg: PubsubMessage): String = new String(msg.getPayload, StandardCharsets.UTF_8)
      }))

    val dataReadResult = parsedMessages
      .apply("ReadData", ParDo.of(new FileReader))

    val writerResult = dataReadResult.apply(
      "WriteData",
      new DataWriter(TMP_LOCATION)
    )

    writerResult.getPerDestinationOutputFilenames.apply(
      "FilesWritten",
      MapElements.via(new SimpleFunction[KV[String, String], String]() {
        override def apply(input: KV[String, String]): String = {
          println(s"Written ${input.getKey}, ${input.getValue}")
          input.getValue
        }
      }))

    p.run.waitUntilFinish()
  }
}
The problem is that after the processing of some messages (on the order of 1000), the job stops processing new messages and they remain in the PubSub subscription unacknowledged forever.
I saw that in such a situation the watermark stops advancing and the data freshness increases linearly and indefinitely.
Here is a screenshot from Dataflow:
And here is the situation on the PubSub queue:
Is it possible that there are some messages that remain stuck in the dataflow queues filling them so that no new messages can be added?
I thought that there was some problem with how timestamps are computed by PubsubIO, so I tried to force the timestamps to be equal to the processing time of each message, but I had no success.
If I leave the dataflow job in this state, it seems that it continuously reprocesses the same messages without writing any data to storage.
Do you have any idea on how to solve this problem?
Thanks!
Most likely the pipeline has encountered an error while processing one (or more) elements (and it shouldn't have anything to do with how timestamps are computed by PubsubIO), which stops the watermark from advancing, since the failed work will be retried again and again on Dataflow.
You can check whether there is any failure in the logs, specifically from the worker or harness component. If there is an unhandled runtime exception, such as a parse error, it is very likely the root cause of a streaming pipeline getting stuck.
If there is no UserCodeException, then it is likely some other issue caused by the Dataflow backend, and you can reach out to Dataflow customer support so engineers can look into the backend issue for your pipeline.
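As an illustration of the first point, a hypothetical guard around the per-element work (not from the original post; the split(',') call stands in for the real file/CSV reading logic) keeps one bad element from failing the bundle forever and freezing the watermark:

import org.apache.beam.sdk.transforms.DoFn
import org.apache.beam.sdk.transforms.DoFn.ProcessElement
import org.slf4j.LoggerFactory

// Sketch: wrap the per-element work in try/catch so a single bad element is logged
// and skipped instead of being retried indefinitely by the runner.
class SafeCsvSplitter extends DoFn[String, List[String]] {
  @transient private lazy val log = LoggerFactory.getLogger(classOf[SafeCsvSplitter])

  @ProcessElement
  def processElement(c: ProcessContext): Unit = {
    val line = c.element()
    try {
      c.output(line.split(',').toList) // stand-in for the real parsing logic
    } catch {
      case e: Exception =>
        // A production pipeline would usually route these to a dead-letter output instead.
        log.error(s"Dropping element that failed to parse: $line", e)
    }
  }
}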

Akka gRPC + Slick application causes "IllegalStateException: Cannot initialize ExecutionContext; AsyncExecutor already shut down"

I'm trying to develop a gRPC server with Akka gRPC and Slick. I also use Airframe for DI.
Source code is here
The issue is that it fails when it receives a request while running as a gRPC server.
If it doesn't start as a gRPC server, but just reads resources from the database, the process succeeds.
What is the difference?
The following reads an object from the database with Slick.
The ...Component is an Airframe object; it will be used by the main module.
trait UserRepository {
  def getUser: Future[Seq[Tables.UsersRow]]
}

class UserRepositoryImpl(val profile: JdbcProfile, val db: JdbcProfile#Backend#Database) extends UserRepository {
  import profile.api._
  def getUser: Future[Seq[Tables.UsersRow]] = db.run(Tables.Users.result)
}

trait UserResolveService {
  private val repository = bind[UserRepository]

  def getAll: Future[Seq[Tables.UsersRow]] =
    repository.getUser
}

object userServiceComponent {
  val design = newDesign
    .bind[UserResolveService]
    .toSingleton
}
The following is the gRPC server source code.
trait UserServiceImpl extends UserService {
  private val userResolveService = bind[UserResolveService]
  private val system: ActorSystem = bind[ActorSystem]
  implicit val ec: ExecutionContextExecutor = system.dispatcher

  override def getAll(in: GetUserListRequest): Future[GetUserListResponse] = {
    userResolveService.getAll.map(us =>
      GetUserListResponse(
        us.map(u =>
          myapp.proto.user.User(
            1,
            "t_horikoshi@example.com",
            "t_horikoshi",
            myapp.proto.user.User.UserRole.Admin
          )
        )
      )
    )
  }
}

trait GRPCServer {
  private val userServiceImpl = bind[UserServiceImpl]
  implicit val system: ActorSystem = bind[ActorSystem]

  def run(): Future[Http.ServerBinding] = {
    implicit def ec: ExecutionContext = system.dispatcher

    val service: PartialFunction[HttpRequest, Future[HttpResponse]] =
      UserServiceHandler.partial(userServiceImpl)
    val reflection: PartialFunction[HttpRequest, Future[HttpResponse]] =
      ServerReflection.partial(List(UserService))

    // Akka HTTP 10.1 requires adapters to accept the new actors APIs
    val bound = Http().bindAndHandleAsync(
      ServiceHandler.concatOrNotFound(service, reflection),
      interface = "127.0.0.1",
      port = 8080,
      settings = ServerSettings(system)
    )
    bound.onComplete {
      case Success(binding) =>
        system.log.info(
          s"gRPC Server online at http://${binding.localAddress.getHostName}:${binding.localAddress.getPort}/"
        )
      case Failure(ex) =>
        system.log.error(ex, "occurred error")
    }
    bound
  }
}

object grpcComponent {
  val design = newDesign
    .bind[UserServiceImpl]
    .toSingleton
    .bind[GRPCServer]
    .toSingleton
}
The following is the main module.
object Main extends App {
  val conf = ConfigFactory
    .parseString("akka.http.server.preview.enable-http2 = on")
    .withFallback(ConfigFactory.defaultApplication())
  val system = ActorSystem("GRPCServer", conf)

  val dbConfig: DatabaseConfig[JdbcProfile] =
    DatabaseConfig.forConfig[JdbcProfile](path = "mydb")

  val design = newDesign
    .bind[JdbcProfile]
    .toInstance(dbConfig.profile)
    .bind[JdbcProfile#Backend#Database]
    .toInstance(dbConfig.db)
    .bind[UserRepository]
    .to[UserRepositoryImpl]
    .bind[ActorSystem]
    .toInstance(system)
    .add(userServiceComponent.design)
    .add(grpcComponent.design)

  design.withSession(s =>
    // Await.result(s.build[UserResolveService].getUser, Duration.Inf)) // success
    // Await.result(s.build[UserServiceImpl].getAll(GetUserListRequest()), Duration.Inf)) // success
    s.build[GRPCServer].run() // causes IllegalStateException when it receives a request
  )
}
When UserResolveService and UserServiceImpl are called directly, the process of loading an object from the database is successful.
However, when running the application as a gRPC Server, an error occurs when a request is received.
Though I was thinking about it all day, I couldn't resolve it...
Will you please help me resolve this?
It is resolved: when running an asynchronous process like this, the gRPC server has to be started with newSession.
I fixed it like that; a sketch of the idea follows.
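A minimal sketch of that change in Main, assuming Airframe's Design#newSession / Session#start / Session#shutdown API: unlike design.withSession { ... }, which closes the session (and with it the Slick database and its AsyncExecutor) as soon as the block returns, an explicitly created session stays open while the server is handling requests.

// Instead of: design.withSession(s => s.build[GRPCServer].run())
val session = design.newSession
session.start
val binding = session.build[GRPCServer].run()
// Keep the session (and therefore the DB's AsyncExecutor) alive while serving;
// call session.shutdown only when the server is being stopped, e.g.
// binding.flatMap(_.unbind()).onComplete(_ => session.shutdown)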

Can't understand TwoPhaseCommitSinkFunction lifecycle

I needed a sink to a Postgres DB, so I started to build a custom Flink SinkFunction. Since FlinkKafkaProducer implements TwoPhaseCommitSinkFunction, I decided to do the same. As stated in O'Reilly's book Stream Processing with Apache Flink, you just need to implement the abstract methods, enable checkpointing and you're good to go. But what really happens when I run my code is that the commit method is called only once, and it is called before invoke, which is totally unexpected, since you shouldn't be ready to commit if your set of ready-to-commit transactions is empty. And the worst is that, after committing, invoke is called for all of the transaction lines present in my file, and then abort is called, which is even more unexpected.
When the sink is initialized, it is my understanding that the following should occur:
beginTransaction is called and sends an identifier to invoke
invoke adds the lines to the transaction, according to the identifier received
preCommit makes all final modifications on the current transaction data
commit handles the finalized transaction of pre-committed data
So, I can't see why my program doesn't show this behaviour.
Here goes my sink code:
package PostgresConnector

import java.sql.{BatchUpdateException, DriverManager, PreparedStatement, SQLException, Timestamp}
import java.text.ParseException
import java.util.{Date, Properties, UUID}

import org.apache.flink.api.common.ExecutionConfig
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.{SinkFunction, TwoPhaseCommitSinkFunction}
import org.apache.flink.streaming.api.scala._
import org.slf4j.{Logger, LoggerFactory}

class PostgreSink(props : Properties, config : ExecutionConfig)
    extends TwoPhaseCommitSinkFunction[(String,String,String,String), String, String](
      createTypeInformation[String].createSerializer(config),
      createTypeInformation[String].createSerializer(config)) {

  private var transactionMap : Map[String, Array[(String,String,String,String)]] = Map()

  private var parsedQuery : PreparedStatement = _
  private val insertionString : String = "INSERT INTO mydb (field1,field2,point) values (?,?,point(?,?))"

  override def invoke(transaction: String, value: (String,String,String,String), context: SinkFunction.Context[_]): Unit = {
    val LOG = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    val res = this.transactionMap.get(transaction)
    if (res.isDefined) {
      var array = res.get
      array = array ++ Array(value)
      this.transactionMap += (transaction -> array)
    } else {
      val array = Array(value)
      this.transactionMap += (transaction -> array)
    }
    LOG.info("\n\nPassing through invoke\n\n")
    ()
  }

  override def beginTransaction(): String = {
    val LOG: Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    val identifier = UUID.randomUUID.toString
    LOG.info("\n\nPassing through beginTransaction\n\n")
    identifier
  }

  override def preCommit(transaction: String): Unit = {
    val LOG = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    try {
      val tuple : Option[Array[(String,String,String,String)]] = this.transactionMap.get(transaction)
      if (tuple.isDefined) {
        tuple.get.foreach( (value : (String,String,String,String)) => {
          LOG.info("\n\n" + value.toString() + "\n\n")
          this.parsedQuery.setString(1, value._1)
          this.parsedQuery.setString(2, value._2)
          this.parsedQuery.setString(3, value._3)
          this.parsedQuery.setString(4, value._4)
          this.parsedQuery.addBatch()
        })
      }
    } catch {
      case e : SQLException =>
        LOG.info("\n\nError when adding transaction to batch: SQLException\n\n")
      case f : ParseException =>
        LOG.info("\n\nError when adding transaction to batch: ParseException\n\n")
      case g : NoSuchElementException =>
        LOG.info("\n\nError when adding transaction to batch: NoSuchElementException\n\n")
      case h : Exception =>
        LOG.info("\n\nError when adding transaction to batch: Exception\n\n")
    }
    this.transactionMap = this.transactionMap.empty
    LOG.info("\n\nPassing through preCommit...\n\n")
  }

  override def commit(transaction: String): Unit = {
    val LOG : Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    if (this.parsedQuery != null) {
      LOG.info("\n\n" + this.parsedQuery.toString + "\n\n")
    }
    try {
      this.parsedQuery.executeBatch
      val LOG : Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
      LOG.info("\n\nExecuting batch\n\n")
    } catch {
      case e : SQLException =>
        val LOG : Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
        LOG.info("\n\n" + "Error : SQLException" + "\n\n")
    }
    this.transactionMap = this.transactionMap.empty
    LOG.info("\n\nPassing through commit...\n\n")
  }

  override def abort(transaction: String): Unit = {
    val LOG : Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    this.transactionMap = this.transactionMap.empty
    LOG.info("\n\nPassing through abort...\n\n")
  }

  override def open(parameters: Configuration): Unit = {
    val LOG: Logger = LoggerFactory.getLogger(classOf[FlinkCEPClasses.FlinkCEPPipeline])
    val driver = props.getProperty("driver")
    val url = props.getProperty("url")
    val user = props.getProperty("user")
    val password = props.getProperty("password")
    Class.forName(driver)
    val connection = DriverManager.getConnection(url + "?user=" + user + "&password=" + password)
    this.parsedQuery = connection.prepareStatement(insertionString)
    LOG.info("\n\nConfiguring BD conection parameters\n\n")
  }
}
And this is my main program:
package FlinkCEPClasses

import PostgresConnector.PostgreSink
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.cep.PatternSelectFunction
import org.apache.flink.cep.pattern.conditions.SimpleCondition
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.core.fs.{FileSystem, Path}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.cep.scala.{CEP, PatternStream}
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import java.util.Properties
import org.apache.flink.api.common.ExecutionConfig
import org.slf4j.{Logger, LoggerFactory}

class FlinkCEPPipeline {

  val LOG: Logger = LoggerFactory.getLogger(classOf[FlinkCEPPipeline])
  LOG.info("\n\nStarting the pipeline...\n\n")

  var env : StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
  env.enableCheckpointing(10)
  env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
  env.setParallelism(1)

  //var input : DataStream[String] = env.readFile(new TextInputFormat(new Path("/home/luca/Desktop/lines")),"/home/luca/Desktop/lines",FileProcessingMode.PROCESS_CONTINUOUSLY,1)

  var input : DataStream[String] = env.readTextFile("/home/luca/Desktop/lines").name("Raw stream")

  var tupleStream : DataStream[(String,String,String,String)] = input.map(new S2PMapFunction()).name("Tuple Stream")

  var properties : Properties = new Properties()
  properties.setProperty("driver","org.postgresql.Driver")
  properties.setProperty("url","jdbc:postgresql://localhost:5432/mydb")
  properties.setProperty("user","luca")
  properties.setProperty("password","root")

  tupleStream.addSink(new PostgreSink(properties,env.getConfig)).name("Postgres Sink").setParallelism(1)
  tupleStream.writeAsText("/home/luca/Desktop/output",FileSystem.WriteMode.OVERWRITE).name("File Sink").setParallelism(1)

  env.execute()
}
My S2PMapFunction code:
package FlinkCEPClasses

import org.apache.flink.api.common.functions.MapFunction

case class S2PMapFunction() extends MapFunction[String,(String,String,String,String)] {

  override def map(value: String): (String, String, String, String) = {
    var tuple = value.replaceAllLiterally("(","").replaceAllLiterally(")","").split(',')
    (tuple(0), tuple(1), tuple(2), tuple(3))
  }
}
My pipeline works like this: I read lines from a file, map them to a tuple of strings, and use the data inside the tuples to save them in a Postgres DB
If you want to simulate the data, just create a file with lines in a format like this:
(field1,field2,pointx,pointy)
Edit
The execution order of the TwoPhaseCommitSinkFunction's methods is the following:
Starting pipeline...
beginTransaction
preCommit
beginTransaction
commit
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
invoke
abort
I'm not an expert on this topic, but a couple of guesses:
preCommit is called whenever Flink begins a checkpoint, and commit is called when the checkpoint is complete. These methods are called simply because checkpointing is happening, regardless of whether the sink has received any data.
Checkpointing is happening periodically, regardless of whether any data is flowing through your pipeline. Given your very short checkpointing interval (10 msec), it does seem plausible that the first checkpoint barrier will reach the sink before the source has managed to send it any data.
It also looks like you are assuming that only one transaction will be open at a time. I'm not sure that's strictly guaranteed, but so long as maxConcurrentCheckpoints is 1 (which is the default), you should be okay.
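For reference, a sketch of a more forgiving checkpoint configuration (the 10-second interval is an arbitrary illustration, not a value taken from the question):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Checkpoint every 10 seconds instead of every 10 ms, so records have a chance
// to reach the sink between beginTransaction/preCommit/commit cycles.
env.enableCheckpointing(10000)
// Keep at most one checkpoint (and therefore one open sink transaction) in flight.
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)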
So, here goes the "answer" for this question. Just to be clear: at this moment, the problem with the TwoPhaseCommitSinkFunction hasn't been solved yet. If what you're looking for is the original problem, then you should look for another answer. If you don't care about what you'll use as a sink, then maybe I can help you with that.
As suggested by @DavidAnderson, I started to study the Table API to see if it could solve my problem, which was using Flink to insert lines in my database table.
It turned out to be really simple, as you'll see.
Note: beware of the version you are using. My Flink version is 1.9.0.
Source code
package FlinkCEPClasses

import java.sql.Timestamp
import java.util.Properties

import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.{EnvironmentSettings, Table}
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.sinks.TableSink
import org.postgresql.Driver

class TableAPIPipeline {

  // --- normal pipeline initialization in this block ---
  var env : StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
  env.enableCheckpointing(10)
  env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
  env.setParallelism(1)

  var input : DataStream[String] = env.readTextFile("/home/luca/Desktop/lines").name("Original stream")

  var tupleStream : DataStream[(String,Timestamp,Double,Double)] = input.map(new S2PlacaMapFunction()).name("Tuple Stream")

  var properties : Properties = new Properties()
  properties.setProperty("driver","org.postgresql.Driver")
  properties.setProperty("url","jdbc:postgresql://localhost:5432/mydb")
  properties.setProperty("user","myuser")
  properties.setProperty("password","mypassword")
  // --- normal pipeline initialization in this block END ---

  // These two lines create what Flink calls a StreamTableEnvironment.
  // It seems pretty similar to a normal stream initialization.
  val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
  val tableEnv = StreamTableEnvironment.create(env,settings)

  // Since I wanted to sink data into a database, I used the JDBC TableSink,
  // because it is very intuitive and is an exact match for my need. You may
  // look for other TableSink classes that fit better in your solution.
  var tableSink : JDBCAppendTableSink = JDBCAppendTableSink.builder()
    .setBatchSize(1)
    .setDBUrl("jdbc:postgresql://localhost:5432/mydb")
    .setDrivername("org.postgresql.Driver")
    .setPassword("mypassword")
    .setUsername("myuser")
    .setQuery("INSERT INTO mytable (data1,data2,data3) VALUES (?,?,point(?,?))")
    .setParameterTypes(Types.STRING,Types.SQL_TIMESTAMP,Types.DOUBLE,Types.DOUBLE)
    .build()

  val fieldNames = Array("data1","data2","data3","data4")
  val fieldTypes = Array[TypeInformation[_]](Types.STRING,Types.SQL_TIMESTAMP,Types.DOUBLE, Types.DOUBLE)

  // This is the crucial part of the code: first, you need to register
  // your table sink, informing the name, the field names, field types and
  // the TableSink object.
  tableEnv.registerTableSink("postgres-table-sink",
    fieldNames,
    fieldTypes,
    tableSink
  )

  // Then, you transform your DataStream into a Table object.
  var table = tableEnv.fromDataStream(tupleStream)

  // Finally, you insert your stream data into the registered sink.
  table.insertInto("postgres-table-sink")

  env.execute()
}

monitor system users on raspberry pi with akka actors

I have a Raspberry Pi on my network with an LED strip attached to it.
My purpose is to create a jar file that will sit on the Pi, monitor system events such as logins and load average, and drive the LED based on those inputs.
To continuously monitor the logged-in users, I am trying to use Akka actors. Using the examples provided here, this is what I've got so far:
import com.pi4j.io.gpio.GpioFactory
import com.pi4j.io.gpio.RaspiPin
import sys.process._
import akka.actor.{Actor, Props, ActorSystem}
import scala.concurrent.duration._

val who :String = "who".!!

class Blinker extends Actor {

  private def gpio = GpioFactory.getInstance
  private def led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }

  val blinker = system.actorOf(Props(classOf[Blinker], this))

  val cancellable = system.scheduler.schedule(
    0 milliseconds,
    50 milliseconds,
    blinker,
    who)
}
However, system is not recognised by my IDE (IntelliJ); it says "cannot resolve symbol".
I also have a main object like this:
object ledStrip {
  def main(args: Array[String]): Unit = {
    val blink = new Blinker
    // blink.receive
  }
}
In main, I'm not quite sure how to initialise the application.
Needless to say, this is my first time writing a Scala program.
Help?
Edit:
Here is the updated program after incorporating what Michal has said:
class Blinker extends Actor {

  val who: String = "who".!!
  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }

  val system = ActorSystem()
}

object ledStrip extends Blinker {
  def main(args: Array[String]): Unit = {
    val blinker = system.actorOf(Props(classOf[Blinker], this))
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        who)
  }
}
This program compiles fine, but throws the following error upon execution:
Exception in thread "main" java.lang.ExceptionInInitializerError
    at ledStrip.main(ledStrip.scala)
Caused by: akka.actor.ActorInitializationException: You cannot create an instance
of [ledStrip$] explicitly using the constructor (new). You have to use
one of the 'actorOf' factory methods to create a new actor. See the documentation.
    at akka.actor.ActorInitializationException$.apply(Actor.scala:194)
    at akka.actor.Actor.$init$(Actor.scala:472)
    at Blinker.<init>(ledStrip.scala:15)
    at ledStrip$.<init>(ledStrip.scala:34)
    at ledStrip$.<clinit>(ledStrip.scala)
    ... 1 more
Edit 2
Code that compiles and runs (behaviour is still not as desired: blink(1500) is never executed when user pi logs out from the shell):
object sysUser {
  val who: String = "who".!!
}

class Blinker extends Actor {

  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if x.toString.contains("pi") => led.blink(50)
    case x if x.toString.contains("moocow") => println("falalalala")
    case _ => led.blink(1500)
  }
}

object ledStrip {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem()
    val blinker = system.actorOf(Props[Blinker], "blinker")
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        sysUser.who)
  }
}
Well, it looks like you haven't defined "system" anywhere. See this example for instance:
https://doc.akka.io/docs/akka/current/actors.html#here-is-another-example-that-you-can-edit-and-run-in-the-browser-
you'll find this line there:
val system = ActorSystem("pingpong")
That's what creates the ActorSystem and defines the val called "system", which you then call methods on.
In the main, I don't think you want to create another instance with "new Blinker", just use:
system.actorOf(Props[Blinker], "blinker")
(which you are already doing and putting it into the "blinker" val)
It seems to be just an Akka usage issue. I'm not sure why you do some things that seem strange to me, so I changed them; see change1, change2 and change3, FYI.
class Blinker extends Actor {

  val who: String = "who".!!
  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }
}

object ledStrip { // change1
  def main(args: Array[String]): Unit = {
    val system = ActorSystem() // change2
    val blinker = system.actorOf(Props(classOf[Blinker])) // change3
    val who: String = "who".!! // defined here as well, so it can be sent as the message
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        who)
  }
}

How can I reload akka scheduler when Play framework restart

I'm currently developing an application using the Play Scala framework and an Akka actor to send email when a CRUD action on the database happens. I have a problem when the server restarts.
Is there any way to reload the Akka scheduler when the Play framework restarts, or to persist the scheduler so that it runs again when the app restarts? This is my code, for example.
@Singleton
class SendEmailSaveDraftActor @Inject()(system: ActorSystem,
                                        config: Configuration,
                                        mailService: MailService)(implicit exec: ExecutionContext) extends Actor {

  val userSchedules: mutable.HashMap[String, Seq[Cancellable]] = mutable.HashMap()

  private val emailAdmin = config.getString("email.admin").getOrElse("")
  private val interval24Hours = config.getInt("batch.saveDarf24.extractIntervalHours").getOrElse(1)
  private val interval72Hours = config.getInt("batch.saveDarf72.extractIntervalHours").getOrElse(1)
  private val interval14Days = config.getInt("batch.saveDarf14.extractIntervalDays").getOrElse(1)

  def receive = {
    case s: SendEmailSaveDrafModel =>
      userSchedules.get(s.userName) match {
        case Some(schedules) =>
          schedules.map(_.cancel())
          userSchedules.update(s.userName, getSchedules(s))
        case None =>
          userSchedules += (s.userName -> getSchedules(s))
      }
    case userName: String =>
      userSchedules.get(userName) match {
        case Some(schedules) =>
          schedules.map(_.cancel())
        case None =>
      }
  }

  def getSchedules(s: SendEmailSaveDrafModel): Seq[Cancellable] = {
    val schedule1 = system.scheduler.scheduleOnce(5 minutes) {
      mailService Send EmailSaveDraft24Hours(to = Seq(s.to), bccEmails = Seq(emailAdmin), id = s.id, orderNo = s.orderNo, designId = s.designId)
    }
    val schedule2 = system.scheduler.scheduleOnce(10 minutes) {
      mailService Send EmailSaveDraft72Hours(to = Seq(s.to), bccEmails = Seq(emailAdmin), id = s.id, orderNo = s.orderNo, designId = s.designId)
    }
    val schedule3 = system.scheduler.scheduleOnce(15 minutes) {
      mailService Send EmailSaveDraft14Days(to = Seq(s.to), bccEmails = Seq(emailAdmin), id = s.id, orderNo = s.orderNo, designId = s.designId)
    }
    Seq(schedule1, schedule2, schedule3)
  }
}
I am using the Akka scheduler as shown in the code below.
Each user can log in to my application and create a scheduler for a background task; I am using the Akka scheduler for this purpose.
public Cancellable buildScheduler(String actorName, SchedulerActorMessage message, long interval,
                                  TimeUnit timeUnit, long initialDelay, String actorMapKey) {
    ActorRef daemonRef = actorSystem.actorOf(Props.create(SchedulerActor.class), actorName);
    Cancellable cancellableActor = actorSystem.scheduler().schedule(FiniteDuration.apply(initialDelay, timeUnit),
            FiniteDuration.apply(interval, timeUnit), daemonRef, message,
            actorSystem.dispatcher(), ActorRef.noSender());
    actorMap.put(actorMapKey, cancellableActor);
    return cancellableActor;
}
I am storing the necessary information in the DB, and when the application starts, I am starting the schedulers again in the global class.
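A sketch of that startup step in Play's dependency-injection style (PersistedSchedule, ScheduleRepository and SchedulerBootstrap are hypothetical names for illustration; the original uses a global class instead):

import javax.inject.{Inject, Singleton}
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration.FiniteDuration
import akka.actor.ActorSystem

// Hypothetical persisted form of a schedule and its repository.
final case class PersistedSchedule(userName: String, delay: FiniteDuration)

trait ScheduleRepository {
  def loadAll(): Future[Seq[PersistedSchedule]]
}

// Eager singleton: Play instantiates it on startup, so persisted schedules are
// re-registered with the Akka scheduler after every restart. Wire it eagerly in a
// Guice module, e.g. bind(classOf[SchedulerBootstrap]).asEagerSingleton().
@Singleton
class SchedulerBootstrap @Inject()(system: ActorSystem, repo: ScheduleRepository)
                                  (implicit ec: ExecutionContext) {
  repo.loadAll().foreach { schedules =>
    schedules.foreach { s =>
      system.scheduler.scheduleOnce(s.delay) {
        // send the corresponding message to the email actor here, as in the question's code
      }
    }
  }
}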