PubSub in Scala Redis

I am new to the Scala and Redis world and I am trying to do something simple:
I want to subscribe to a channel so that I am notified when new keys are added (my idea is just to set the key and then publish to a channel that the key was added).
According to the Redis website, scala-redis is the most up-to-date of the recommended clients, so I decided to use it.
I am having some problems with the subscribing part. I have the following code:
import com.redis._
val r = new RedisClient("localhost", 6379)
r.subscribe("modifications","modifications","subscribe")
I am getting the following error message:
error: missing arguments for method subscribe in trait PubSub; follow
this method with `_' if you want to treat it as a partially applied
function
I was checking the documentation and the function looks like this:
def subscribe(channel: String, channels: String*)(fn: PubSubMessage => Any) {
  if (pubSub == true) { // already pubsub ing
    subscribeRaw(channel, channels: _*)
  } else {
    pubSub = true
    subscribeRaw(channel, channels: _*)
    new Consumer(fn).start
  }
}
To be honest, I don't know what I am doing wrong. If someone could help me with some ideas, it would be great.
Thanks

subscribe is curried: the channels go in the first parameter list, and the second parameter list takes a function that handles each received message. You need to provide that handler:
r.subscribe("modifications", "modifications", "subscribe") { m => println(m) }
Unfortunately most of the documentation is in the code, but it might help if you take a look at the PubSubDemo or PubSubSpec.
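For a fuller picture, here is a sketch that pattern-matches on the PubSubMessage cases defined in scala-redis (S for a subscribe confirmation, U for unsubscribe, M for a published message, E for an error) and also shows the publishing side of your idea; the channel and key names are just placeholders:

import com.redis._

val r = new RedisClient("localhost", 6379)

// Subscriber side: react to each kind of PubSubMessage
r.subscribe("modifications") {
  case M(channel, msg)   => println(s"key added: $msg (channel: $channel)")
  case S(channel, count) => println(s"subscribed to $channel ($count total)")
  case U(channel, count) => println(s"unsubscribed from $channel ($count total)")
  case E(exception)      => exception.printStackTrace()
}

// Publisher side (use a separate client, since a subscribed
// connection cannot issue regular commands): set the key, then
// announce it on the channel
val p = new RedisClient("localhost", 6379)
p.set("some-key", "some-value")
p.publish("modifications", "some-key")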

Related

A journey from akka-stream to fs2 - how to define an akka-stream http flow like stage in fs2 using http4s

I'm on my journey to deepen my knowledge of fs2, and I want to try fs2-kafka for a use case where I would replace Akka Stream. The idea is simple: read from Kafka, post the data via HTTP request to a sink, then commit back to Kafka on success. So far I can't really figure out the HTTP part. In Akka Stream / Akka HTTP you get a flow for that out of the box: https://doc.akka.io/docs/akka-http/current/client-side/host-level.html#using-a-host-connection-pool
Flow[(HttpRequest, T), (Try[HttpResponse], T), HostConnectionPool]
which integrates flawlessly with Akka Stream.
I was trying to see if I could do something similar with http4s and fs2.
Does anyone have any reference, code sample, blog post or whatnot that shows how to do that kind of integration? So far the only thing I could think of was wrapping the stream into the use method of the client resource, i.e.
BlazeClientBuilder[IO](IORuntime.global.compute).resource.use { ..... run stream here ..... }
Even then I am not sure about the entire thing.
The thing with the Typelevel ecosystem is that everything is just a library: you don't need examples of how each pair of libraries interacts, you just need to understand how each library works on its own and the basic rules of composition.
def createClient(/** whatever arguments you need */): Resource[IO, Client[IO]] = {
  // Fill this based on the documentation of the client of your choice;
  // I would recommend the Ember client from http4s:
  // https://http4s.org/v0.23/api/org/http4s/ember/client/emberclientbuilder
}

def sendHttpRequest(client: Client[IO])(data: Data): IO[Result] = {
  // Fill this based on the documentation of your client:
  // https://http4s.org/v0.23/client/
  // https://http4s.org/v0.23/api/org/http4s/client/client
}

def getStreamOfRecords(/** whatever arguments you need */): Stream[IO, CommittableConsumerRecord[IO, Key, Data]] = {
  // Fill this based on the documentation of fs2-kafka:
  // https://fd4s.github.io/fs2-kafka/docs/consumers
}

def program(/** whatever arguments you need */): Stream[IO, Unit] = {
  // Based on the documentation of fs2 and fs2-kafka I would guess something like this:
  Stream.resource(createClient(...)).flatMap { client =>
    getStreamOfRecords(...).evalMapFilter { committable =>
      sendHttpRequest(client)(data = committable.record.value).map { result =>
        if (result.isSuccess) Some(committable.offset)
        else None
      }
    }.through(commitBatchWithin(...))
  }
}

object Main extends IOApp.Simple {
  override final val run: IO[Unit] =
    program(...).compile.drain
}
Note that I wrote all this off the top of my head with just a quick glance at the documentation, so you will need to change many things (especially types, like Data and Result), as well as tune things like error handling and when to commit back to Kafka.
However, I expect this helps you get an idea of how to structure your code.
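To make it a bit more concrete, here is a hypothetical way to fill in sendHttpRequest with http4s, assuming Data has a circe Encoder, Result just records whether the call succeeded, and the sink URL is a placeholder:

import cats.effect.IO
import io.circe.Encoder
import org.http4s.{Method, Request, Uri}
import org.http4s.circe.CirceEntityEncoder._
import org.http4s.client.Client

final case class Result(isSuccess: Boolean)

def sendHttpRequest(client: Client[IO])(data: Data)(implicit ev: Encoder[Data]): IO[Result] = {
  val request = Request[IO](
    method = Method.POST,
    uri = Uri.unsafeFromString("http://example.com/sink") // hypothetical sink endpoint
  ).withEntity(data) // JSON-encodes the payload via the circe EntityEncoder

  // client.status runs the request and gives back only the response status
  client.status(request).map(status => Result(isSuccess = status.isSuccess))
}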

Using start_bundle() in apache-beam job not working. Unpickleable storage.Client()

I'm getting this error
pickle.PicklingError: Pickling client objects is explicitly not
supported. Clients have non-trivial state that is local and
unpickleable.
When trying to use beam.ParDo to call a function that looks like this
class ExtractBlobs(beam.DoFn):
    def start_bundle(self):
        self.storageClient = storage.Client()

    def process(self, element):
        client = self.storageClient
        bucket = client.get_bucket(element)
        blobs = list(bucket.list_blobs(max_results=100))
        return blobs
I thought the whole point of start_bundle was to initialize self.someProperty and then use that self.someProperty in the process method to avoid the pickling problem (see the sources below).
Could anyone point me in the right direction for solving this?
[+] What I've read:
https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3191
How do I resolve a Pickling Error on class apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum?
UPDATE: The issue was actually a library issue. I had to use the correct apache-beam SDK version together with matching google-cloud library versions:
gapic-google-cloud-pubsub-v1==0.15.4
gax-google-logging-v2==0.8.3
gax-google-pubsub-v1==0.8.3
google-api-core==1.1.2
google-api-python-client==1.6.7
google-apitools==0.5.10
google-auth==1.4.1
google-auth-httplib2==0.0.3
google-cloud-bigquery==1.1.0
google-cloud-core==0.28.1
google-cloud-datastore==1.6.0
google-cloud-pubsub==0.26.0
google-cloud-storage==1.10.0
google-gax==0.12.5
apache-beam==2.3.0
I was able to solve this by what seems to be a combination of things: first, I don't serialize anything (ugly one-liner in the yield), and second, I use threading.local():
class ExtractBlobs(beam.DoFn):
    def start_bundle(self):
        # Created per bundle on the worker, so the client is never pickled
        self.threadLocal = threading.local()
        self.threadLocal.client = storage.Client()

    def process(self, element):
        yield list(self.threadLocal.client.get_bucket(element).list_blobs(max_results=100))

How to complete Akka Http response with Stream and Custom Status Code

I've got an akka-http application that uses akka-streams for data processing. So, it makes sense to complete the request with a Source[Result, _] to get backpressure across the HTTP boundary for free.
Versions:
akka-http 10.0.7
akka-streams 2.5.2
akka 2.5.2
This is the simplified version of the code, and it works just fine.
pathEnd {
  post {
    entity(asSourceOf[Request]) { _ =>
      complete {
        Source.single("ok")
      }
    }
  }
}
Since this endpoint is supposed to create an entity, instead of returning 200 OK to the requester I'd like to return a 201 Created status code. However, I wasn't able to find a way to do that:
complete { Created -> Source.single("ok") } fails compilation with Type mismatch, expected: ToResponseMarshallable, actual: (StatusCodes.Success, Source[String, NotUsed])
complete { Source.single((Created, "ok")) } fails with Type mismatch, expected: ToResponseMarshallable, actual: Source[(StatusCodes.Success, String), NotUsed]
complete(Created) { Source.single("ok") } fails with Type mismatch, expected: RequestContext, actual: Source[String, NotUsed]
complete(Created, Source.single("ok")) fails with too many arguments for method complete(m: => ToResponseMarshallable)
It looks like a custom marshaller might be a way to achieve this, but it would basically mean one marshaller per endpoint, which isn't very convenient or clear.
So, the question is: is there a way (more convenient than a custom marshaller) to complete the request with a Source[_, _] while also providing a status code?
From the documentation:
complete(Created -> "bar")
If you want to provide some Source of data then construct the HttpResponse and pass it to complete:
import akka.http.scaladsl.model.ContentTypes
import akka.http.scaladsl.model.HttpEntity.{Chunked, ChunkStreamPart}

complete {
  val entity =
    Chunked(ContentTypes.`text/plain(UTF-8)`,
            Source.single("ok").map(ChunkStreamPart.apply))
  HttpResponse(status = Created, entity = entity)
}
I hit this problem and took the approach of using mapResponse to override the status code. This is the simplest approach I've found.
mapResponse(_.copy(status = StatusCodes.Accepted)) {
  complete {
    Source.single("ok")
  }
}
The drawback of Ramon's answer is that you become responsible both for marshalling the stream (to a ByteString) and for content negotiation.
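Applied to the question's actual goal (201 Created), the mapResponse approach would look like this in the original route; a sketch that keeps the default stream marshalling untouched and only swaps the status code:

pathEnd {
  post {
    entity(asSourceOf[Request]) { _ =>
      mapResponse(_.copy(status = StatusCodes.Created)) {
        complete {
          Source.single("ok")
        }
      }
    }
  }
}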

parse.form method defined in play framework 2.2.x?

The Play documentation mentions a parse.form method which can be used to bind an incoming request. I am using Play 2.2.x. Is this method defined in this release? I am getting the compilation error
value form is not a member of object controllers.Application.parse
def regSubmit = Action(parse.form(userForm) { implicit request =>
  val userData = request.body
  Ok(views.html.regconf("Registration Successful")(userForm.fill(userData)))
})
As far as I can tell from the 2.2.x source code, parse.form did not exist then, and was only introduced in 2.4.x.
Any reason not to use the "equivalent" bindFromRequest and deal with errors that might be present? Along the lines of:
def regSubmit = Action { implicit request =>
  userForm.bindFromRequest.fold(
    errors =>
      //-- 'errors' is a form with the FormErrors set
      Ok(views.html.register(errors)), //-- register is the initial form
    userData =>
      //-- 'userData' is the case class that userForm maps to
      Ok(views.html.regconf("Registration Successful")(userForm.fill(userData)))
  )
}
I have not checked the source code to see whether it is in 2.2.x. It is not mentioned on the ScalaForms page of the docs.
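For reference, once on Play 2.4+, the parse.form body parser from the docs is used like this; note that the parser goes in its own argument list to Action, and a failed binding short-circuits with 400 Bad Request by default (customisable via the onErrors parameter):

def regSubmit = Action(parse.form(userForm)) { implicit request =>
  // request.body is already the bound case class; binding errors
  // never reach this block (they short-circuit with the error Result)
  val userData = request.body
  Ok(views.html.regconf("Registration Successful")(userForm.fill(userData)))
}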

Play 2.3 - Changing to WebSocket.tryAccept from async

I'm rather new to Scala, so I think this might be a very small problem.
I'm currently trying to change the method chat from using the deprecated WebSocket.async to WebSocket.tryAccept. The application uses the sample chat found in the PlayFramework websocket-chat sample.
I'm having trouble creating the complex Future type that the method requires.
This is the old method:
def chat() = WebSocket.async[JsValue] { request =>
  ChatRoom.join("User: 1")
}
New method:
def chat2() = WebSocket.tryAccept[JsValue] { request =>
  try {
    // ChatRoom.join returns (iteratee, enumerator)
    ChatRoom.join("User: 1").map(e => Right(e))
  } catch {
    case e: Exception =>
      Left(Ok("Failed")) // Error here
  }
}
My error message:
found : Left[Result,Nothing]
required: Future[Either[Result,(Iteratee[JsValue, _], Enumerator[JsValue])]]
I have no idea how I am supposed to create such a complex result for such a simple message.
Although ChatRoom.join("User: 1").map(e => Right(e)) doesn't show any errors now, I'm unsure if this is the correct implementation.
I'm not in front of an IDE at the moment, so I can't answer fully, but the return type it's asking for isn't as complex as it seems. An Either is a Left or a Right in the same way that an Option is a Some or a None. So what it's asking for is a Future (which WebSocket.async should also have required) that contains either a Left[Result] (the fail-to-connect case) or a Right[(Iteratee, Enumerator)] (the success case). Assuming that ChatRoom.join returns a Future[(Iteratee, Enumerator)], the map operation is simply wrapping that in a Right. The first thing I'd try is wrapping Left(Ok("Failed")) in a Future and see what happens.
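A sketch of that suggestion: the catch branch wraps the error Result in an already-completed Future so both branches have the type the compiler asked for (this assumes ChatRoom.join does return a Future of the (Iteratee, Enumerator) pair, as in the sample):

import scala.concurrent.Future
import play.api.libs.concurrent.Execution.Implicits.defaultContext

def chat2() = WebSocket.tryAccept[JsValue] { request =>
  try {
    // Success case: wrap the (iteratee, enumerator) pair in a Right
    ChatRoom.join("User: 1").map(e => Right(e))
  } catch {
    case e: Exception =>
      // Failure case: an already-completed Future holding the Left
      Future.successful(Left(Ok("Failed")))
  }
}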