Send action from Eff in eval in Halogen? - purescript

I'm trying to throttle a search field in purescript-halogen. What I have so far:
eval (Search search next) = do
  State st <- get
  -- clear last timeout
  liftEff' $ maybe (return unit) T.clearTimeout st.searchTimeout
  -- new timeout
  t <- liftEff' $ T.timeout 1000 $ return unit -- how to send action from here???
  modify (\(State s) -> State $ s { searchTimeout = Just t })
  pure next
I thought about saving the UI driver in a global Var and sending new actions from there, but this seems very hacky to me.
Or maybe there's another way to do that?

This is the kind of thing you'll probably need to create an EventSource for. An EventSource allows you to subscribe to something somewhat like a signal/stream/event listener and then raise actions.
This isn't quite what you want, but is an example of using an EventSource to run an interval based timer: https://github.com/slamdata/slamdata/blob/2ab704302292406e838e1a6e5541aa06ad47e952/src/Notebook/Cell/Component.purs#L213-L217

Related

Use current Celery task in chord?

I'd like to run tasks in parallel that have a data dependency at the beginning of the first task. It seems that I should be able to start a chord with the current task in the header group that's used as the args for the body callback. I don't see a way to reference the signature of the current task in the documentation, but is there a way to do this?
I was thinking it would be something like this with the get_signature() being the missing piece:
@app.task(bind=True)
def chord_test(self, id_) -> int:
    data, next_id = get_data(id_)
    chord([self.get_signature(), chord_test.s(next_id)])(handle_results.s())
    return expensive_processing(data)

How to pass and get attributes in Gatling session from and to "exec" blocks

I'm pretty new at Scala/Gatling, so forgive me if you see an anti-pattern or something wrong. I have a Gatling scenario in which I have to run some external bash scripts and save some variables for use in another exec block. I've tried calling the .exec right after the exec(session => { ... }) block, and I've also tried calling it as a method in another object.
exec(session => {
  val scriptOutput = s"src/main/resources/thepath/myscript.sh ${arg1} ${arg2}".!!
  val x_variable = "123" + scriptOutput
  session.set("x_variable", x_variable)
})
.exec(MyClient.calling)
In "MyClient", I need to use the value of "x_variable", I currently have something like this:
def calling() = {
  exec(http("POST to ${x_variable}")
    .post("/${x_variable}"))
}
But when doing so it doesn't work: the POST call is made, but "x_variable" is empty. To summarize, the question is how to pass that session information to any subsequent exec block (right after, or in another object), and how to consume it from the session.
The code described above is actually working now; it seems I had some "trash" in the environment. After doing an mvn clean install it worked as expected.
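For reference, a minimal sketch of the pattern (reusing the names from the question; the scenario name, script arguments and base URL are placeholders). The key points are that the session function must return the new Session produced by session.set, and that later exec blocks read the attribute back via Gatling EL (${x_variable}):
import scala.sys.process._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

object MyClient {
  // Gatling resolves "${x_variable}" from the virtual user's session at request time.
  val calling = exec(
    http("POST to ${x_variable}")
      .post("/${x_variable}")
  )
}

class PassSessionAttributeSimulation extends Simulation {

  val scn = scenario("pass session attribute between exec blocks")
    .exec { session =>
      // Run the external script and capture its stdout.
      val scriptOutput = "src/main/resources/thepath/myscript.sh arg1 arg2".!!.trim
      // session.set returns a NEW Session; it must be the value returned from
      // this block, otherwise the attribute is silently dropped.
      session.set("x_variable", "123" + scriptOutput)
    }
    .exec(MyClient.calling)

  setUp(scn.inject(atOnceUsers(1)))
    .protocols(http.baseUrl("http://localhost:8080")) // placeholder base URL
}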

Kafka commit with Akka and LogRotator

I am trying to use the Consumer.committableSource to read data from Kafka with Akka. I would then like to write the data in files on a shared folder.
When committing, we usually use something like via(Committer.flow(committerSettings)).
However, this method does not return the values of the Kafka stream, so afterward I cannot call something like .runWith(LogRotatorSink.withSinkFactory(rotator, sink)) to write the data.
Here's the code without commit:
Consumer.committableSource(settings, Subscriptions.topics(kafkaTopics.toSet))
  .via(processor)
  .prepend(headerCSVSource)
  .via(CsvFormatting.format(delimiter = CsvFormatting.SemiColon))
  .runWith(LogRotatorSink.withSinkFactory(rotator, sink))
Here's what I think I need:
Consumer
  .committableSource(settings, Subscriptions.topics(kafkaTopics.toSet))
  .via(processor)
  .prepend(headerCSVSource)
  .via(CsvFormatting.format(delimiter = CsvFormatting.SemiColon))
  .via(Committer.flow(committerSettings))
  .runWith(LogRotatorSink.withSinkFactory(rotator, sink))
But that won't work, because Committer.flow does not emit the stream's values downstream (it is a Flow[Committable, Done, NotUsed]).
What I need is to commit the offset only after the data has been written in the file.
If you feel that other options (like using plainSource / auto-commit) would be more appropriate I am open to considering them.
It looks like you need to pass each stream element to one sink and, once that has succeeded, to another.
You can run a substream inside your stream, something along these lines:
.via(CsvFormatting.format(delimiter = CsvFormatting.SemiColon))
.mapAsync(1) { c =>
  Source.single(c).runWith(LogRotatorSink.withSinkFactory(rotator, sink)).map(_ => c)
}
.runWith(Committer.sink(committerSettings))
It should work; however, after some thought, I think it would be best not to use a sink to write the logs, but some other approach that doesn't terminate the stream.
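For context, here is a rough sketch of that shape, reusing the names from the question (settings, committerSettings, kafkaTopics, rotator, sink) and assuming, purely for illustration, that processor is adjusted to emit each record as a pair of its committable offset and the already CSV-formatted ByteString, so the offset is still available when it is time to commit:
import akka.actor.ActorSystem
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.stream.alpakka.file.scaladsl.LogRotatorSink
import akka.stream.scaladsl.Source

implicit val system: ActorSystem = ActorSystem("kafka-to-files")
import system.dispatcher // ExecutionContext for mapping over the write's Future

// settings, committerSettings, kafkaTopics, rotator and sink are the values from
// the question; processor is assumed here to emit (CommittableOffset, ByteString).
Consumer
  .committableSource(settings, Subscriptions.topics(kafkaTopics.toSet))
  .via(processor)
  .mapAsync(1) { case (offset, bytes) =>
    // Write this record through a one-element substream into the rotating file
    // sink, and only emit its offset downstream once the write has completed.
    Source.single(bytes)
      .runWith(LogRotatorSink.withSinkFactory(rotator, sink))
      .map(_ => offset)
  }
  .runWith(Committer.sink(committerSettings)) // commit only after the write
Note that this materializes the rotating sink once per element, which is exactly why a non-terminating alternative for writing the files may be preferable, as mentioned above.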

Any way to ensure frisby.js test API calls go in sequential order?

I'm trying a simple sequence of tests on an API:
Create a user resource with a POST
Request the user resource with a GET
Delete the user resource with a DELETE
I have a single frisby test spec file, mytest_spec.js. I've broken the test into 3 discrete steps, each with its own toss(), like:
f1 = frisby.create("Create");
f1.post(post_url, {user_id: 1});
f1.expectStatus(201);
f1.toss();
// stuff...
f2 = frisby.create("Get");
f2.get(get_url);
f2.expectStatus(200);
f2.toss();
// Stuff...
f3 = frisby.create("delete");
f3.delete(delete_url);
f3.expectStatus(200);
f3.toss();
Pretty basic stuff, right? However, as far as I can tell there is no guarantee they'll execute in order, since they're asynchronous, so I might get a 404 on test 2 or 3 if the user doesn't exist by the time they run.
Does anyone know the correct way to create sequential tests in Frisby?
As you correctly pointed out, Frisby.js is asynchronous. There are several approaches to force it to run more synchronously. The easiest, though not the cleanest, is to use .after(() => ...); you can find more about after() in the Frisby.js docs.

Celery - error handling and data storage

I'm trying to better understand common strategies regarding results and errors in Celery.
I see that tasks have statuses/states and that results are stored if requested -- when would I use this data? Should error handling and data storage be contained within the task?
Here is a sample scenario, in case it helps better understand my objective:
I have a geocoding task that geocodes user addresses. If the task fails or succeeds, I'd like to update a field in the database letting the user know (error handling). On success, I'd like the geocoded data to be inserted into the database (data storage).
What approach should I take?
Let me preface this by saying that I'm still getting a feel for Celery myself. That being said, I have some general inclinations about how I'd go about tackling this, and since no one else has responded, I'll give it a shot.
Based on what you've written, a relatively simple (though I suspect non-optimized) solution is to follow the broad contours of the blog comment spam task example from the documentation.
app.models.py
class Address(models.Model):
    GEOCODE_STATUS_CHOICES = (
        ('pr', 'pre-check'),
        ('su', 'success'),
        ('fl', 'failed'),
    )
    address = models.TextField()
    ...
    geocode = models.TextField()
    geocode_status = models.CharField(max_length=2,
                                      choices=GEOCODE_STATUS_CHOICES,
                                      default='pr')

class AppUser(models.Model):
    name = models.CharField(max_length=100)
    ...
    address = models.ForeignKey(Address)
app.tasks.py
from celery import task
from app.models import Address, AppUser
from some_module import geocode_function  # assuming this returns a string

@task()
def get_geocode(appuser_pk):
    user = AppUser.objects.get(pk=appuser_pk)
    address = user.address
    try:
        result = geocode_function(address.address)
        address.geocode = result
        address.geocode_status = 'su'  # set address object as successful
        address.save()
        return address.geocode  # this is optional -- your task doesn't have to return anything
        # On the other hand, you could also choose to decouple the geocode
        # function from the database update for the object instance.
        # Also, if you're thinking about chaining tasks together, you
        # might think about if it's advantageous to pass a parameter as
        # an input or partial input into the child task.
    except Exception as e:
        address.geocode_status = 'fl'  # address object fails
        address.save()
        # do_something_else()
        raise  # re-raise the error, in case you want to trigger retries, etc.
app.views.py
from app.tasks import *
from app.models import *
from django.shortcuts import get_object_or_404

def geocode_for_address(request, app_user_pk):
    app_user = get_object_or_404(AppUser, pk=app_user_pk)
    # ... etc. etc. -- somewhere calling your tasks with appropriate args/kwargs
I believe this meets the minimal requirements you've outlined above. I've intentionally left the view undeveloped since I don't have a sense of how exactly you want to trigger it. It sounds like you also may want some sort of user notification when their address can't be geocoded ("I'd like to update a field in the database letting the user know"). Without knowing more about the specifics of this requirement, it sounds like something that might be best accomplished in your HTML templates (if the instance's attribute value is X, display Q in the template) or by using Django signals (for example, set up a signal for when user.address.geocode_status switches to failure, say by emailing the user to let them know, etc.).
In the comments to the code above, I mentioned the possibility of decoupling and chaining the component parts of the get_geocode task. You could also decouple the exception handling from the get_geocode task by writing a custom error handler task and using the link_error parameter (for instance, add.apply_async((2, 2), link_error=error_handler.s()), where error_handler has been defined as a task in app.tasks.py). Also, whether you choose to handle errors in the main task (get_geocode) or via a linked error handler, I would think you'd want to get much more specific about how to handle different sorts of errors (e.g., handle connection errors differently from address data that is incorrectly formatted).
I suspect there are better approaches, and I'm just beginning to understand how inventive you can get by chaining tasks, using groups and chords, etc. Hope this helps at least get you thinking about some of the possibilities. I'll leave it to others to recommend best practices.