The problem I'm facing is:
I have a request and need to check 3 types of responses for this request. Each time I need to slightly modify my request body before sending it.
dredd --names:
info: Users > User Operations > Update User > Example 1
skip: PUT (204) myurl/users/userid-123
info: Users > User Operations > Update User > Example 2
skip: PUT (422) myurl/users/userid-123
info: Users > User Operations > Update User > Example 3
skip: PUT (429) myurl/users/userid-123
My idea was to do something "cucumber-style" in the before hook:
before(/^Users > User Operations > Update User > Example (1|2|3)$/) do |myvar|
  # here, use myvar (the captured example number) to make the necessary changes
end
But after several trials this doesn't seem to work; it looks like the Ruby hooks don't support variables (regular expressions) in transaction names.
Any ideas what the proper approach would be here, since having a separate before hook for every single request doesn't seem right?
I don't think the Ruby hooks support regular expressions in the transaction names. A simple workaround would be to catch all transactions and distinguish them in the hook itself:
before_each do |transaction|
  if transaction.name.match(/Example (1|2|3)$/)
    # ...
  end
end
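A minimal sketch of a complete hooks file along these lines (whether the transaction exposes attribute-style access like transaction.name above or hash access like transaction['name'] depends on the hooks version, so adjust accordingly; the per-example request bodies are made up for illustration):

require 'dredd_hooks/methods'
include DreddHooks::Methods

before_each do |transaction|
  m = transaction.name.match(/Example (\d+)$/)
  next unless m
  # Tweak the request body depending on which example is running.
  # Attribute-style access is assumed here to match the answer above;
  # use transaction['request']['body'] for hash-style hooks versions.
  case m[1].to_i
  when 1 then transaction.request.body = '{"name": "valid user"}' # expect 204
  when 2 then transaction.request.body = '{"name": ""}'           # expect 422
  when 3 then transaction.request.body = '{"name": "flood"}'      # expect 429
  end
end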
I have a flow with data associated with users. I also have a state for each user, which I can get asynchronously from a DB.
I want to separate my flow into one subflow per user, and load the state for each user when materializing the subflow, so that the elements of the subflow can be treated with respect to this state.
If I don't want to merge the subflows downstream, I can do something with groupBy and Sink.lazyInit:
def getState(userId: UserId): Future[UserState] = ...
def getUserId(element: Element): UserId = ...
def treatUser(state: UserState): Sink[Element, _] = ...
val treatByUser: Sink[Element, _] =
  Flow[Element]
    .groupBy(Int.MaxValue, getUserId)
    .to(Sink.lazyInit(
      elt => getState(getUserId(elt)).map(treatUser),
      ??? // this is never called, since the subflow is created when an element comes
    ))
However, this does not work if treatUser becomes a Flow, since there is no equivalent of Sink.lazyInit for flows.
Since the subflows of groupBy are materialized only when a new element is pushed, it should be possible to use this element to materialize the subflow, but I wasn't able to adapt the source code of groupBy so that this works consistently. Likewise, Sink.lazyInit doesn't seem to be easily translatable to the Flow case.
Any idea how to solve this issue?
The relevant Akka issue you have to look at is #20129: add Sink.dynamic and Flow.dynamic.
In the associated PR #20579 they actually implemented the LazySink functionality.
They are planning to do LazyFlow next:
Will do next lazyFlow with similar signature.
Unfortunately you will have to wait for that functionality to be implemented in Akka, or write it yourself (and then consider submitting a PR to Akka).
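In the meantime, one way to emulate a lazy flow is to peek at the first element of each substream with prefixAndTail(1), fetch the state, and then re-prepend that element. A sketch, assuming a Flow-returning variant of treatUser (here called treatUserFlow, with some output type Out):

import akka.NotUsed
import akka.stream.scaladsl.{Flow, Source}

def treatUserFlow(state: UserState): Flow[Element, Out, NotUsed] = ??? // assumed Flow variant of treatUser

val lazyUserFlow: Flow[Element, Out, NotUsed] =
  Flow[Element]
    .prefixAndTail(1)
    .flatMapConcat {
      case (Seq(first), tail) =>
        // Materialize the rest of the substream only once the state is known.
        Source
          .fromFuture(getState(getUserId(first)))
          .flatMapConcat(state => (Source.single(first) ++ tail).via(treatUserFlow(state)))
      case (_, _) =>
        Source.empty[Out] // substream completed without any element
    }

val treatByUser =
  Flow[Element]
    .groupBy(Int.MaxValue, getUserId)
    .via(lazyUserFlow)
    .mergeSubstreams // or keep a per-substream .to(...) if you don't want to merge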
I have a spark streaming application that needs to take these steps:
Take a string, apply some map transformations to it
Map again: If this string (now an array) has a specific value in it, immediately send an email (or do something OUTSIDE the spark environment)
collect() and save in a specific directory
apply some other transformation/enrichment
collect() and save in another directory.
As you can see, this implies lazily activated computations that perform the OUTSIDE action twice: each collect() re-triggers the whole lineage, so the mail-sending map would run once per action. I am trying to avoid caching, as at some hundreds of lines per second this would kill my server.
I am also trying to maintain the order of operations, though this is not as important. Is there a solution I do not know of?
EDIT: my program as of now:
val lines = kafkaStream.map(_._2)  // take the value, discard the topic
lines.foreachRDD { rdd =>
  val splittedRDD = rdd.map(split)                       // split the string
  val assRDD      = splittedRDD.map(associateToTable)    // associate to a table
  val flaggedRDD  = assRDD.map(addFlagAndMaybeSendMail)  // add a boolean parameter under an if condition + send mail
  externalClass.saveStaticMethod(flaggedRDD.collect())   // save in a file
  val enrichRDD   = flaggedRDD.map(enrichWithExternalData)
  externalClass.saveStaticMethod(enrichRDD.collect())    // save in another file
}
I put the saving part after the email so that if something goes wrong with it at least the mail has been sent.
In the end, these are the methods I found:
1. In the DStream transformation before the side-effecting one, make a copy of the DStream: one branch goes on with the transformations, the other gets the .foreachRDD{ outside action }. There is no major downside to this, as it is just one more RDD on a worker node.
2. Extract the {outside action} from the transformation and filter on the already-sent mails: filter out the elements whose mail has already been sent. This is almost a superfluous operation, as it will filter out all of the RDD's elements.
3. Cache before going on (although I was trying to avoid it, there was not much else to do).
If you are trying to avoid caching, solution 1 is the way to go; see the sketch below.
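A minimal sketch of solution 1 (split, associateToTable, addFlag, sendMail, saveToFile, enrich and the directories are hypothetical stand-ins for the steps above):

val flagged = lines.map(split).map(associateToTable).map(addFlag)

// Branch 1: the outside action only. Because the mail is now sent from an
// action rather than from inside a map, recomputing the lineage elsewhere
// no longer resends it.
flagged.foreachRDD { rdd =>
  rdd.filter(_.flagged).collect().foreach(sendMail)
}

// Branch 2: the saving pipeline, reusing the same DStream.
flagged.foreachRDD { rdd =>
  saveToFile(dir1, rdd.collect())
  saveToFile(dir2, rdd.map(enrich).collect())
}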
A processor 'a' handles the header 'a' of a message 'a_b_c_d' and passes the payload 'b_c_d' to another processor at the next level, as follows:
msg(a, b_c_d).
pro(a;b;c;d).
msg(b, c_d) :- pro(X), msg(X, b_c_d).
msg(c, d) :- pro(X), msg(X, c_d).
msg(d) :- pro(X), msg(X, d).
#hide. #show msg/2. #show msg/1.
How should I represent the list 'a_b_c_d' in ASP, and how should I change the above to handle the general case?
There's no official way, but I think most people don't realize you can construct cons cells in ASP.
For instance, here's how you can generate all lists of length 5 over the elements 1..6:
element(1..6).
listLen(empty, 0).
listLen(cons(E, L), K + 1) :- element(E); listLen(L, K); K < 5.
is5List(L) :- listLen(L, 5).
#show is5List/1.
resulting in
is5List(cons(1,cons(1,cons(1,cons(1,cons(1,empty))))))
is5List(cons(1,cons(1,cons(1,cons(1,cons(2,empty))))))
is5List(cons(1,cons(1,cons(1,cons(1,cons(3,empty))))))
...
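Applied to your message-passing example, a sketch with the same cons representation (the encoding of 'a_b_c_d' as a cons list is my own assumption):

pro(a;b;c;d).
msg(a, cons(b, cons(c, cons(d, empty)))).
% each processor peels off its header and passes the tail on
msg(H, T) :- pro(P), msg(P, cons(H, T)).
#show msg/2.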
There is no 'official' way to handle lists in ASP as far as I know, but DLV has built-in list handling similar to Prolog's.
With the way you implement a list, the list itself cannot be used as a term, so what if you want to bind variables in the list to other elements of a rule? Perhaps you would like something such as p(t, [q(X), q(Y)]) :- X != Y.
You can try implementing a list as (a, b, c) together with an append predicate, but the problem is that ASP requires grounding before computing answer sets. Consequently a list defined in this way, whilst more like Prolog's lists, would mean the ground program contains all ground instances of all possible lists (an explosion), regardless of whether they are used or not.
I therefore come back to my first point: try using DLV instead of Clingo if possible (for this task, at least).
By using an index, I do have a way to walk a list; however, I do not know whether this is the official way to handle lists in ASP. Could someone with more experience in ASP give us a hand? Thanks.
index(3,a). index(2,b). index(1,c). index(0,d).
pro(a;b;c;d). msg(3,a).
msg(I-1,N) :- pro(P), msg(I,P), index(I,P), I>0, index(I-1,N).
#hide. #show msg/2.
You can use the s(ASP) or s(CASP) ASP systems. Both of them support list operations as in Prolog. You might need to define the list built-ins in ASP yourself.
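For example, since s(ASP) and s(CASP) are goal-directed and do not ground the program first, Prolog-style list terms can be used directly; a small sketch (assuming the standard Prolog syntax these systems accept):

% classic append over [H|T] list terms
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

?- append([a, b], [c, d], Zs).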
I have just started investigating the treeline.io beta, and I could not find any way in the existing machinepacks that would do the job (sanitizing user inputs). I'm wondering if I can do it in any way, ideally within Treeline.
Treeline automatically does type-checking on all incoming request parameters. If you create a route POST /foo with parameter age and give it 123 as an example, it will automatically display an error message if you try to post to /foo with age set to abc, because it's not a number.
As far as more complex validation goes, you can certainly do it in Treeline: just add more machines to the beginning of your route. The if machine works well for simple tasks; for example, to ensure that age is < 150, you can use if and set the left-hand value to the age parameter, the right-hand value to 150, and the comparison to "<". For more custom validations you can create your own machine using the built-in editor and add pass and fail exits like the if machine has!
The schema-inspector machinepack allows you to sanitize and validate the inputs in Treeline: machinepack-schemainspector
Here is a screenshot of how I'm using it in my Treeline project:
The content of the Sanitize element:
The content of the Validate element (using the Sanitize output):
For the subsequent parts, I always use the Sanitize output (the email trimmed and lowercased, in this example).
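If you ever need the same sanitization outside of Treeline, the machinepack is built on the schema-inspector npm module, which you can call directly; a rough sketch (the rule names are taken from that module's documentation, so double-check them against your version):

var inspector = require('schema-inspector');

var sanitization = {
  type: 'object',
  properties: {
    email: { type: 'string', rules: ['trim', 'lower'] }
  }
};

var input = { email: '  John.Doe@Example.COM ' };
inspector.sanitize(sanitization, input); // mutates input in place
// input.email is now 'john.doe@example.com'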