For example, my tree is:
class TreeNode {
  List<TreeNode> children;
}
I'm looking for / hoping to have something in Dagger Producers like:
@ProducerModule
class RecursiveModule {
  @Produces
  ListenableFuture<TreeNode> produceNode(/*...?*/) {
    // Somehow recursively get a node.
  }
}
So that it can dynamically parse some external source, and construct a node, recursively, for me.
A more concrete little example use case might be building an HN news reader. In the HN API there is an Item that may have multiple child Items, so to read a news item with all comments, one needs to fetch the root Item and recursively fetch its children.
I'm new to Dagger producers, and I'm trying to learn what it can do. I'm not sure if this recursiveness breaks the "acyclic" in Dagger's name, but I'm curious to see whether this is possible.
I (kinda) have a workaround:
Let the component have the production executor passed in as a parameter.
@ProductionComponent(/* ... */)
interface TreeComponent {
  // ...
  @ProductionComponent.Builder
  interface Builder {
    @BindsInstance
    Builder executor(@Production Executor executor);
    @BindsInstance
    Builder id(@NodeId int id); // Say we need an id to know which node to process, or how to process a node.
    // ...
  }
}
Then for each node, process the node itself, and create one producer per child.
In this way I'm managing the producer tree myself, but the producers library still takes care of the async coordination, so I think there is still a benefit to it.
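To make that concrete, here is roughly the driver I have in mind (an untested sketch: buildTree, NodeData, nodeData() and the TreeNode constructor are placeholder names of mine, not real Dagger API):

// One TreeComponent per node; all components share one production executor.
ListenableFuture<TreeNode> buildTree(int id, Executor executor) {
  TreeComponent component = DaggerTreeComponent.builder()
      .executor(executor)
      .id(id)
      .build();
  return Futures.transformAsync(
      component.nodeData(), // hypothetical provision method: payload + child ids
      data -> {
        List<ListenableFuture<TreeNode>> children = new ArrayList<>();
        for (int childId : data.childIds()) {
          children.add(buildTree(childId, executor)); // recurse, one sub-tree per child
        }
        return Futures.transform(
            Futures.allAsList(children),
            kids -> new TreeNode(data, kids), // placeholder constructor
            executor);
      },
      executor);
}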
I'm trying to build an algorithm to explore a tree with RxJava.
My reason to use RxJava is that my node processing function is already using RxJava to send RPCs to other services. Note that the order in which the different branches of the tree are explored is not important.
Ideally, I would like to have something like a QueueFlowable<Node> extends Flowable<Node> which would expose a push function that Observers could use to add new nodes in the queue after processing them.
public static void main(String[] args) {
  QueueFlowable<Node> nodes = new QueueFlowable<>();
  nodes.push(/* the root */);
  nodes.concatMap(node -> process(node))
       .subscribe(nodes::push);
  nodes.blockingSubscribe();
}

// Processes the node and returns a Flowable of other nodes to process.
private static Flowable<Node> process(Node node) { ... }
This sounds like something relatively common (I would expect web crawlers to implement something similar), but I still haven't managed to make it work.
My latest attempt was to use a PublishProcessor as follows:
PublishProcessor<Node> nodes = PublishProcessor.create();
nodes.concatMap(node -> process(node)).subscribe(nodes);
nodes.onNext(/* the root node */);
nodes.blockingSubscribe();
Of course, this never terminates as the processor does not complete.
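For concreteness, this is the kind of termination bookkeeping I imagine is needed on top of the PublishProcessor (an untested sketch of mine: inFlight counts queued plus in-progress nodes, and backpressure is ignored):

FlowableProcessor<Node> nodes = PublishProcessor.<Node>create().toSerialized();
AtomicInteger inFlight = new AtomicInteger(1); // the root is about to be pushed

nodes.concatMap(node -> process(node)
        .doOnNext(child -> {
            inFlight.incrementAndGet(); // one more node to process
            nodes.onNext(child);
        })
        .doOnComplete(() -> {
            if (inFlight.decrementAndGet() == 0) {
                nodes.onComplete(); // nothing queued or running: terminate
            }
        }))
    .subscribe();

nodes.onNext(/* the root */);
nodes.blockingSubscribe(); // returns once the processor completes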
Any help would be much appreciated!
I have an external dependency on another system in my Streams app and would like to publish a message to a DLQ Kafka topic from within the app whenever a deserialization, producer, or any other external/network exception happens, so that I can monitor that topic and reprocess records as needed. I can't seem to find a good example of doing this anywhere. The closest reference I found is https://docs.confluent.io/current/streams/faq.html#option-3-quarantine-corrupted-records-dead-letter-queue, but:
1. It talks only about DeserializationExceptionHandler; what about other exception scenarios?
2. It doesn't demo the right way to configure/manage/close the associated KafkaProducer.
I would like to have a try/catch around the external dependency code and send the record(s) that caused the exception to a dead-letter-queue topic. Any help will be appreciated!
For the processing logic you could take this approach:
someKStream
    // the processing logic
    .mapValues(inputValue -> {
        // for each execution the "return" below could provide a different class than the previous run!
        // e.g. "return isFailedProcessing ? failValue : successValue;"
        // where failValue and successValue have no related classes
        return someObject; // someObject's class varies at runtime depending on your business logic
    }) // here you'll have KStream<whateverKeyClass, Object> -> yes, Object for the value!
    // you could have a different logic for choosing
    // the target topic; below is just an example
    .to((k, v, recordContext) -> v instanceof FailValueClass
            ? "dead-letter-topic"
            : "success-topic",
        // you could completely ignore the "Produced" part
        // and rely on spring-boot properties only, e.g.
        // spring.kafka.streams.properties.default.key.serde=yourKeySerde
        // spring.kafka.streams.properties.default.value.serde=org.springframework.kafka.support.serializer.JsonSerde
        Produced.with(yourKeySerde,
            // JsonSerde could be an instance configured as you need
            // (with type mappings or headers setting disabled, etc.)
            new JsonSerde<>()));
Your classes, though different and landing in different topics, will be serialized as expected.
When not using to(), but instead continuing with other processing, you can use branch(), splitting the logic based on the class of the Kafka value; the trick with branch() is to return KStream<keyClass, ?>[], so that the individual items of the KStream<keyClass, ?>[] can then be cast to the appropriate class.
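For example (a sketch only; FailValueClass and SuccessValueClass are placeholders for your own classes, someKStream is the KStream<whateverKeyClass, Object> from above, and a String key is assumed):

KStream<String, Object>[] branches = someKStream.branch(
    (k, v) -> v instanceof FailValueClass, // records destined for the DLQ
    (k, v) -> true);                       // everything else

branches[0].to("dead-letter-topic");

// cast the success branch back to its real value class to continue processing
KStream<String, SuccessValueClass> successes =
    branches[1].mapValues(v -> (SuccessValueClass) v);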
We are using Azure Service Fabric and are using actors to model specific devices, using the id of the device as the ActorId. Service Fabric will instantiate a new actor instance when we request an actor for a given id if one is not already instantiated, but I cannot seem to find an API that allows me to query whether a specific device id already has an instantiated actor.
I understand that there might be some distributed/timing issues in obtaining the point-in-time truth but for our specific purpose, we do not need a hard realtime answer to this but can settle for a best guess. We would just like to, in theory, contact the current primary for the specific partition resolved by the ActorId and get back whether or not the device has an instantiated actor.
Ideally it is a fast/performant call, essentially faster than e.g. instantiating the actor and calling a method to understand if it has been initialized correctly and is not just an "empty" actor.
You can use the ActorServiceProxy to iterate through the information for a specific partition but that does not seem to be a very performant way of obtaining the information.
Anyone with insights into this?
The only official way to check whether an actor has previously been activated in any service partition is to use the ActorServiceProxy query, as described here:
IActorService actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), partitionKey);

ContinuationToken continuationToken = null;
ActorInformation actor = null;
do
{
    PagedResult<ActorInformation> page =
        await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    actor = page.Items.FirstOrDefault(x => x.ActorId == idToFind);
    continuationToken = page.ContinuationToken;
}
while (actor == null && continuationToken != null); // stop once found or pages are exhausted
By their nature, SF actors are virtual, which means they always "exist", even if you have never activated them before; that makes this check a bit harder.
As you said, it is not performant to query all actors, so the other workarounds you could try are:
1. Store the IDs in a Reliable Dictionary elsewhere: every time an actor is activated, raise an event and insert the ActorId into the dictionary if it is not there yet.
- You can use the OnActivateAsync() actor event to notify of its creation, or
- you can use a custom actor factory in the ActorService to register actor activation.
- You can store the dictionary in another actor or another StatefulService.
2. Create a property in the actor that is set by the actor itself when it is activated:
- OnActivateAsync() checks whether this property has been set before.
- If it is not set yet, set it, and store a non-persisted flag to say the actor is new.
- Whenever you interact with the actor, update this flag to indicate it is not new anymore.
- On the next activation, the property will already be set, and nothing should happen.
3. Create a custom IActorStateProvider to do the same as option 2; instead of handling it in the actor, it handles it a level underneath. Honestly, I think this is a fair bit of work and would only be handy if you have to do the same for many actor types; options 1 and 2 are much easier.
4. Do as Peter Bons suggested and store the ActorId outside the ActorService, e.g. in a DB. I would only suggest this option if you have to check this from outside the cluster.
The following snippet can help you if you want to manage these events outside the actor.
private static void Main()
{
    try
    {
        ActorRuntime.RegisterActorAsync<NetCoreActorService>(
            (context, actorType) => new ActorService(context, actorType,
                new Func<ActorService, ActorId, ActorBase>((actorService, actorId) =>
                {
                    RegisterActor(actorId); // the custom method to register the actor if new
                    return (ActorBase)Activator.CreateInstance(actorType.ImplementationType, actorService, actorId);
                })
            )).GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception e)
    {
        ActorEventSource.Current.ActorHostInitializationFailed(e.ToString());
        throw;
    }
}

private static void RegisterActor(ActorId actorId)
{
    // Here you will put the logic to register the actor creation elsewhere
}
Alternatively, you could create a stateful DeviceActorStatusActor which would be notified (called) by DeviceActor as soon as it's created. (Share the ActorId for correlation.)
Depending on your needs you can also register multiple Actors with the same status-tracking actor.
You'll have great performance and near real-time information.
I have a WPF application based on MVVM with Caliburn.Micro and Ninject. I have a root viewmodel called ShellViewModel. It has a couple of dependencies (injected via constructor) which are configured in Caliburn's Bootstrapper. So far so good.
Somewhere down the line, there is a MenuViewModel with a couple of buttons, that in turn open other viewmodels with their own dependencies. These viewmodels are not created during creation of the root object, but I still want to inject dependencies into them from my IoC container.
I've read this question on service locator vs dependency injection and I understand the points being made.
I'm under the impression, however, that my MenuViewModel needs access to my IoC container in order to properly inject the viewmodels that are created dynamically... which is something I'm trying to avoid. Is there another way?
Yes, I believe you can do something a bit better.
Consider that if there was no on-demand requirement then obviously you could make those viewmodels be dependencies of MenuViewModel and so on up the chain until you get to the root of the object graph (the ShellViewModel) and the container would wire everything up.
You can put a "firewall" in the object graph by substituting something that can construct the dependencies of MenuViewModel for the dependencies themselves. The container is the obvious choice for this job, and IMHO from a practical standpoint this is a good enough solution even if it's not as pure.
But you can also substitute a special-purpose factory instead of the container; this factory would take a dependency on the container and provide read-only properties for the real dependencies of MenuViewModel. Accessing the properties would result in having the container resolve the objects and returning them (accessor methods would also work instead of properties; what's more appropriate is another discussion entirely, so just use whatever you think is better).
It may look like you haven't really changed the status quo, but the situation is not the same as if MenuViewModel took a direct dependency on the container. In that case you would have no idea what the real dependencies of MenuViewModel are by looking at its public interface, while now you can see that there's a dependency on something like
interface IMenuViewModelDependencyFactory
{
    RealDependencyA DependencyA { get; }
    RealDependencyB DependencyB { get; }
}
which is much more informative. And if you look at the public interface of the concrete MenuViewModelDependencyFactory things are also much better:
class MenuViewModelDependencyFactory : IMenuViewModelDependencyFactory
{
    private readonly Container container;

    public MenuViewModelDependencyFactory(Container container) { ... }

    public RealDependencyA DependencyA { get { ... } }
    public RealDependencyB DependencyB { get { ... } }
}
There should be no confusion over what MenuViewModelDependencyFactory intends to do with the container here because it's so very highly specialized.
I implemented in Java what I call a "foldable queue", i.e., a LinkedBlockingQueue used by an ExecutorService. The idea is that each task has a unique id; if a task with the same id is already in the queue when another one is submitted, the new task is not added. The Java code looks like this:
public final class FoldablePricingQueue extends LinkedBlockingQueue<Runnable> {

  @Override
  public boolean offer(final Runnable runnable) {
    if (contains(runnable)) {
      return true; // rejected, but true not to throw an exception
    } else {
      return super.offer(runnable);
    }
  }
}
Threads have to be pre-started, but this is a minor detail. I have an abstract class that implements Runnable and takes a unique id; that is what gets passed in.
I would like to implement the same logic using Scala and Akka (Actors).
I would need to have access to the mailbox, and I think I would need to override the ! method and check the mailbox for the event... has anyone done this before?
This is exactly how the Akka mailbox works: the mailbox can only exist once in the dispatcher's task queue.
Look at:
https://github.com/jboner/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala#L143
https://github.com/jboner/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala#L198
Very cheaply implemented using an atomic boolean, so no need to traverse the queue.
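Reduced to a sketch, the trick in the linked code is roughly this (an illustration, not the actual Akka implementation):

// The mailbox schedules itself on the pool only if it isn't already scheduled.
final AtomicBoolean scheduled = new AtomicBoolean(false);

void registerForExecution(ExecutorService pool, Runnable processMailbox) {
  if (scheduled.compareAndSet(false, true)) { // at most one pending run
    pool.execute(() -> {
      try {
        processMailbox.run(); // drain some messages
      } finally {
        scheduled.set(false); // allow rescheduling
        // (the real code re-checks for remaining messages here)
      }
    });
  }
}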
Also, by the way, your queue in Java is broken, since it doesn't override put, add, or offer(E, long, TimeUnit).
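Something along these lines would close those paths (a sketch; it has the same non-atomic contains-then-insert caveat as your offer):

@Override
public boolean add(final Runnable runnable) {
  return offer(runnable); // route add() through the dedup check
}

@Override
public void put(final Runnable runnable) throws InterruptedException {
  if (!contains(runnable)) {
    super.put(runnable);
  }
}

@Override
public boolean offer(final Runnable runnable, final long timeout, final TimeUnit unit)
    throws InterruptedException {
  return contains(runnable) || super.offer(runnable, timeout, unit);
}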
Maybe you could do that with two actors: a facade and a worker. Clients send jobs to the facade. The facade forwards them to the worker and remembers them in its internal state, a Set of queuedJobs. When it receives a job that is already queued, it simply discards it. Each time the worker starts processing a job (or completes it, whichever suits you), it sends a StartingOn(job) message to the facade, which removes the job from queuedJobs.
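A sketch of that facade with Akka's Java API (Job, StartingOn, and the String id field are assumed shapes, not code from the question):

// Facade that drops duplicate jobs while they are queued at the worker.
class Facade extends AbstractActor {
  private final ActorRef worker;
  private final Set<String> queuedJobs = new HashSet<>(); // ids currently queued

  Facade(ActorRef worker) { this.worker = worker; }

  static final class Job {
    final String id;
    Job(String id) { this.id = id; }
  }

  // Sent by the worker when it starts processing a job.
  static final class StartingOn {
    final String id;
    StartingOn(String id) { this.id = id; }
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(Job.class, job -> {
          if (queuedJobs.add(job.id)) {         // new id: forward to the worker
            worker.forward(job, getContext());
          }                                      // already queued: discard
        })
        .match(StartingOn.class, msg -> queuedJobs.remove(msg.id))
        .build();
  }
}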
The proposed design doesn't make sense. The closest thing to a Runnable would be an actor. Sure, you can keep actors in a list and not add one if it is already there. Such lists are kept by routing actors, which can be created from the ready-made parts provided by Akka, or built from a basic actor using the forward method.
You can't look into another actor's mailbox, and overriding ! makes no sense. What you do instead is send all your messages to a routing actor, and that routing actor forwards them to the proper destination.
Naturally, since it receives these messages, it can do any logic at that point.