AbstractAjaxTimerBehavior start immediately - Wicket

I'm trying to set up an AbstractAjaxTimerBehavior that starts immediately when added and is then repeated every X seconds (let's say 10 seconds), but I couldn't find anything.
I thought of a hack: set the first interval to 1 second and then, inside the onTimer method, set the interval to the desired X seconds every time.
myBehavior = new AbstractAjaxTimerBehavior(Duration.seconds(1)) {
    private static final long serialVersionUID = 1L;

    @Override
    protected void onTimer(AjaxRequestTarget target) {
        this.setUpdateInterval(Duration.seconds(10));
        // ...
    }
};
Is there a better way of doing this without having to reset the interval inside onTimer every time? Thanks!
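For comparison (not Wicket-specific): the "run immediately, then repeat every X seconds" pattern the question asks for is exactly what plain java.util.concurrent expresses with an initial delay of zero. A minimal sketch, offered only as an analogy:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ImmediateRepeat {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // initialDelay = 0 makes the first run happen immediately;
        // subsequent runs happen every 10 seconds.
        ScheduledFuture<?> handle = scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick"), 0, 10, TimeUnit.SECONDS);
        Thread.sleep(200); // let the immediate first tick fire
        handle.cancel(false);
        scheduler.shutdown();
    }
}
```

AbstractAjaxTimerBehavior takes only a single interval and has no separate initial-delay parameter, which is why workarounds like the one above (shrinking the first interval, then resetting it) come up.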

Calculating an average using a ValueTransformerWithKey, or can I use Kafka's aggregation functions?

I have a stream of objects for which I'd like to calculate the average of one field and then save that average back onto the object. I'd like a tumbling window of 5 minutes with a retention of 1 hour. I'm new to Kafka, so I'm wondering if this is the correct way to approach the problem.
First, I create a persistent store:
StoreBuilder<WindowStore<String, Double>> averagesStoreSupplier =
    Stores.windowStoreBuilder(
        Stores.persistentWindowStore(WINDOW_STORE_NAME, Duration.ofHours(1), Duration.ofMinutes(5), true),
        Serdes.String(),
        Serdes.Double());

streamsBuilder.addStateStore(averagesStoreSupplier);
Then I call my transformer using:
otherKTable
    .leftJoin(objectKTable.transformValues(new AveragingTransformerSupplier(WINDOW_STORE_NAME), WINDOW_STORE_NAME),
        myValueJoiner)
    .to("outputTopic");
And here is my transformer:
public class AveragingTransformerSupplier implements ValueTransformerWithKeySupplier<String, MyObject, MyObject> {

    private final String stateStoreName;

    public AveragingTransformerSupplier(final String stateStoreName) {
        this.stateStoreName = stateStoreName;
    }

    public ValueTransformerWithKey<String, MyObject, MyObject> get() {
        return new ValueTransformerWithKey<>() {

            private WindowStore<String, Double> averagesStore;

            @Override
            public void init(ProcessorContext processorContext) {
                averagesStore = Try.of(() -> (WindowStore<String, Double>) processorContext.getStateStore(stateStoreName))
                        .getOrElse((WindowStore<String, Double>) null);
            }

            @Override
            public MyObject transform(String s, MyObject myObject) {
                if (averagesStore != null) {
                    averagesStore.put(s, myObject.getNumber());
                    Instant timeFrom = Instant.ofEpochMilli(0); // beginning of time = oldest available
                    Instant timeTo = Instant.now();
                    double sum = 0.0;
                    int size = 0;
                    try (WindowStoreIterator<Double> itr = averagesStore.fetch(s, timeFrom, timeTo)) {
                        while (itr.hasNext()) {
                            KeyValue<Long, Double> next = itr.next();
                            size++;
                            sum += next.value;
                        }
                    }
                    if (size > 0) {
                        myObject.setNumber(sum / size);
                    }
                }
                return myObject;
            }

            @Override
            public void close() {
                if (averagesStore != null) {
                    averagesStore.flush();
                }
            }
        };
    }
}
I have a couple of questions.
First, is the way I define the WindowStore the correct way to form a tumbling window? How would I create a hopping window?
Second, inside my transformer I get all the items from the store from the beginning of time to now. Since I defined it as a 5-minute window with 1-hour retention, does that mean the items in the store are a snapshot of 5 minutes' worth of data? What does the retention do here?
I have this working in trivial cases, but I'm not sure if there is a better way to do this using aggregations and joins, or even if I'm doing this correctly. Also, I had to surround the retrieval of the store with a try/catch, because init gets called multiple times and sometimes I get a Processor has no access to StateStore exception.
I would recommend using the DSL instead of the Processor API for this use case. Cf. https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Stream+Usage+Patterns for details.
I have a couple of questions. First, is the way I define the WindowStore the correct way to form a tumbling window? How would I create a hopping window?
A windowed store can be used for either hopping or tumbling windows -- it is how you use it in your processor, not how you create the store, that determines which window semantics you get.
Second, inside my transformer I get all the items from the store from the beginning of time to now. Since I defined it as a 5 minute window and 1 hour retention does that mean that the items in the store is a snapshot of 5 minutes worth of data? What does the retention do here?
The windowSize parameter you pass when creating the store does not work the way you expect. You would need to code up the windowing logic manually in your Transformer by using put(key, value, windowStartTimestamp) -- at the moment you are using put(key, value), which uses context.timestamp(), i.e., the current record's timestamp, as windowStartTimestamp -- I doubt that is what you want. The retention time is based on the window timestamps, i.e., old windows will be deleted after they expire.
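To make the manual-windowing suggestion concrete, here is a sketch of the timestamp arithmetic involved (plain Java, no Kafka dependencies; the epoch-alignment convention is an assumption): a tumbling window assigns each record timestamp to exactly one window start, while a hopping window assigns it to every window whose range covers it.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class WindowStarts {

    // Tumbling window: each timestamp falls into exactly one window,
    // aligned to multiples of the window size.
    static long tumblingWindowStart(long timestampMs, long sizeMs) {
        return timestampMs - (timestampMs % sizeMs);
    }

    // Hopping window: a timestamp belongs to every window whose range
    // [start, start + size) contains it; window starts advance by advanceMs.
    static List<Long> hoppingWindowStarts(long timestampMs, long sizeMs, long advanceMs) {
        List<Long> starts = new ArrayList<>();
        long first = Math.max(0, tumblingWindowStart(timestampMs, advanceMs) - sizeMs + advanceMs);
        for (long start = first; start <= timestampMs; start += advanceMs) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        long size = Duration.ofMinutes(5).toMillis();
        long ts = 7 * 60 * 1000; // a record 7 minutes after epoch
        System.out.println(tumblingWindowStart(ts, size)); // 300000 (the 5-10 min window)
        System.out.println(hoppingWindowStarts(ts, size, Duration.ofMinutes(1).toMillis()));
    }
}
```

The start computed this way is what you would pass as windowStartTimestamp to put(key, value, windowStartTimestamp).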

Score corruption when using computed values to calculate score

I have a use case where:
A job can be of one of several types, say A, B and C.
A tool can be configured to one type: A, B or C.
A job can be assigned to a tool. The end time of the job depends on the tool's currently configured type. If the tool's current type differs from the job's type, extra time must be added to change the tool's configuration.
My @PlanningEntity is Allocation, with startTime and tool as @PlanningVariables. I tried to add currentConfiguredToolType to the Allocation as a @CustomShadowVariable and update the toolType in the shadow listener's afterVariableChanged() method, so that I have the correct toolType for the next job assigned to the tool. However, it is giving me inconsistent results.
[EDIT]: I did some debugging to see if the toolType is set correctly. I found that the toolType is being set correctly in the afterVariableChanged() method. However, when I looked at the next job assigned to the tool, I saw that the toolType had not changed. Is it because of multiple threads executing this flow -- one thread changing the toolType of the tool, and a second thread simultaneously assigning the times without taking the first thread's changes into account?
[EDIT]: I was using 6.3.0 Final earlier (till yesterday). I switched to 6.5.0 Final today. There too I am seeing similar results, where the toolType seems to be set properly in afterVariableChanged() method, but is not taken into account for the next allocation on that tool.
[EDIT]: Domain code looks something like below:
@PlanningEntity
public class Allocation {

    private Job job;

    // planning variables
    private LocalDateTime startTime;
    private Tool tool;

    // shadow variable
    private ToolType toolType;

    private LocalDateTime endTime;

    @PlanningVariable(valueRangeProviderRefs = TOOL_RANGE)
    public Tool getTool() {
        return this.tool;
    }

    @PlanningVariable(valueRangeProviderRefs = START_TIME_RANGE)
    public LocalDateTime getStartTime() {
        return this.startTime;
    }

    @CustomShadowVariable(variableListenerClass = ToolTypeVariableListener.class,
            sources = {@CustomShadowVariable.Source(variableName = "tool")})
    public ToolType getCurrentToolType() {
        return this.toolType;
    }

    public void setCurrentToolType(ToolType type) {
        this.toolType = type;
        this.tool.setToolType(type);
    }

    public void setStartTime(LocalDateTime startTime) {
        this.startTime = startTime;
        this.endTime = startTime.plus(getTimeTakenForJob()).plus(getTypeChangeTime());
        ...
    }

    private Duration getTypeChangeTime() {
        // typeChangeTimeMap is available and is populated with data
        return typeChangeTimeMap.get(tool.getToolType());
    }
}
public class Tool {
    ...
    private ToolType toolType;

    public void setToolType(ToolType toolType) { ... }
    public ToolType getToolType() { ... }
}
public class ToolTypeVariableListener implements VariableListener<Allocation> {
    ...
    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Allocation entity) {
        scoreDirector.afterVariableChanged(entity, "currentToolType");
        if (entity.getTool() != null && entity.getStartTime() != null) {
            entity.setCurrentToolType(entity.getJob().getType());
        }
        scoreDirector.afterVariableChanged(entity, "currentToolType");
    }
}
[EDIT]: When I did some debugging, it looks like the toolType set on the machine for one allocation is used in calculating the type change time for an allocation belonging to a different evaluation set. Not sure how to avoid this.
If this is indeed the case, what is a good way to model problems like this, where the state of an item affects the time taken? Or am I totally off? I guess I am totally lost here.
[EDIT]: This is not an issue with how OptaPlanner is invoked, but score corruption that occurs when the rule penalizing based on endTime is added. More details in comments.
[EDIT]: I commented out the rules one by one and saw that the score corruption occurs only when the score depends on the computed values: endTime and toolTypeChange. It is fine when the score depends on startTime, which is a planning variable alone. However, that does not give me the best results: it gives me a solution with a negative hard score, which means it violated the rule against assigning the same tool to different jobs at the same time.
Can computed values not be used for score calculations?
Any help or pointer is greatly appreciated.
best,
Alice
The ToolTypeVariableListener seems to lack a call to the beforeVariableChanged() method (it calls afterVariableChanged() twice), which can cause score corruption. Turn on FULL_ASSERT to verify.

Target an Object Variable in a different Class

So, without posting my entire project in here, I will sum it up as best I can:
class Program
{
    static void Main(string[] args)
    {
        Thing one = new Thing();
        one.addTimer(10);
        one.addTimer(4);
        one.addTimer(2);
        one.addTimer(8);
        one.StartTimers();
    }
}

class Counter
{
    private int Seconds;
    private int TimerNum;

    public Counter(int SecondsX)
    {
        Seconds = (SecondsX * 1000);
    }

    public void TimerCall()
    {
        Thread.Sleep(Seconds);
        CounterCallBack();
    }

    public void CounterCallBack()
    {
        Console.WriteLine("Timer " + TimerNum + " Done");
        // When the time is up, this callback is executed.
        // The issue I am having is: how do I trigger the next timer in the list
        // from here automatically? It would send TimerNum back to Thing.Continue.
    }
}

class Thing
{
    List<Counter> timers = new List<Counter>();

    public Thing()
    {
    }

    public void addTimer(int SecondsToAdd)
    {
        timers.Add(new Counter(SecondsToAdd));
    }

    public void StartTimers()
    {
        timers[0].TimerCall();
    }

    public void Continue(int LastRun)
    {
        if (timers.Count - 1 >= LastRun)
        {
            timers[LastRun].TimerCall();
        }
    }
}
So I need to access the Continue method from Counter to kick off the next timer, or I need to find another way to do the same thing.
However, the user needs to be able to edit, add, and remove timers (Which happens from the Program class)
Remember that in my program (this is a simplified version), Counter is a timer call and callback that runs asynchronously.
Is it even possible to do? Or do I need to scrap this approach and start from square one?
Also, I know this is rough, but this project is for charity and I plan to clean it up once I get this prototype working. Also, I am 16. So please, any help you can give would be well appreciated.
Okay, it's a dirty answer, but I am going to use a dictionary to store the object variables and have an accessor method that is passed the ID of the correct set of timers and the index of the next timer to run. That then calls the next timer, and so on and so forth.
Dirty, but functional for a prototype.
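For what it's worth, the callback wiring the question is after -- each Counter holding a reference back to its Thing so that finishing one timer triggers the next -- can be sketched like this in Java (names mirror the C# ones; durations are milliseconds here so the demo finishes quickly):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Thing {
    private final List<Counter> timers = new ArrayList<>();
    // completion order, recorded so the chain is observable
    public final List<Integer> finished = Collections.synchronizedList(new ArrayList<>());

    public void addTimer(int millis) {
        // each Counter gets a back-reference to this Thing and its own index
        timers.add(new Counter(millis, this, timers.size()));
    }

    public void startTimers() {
        if (!timers.isEmpty()) timers.get(0).timerCall();
    }

    // called by a Counter when its wait is over; starts the next one
    public void next(int lastRun) {
        if (lastRun + 1 < timers.size()) timers.get(lastRun + 1).timerCall();
    }

    static class Counter {
        private final int millis;
        private final Thing owner;   // back-reference, so the callback can reach next()
        private final int timerNum;

        Counter(int millis, Thing owner, int timerNum) {
            this.millis = millis;
            this.owner = owner;
            this.timerNum = timerNum;
        }

        void timerCall() {
            new Thread(() -> {
                try { Thread.sleep(millis); } catch (InterruptedException e) { return; }
                System.out.println("Timer " + timerNum + " Done");
                owner.finished.add(timerNum);
                owner.next(timerNum);  // kick off the next timer automatically
            }).start();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thing one = new Thing();
        one.addTimer(10);
        one.addTimer(4);
        one.addTimer(2);
        one.startTimers();
        Thread.sleep(200); // wait for the whole chain to finish
        System.out.println(one.finished); // [0, 1, 2]
    }
}
```

The same shape works in C#: give Counter a constructor parameter of type Thing and call owner.Continue(TimerNum) at the end of CounterCallBack.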

JPA: Automatically delete data after time interval

I've the following scenario:
Data will be created by the sender on the server.
If the receiver doesn't request the data within (for example) three days, the data should be deleted; otherwise the receiver gets the data and, after that, the data will be deleted.
In other words:
The data should only exist for a limited time.
I could create a little jar file which deletes the data, probably started as a cron job. But I think that is a very bad solution.
Is it possible to create a trigger with JPA/Java EE that will be called after a specific time period? As far as I know, triggers can only be called after/before insert, update and delete events, so that won't be a solution.
Note: I'm currently using an H2 database and WildFly for my RESTEasy application. But I might change that in the future, so the solution should be adaptable.
Best regards,
Maik
Java EE brings everything you need. You can @Schedule a periodically executed cleanup job:
@Singleton
public class Cleanup {

    @Schedule(hour = "*", minute = "*/5")
    public void clean() {
        // will be called every 5 minutes
    }
}
Note the hour = "*": without it, @Schedule falls back to its default of hour = "0", and the job would only run during the first hour of each day.
Or you can programmatically create a Timer which will be executed after a certain amount of time:
@Singleton
public class Cleanup {

    @Resource
    TimerService timerService;

    public void deleteLater(long id) {
        String timerInfo = MyEntity.class.getName() + ':' + id;
        long timeout = 1000 * 60 * 5; // 5 minutes
        timerService.createTimer(timeout, timerInfo);
    }

    @Timeout
    public void handleTimeout(Timer timer) {
        Serializable timerInfo = timer.getInfo(); // the information you passed to the timer
    }
}
More information can be found in the Timer Service Tutorial.
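As a sketch of what clean() could actually do, it boils down to "delete everything older than a cutoff". The entity and field names below (Message, createdAt) are made up, and the list-based purgeExpired is a plain-Java stand-in for the JPQL bulk delete, so the example runs without a container:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

public class ExpiryDemo {
    static final Duration TTL = Duration.ofDays(3);

    // The kind of JPQL a @Schedule cleanup could execute (names are hypothetical):
    static final String DELETE_EXPIRED =
            "DELETE FROM Message m WHERE m.createdAt < :cutoff";

    // Everything created before this instant has outlived its three days.
    static Instant cutoff(Instant now) {
        return now.minus(TTL);
    }

    // Plain-Java stand-in for the query: keep only non-expired timestamps.
    static List<Instant> purgeExpired(List<Instant> createdAt, Instant now) {
        Instant cutoff = cutoff(now);
        return createdAt.stream()
                .filter(t -> !t.isBefore(cutoff))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2020-01-10T00:00:00Z");
        List<Instant> rows = List.of(
                Instant.parse("2020-01-05T00:00:00Z"),  // older than 3 days -> purged
                Instant.parse("2020-01-09T00:00:00Z")); // kept
        System.out.println(purgeExpired(rows, now)); // [2020-01-09T00:00:00Z]
    }
}
```

In the real cleanup you would run the bulk delete via EntityManager with :cutoff bound to cutoff(Instant.now()).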

How to dig further into this memory leak with Eclipse MAT

I have an issue where a ScheduledThreadPoolExecutor ends up with 3 million future tasks. I am trying to see what type of task they are so I can go to where that task is scheduled, but I am not sure how to get any info from this screen (I have tried right-clicking those future tasks and selecting various choices in the menu). It seems like there is something missing in the GUI, like the links to the actual runnables or something...
Any ideas on how to drill in further?
Some General Stuff
You need to know that if you have a portable heap dump (PHD, see types here), then it does not contain actual data (primitives), so you can base your findings only on the reference map (which types hold a reference to which other types).
You can give OQL a try. This is an SQL-like language with which you can query your objects.
One example:
select * from java.lang.String s where s.@retainedHeapSize > 10000
This gives back all strings, that are bigger than ~10k.
You can also write functions (like the aggregating example here). It's worth a try.
As for the current problem
If you check the FutureTask source (JDK 6 is shown below):
public class FutureTask<V> implements RunnableFuture<V> {
    /** Synchronization control for FutureTask */
    private final Sync sync;
    ...
    public FutureTask(Callable<V> callable) {
        if (callable == null)
            throw new NullPointerException();
        sync = new Sync(callable);
    }
    ...
    public FutureTask(Runnable runnable, V result) {
        sync = new Sync(Executors.callable(runnable, result));
    }
The actual Runnable is referred by the Sync object:
private final class Sync extends AbstractQueuedSynchronizer {
    private static final long serialVersionUID = -7828117401763700385L;

    /** State value representing that task is running */
    private static final int RUNNING = 1;
    /** State value representing that task ran */
    private static final int RAN = 2;
    /** State value representing that task was cancelled */
    private static final int CANCELLED = 4;

    /** The underlying callable */
    private final Callable<V> callable;
    /** The result to return from get() */
    private V result;
    /** The exception to throw from get() */
    private Throwable exception;

    /**
     * The thread running task. When nulled after set/cancel, this
     * indicates that the results are accessible. Must be
     * volatile, to ensure visibility upon completion.
     */
    private volatile Thread runner;

    Sync(Callable<V> callable) {
        this.callable = callable;
    }
So in the GUI, open the Sync object (not opened in your picture), and then you can check the Runnables.
I don't know if you can change the code or not, but in general it is better to always limit the size of the queue used by an executor, since that way you can avoid leaks. Or you can use some persisted queue. If you apply a limit, you can define the rejection policy: for example reject, run in caller, and so on. See http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html for details.
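A sketch of that advice, assuming a plain ThreadPoolExecutor is an option: a bounded queue plus CallerRunsPolicy gives you back-pressure instead of an unbounded pile of FutureTasks.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        // At most 1000 queued tasks; beyond that, the submitting thread
        // runs the task itself (back-pressure) instead of growing the queue.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4,                        // core / max pool size
                60, TimeUnit.SECONDS,        // idle keep-alive
                new ArrayBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());

        executor.execute(() -> System.out.println("task ran"));
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With CallerRunsPolicy, no task is ever lost; saturation simply slows the producer down, which is usually the behavior you want when a scheduler is outrunning its consumers.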