ShedLock does not lock when Sleuth is present - scheduled-tasks

We have an application that runs some processes via a scheduler. Since we run multiple instances, we chose ShedLock to prevent the other instances from running the same process. However, it is not working, because Sleuth wraps the task in a TraceRunnable while ShedLock expects a ScheduledMethodRunnable. Any ideas how to solve this?
class SpringLockConfigurationExtractor {
    @Override
    public Optional<LockConfiguration> getLockConfiguration(Runnable task) {
        if (task instanceof ScheduledMethodRunnable) {
            ScheduledMethodRunnable scheduledMethodRunnable = (ScheduledMethodRunnable) task;
            return getLockConfiguration(scheduledMethodRunnable.getTarget(), scheduledMethodRunnable.getMethod());
        } else {
            logger.debug("Unknown task type " + task);
        }
        return Optional.empty();
    }
}

I talked with the developer and, following the documentation, set the interceptMode parameter to PROXY_METHOD, and it is working now. He mentioned that this will become the default value in future releases, in order to avoid issues like this.
@EnableSchedulerLock(defaultLockAtMostFor = "PT5M", interceptMode = PROXY_METHOD)
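For reference, this annotation typically sits on a Spring configuration class next to @EnableScheduling. A minimal sketch follows; the class name is illustrative, and the exact package of EnableSchedulerLock and its InterceptMode enum depends on your ShedLock version:
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT5M", interceptMode = EnableSchedulerLock.InterceptMode.PROXY_METHOD)
public class SchedulerConfiguration {
    // scheduled beans and the ShedLock LockProvider bean are defined alongside this configuration
}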

You'd have to provide your own implementation of SchedulerProxyScheduledLockAdvisor that checks whether the task is a TraceRunnable and unwraps it. You can also disable parts of Sleuth if necessary, or ask ShedLock's authors for hooks to fix this.
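Until such a hook exists, one purely illustrative workaround (not part of ShedLock's or Sleuth's public API) is to unwrap the traced task before extracting the lock configuration. The field name "delegate" below is an assumption about TraceRunnable's internals and must be verified against the Sleuth version in use:
import java.lang.reflect.Field;

static Runnable unwrapIfTraced(Runnable task) {
    if ("TraceRunnable".equals(task.getClass().getSimpleName())) {
        try {
            Field delegate = task.getClass().getDeclaredField("delegate"); // assumed field name, check your Sleuth version
            delegate.setAccessible(true);
            return (Runnable) delegate.get(task);
        } catch (ReflectiveOperationException e) {
            // unwrapping failed, fall back to the original task
        }
    }
    return task;
}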

Related

Apache Shiro: SecurityUtils.getSubject() for a scheduler user?

I want to implement the following scenario:
I have an EJB scheduler which should run every minute.
My issue is the following:
I need a login for the user who executes the schedule. This should be a system user; there will be no login via the GUI.
How can I log in this user and execute further tasks?
Currently I'm trying this in my class:
@Singleton
@LocalBean
@Startup
public class Scheduler {
    public void startSchedule() {
        Subject currentUserShiro = SecurityUtils.getSubject();
        UsernamePasswordToken token = new UsernamePasswordToken("test@domain.com", "test1234");
        currentUserShiro.login(token);
    }
}
In one of my functions I check, e.g., for the permission:
SecurityUtils.getSubject().isPermitted("billingInvoice:create")
I'm currently getting the following error:
No SecurityManager accessible to the calling code, either bound to the org.apache.shiro.util.ThreadContext or as a vm static singleton. This is an invalid application configuration.
Any idea?
Code update:
private void addScheduleToList(ScheduleExecution scheduleExecution) throws UnknownHostException {
    synchronized (this) {
        Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro-web.ini");
        SecurityManager securityManager = factory.getInstance();
        SecurityUtils.setSecurityManager(securityManager);
        Subject subject = new Subject.Builder().buildSubject();
        Runnable myRunnable = null;
        subject.execute(myRunnable = new Runnable() {
            @Override
            public void run() {
                // Add tasks
                executeAction(scheduleExecution);
            }
        });
        //////
        schedulingService.addTaskToExecutor(taskId, myRunnable, 0);
    }
}
I'm no longer getting the error message I got initially, but it seems I'm now getting a PermissionException because the user does not have the permission. If I check the Subject object, it is not authenticated. This Subject needs full permissions. How can I do this?
SecurityUtils.getSubject().isPermitted("billingInvoice:create") == false
You have a couple of options.
Move your permission checking to your web-based methods; this moves the permission check outside of your scheduled task.
(This isn't always possible, and since you are asking the question, I'm assuming you want the second option.)
You need to execute your task in the context of a user. Create a new Subject using a Subject.Builder and then call its execute method with a Runnable from your task.
See https://shiro.apache.org/subject.html specifically the "Thread Association" section.
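A hedged sketch of that second option, building on the code update above; the principal "scheduler" and realm name "myRealm" are placeholders and must correspond to a realm that actually grants the billingInvoice:create permission:
Subject systemSubject = new Subject.Builder()
        .principals(new SimplePrincipalCollection("scheduler", "myRealm")) // placeholder principal and realm
        .authenticated(true) // mark the subject as authenticated so permission checks are evaluated
        .buildSubject();

systemSubject.execute(new Runnable() {
    @Override
    public void run() {
        // the subject is bound to this thread, so SecurityUtils.getSubject().isPermitted(...)
        // inside executeAction resolves permissions from the realm for the "scheduler" principal
        executeAction(scheduleExecution);
    }
});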

Does a FlowableOperator inherently support backpressure?

I've implemented a FlowableOperator as described in the RxJava2 wiki (https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#operator-targeting-lift), except that I perform a check in the onNext() method, something like this:
public final class MyOperator implements FlowableOperator<Integer, Integer> {
    ...
    static final class Op implements FlowableSubscriber<Integer>, Subscription {
        @Override
        public void onNext(Integer v) {
            if (v % 2 == 0) {
                child.onNext(v * v);
            }
        }
        ...
    }
}
This operator is part of a chain where I have a Flowable created with the DROP backpressure strategy. In essence, it looks almost like this:
Flowable.<Integer>create(emitter -> myAction(), DROP)
        .filter(v -> v > 2)
        .lift(new MyOperator())
        .subscribe(n -> doSomething(n));
I've run into the following issue:
- backpressure occurs, so doSomething(n) cannot keep up with the upstream
- items are dropped due to the backpressure strategy chosen
- but doSomething(n) never receives new items after the drop has happened, even though it was ready to deal with them
Reading back David Karnok's excellent blog post http://akarnokd.blogspot.fr/2015/05/pitfalls-of-operator-implementations.html, it seems that I need to add a request(1) call in the onNext() method. But that was with RxJava 1...
So my question is: is this fix enough in RxJava 2 to deal with my backpressure issue? Or does my operator have to implement all the atomics and drain machinery described in https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#atomics-serialization-deferred-actions to properly handle it?
Note: I've added the request(1) and it seems to work. But I can't figure out whether that's enough or whether my operator needs the tricky queue-drain and atomics machinery.
Thanks in advance!
Does a FlowableOperator inherently support backpressure?
FlowableOperator is an interface that is called for a given downstream Subscriber and should return a new Subscriber that wraps the downstream and modulates the Reactive Streams events passing in one or both directions. Backpressure support is the responsibility of the Subscriber implementation, not this particular functional interface. It could have been Function<Subscriber, Subscriber> but a separate named interface was deemed more usable and less prone to overload conflicts.
need to add a request(1) in the onNext() [...]
But I can't figure out whether it's enough or whether my operator needs the tricky stuff of queue-drain and atomics.
Yes, you have to do that in RxJava 2 as well. Since RxJava 2's Subscriber is not a class, it doesn't have v1's convenience request method. You have to save the Subscription in onSubscribe and call upstream.request(1) on the appropriate path in onNext. For your case, it should be quite enough.
I've updated the wiki with a new section explaining this case explicitly:
https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#replenishing
final class FilterOddSubscriber implements FlowableSubscriber<Integer>, Subscription {
    final Subscriber<? super Integer> downstream;
    Subscription upstream;
    // ...
    @Override
    public void onSubscribe(Subscription s) {
        if (upstream != null) {
            s.cancel();
        } else {
            upstream = s; // <-------------------------
            downstream.onSubscribe(this);
        }
    }
    @Override
    public void onNext(Integer item) {
        if (item % 2 != 0) {
            downstream.onNext(item);
        } else {
            upstream.request(1); // <-------------------------
        }
    }
    @Override
    public void request(long n) {
        upstream.request(n);
    }
    // the rest omitted for brevity
}
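As a usage sketch (assuming FilterOddSubscriber has a constructor taking the downstream Subscriber, which is part of the code omitted above), the subscriber plugs into a chain via lift:
FlowableOperator<Integer, Integer> filterOdd = downstream -> new FilterOddSubscriber(downstream);

Flowable.range(1, 10)
        .lift(filterOdd)
        .subscribe(System.out::println); // prints 1, 3, 5, 7, 9; even items only trigger request(1) upstream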
Yes, you have to do the tricky stuff...
I would avoid writing operators unless you are very sure about what you are doing; nearly everything can be achieved with the default operators...
Writing operators, source-like (fromEmitter) or intermediate-like (flatMap), has always been a hard task to do in RxJava. There are many rules to obey, many cases to consider but at the same time, many (legal) shortcuts to take to build a well performing code. Now writing an operator specifically for 2.x is 10 times harder than for 1.x. If you want to exploit all the advanced, 4th generation features, that's even 2-3 times harder on top (so 30 times harder in total).
The tricky stuff is explained here: https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0

Recommended way to register custom serializer with StateManager

In the pre-GA version of Service Fabric I was registering a custom serializer like this:
protected override IReliableStateManager CreateReliableStateManager()
{
    IReliableStateManager result = new ReliableStateManager(
        new ReliableStateManagerConfiguration(
            onInitializeStateSerializersEvent: InitializeStateSerializers));
    return result;
}

private Task InitializeStateSerializers()
{
    StateManager.TryAddStateSerializer(new KFOBinarySerializer());
    return Task.FromResult(false);
}
However, the CreateReliableStateManager method was removed in the GA version. I've struggled to get something working in its place. Currently I'm calling
StateManager.TryAddStateSerializer(new KFOBinarySerializer());
from within the service's RunAsync method, which appears to work fine.
What is the recommended way to register a custom serializer?
TryAddStateSerializer is deprecated. Anyone know if this is because custom serialization support will go away or if it will simply be supported through some other mechanism?
You can create the state manager in the StatefulService's constructor (full example here):
class MyService : StatefulService
{
    public MyService(StatefulServiceContext serviceContext)
        : base(serviceContext, CreateReliableStateManager()) { }

    private static IReliableStateManager CreateReliableStateManager() { ... }
}
Regarding the deprecated API, Microsoft says it's safe to use, but it will change in the future.

Eclipse plug-in development, long-running task

I have several UI components which have listeners. All these listeners invoke the method dialogChanged(). My goal is to do some long-running processing in this method without letting the UI freeze. According to Lars Vogel it is possible to do this with the help of UISynchronize injected at runtime. But it fails for me: the field of this type is not injected and I get a NullPointerException. Here's the relevant part of my code:
@Inject UISynchronize sync;

Job job = new Job("My Job") {
    @Override
    protected IStatus run(IProgressMonitor arg0)
    {
        sync.asyncExec(new Runnable()
        {
            @Override
            public void run()
            {
                updateStatus("Checking connection...");
                if (bisInstallDirSelected)
                    bisSettingsChanged();
                else
                    jarSettingsChanged();
            }
        });
        return Status.OK_STATUS;
    }
};

protected void dialogChanged()
{
    job.schedule();
}
The methods updateStatus(String s), bisSettingsChanged() and jarSettingsChanged() interact with the UI; to be precise, they use the method setErrorMessage(String newMessage) of the superclass org.eclipse.jface.wizard.WizardPage.
I'd appreciate if somebody could tell me what I am doing wrong or suggest a better way to handle this problem.
You can only use @Inject in classes that the e4 application model creates (such as the class for a Part or a Command Handler).
You can also use ContextInjectionFactory to do injection on your own classes.
For classes where injection has not been done you can use the 'traditional' way of running code in the UI thread:
Display.getDefault().asyncExec(runnable);
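A minimal sketch of the question's Job rewritten that way (updateStatus, bisInstallDirSelected, bisSettingsChanged and jarSettingsChanged are the members from the question above):
Job job = new Job("My Job") {
    @Override
    protected IStatus run(IProgressMonitor monitor) {
        // do the long-running work here, off the UI thread, then
        // hop back onto the SWT UI thread only for the widget updates
        Display.getDefault().asyncExec(new Runnable() {
            @Override
            public void run() {
                updateStatus("Checking connection...");
                if (bisInstallDirSelected)
                    bisSettingsChanged();
                else
                    jarSettingsChanged();
            }
        });
        return Status.OK_STATUS;
    }
};
job.schedule();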

Karaf OSGi getServiceReference returns null

Does anyone have any experience with getServiceReference returning null for what seems like no reason?
The following bundle registers the service and then proceeds to confirm that it's registered (whether or not this is even a valid test from within the same package, I don't know).
package db.connector;
...
public class Activator implements BundleActivator {
    private static ServiceRegistration registration;
    ...
    public void start(BundleContext _context) throws Exception {
        DatabaseConnector dbc = new DatabaseConnectorImpl();
        registration = context.registerService(
                DatabaseConnector.class.getName(),
                dbc, null);
        checkServiceRegistered();
    }
    ...
    public void checkServiceRegistered() {
        System.out.println("Printing all entries:");
        ServiceReference sr = context.getServiceReference(DatabaseConnector.class.getName());
        DatabaseConnector dbc = (DatabaseConnector) context.getService(sr);
        List<Protocol> result = dbc.getAllProtocols();
        for (int i = 0; i < result.size(); i++) {
            Protocol p = result.get(i);
            System.out.println("\t" + p.getId() + ": " + p.getName() + "(" + p.getOwner() + ")");
        }
    }
}
The output runs successfully, everything seems OK. Checking in the karaf webconsole, the service seems to be registered correctly:
267 [db.connector.DatabaseConnector] database-connector (144)
The code to get the registered service is as follows:
import db.connector.DatabaseConnector;
...
public List<Protocol> printAllEntries() {
    ServiceReference sr = Activator.getContext().getServiceReference(DatabaseConnector.class.getName());
    DatabaseConnector dbc = (DatabaseConnector) Activator.getContext().getService(sr);
    return dbc.getAllProtocols();
}
...
The DatabaseConnector bundle exports the correct package, and the one using the service imports the same.
What could possibly be going wrong here? I'm at a complete loss.
It looks alright.
What comes to mind: is the ordering OK? Are you sure the registration is done before you check the reference? The way you check in printAllEntries, you only check whether the service is there at that exact moment. As OSGi bundles can come and go, this isn't a reliable way to check. You should use either a ServiceTracker (a sketch follows below), or better still something like Declarative Services or Blueprint.
You could add a ServiceListener to the BundleContext; then you can print out what's happening in what order.
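For the ServiceTracker route, a minimal sketch (assuming the consuming bundle exposes its BundleContext via Activator.getContext(), as in the question):
public List<Protocol> printAllEntries() throws InterruptedException {
    ServiceTracker<DatabaseConnector, DatabaseConnector> tracker =
            new ServiceTracker<>(Activator.getContext(), DatabaseConnector.class, null);
    tracker.open();
    try {
        // waitForService blocks (here up to 5 seconds) until the service appears,
        // instead of failing when it happens to be absent at that exact moment
        DatabaseConnector dbc = tracker.waitForService(5000);
        return dbc != null ? dbc.getAllProtocols() : Collections.<Protocol>emptyList();
    } finally {
        tracker.close();
    }
}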
Hope this helps.
Turns out, it was just that I didn't refresh the OSGi bundles. My servlet was pointing to a now-obsolete bundle ID, so of course the service lookup was failing.