When we use a persistent EJB timer with @Schedule (persistent=true), deploy it to a cluster, and then change the actual schedule inside @Schedule and redeploy to the cluster, does the original schedule get replaced with the new one (removed and re-added with the new parameters), or do both schedules remain active (keeping in mind that persistent=true is set)?
This is what I have read so far: each scheduler instance has a unique JNDI name, and @Schedule automatically creates a timer during application deployment, so it would be better to remove the automatically created EJB timer or cancel the original schedule to avoid trouble. But I don't know how to cancel the original schedule programmatically, or whether that would need to be done by the WebSphere admins if both the original and the changed schedule remain active.
Also, from this document, the removeAutomaticEJBTimers command is used to remove timers from a specified scheduler, but that also seems to fall into the area of a WebSphere admin, not a developer.
How can a developer programmatically cancel an automatic EJB timer created with the @Schedule annotation?
I am using Java EE 6 with WebSphere 8.5 and EJB 3.1.
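What I have in mind is something like the sketch below, using the standard EJB 3.1 TimerService API (the bean and method names are mine, and I don't know whether cancelling automatic persistent timers this way is actually supported on WebSphere):

import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Stateless
public class ScheduleBean {

    @Resource
    private TimerService timerService;

    @Schedule(persistent = true, minute = "*", hour = "*", info = "myTimer")
    public void run() {
        // periodic work
    }

    // In EJB 3.1, getTimers() returns only the timers associated with
    // this bean, so a cancel loop has to live in the bean that owns them.
    public void cancelMyTimer() {
        for (Timer timer : timerService.getTimers()) {
            if ("myTimer".equals(timer.getInfo())) {
                timer.cancel();
            }
        }
    }
}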
Do the following to remove persisted EJB timers:
Delete the directory jboss-home\standalone\data\timer-service-data\{yourprojectname}.{servicename}
See this page: Creating timers using the EJB timer service
The application server automatically removes persistent automatic timers from the database when you uninstall the application while the server is running. If the application server is not running, you must manually delete the automatic timers from the database. Additionally, if you add, remove, or change the metadata for automatic timers while the server is not running, you must manually delete the automatic timers.
I have the following class:
import javax.ejb.LocalBean;
import javax.ejb.Schedule;
import javax.ejb.Stateless;

@Stateless
@LocalBean
public class HelloBean {

    // Automatic persistent timer, created at application deployment
    @Schedule(persistent = true, minute = "*", hour = "*", info = "myTimer")
    public void printHello() {
        System.out.println("### hello");
    }
}
When I install it on the server, I can find the related automatic timer:
C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\bin>findEJBTimers.bat server1 -all
ADMU0116I: Tool information is being logged in file C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\logs\server1\EJBTimers.log
ADMU0128I: Starting tool with the AppSrv02 profile
ADMU3100I: Reading configuration for server: server1
EJB timer : 3 Expiration: 2/14/15 12:39 PM Calendar
EJB : ScheduleTestEAR, ScheduleTest.jar, HelloBean
Info : myTimer
Automatic timer with timout method: printHello
Calendar expression: [start=null, end=null, timezone=null, seconds="0",
minutes="*", hours="*", dayOfMonth="*", month="*", dayOfWeek="*", year="*"]
1 EJB timer tasks found
After uninstalling the application, the timer is removed:
C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\bin>findEJBTimers.bat server1 -all
ADMU0116I: Tool information is being logged in file
C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\logs\server1\EJBTimers.log
ADMU0128I: Starting tool with the AppSrv02 profile
ADMU3100I: Reading configuration for server: server1
0 EJB timer tasks found
I don't know how you are 'redeploying' your applications, but it looks like your process is incorrect, since with the normal install/uninstall/update process automatic timers are correctly removed.
UPDATE
On the same page you have info regarding the Network Deployment (ND) environment:
Automatic persistent timers are removed from their persistent store when their containing module or application is uninstalled. Therefore, do not update applications that use automatic persistent timers with the Rollout Update feature. Doing so uninstalls and reinstalls the application while the cluster is still operational, which might cause failure in the following cases:

If a timer running in another cluster member activates after the database entry is removed and before the database entry is recreated, then the timer fails. In this case, a com.ibm.websphere.scheduler.TaskPending exception is written to the First Failure Data Capture (FFDC), along with the SCHD0057W message, indicating that the task information in the database has been changed or canceled.

If the timer activates on a cluster member that has not been updated after the timer data in the database has been updated, then the timer might fail or cause other failures if the new timer information is not compatible with the old application code still running in the cluster member.
In JBoss/WildFly, if you change the timer-service to use a "clustered-store" instead of "default-file-store", you'll be able to programmatically cancel a Timer. Here is a brief guide explaining how to do it:
Mastertheboss.com: Creating clustered EJB 3 Timers
Published: 08 March 2015
http://www.mastertheboss.com/jboss-server/wildfly-8/creating-clustered-ejb-3-timers
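In outline, the change swaps the default file store for a database-backed store shared by the cluster. A rough sketch of the ejb3 subsystem in standalone.xml (element and attribute support varies by WildFly version, and the datasource JNDI name here is a placeholder; see the linked guide for the exact steps):

<timer-service thread-pool-name="default" default-data-store="clustered-store">
    <data-stores>
        <file-data-store name="default-file-store" path="timer-service-data"
                         relative-to="jboss.server.data.dir"/>
        <database-data-store name="clustered-store"
                             datasource-jndi-name="java:jboss/datasources/TimersDS"
                             partition="timer"/>
    </data-stores>
</timer-service>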
Related
I have created multiple scheduled tasks in Laravel 5 and created a cron job on cPanel, and it is working fine. But now I want to stop a specific scheduled task. I have commented out the command and removed the class from app/Console/Kernel.php, but it is still running on the live server at that specific time.
Before, in Kernel.php:
protected $commands = [
    Commands\CreatePostingSchedules::class,
    Commands\ChangeCreatedPostDuration::class,
    Commands\abc::class,
    Commands\xyz::class,
    Commands\NewMonthUser::class,
    Commands\DeliverOrdersWithCourier::class,
];
Now I remove the class Commands\ChangeCreatedPostDuration::class.
After removing the class:
protected $commands = [
    Commands\CreatePostingSchedules::class,
    Commands\abc::class,
    Commands\xyz::class,
    Commands\NewMonthUser::class,
    Commands\DeliverOrdersWithCourier::class,
];
But it is still running on the live server.
Can anyone help? How do I stop this specific scheduled task?
Thanks
Make sure that you have completely removed the scheduled function/command from "Kernel.php" of your Laravel 5 app and that you do not have a related manual cron job in your cPanel -> Cron Jobs.
You might also want to check this article: https://laracasts.com/discuss/channels/laravel/how-to-stop-scheduled-tasks-from-running-in-kernelphp
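Note that removing the class from $commands is not enough on its own: if the task is still registered in the schedule() method of Kernel.php, the cron-driven schedule:run will keep firing it. A minimal sketch (the command signature strings here are invented):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // This registration must be deleted too, or the task keeps running:
    // $schedule->command('posts:change-created-duration')->daily();

    // Remaining tasks stay scheduled as before.
    $schedule->command('posts:create-schedules')->hourly();
}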
Using version 5.1.163 of the Service Fabric runtime.
Created a Service Fabric application with one stateless Web API (i.e., using an OWIN communication listener).
Modified the generated code so that the listening endpoint contains the partition ID/instance ID/new GUID (just as is the case for stateful services). This should allow me to create another app instance so that I can have multi-tenancy at the application level.
By default, the Local.xml file is set to 1 instance for this service.
Deployed it to the local machine with F5. Verified that it is deployed with only one instance.
Verified that the service is working fine.
Navigated to the local Service Fabric Explorer and clicked on the Cluster/Application/AppType node. Clicked on 'Create app instance'.
It successfully created a 2nd app instance.
However, in this new instance the service is deployed to all 5 nodes.
I was expecting it to deploy the service instance to only one node. Is this a bug, and only in this version of Service Fabric?
When you deploy a Service Fabric application using Visual Studio (or from PowerShell) you use the Deploy-FabricApplication.ps1 script that is generated for your application and found in /scripts under your SF project. This script does two things (mainly):
Create/update the application type
Create a new/upgrade existing instance of the application type
The second part is similar to what you do in SF Explorer, except that this one also considers the publish profile file you supply. The PS script actually reads your publish profile XML files, extracts any parameters in there into a hashtable (a dictionary), and passes that as an argument in step 2.
You can create an instance of an SF application type using the PS cmdlets (alternatively you can use FabricClient). The command that does this is New-ServiceFabricApplication. Here you have the chance to supply your own application parameters, including the instance count for services in your new application instance (if you have a dynamic parameter for that in your application manifest).
So, when you use SF Explorer to create a new application instance you cannot control how that instance is created; it always uses the default parameter values specified directly in ApplicationManifest.xml, not the values you have specified in your publish profiles (local1, local5, cloud, etc.).
To control the creation, run New-ServiceFabricApplication with your parameters as a hashtable.
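For example (the application name, type name/version, and parameter name below are placeholders; the parameter must be declared in ApplicationManifest.xml for the override to apply):

# Connect to the local dev cluster, then create a second app instance
# with its stateless service scaled down to a single instance.
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000

New-ServiceFabricApplication `
    -ApplicationName "fabric:/MyApp2" `
    -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" `
    -ApplicationParameter @{ "WebApi_InstanceCount" = "1" }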
Here goes - bear with me:
Two Autofac 4.2.1 Containers:
One in an Asp.NET 4.6.1 WebApi project
One in an NServiceBus 6 host
Both possess an IJobService reference to the JobService (which saves jobs to DynamoDB).
Run the project in Visual Studio...
If I make a WebApi request into the first JobService it succeeds, inserts a record into DynamoDB, and drops a command on the bus for NServiceBus to pick up.
During the processing of the saga, NServiceBus makes a call to JobService again (presumably on the second container) to save progress. This second call fails to insert into DynamoDB because the lifetime scope has been disposed. If I try to resolve anything from IComponentContext I get:
Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
The NServiceBus host is running AsA_Server and I register the container in the Customize method of IConfigureThisEndpoint.
Any pointers on how to see where the lifetime is getting dumped or if it's mysteriously picking the wrong IJobService somehow?
Just to close this one out - we ended up redesigning the solution and moving any web service calls out to their own handlers. That was based on the advice found at http://docs.particular.net/nservicebus/sagas. That change resolved the issue one way or another.
Specifically, this guidance:
Other than interacting with its own internal state, a saga should not access a database, call out to web services, or access other resources - neither directly nor indirectly by having such dependencies injected into it.
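In code terms the redesign looks roughly like this (NServiceBus 6 style APIs; all message, saga, and service names below are invented for illustration):

using System;
using System.Threading.Tasks;
using NServiceBus;

public class JobStepCompleted : IEvent { public Guid JobId { get; set; } }
public class SaveJobProgress : ICommand { public Guid JobId { get; set; } }

public interface IJobService { Task SaveProgress(Guid jobId); }

public class JobSagaData : ContainSagaData
{
    public Guid JobId { get; set; }
    public int CompletedSteps { get; set; }
}

// The saga touches only its own state and sends a command...
public class JobSaga : Saga<JobSagaData>, IHandleMessages<JobStepCompleted>
{
    public async Task Handle(JobStepCompleted message, IMessageHandlerContext context)
    {
        Data.CompletedSteps++;
        await context.Send(new SaveJobProgress { JobId = message.JobId });
    }

    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<JobSagaData> mapper)
    {
        mapper.ConfigureMapping<JobStepCompleted>(m => m.JobId).ToSaga(s => s.JobId);
    }
}

// ...while a plain handler owns the IJobService dependency, so its
// lifetime is managed by the one container that dispatches it.
public class SaveJobProgressHandler : IHandleMessages<SaveJobProgress>
{
    readonly IJobService jobService;

    public SaveJobProgressHandler(IJobService jobService)
    {
        this.jobService = jobService;
    }

    public Task Handle(SaveJobProgress message, IMessageHandlerContext context)
    {
        return jobService.SaveProgress(message.JobId);
    }
}

Since the saga never holds IJobService, no call path can resolve it from a lifetime scope that has already been disposed.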
I'm new to Chef and trying to understand why this code does not return any error, while if I do the same with 'start' I get an error that no such service exists.
service 'non-existing-service' do
  action :stop
end
# chef-apply test.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* service[non-existing-service] action stop (up to date)
I don't know which platform you are running on, but if you are running on Windows it should at least log
Chef::Log.debug "#{@new_resource} does not exist - nothing to do"
given that you have debug as the log level.
You could argue this is the wrong behaviour, but if the service does not exist, it for sure isn't running.
Source code
https://github.com/chef/chef/blob/master/lib/chef/provider/service/windows.rb#L147
If you are getting one of the variants of the init.d provider, they default to getting the current status of a service by grepping the process table. Because Chef does its own idempotence checks internally before calling the provider's stop method, it would see there is no such process in the table and assume it was already stopped.
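If you want the recipe to skip the resource explicitly instead of silently reporting "up to date", you can add a guard yourself. A sketch for an init.d-based platform (the init-script path check is an assumption about your platform):

# Only attempt the stop when the service's init script actually exists
service 'non-existing-service' do
  action :stop
  only_if { ::File.exist?('/etc/init.d/non-existing-service') }
end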
The problem I'm having is with this code:
if (!MessageQueue.Exists(QueueName))
{
    MessageQueue.Create(QueueName, true);
}
It will check if a queue exists; if it doesn't I want it to create the queue. This code has been working and hasn't changed for a few months. Today I started receiving this error:
[MessageQueueException (0x80004005): A queue with the same path name already exists.]
   System.Messaging.MessageQueue.Create(String path, Boolean transactional) +239478
The queues are local, and if I delete the specific queue it will work once. After the queue is created, it starts to fail again with the same error message.
It looks like the issue may be caused by the Network Load Balancing (NLB) configuration. I was unaware of a change that recently put the machine in an NLB environment. The configuration we are using is an unsupported one.
More information is in How Message Queuing can function over Network Load Balancing (NLB).
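Until the configuration is fixed, one defensive option is to treat "queue already exists" as benign instead of trusting Exists(). A sketch (QueueName is assumed to be defined as in the question):

using System.Messaging;

// Exists() can report false even though Create() then collides
// (e.g., under NLB), so tolerate only the "already exists" error code.
try
{
    if (!MessageQueue.Exists(QueueName))
    {
        MessageQueue.Create(QueueName, true); // transactional queue
    }
}
catch (MessageQueueException ex)
{
    if (ex.MessageQueueErrorCode != MessageQueueErrorCode.QueueExists)
    {
        throw; // unexpected failure, surface it
    }
    // "Already exists" is fine here: another node or caller won the race.
}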