Hi, I've got a scheduler in my custom/Extension/modules/Schedulers/Ext/ScheduledTasks folder, in a file whose name matches my function name (fetch_account_pricing_info_from_recurly):
/* make the scheduler visible in the job creator */
array_push($job_strings, 'fetch_account_pricing_info_from_recurly');

function fetch_account_pricing_info_from_recurly(){
    $GLOBALS['log']->fatal('my fatal message');

    /* select all the active accounts */
    $sql = "SELECT * FROM accounts INNER JOIN accounts_cstm ON accounts_cstm.id_c = accounts.id WHERE status_c = 'Active Customer'";
    $result = $GLOBALS['db']->query($sql);

    while($row = $GLOBALS['db']->fetchByAssoc($result)){
        /* iterate over the accounts fetching the recurly subscription info from the SugarRecurlySyncService */
        $account = BeanFactory::getBean('Accounts', $row['id']);
        try{
            $account->recurly_amount_c = 100;
            $account->recurly_valid_c = true;
            $account->save();
        }catch(Exception $e){
            /* ignore failures for this account and move on to the next one */
        }
    }
    return true;
}
I've done a Quick Repair and Rebuild so that its contents show up in modules/Schedulers/Ext/ScheduledTasks/scheduledtasks.ext.php.
I've created a job based on this scheduler that is supposed to run every minute from the admin interface, but when I run the cron jobs from the command line with
php -f cron.php
I see these lines related to my job:
Wed Sep 11 16:41:12 2013 [11966][1][DEBUG] process_full_list: Scheduler(641c28bd-44ad-0b02-f7ad-522e4e2abb93): name = Update Pricing
Wed Sep 11 16:41:12 2013 [11966][1][DEBUG] process_full_list: Scheduler(641c28bd-44ad-0b02-f7ad-522e4e2abb93): job = function::fetch_account_pricing_info_from_recurly
If the job were actually running, though, I'd expect the fatal log message to show up in the log file. So why isn't it running?
When I checked my job_queue table I saw that I had an entry for this job with a status of 'running' instead of 'done'. Deleting that stuck row fixed my problem.
I'm just getting into Celery chains in my Django project. I have the following function:
def orchestrate_tasks_for_account(account_id):
    # Get the account, set status to 'SYNC' until the chain is complete
    account = Account.objects.get(id=account_id)
    account.status = "SYNC"
    account.save()

    chain = task1.s(account_id) | task2.s() | task3.s()
    chain()

    # if any of the tasks in the chain failed, set account.status = 'ERROR'
    # else set account.status = 'OK'
The chain works as expected, but I'm not sure how to take feedback from the chain and update the account based on the results.
In other words, I'd like to set the account status to 'ERROR' if any of the tasks in the chain fail; otherwise I'd like to set the account status to 'OK'.
The Celery documentation leaves me confused about how to handle an error with an if/else like I've commented in the last two lines above.
Does anyone have experience with this?
OK, here's what I came up with.
I've leveraged the waiting library in this solution:
from celery import chain
from waiting import wait


def orchestrate_tasks_for_account(account_id):
    account = Account.objects.get(id=account_id)
    account.status = "SYNC"
    account.save()

    job = chain(
        task1.s(account_id),
        task2.s(),
        task3.s()
    )
    result = job.apply_async()

    wait(
        lambda: result.ready(),    # when the async job is completed...
        timeout_seconds=1800,      # wait up to 1800 seconds (30 minutes)
        waiting_for="task orchestration to complete"
    )

    if result.successful():
        account.status = 'OK'
    else:
        account.status = 'ERROR'
    account.save()
I am open to suggestions to make this better!
I have a PowerShell script that creates an MS Access COM object and uses it to run a macro in an MS Access database, as shown below:
$AccessDb = New-Object -ComObject Access.Application
try {
    foreach ($file in $Files)
    {
        ...
        $AccessDb.DoCmd.RunMacro('mcrScr')
        ...
    }
}
catch { ... }
The issue is that when there are runtime errors, MS Access throws them in an interactive dialog box, which causes PowerShell to hang waiting for the window to close, so the macros for the other .mdb files never get to run.
I have been trying out options from online articles to time out this piece of my code, $AccessDb.DoCmd.RunMacro('mcrScr'), if it runs for more than x seconds. I have used jobs, runspaces, and [System.Diagnostics.Stopwatch] but have not been successful.
Is there any better approach to do this? I am kind of running out of options.
Edit:
@Paul, in response to your comment, I am adding how I am using a runspace to address the problem.
function Script-Timeout {
    param([scriptblock]$Command, [int]$Timeout)

    $Runspace = [runspacefactory]::CreateRunspace()
    $Runspace.Open()
    $PS = [powershell]::Create().AddScript($Command)
    $PS.Runspace = $Runspace
    $chk = $PS.BeginInvoke()

    if($chk.AsyncWaitHandle.WaitOne($Timeout))
    {
        $PS.EndInvoke($chk)
    }
    else
    {
        throw "Command taking too long to run. Timeout exceeded."
        $PS.EndInvoke($chk)
    }
}
The Script-Timeout function is then used in the portion of my script that runs the MS Access macro, as shown below:
try{
    foreach($mdbfile in $Files)
    {
        ...
        Script-Timeout -Command {
            $AccessDb = New-Object -ComObject Access.Application
            $AccessDb.OpenCurrentDatabase($mdbfile)
            $AccessDb.DoCmd.RunMacro('mcrUpdate')
            $AccessDb.CloseCurrentDatabase()
        } -Timeout 20
        ...
    }
}
catch
{
    # catch exception
}
I have artificially created a runtime error in the VBA that throws an error dialog box. This way the RunMacro portion of the script, if it gets run, will hang PowerShell, and this is where I expect the runspace to time out the macro run after x seconds.
The problem with the runspace approach is that the MS Access macro does not get run at all. In PowerShell debug mode, I see the if block of the Script-Timeout function always execute successfully, with or without the artificial runtime error.
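One likely explanation, though I can't verify it against your full script: the script block you hand to Script-Timeout executes in a brand-new runspace, so the caller's $mdbfile (and anything else from your session) is undefined inside it. OpenCurrentDatabase then gets an empty path, the macro never runs, and the pipeline finishes almost instantly, which is why the if block always succeeds. Below is an untested sketch that passes the file path into the runspace explicitly and kills a hung Access instance on timeout; it treats -Timeout as seconds, and the MSACCESS process name and mcrUpdate macro name are simply reused from your snippets.

function Script-Timeout {
    param([scriptblock]$Command, [object[]]$Arguments, [int]$Timeout)

    $Runspace = [runspacefactory]::CreateRunspace()
    $Runspace.Open()
    $PS = [powershell]::Create()
    $PS.Runspace = $Runspace
    [void]$PS.AddScript($Command)
    # pass caller values into the runspace, since it cannot see your session variables
    foreach ($arg in $Arguments) { [void]$PS.AddArgument($arg) }

    $chk = $PS.BeginInvoke()
    try {
        # WaitOne() takes milliseconds, so convert the timeout from seconds
        if ($chk.AsyncWaitHandle.WaitOne($Timeout * 1000))
        {
            $PS.EndInvoke($chk)
        }
        else
        {
            # stop the hung pipeline and kill any orphaned Access instance before giving up
            $PS.Stop()
            Get-Process MSACCESS -ErrorAction SilentlyContinue | Stop-Process -Force
            throw "Command taking too long to run. Timeout exceeded."
        }
    }
    finally {
        $Runspace.Close()
    }
}

# inside your foreach ($mdbfile in $Files) loop
Script-Timeout -Arguments @($mdbfile) -Timeout 20 -Command {
    param($dbPath)
    $AccessDb = New-Object -ComObject Access.Application
    $AccessDb.OpenCurrentDatabase($dbPath)
    $AccessDb.DoCmd.RunMacro('mcrUpdate')
    $AccessDb.CloseCurrentDatabase()
    $AccessDb.Quit()
}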
I've been working on a custom SNMP MIB and I've come up against a wall while trying to get an agent to return the proper data.
MIB (validated by running smilint -l 6):
IDB-MIB DEFINITIONS ::= BEGIN

IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE, Integer32, enterprises
        FROM SNMPv2-SMI
    MODULE-COMPLIANCE, OBJECT-GROUP
        FROM SNMPv2-CONF;

idb MODULE-IDENTITY
    LAST-UPDATED "201307300000Z"   -- Midnight 30 July 2013
    ORGANIZATION "*********"
    CONTACT-INFO "email: *******"
    DESCRIPTION  "description"
    REVISION     "201307300000Z"   -- Midnight 29 July 2013
    DESCRIPTION  "First Draft"
    ::= { enterprises 42134 }

iDBCompliance MODULE-COMPLIANCE
    STATUS current
    DESCRIPTION
        "Compliance statement for iDB"
    MODULE
        GROUP testGroup
        DESCRIPTION
            "This group is a test group"
    ::= { idb 1 }

test2 OBJECT-TYPE
    SYNTAX Integer32
    UNITS "tests"
    MAX-ACCESS read-write
    STATUS current
    DESCRIPTION
        "A test object"
    DEFVAL { 5 }
    ::= { idb 3 }

testGroup OBJECT-GROUP
    OBJECTS {
        test2
    }
    STATUS current
    DESCRIPTION "all test objects"
    ::= { idb 2 }

END
Agent file:
#!/usr/bin/perl
use NetSNMP::OID(':all');
use NetSNMP::agent(':all');
use NetSNMP::ASN(':all');

sub myhandler {
    my ($handler, $registration_info, $request_info, $requests) = @_;
    print "Handling request\n";
    for ($request = $requests; $request; $request = $request->next()) {
        #
        # Work through the list of varbinds
        #
        my $oid = $request->getOID();
        print "Got request for oid $oid\n";
        if ($request_info->getMode() == MODE_GET) {
            if ($oid == new NetSNMP::OID($rootOID . ".3")) {
                $request->setValue(ASN_INTEGER, 2);
            }
        }
    }
}

{
    $subagent = 0;
    print "Running new agent\n";
    my $rootOID = ".1.3.6.1.4.1.42134";
    my $regoid = new NetSNMP::OID($rootOID);
    if (!$agent) {
        $agent = new NetSNMP::agent('Name' => 'my_agent_name', 'AgentX' => 1);
        $subagent = 1;
        print "Starting subagent\n";
    }
    print "Registering agent\n";
    $agent->register("my_agent_name", $regoid, \&myhandler);
    print "Agent registered\n";
    if ($subagent) {
        $SIG{'INT'} = \&shut_it_down;
        $SIG{'QUIT'} = \&shut_it_down;
        $running = 1;
        while ($running) {
            $agent->agent_check_and_process(1);
        }
        $agent->shutdown();
    }
}

sub shut_it_down() {
    $running = 0;
    print "Shutting down agent\n";
}
When I run the agent I get the following:
Running new agent
Starting subagent!
Registering agent with oid idb
Agent registered
So I know that much is working. However when I run the following command:
snmpget -v 1 -c mycommunity localhost:161 test2.0
I get this error message:
Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
Failed object: IDB-MIB::test2.0
I know from snmptranslate that the MIB file is set up correctly. I have even looked through the debug output for snmpget (using -DALL) to make sure that the MIB is being loaded and parsed correctly.
So my question is: Why is my subagent not being passed the request?
Update:
I've been told by @EhevuTov that my MIB file is not valid; however, smilint does not report any issues, and running snmpget -v 2c -c mycommunity localhost:161 .1.3.6.1.4.1.42134.3.0 does report the NAME of the object (IDB-MIB::test2.0) correctly, but does not find any data for it.
I am getting IDB-MIB::test2 = No Such Object available on this agent at this OID, which makes me think that my agent is not registering properly; however, it's not throwing any errors.
Update 2:
I've been fiddling around with the agent code a bit. Based on the CPAN documentation (http://metacpan.org/pod/NetSNMP::agent), it looks like the $agent->register function call is supposed to return 0 if successful. So I checked the return code and got this:
Agent registered. Result: NetSNMP::agent::netsnmp_handler_registration=SCALAR(0x201e688)
Printing it out using Data::Dumper results in:
$VAR1 = bless( do{\(my $o = 34434624)}, 'NetSNMP::agent::netsnmp_handler_registration' );
I vaguely understand what bless does, but even so, I have no idea what this result is supposed to mean. So I'm starting to think that the agent is wrong somehow. Does anyone know how to debug these agents? Is there somewhere I can look to see if it's getting loaded properly into the master snmpd?
And I've solved the problem. It wasn't with the MIB, it was with the agent (which I had THOUGHT was working fine the whole time so I never bothered to check it).
I'd been running the agent stand-alone, because it seemed like it was working fine (never threw any errors when registering the handler). Apparently though, it needs to be run directly by snmpd.
I moved it to a directory that snmpd can access (because also apparently snmpd can't run scripts from /root, even though it's running as root), and added these lines in snmpd.conf:
perl print "\nRunning agents now\n";
perl do "/usr/share/snmp/agent.pl" || print "Problem running agent script: $!\n";
perl print "Agents run\n";
Note that these two lines were already present:
disablePerl false
perlInitFile /usr/share/snmp/snmp_perl.pl
I can now run the snmpget command and get the expected response.
> snmpget -v 2c -c mycommunity localhost:161 .1.3.6.1.4.1.42134.3
IDB-MIB::test2 = INTEGER: 2 tests
How do I set up a cron job in Magento, step by step? I have an attribute that holds a set date, and I want the cron job to disable the product if that day has passed...
1) Create a custom module; there are many guides to this out there, or use a module creator to get you started.
2) Add the cron job setup to your module's config:
config.xml
...
<crontab>
    <jobs>
        <mymodule_disable>
            <schedule>
                <!-- every 10 min -->
                <cron_expr>*/10 * * * *</cron_expr>
            </schedule>
            <run>
                <model>mymodule/Scheduler::disable</model>
            </run>
        </mymodule_disable>
    </jobs>
</crontab>
</config>
Now create a class to handle the task for you (modulename/Model/Scheduler.php).
Scheduler.php
<?php
class Mymodule_Model_Scheduler
{
    /**
     * Disable products for us
     */
    public static function disable()
    {
        // This will be run every 10 minutes; we want to get the applicable products.
        // You will need to customize the filter for what you need, subtracting
        // or adding date values etc... you get the idea :)
        $date = Mage::getModel('core/date')->gmtDate(); // add/subtract etc.
        $collection = Mage::getModel('catalog/product')->getCollection();
        $collection->addFieldToFilter('custom_date_attr', array(
            array('to' => $date),
            //array('gteq' => $date)
        ));
        foreach ($collection as $product) {
            $product->setStatus(Mage_Catalog_Model_Product_Status::STATUS_DISABLED);
            $product->save();
        }
    }
}
Now you need to set up a cron job to run Magento's scheduler, for example:
*/10 * * * * php -f /path/to/magento/cron.php
I'm trying to automate the deployment process, and as part of it, I need to run my release build from the command line. I can do it using a command like:
.\TFSBuild start http://server-name:8080/tfs/project-collection project-name build-name priority:High /queue
It even returns some information about the queued build: Build queued. Queue position: 2, Queue ID: 11057.
What I don't know is how to get info about currently running builds, or about the state of my running build, from the PowerShell command line. The final aim is to start publishing after that build completes.
I've already got all the necessary PowerShell scripts to create the deployment package from the build results, zip it, copy it to production, and install it there. All I need now is to know when my build succeeds.
This function will wait for a build with the Queue ID given by TFSBuild.exe:
function Wait-QueuedBuild {
    param(
        $QueueID
    )

    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Build.Client')
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Client')

    $uri = [URI]"http://server-name:8080/tfs/project-collection"
    $projectCollection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($uri)
    $buildServer = $projectCollection.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
    $spec = $buildServer.CreateBuildQueueSpec('*','*')

    do {
        $build = $buildServer.QueryQueuedBuilds($spec).QueuedBuilds | where {$_.Id -eq $QueueID}
        sleep 1
    } while ($build)
}
You can grab the ID returned by TFSBuild.exe, then call the function:
$tfsBuild = .\TFSBuild start http://server-name:8080/tfs/project-collection project-name build-name priority:High /queue
Wait-QueuedBuild [regex]::Match($tfsBuild[-1],'Queue ID: (?<id>\d+)').Groups['id'].Value
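Wait-QueuedBuild only tells you when the build has left the queue. Since the final aim is to publish only after a successful build, you may also want to check the result. Here is an untested sketch along the same lines (Get-QueuedBuildOutcome is a name I made up, and the collection URL is the same placeholder as above); it remembers the build URI while the entry is still queued and then inspects IBuildDetail.Status once it is gone:

function Get-QueuedBuildOutcome {
    param(
        $QueueID
    )

    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Build.Client')
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Client')

    $uri = [URI]"http://server-name:8080/tfs/project-collection"
    $projectCollection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($uri)
    $buildServer = $projectCollection.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
    $spec = $buildServer.CreateBuildQueueSpec('*','*')

    # poll the queue; once the build starts, the IQueuedBuild exposes its IBuildDetail and URI
    $buildUri = $null
    do {
        $queued = $buildServer.QueryQueuedBuilds($spec).QueuedBuilds | where {$_.Id -eq $QueueID}
        if ($queued -and $queued.Build) { $buildUri = $queued.Build.Uri }
        sleep 5
    } while ($queued)

    if ($buildUri) {
        # look the finished build up by URI and test whether it succeeded
        $detail = $buildServer.GetBuild($buildUri)
        return ($detail.Status -eq [Microsoft.TeamFoundation.Build.Client.BuildStatus]::Succeeded)
    }

    # the build left the queue before we ever saw it start; treat that as a failure
    return $false
}

You could then gate the publishing step on the returned value, e.g. if (Get-QueuedBuildOutcome $queueId) { ...start publishing... }.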
Using the work by E.Hofman available here, it is possible to write a C# console app that uses the TFS SDK and reveals whether any build agent is currently running, as follows:
using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

namespace ListAgentStatus
{
    class Program
    {
        static void Main()
        {
            TfsTeamProjectCollection teamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://TFSServer:8080"));
            var buildServer = teamProjectCollection.GetService<IBuildServer>();

            foreach (IBuildController controller in buildServer.QueryBuildControllers(true))
            {
                foreach (IBuildAgent agent in controller.Agents)
                {
                    Console.WriteLine(agent.Name + " is " + agent.IsReserved);
                }
            }
        }
    }
}
The .IsReserved property is what toggles to 'True' during execution of a build.
I'm sorry my PowerShell skills are not good enough to provide a PS variant of the above. Please take a look here, where the work by bwerks might help you do that.
# load classes for execution
[Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Build.Client") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client") | Out-Null

# declare working variables
$Uri = New-Object System.Uri "http://example:8080/tfs"

# get reference to project collection
$ProjectCollection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($Uri)

# get reference to build server
$BuildServer = $ProjectCollection.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

# loop through the build controllers
foreach ($Controller in $BuildServer.QueryBuildControllers($true))
{
    # loop through agents
    foreach ($BuildAgent in $Controller.Agents)
    {
        Write-Host "$($BuildAgent.Name) is $($BuildAgent.IsReserved)"
    }
}