How do you create a cron job in Kohana? I set up a regular controller that extends Controller_Base, and I ran this command line:
/usr/bin/wget http://domain/controller/custom_cron
But I can't get it to work; it just doesn't execute. No error, nothing. I didn't put any special code in my controller, just what I need to run my program. So if there is some special command needed to call a cron job, I didn't add it (because I don't know what it would be).
Also, I need it to make MySQL calls, so I would need to include the DB info and connection and so on (if that doesn't happen automatically). And I work off a custom model. How would I include that (if it isn't done automatically)?
Thank you.
php /path/to/index.php --uri=controller/action/etc/etc
Calling it like this makes it act almost exactly as it does in a web environment. The only difference is that the request protocol is 'cli', which you'll need to keep in mind if you are generating links.
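For example, a crontab entry along these lines would run the action every 15 minutes (the schedule and the /path/to placeholder are mine, not from the original):
*/15 * * * * /usr/bin/php /path/to/index.php --uri=controller/custom_cron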
"So if there is some special command needed to call a cron job, I didn't add it (because I don't know what it would be)"
Daft question - have you added that wget command to crontab or similar?
If on the other hand you're looking to make a "poor man's cron", you could try creating a hook that runs on every page load and checks the last time the job was run, perhaps storing the last timestamp in a file or database.
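The idea, sketched here in Python purely for illustration (Kohana itself is PHP; the marker-file path, interval, and run_job stub are placeholders of mine):
import os
import time

LAST_RUN_FILE = "/tmp/last_cron_run"   # hypothetical marker file
INTERVAL = 60 * 60                     # run the job at most once per hour

def run_job():
    print("running scheduled job")     # stand-in for the real work

def maybe_run_job():
    # Called on every page load; only does the work when enough time has passed
    try:
        last_run = os.path.getmtime(LAST_RUN_FILE)
    except OSError:
        last_run = 0
    if time.time() - last_run >= INTERVAL:
        run_job()
        with open(LAST_RUN_FILE, "w"):  # touch the marker to record this run
            pass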
I had to use cURL as my fire-this-script command in cron. For example:
30 18 * * * curl "http://domain.com/controller/method"
php and wget didn't work for me, even when calling index.php and adding the URI as suggested above.
Also, FYI: the most transparent way to test this was just running the line manually over SSH to see what the results were. Once I confirmed it was working there, I put it in the cron.
I was wondering how one could make a command that sets up a mod log for a certain server, or starts logging in one that already exists. This will probably require a database or JSON; anything is fine, as long as I can get it to work. Any help would be appreciated.
The way I would do this is with a .txt file that contains all of the logs. I would add a line every time an event occurred within the guild; you could configure this for whatever events you want.
import datetime

@client.event
async def on_member_join(member):
    # Append a timestamped line to the log file whenever a member joins
    with open("logs.txt", "a") as logsFile:
        logsFile.write("\n[{}] {} just joined the server".format(
            datetime.datetime.now(), member.name))
If you are currently using the logging module for discord.py or any other libraries, you can also use it for logging your own messages, either using the same or a separate logger.
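For instance, a minimal sketch with Python's standard logging module (the file name and format string are my own choices, not from the original answer):
import logging

# Send discord.py's own log output and your custom entries to one file
handler = logging.FileHandler(filename="mod.log", encoding="utf-8")
handler.setFormatter(logging.Formatter("[%(asctime)s] %(levelname)s: %(message)s"))

log = logging.getLogger("discord")   # or logging.getLogger("modlog") for a separate logger
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("mod log initialised")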
I'm a newbie at this. In my CICS startup JCL I am setting SYSIDNT=Hip1. Is it possible to echo this out to the terminal on startup, just like I have found I can do with other things like &SYSUID and &APPLID?
If I have something like this:
SYSIDNT=Hip1
GMTEXT='CICS: &SYSUID, &APPLID, &RELEASE at'
Then I see something like this:
CICS: HIPPO01, IJCKBCDM, 730 at 10:47:11
But I cannot get Hip1 echoed out, and I'd like to know if that is possible to achieve. I see Hip1 echoed to the screen once CICS is up and running and I'm using various transactions, so I'm positive it's set correctly - it's just that I can't seem to tag it onto the GMTEXT.
I believe I got something I'm happy with by using an EXPORT and a SET statement for a new symbol, which I called SYSTEMID, then assigning that to SYSIDNT and including it in the GMTEXT via &SYSTEMID.
//MYEXP EXPORT SYMLIST=(SYSTEMID)
//STEP1 SET SYSTEMID=HIP1
Just as we have D ^JOBEXAM in InterSystems Caché to examine the jobs running in the background or scheduled, how can we do the same in GT.M?
Is there a command for this? Please advise.
The answer is $ZINTERRUPT, and what triggers it: mupip intrpt. By default it dumps a file in your GT.M start-up directory containing the process state via ZSHOW "*"; however, you can make $ZINTERRUPT do anything you want.
$ZINT documentation:
http://tinco.pair.com/bhaskar/gtm/doc/books/pg/UNIX_manual/ch08s35.html
A complex example of using $ZINT:
https://github.com/shabiel/random-vista-utilities/blob/master/ZSY.m
--Sam
Late answer here. In addition to what Sam has said, there is a code set, "^ZJOB", that is used in the VistA world. I could get you a copy of it if you wanted.
I am running multiple specs using a Protractor configuration file as follows:
...
specs: ['abc.js', 'xyz.js']
...
After abc.js is finished I want to reset my App to an initial state from where the next spec xyz.js can kick off.
Is there a well defined way of doing so in Protractor? I'm using Jasmine as a test framework.
You can use something like this:
specs: ['*.js']
But I recommend you distinguish the spec files with a suffix, such as abc-spec.js and xyz-spec.js. Then your specs entry would look like this:
specs: ['*-spec.js']
This avoids the config file itself being picked up and 'run' as a test if you keep it in the same folder as your spec files.
There is also the downside that the tests will run in 0 -> 9 and A -> Z order, e.g. abc-spec.js will run before xyz-spec.js. If you want a custom execution order, you can prefix your spec file names, for instance 00-xyz-spec.js and 01-abc-spec.js.
To reset the app, sadly there is no built-in way (source); you need to work around it. Use something like
browser.get('http://localhost:3030/');
browser.waitForAngular();
whenever you need to reload your app. It will force the page to be reloaded. But if your app uses cookies, you will also need to clear them in order to get a completely clean state.
I used a different approach and it worked for me. Inside my first spec I add a logout test case that logs out of the app and, on reaching the login page, clears the cookies before logging in again, using the following:
browser.driver.manage().deleteAllCookies();
The flag named restartBrowserBetweenTests can also be specified in a configuration file. However, this comes with a valid warning from the Protractor team:
// If [set to] true, protractor will restart the browser between each test.
// CAUTION: This will cause your tests to slow down drastically.
If the speed penalty is of no concern, this could help.
If the above doesn't help and you absolutely want to be sure that the state of the app (and the browser!) is clean between specs, you need to roll your own shell script that gathers all your *_spec.js files and calls protractor --specs [currentSpec from a spec list/test suite] for each one.
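A minimal sketch of that loop in Python rather than shell (the glob pattern and the conf.js name are assumptions of mine):
import glob
import subprocess

# Run each spec file in its own Protractor process so every spec starts
# from a completely fresh browser and app state.
for spec in sorted(glob.glob("*_spec.js")):
    subprocess.run(["protractor", "conf.js", "--specs", spec], check=True)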
I'm having a strange problem here. I'm moving a (working) site to a new Apache server to which I don't have direct access (I have to go through two people to get stuff done).
The site uses a Perl script called adframe to parse HTML templates. The URLs it's called with look like /cgi-bin/adframe/index.html?x=something, with adframe being the script. The missing suffix never caused any real problems. But on this new Ubuntu server $ENV{'QUERY_STRING'} is always empty. $ENV{'REQUEST_METHOD'} shows up correctly as GET, but the query string shows nothing ...
Regular *.cgi scripts show the query string without problems.
From the logs I gathered that the server seems to be running FastCGI (mod_fcgid), and the server doesn't even accept .pl as an extension for scripts. I don't have much experience with server software, but I figured it might be a problem with the server not recognizing adframe as a CGI script and thus not passing the query string correctly ... Can anyone give me a few hints as to where I could point the administrator, or maybe something I could do in .htaccess myself? Anything to make sure adframe is recognized as a CGI script (if that's the problem ...)?
Any help is appreciated!
thomas
EDIT: I found more details: the server seems to be running a Varnish cache ... that's the main difference from my usual configurations ...
Also, the way the script works is this: if you call /cgi-bin/adframe/somedir/somefile.html?x=something, $ENV{PATH_INFO} tells it which template to parse and $ENV{QUERY_STRING} is, well, the query string. Now the query string is empty, but if I call /cgi-bin/adframe?x=something (without any PATH_INFO), the query string shows up!
Does anyone have an idea what's going on here?
thanks!
Got it. The Varnish cache strips the query strings off static content (*.html etc.) ... phew.
I just ran into the same problem. I am a complete newbie at Perl scripting.
I tried the following:
@values = split (/&/, $ENV{'QUERY_STRING'});
but it didn't work.
This worked:
@values = split (/&/, "$ENV{'QUERY_STRING'}");
Just in case other newbies have run into the same problem.