I have a PHP script that is invoked via a web request and inserts data into a DB.
crontab -e
...other cron tasks above...

# Run script.php at 16:10
10 16 * * * /usr/bin/wget -q -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php
Apparently this script runs twice: at 16:10:01 and again at 16:25:02!
Is something wrong? Does it have to do with using wget?
Or did I set the cron schedule incorrectly?
When I open http://localhost/project/script.php in a browser, it only runs once.
Any ideas about this problem?
I've tested; there are no other users running the same job... I suspect it's the way wget works.
My script needs at least 20 minutes to complete without sending back a response (it pulls a lot of data from web services and saves it to the DB), so I suspect a default timeout or retry in wget is causing this problem.
The wget docs give a default read-timeout of 900 seconds, or 15 minutes.
"If, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted."
This is why you were seeing the script being called again 15 minutes later. You can specify a longer read-timeout by adding the parameter with an appropriate number of seconds:
--read-timeout=1800
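Applied to the original entry, that might look like this (a sketch based on the crontab line in the question):

10 16 * * * /usr/bin/wget -q --read-timeout=1800 -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php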
I think I solved my own question.
My PHP script takes some time to finish; I guess wget retries or times out after some default interval.
I solved it by invoking the script with /usr/bin/php instead of fetching it over HTTP.
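A sketch of the changed crontab entry, assuming the CLI interpreter is at /usr/bin/php and the script's filesystem path is /var/www/project/script.php (that path is an assumption, not from the question):

10 16 * * * /usr/bin/php /var/www/project/script.php >> /home/username/my_cronjobs/logs/cron_adhoc 2>&1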
Whose crontab is this?
Check whether there is another user for whom you set up the same cron job at a different time and forgot about it.
Situation: Visual Studio Code (with the Browser library) runs a couple of .robot files, started manually.
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, and that's not what you want. You actually want the results up to that point (or, better described: you still want the output.xml, log.html, and report.html links).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created while the tests run, so when you break the run you will probably not have all the resources you need.
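For example, if a usable output.xml does exist, regenerating the reports would look something like this (the file names here are the defaults; adjust as needed):

rebot --log log.html --report report.html output.xml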
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test will be stopped automatically and you should get all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
I have a CircleCI config in which I run the machine executor, start up my application using docker-compose, and then run E2E tests in Cypress (which is not in Docker) against this app. Cypress basically spins up a headless browser and tries to hit the specified URL.
Now, no matter what, my first test always fails. I have created 10 tests that just hit the root URL and click a button. I have those tests run first. The first one always fails.
The error is
CypressError: cy.visit() failed trying to load:
http://localhost:3000/
which basically means there was no response, or a 4xx/5xx status code.
I thought the app might not be ready yet, so I added a sleep before starting the tests. I set it to 9 minutes (10 minutes times out on CircleCI). The first test failed. I ratcheted it down to 2 minutes. The first test still failed.
Again, to be clear, the first 10 tests are identical, so it's not test-specific.
Update
I have crossposted this to the CircleCI forum.
I think your server isn't ready.
Before running the Cypress tests, you have to wait for the server.
And don't use sleep for that.
You can use wait-on or start-server-and-test.
You can check this doc. https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
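For example, a CI step using wait-on might look like this (a sketch; the port, and the http-get:// prefix that forces a GET request instead of HEAD, are assumptions about your setup):

npx wait-on http-get://localhost:3000 && npx cypress run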
So TJ Jiang was close with his answer. Unfortunately wait-on doesn't work with CircleCI.
So I wrote my own:
timeout 540 bash -c 'while [ $(curl -s -o /dev/null -w %{http_code} localhost:3000) != "200" ]; do sleep 15; done' || false
This pings the URL every 15 seconds for up to 9 minutes until it receives an HTTP status code of 200, or it times out and exits.
To be specific, I wrote a Makefile target that uses the line above; I don't know how just pasting it into config.yml would work.
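For what it's worth, such a Makefile target might look like this (a sketch; the target name is made up, and note the $$ escaping that make requires and the tab-indented recipe line):

# Wait up to 9 minutes for the app to answer 200 on port 3000.
wait-for-app:
	timeout 540 bash -c 'while [ "$$(curl -s -o /dev/null -w "%{http_code}" localhost:3000)" != "200" ]; do sleep 15; done'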
Running this as a step prior to the Cypress tests does in fact make all 5 identical tests pass.
It now pings for about 3 1/2 minutes and then goes through. I had tested with sleep for up to 9 minutes and always got the same result, so I don't know why this works better than sleep, but it clearly does.
I had the same issue and solved it with server/route/wait. I first start the server, then create a route alias, wait for it to call the API endpoint (which triggers the app to fill the DOM), and then do the rest (cy.get/cy.click).
Example before:
cy.get('myCssSelector')
After server/route/wait:
cy.server()
cy.route('GET', '*/project/*').as('getProject')
cy.wait('@getProject')
  .then(resp => {
    expect(resp.status).to.eq(201)
  })
cy.get('myCssSelector')
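As an aside, newer Cypress versions deprecate cy.server()/cy.route() in favor of cy.intercept(); a rough equivalent of the above (a sketch, not tested against the asker's app) would be:

cy.intercept('GET', '*/project/*').as('getProject')
cy.wait('@getProject')
  .its('response.statusCode')
  .should('eq', 201)
cy.get('myCssSelector')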
I am running Ubuntu 12.04 with Postfix
Late yesterday, I added a package (ispconfig3) that modified my postfix configuration and also added an entry to the root crontab that was invoking a script.
At around 11 PM last night, I uninstalled that package and went to bed. The uninstall deleted the script and its directory OK, but it did not clean up the crontab entry.
Since cron had trouble invoking the script, it sent root#xx.org an email. But ispconfig3 had modified my postfix configuration, so there was no mail transport capability, and a MAILER-DAEMON email was placed in the mail queue.
Overnight (I'm guessing here!), cron woke up every minute and tried to do the same thing. So by 7:00 AM there were 1100+ emails in the mail queue. But since postfix was messed up, I couldn't see them.
At around 8:00ish I realized that something was wrong with my email setup. I checked the postfix configuration, backed out the changes, and then email worked again: I could send, receive, etc.
Then the flurry of emails started. Every minute or so, I got around 30 MAILER-DAEMON emails indicating that cron couldn't invoke the script. I check
sudo crontab -l
and see the stale command for the non-existent script. I clear it out with:
sudo crontab -e
I expect the emails to stop.
They don't.
In fact, every minute they seem to be increasing in number. I then spend a few hours looking at a ton of configuration files to try to figure out what is going on. By 11:00ish or so, it's up to 50+ emails coming in every minute.
I finally realized that this stream of emails was occurring because of the failures the night before, and that it was going to go on for 7 days. The "7d" comes from a postfix configuration setting. (BTW I changed that to "2d", i.e. only a couple of days.)
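A sketch of that change, assuming the settings involved are maximal_queue_lifetime (deferred mail) and bounce_queue_lifetime (MAILER-DAEMON mail):

sudo postconf -e 'maximal_queue_lifetime = 2d'
sudo postconf -e 'bounce_queue_lifetime = 2d'
sudo postfix reload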
In any case, I solved it. I'm adding this post so others can save themselves some time. See below.
Finally hit on the idea to look at the mail queue.
A bit of googling and I found this site:
https://www.garron.me/en/linux/delete-purge-flush-mail-queue-postfix.html
I tried
postqueue -p
which listed all of the "(mail transport unavailable)" emails:
... snip ...
-- 1104 Kbytes in 1185 Requests.
I then did:
postqueue -f # flush the mail queue, i.e. attempt to deliver everything in it again
postqueue -p
Mail queue is empty
And all of a sudden the email flurry ended.
Note: the website above said to use:
postfix -f
that did not work for me. A bit of googling found the postqueue command.
Another note: I was worried there were emails in that mail queue that were not "mail transport unavailable" bounces, so I double-checked all 1185 emails to make sure it was OK to purge them.
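For completeness, the page linked above also covers deleting queued mail outright instead of redelivering it; assuming a standard Postfix install, that would be:

sudo postsuper -d ALL            # delete every message in the queue
sudo postsuper -d ALL deferred   # or only the deferred queue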
I'm using a Raspberry Pi, and upon startup it sends an e-mail with the time and an IP address. The problem is that the time is not correct: it's the time from the last time the system was shut down. When I log in through ssh and run the date command, I get the correct time. In other words, the e-mail is sent before the system has updated its time.
I was thinking of automatically running ntpdate on boot, but after reading up on it, it seems like a bad idea due to the many risks of error.
So, can I somehow wait until the time has been updated before continuing in a script?
There is a tool included in the ntp reference implementation for this very purpose. The utility has a rather cryptic name: ntp-wait. Five minutes with the man page and you will be all set.
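A minimal sketch of how that might look in a boot script (the mail script name is hypothetical):

#!/bin/sh
# Block until ntpd reports the clock is synchronized, then send the mail.
# -v prints progress; see the ntp-wait man page for -n (tries) and -s (sleep).
ntp-wait -v && /home/pi/send_ip_mail.sh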
I am trying to use Dancer and Starman for my website, and I have succeeded in directing the error log to a file. Of course I could run a script to move the error log every day, but I just want to know whether there is an existing method or CPAN module for this.
Thanks~
Do not reinvent the wheel; you will repeat errors of the past that are already fixed.
Use logrotate. It is a Unix tool for exactly this kind of task.
To rotate your logs you would usually create a logrotate config for your task in /etc/logrotate.d/.
For example, to rotate daily and keep your logs for 14 days:
# /etc/logrotate.d/dancer-error-log
/path/to/my/dancer-error.log {
    daily
    rotate 14
    create 0660 mydanceruser mydancergroup
}
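One caveat: if Starman keeps the log file handle open, the create directive alone may leave it writing to the rotated file. Assuming that is the case here, a copytruncate variant avoids having to signal the server:

# /etc/logrotate.d/dancer-error-log (copytruncate variant)
/path/to/my/dancer-error.log {
    daily
    rotate 14
    copytruncate
}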