I was wondering if we could configure a codespace to run indefinitely and not time out when inactive.
The docs state that the maximum is 4 hours:
Under "Default idle timeout", enter the time that you want, then click Save. The time must be between 5 minutes and 240 minutes (4 hours).
Normally, my pipelines take 15 minutes to execute.
Recently, for some strange reason, some pipelines have been taking between 45 minutes and 6 hours to fail.
Is it possible to set a default timeout limit on a GitHub Actions pipeline (for example, auto-cancel after 30 minutes)?
You can change the default time limit in two ways:
jobs.<job_id>.timeout-minutes sets a timeout for a whole job
jobs.<job_id>.steps[*].timeout-minutes sets a timeout for a single step (see the sketch after the example below)
Your scenario:
my-job:
  runs-on: ubuntu-latest
  timeout-minutes: 30
According to https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idtimeout-minutes, the timeout-minutes parameter defaults to 360 minutes (6 hours).
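And a minimal sketch of the step-level variant, for completeness (the step name and script are placeholders):
my-job:
  runs-on: ubuntu-latest
  steps:
    - name: Run tests           # placeholder step
      run: ./run-tests.sh       # hypothetical script
      timeout-minutes: 10       # only this step is cancelled after 10 minutes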
I parallelized my mutation testing so that my workflow takes around 6.5 hours to run (mutation testing with Stryker of ~1600 mutants on just 2 cores, 9 jobs in parallel). I therefore set timeout-minutes to 420 minutes (7 hours) for the mutation job, just in case: https://github.com/lbragile/TabMerger/blob/b53a668678b7dcde0dd8f8b06ae23ee668ff8f9e/.github/workflows/testing.yml#L53
This seems to be ignored, as the workflow still ends at 6 hours 23 minutes (without warnings or errors): https://github.com/lbragile/TabMerger/runs/2035483167?check_suite_focus=true
Why is my value being ignored?
Also, is there anything I can do to use more CPUs on the workflow's virtual machine?
GitHub-hosted runners are limited to a maximum of 6 hours per job.
Usage limits
There are some limits on GitHub Actions usage when using GitHub-hosted runners. These limits are subject to change.
[...]
Job execution time - Each job in a workflow can run for up to 6 hours of execution time. If a job reaches this limit, the job is terminated and fails to complete.
https://docs.github.com/en/actions/reference/usage-limits-billing-and-administration#usage-limits
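That limit cannot be raised on GitHub-hosted runners, so the usual workaround is to split the long job into parallel jobs that each stay under 6 hours. A hypothetical sketch using a matrix; the shard count and shard script are placeholders, not something from your repository:
mutation:
  runs-on: ubuntu-latest
  timeout-minutes: 350                # keep each shard safely under the 6-hour cap
  strategy:
    matrix:
      shard: [1, 2, 3]                # hypothetical three-way split of the mutants
  steps:
    - uses: actions/checkout@v2
    - run: ./run-mutation-shard.sh ${{ matrix.shard }}   # hypothetical script that runs one subset of mutants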
Is there any way, other than using the "Schedule update installation" action and UpdateChecker.executeScheduledUpdate, to run a downloaded update?
Specifically, I want to avoid the default scheduling's 24-hour wait before retrying after a failed installation attempt (I want the retry rescheduled immediately).
You can set the system property
-Dinstall4j.updateRetryInhibition=nnnn
that specifies the inhibition time in minutes.
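For example, setting the inhibition time to 0 should make a failed installation eligible for retry immediately (assuming 0 is accepted). Where you set VM parameters depends on your setup; a .vmoptions file next to the install4j-generated launcher is one common place, so treat this as a sketch:
# myapp.vmoptions, read by the install4j-generated launcher at startup
-Dinstall4j.updateRetryInhibition=0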
We need to change our Google Cloud SQL instance from db-g1-small to db-n1-standard-1. Can I change it with zero downtime?
Edit 1
I think I found the answer. It seems that it will take a few seconds of downtime.
You can change an instance's tier at any time, with just a few seconds
of downtime.
https://cloud.google.com/sql/pricing
Edit 2
I tried it on our dev environment. The downtime was about 10 seconds.
while true; do date; curl https://api.xxx.com/v1/items; echo ""; sleep 1s; done
2016/8/29 16:24:50 JST
{"OK"}
2016/8/29 16:24:51 JST
Error
2016/8/29 16:25:01 JST
{"OK"}
The note about changing tier in a few seconds is under the First Generation section of that page.
For a Second Generation instance, it may take several minutes.
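For reference, the tier change can be applied from the command line as well as the console; a sketch with a placeholder instance name:
# Change the machine type of an existing instance (this triggers the brief restart described above)
gcloud sql instances patch my-instance --tier=db-n1-standard-1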
I have noticed that my Google Cloud SQL instance is losing connectivity periodically and it seems to be associated with some read spikes on the Cloud SQL instance. See the attached screenshot for examples.
The instance is mostly idle, but my application recycles connections in the connection pool every 60 seconds, so this is not a wait_timeout issue. I have verified that the connections are recycled. Also, it occurred twice in 30 minutes, and the wait_timeout is 8 hours.
I would suspect a backup process but you can see from the screenshot that no backups have run.
The first outage lasted 17 seconds from the time the connection loss was detected until it was reestablished. The second was only 5 seconds, but given that my connections sit idle for 60 seconds, the actual downtime could have been up to 1:17 and 1:05 respectively. They occurred at 2014-06-05 15:29:08 UTC and 2014-06-05 16:05:32 UTC. The read spikes are not initiated by me; my app remained idle during the issue, so this is some sort of internal Google Cloud SQL process.
This is not a big deal for my idle app, but it will become a big deal when the app is not idle.
Has anyone else run into this issue? Is this a known issue with Google Cloud SQL? Is there a known fix?
Any help would be appreciated.
Update
The root cause of the symptoms above has been identified as a restart of the MySQL instance. I did not restart the instance, and the operations section of the web console does not list any events at that time, so now the question becomes: what would cause the instance to restart twice in 30 minutes? Why would a production database instance restart at all?
That was caused by one of our regular releases. Because of the way the update takes place, an instance might be restarted more than once during the push to production.
Was your instance restarted? During a restart, the spinning down/up of the instance will trigger reads/writes.
That may be one reason why you are seeing the read/write activity.
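If you want to confirm restarts without relying on the web console's operations view, the instance's operation history can also be listed from the command line; a sketch with a placeholder instance name:
# List recent operations (restarts, backups, maintenance) for the instance
gcloud sql operations list --instance=my-instance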