I have more than 30,000 jobs and executions in my Rundeck instance. Is there an API or CLI that I can schedule to clean them up periodically? Because of the sheer number of jobs and executions, Rundeck throws java.lang.OutOfMemoryError: GC overhead limit exceeded and the service stops.
Also, with that many records the UI takes a long time to render the information.
Any documentation or scripts would be helpful.
Besides the RD CLI, you have a good built-in option: if you're using Rundeck 3.1 or above, go to Project Settings > Edit Configuration > Execution History Clean (tab), tick the "Enable" checkbox, and then define the retention parameters on the same page.
More info here.
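If you prefer something you can drive from a script or a cron job, the rd CLI can also bulk-delete executions by query. Below is a minimal sketch; it assumes the rd executions deletebulk subcommand with an --older age filter, and "myproject" is a placeholder project name, so confirm the exact flag names with rd executions deletebulk --help for your rd version before scheduling it:

# Hypothetical example: delete executions older than 90 days in project "myproject"
# (when running unattended, also add the confirmation option listed in --help)
rd executions deletebulk --project myproject --older 90d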
If you're looking to delete the logs from the Rundeck server via the CLI, you can find the log files at the locations below.
Log locations: /var/log/rundeck/ and /var/lib/rundeck/logs/rundeck/*/job/*/logs
Commands to clean logs older than 90 days:
sudo find /var/lib/rundeck/logs/rundeck/*/job/*/logs -mtime +90 -delete
sudo find /var/log/rundeck/ -type f -mtime +90 -delete
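To make this cleanup periodic instead of manual, the same find commands can go into a root cron job. A minimal sketch, assuming the log paths above and a weekly run at 02:00 on Sundays (the file name is hypothetical):

# /etc/cron.d/rundeck-log-cleanup -- fields: minute hour day-of-month month day-of-week user command
0 2 * * 0 root find /var/lib/rundeck/logs/rundeck/*/job/*/logs -mtime +90 -delete
0 2 * * 0 root find /var/log/rundeck/ -type f -mtime +90 -delete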
If you have only one project, you don't have to run the commands above: go to Projects, click on the Recent, Failed, or Running tab, and do a bulk delete.
Related
How can I create a MongoDB procedure that can be scheduled to run once every day, at a fixed time, say sharp at midnight GMT?
This Google Groups link says you cannot schedule a task in MongoDB (they have a Jira ticket for it), but you can use Windows Task Scheduler, as described in this link. Is this the only way to achieve it? Is it a good way to do it?
Quoting the comment by @Markus,
As written in a different answer, running MongoDB on Windows is a bad idea for various reasons. Under Linux, you could use crond to run a .js file easily. If your requirement is to run MongoDB and have a reliable scheduler, the right tool for the job is Linux.
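For the crond route mentioned in that comment, a minimal crontab sketch could look like the entry below. It assumes the mongo shell is on the PATH, that the server clock is set to GMT, and that the log-rotation script shown in the answer further down is saved at the hypothetical path /opt/scripts/LogRotate.js:

# Run the log-rotation script every day at midnight
0 0 * * * mongo localhost:27017/admin /opt/scripts/LogRotate.js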
This answer also mentions the way to solve this.
This is done on Windows the same way you do on Linux.
ONE: Write a script in JavaScript to manage the task (this can be done in other languages if you prefer). Here is a JavaScript script to rotate logs.
// LogRotate.js: switch to the admin database and ask the server to rotate its log
db.getSiblingDB("admin").runCommand( { logRotate : 1 } )
TWO: Create a task in Task Scheduler to run the script. This can be done with the GUI, the API, or in XML. I usually set it up in the GUI and export the XML to allow parameterization of the database server, password, port, and user.
THREE: Include the execution of the script in the task
$MONGO_HOME/Mongo localhost:27017 -u myMongoServiceAccount -p somepassword LogRotate.js
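If you prefer to register the task from the command line rather than the GUI, schtasks can create it. The task name, schedule, and script path below are placeholders, and credentials are still better parameterized via the exported XML described in step TWO:

REM Hypothetical example: create a daily task that runs the log-rotation script at midnight
schtasks /Create /TN "MongoLogRotate" /SC DAILY /ST 00:00 /TR "%MONGO_HOME%\bin\mongo.exe localhost:27017 -u myMongoServiceAccount -p somepassword C:\scripts\LogRotate.js"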
The same concept can be applied to index management, gathering database stats, or managing stale locks.
I created a task on my Windows 7 machine to run php.exe. The command is: C:\xampp\php\php.exe -q "C:\xampp\htdocs\creport\cleaner.php", and it works fine.
Then I created another task to run mysqldump.exe. The command is: C:\xampp\mysql\bin\mysqldump.exe -u root -pvince c_report > C:\dbfiles\backup-"%DATE:/=-%.sql", but when creating the task a window popped up asking for account information like:
[Sorry I don't have enough reputation to insert an image in my post]
Why is that? I mean, why are the two .exe files treated differently? And probably because of that, I always fail to run mysqldump.exe through the task; it fails with a last-run result of 0x6.
Thanks a lot for any help!
Actually I was fooled. I just found that whether the window pops up has nothing to do with which exe the task is scheduled to run. It comes up simply because I had ticked the option [Run whether user is logged on or not]; in that case, since the user account may be logged out, Windows needs to store the password so it can log in to run the task.
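As a side note on the 0x6 result (which the discovery above doesn't explain): the > redirection in the mysqldump command is a shell feature, so when Task Scheduler launches mysqldump.exe directly there is no shell to perform it, and with "Run whether user is logged on or not" there is no console either. A common workaround, sketched here with the paths and credentials from the question, is to put the command in a small batch file and schedule that file instead:

REM backup.bat (hypothetical wrapper) -- schedule this file instead of mysqldump.exe directly
C:\xampp\mysql\bin\mysqldump.exe -u root -pvince c_report > "C:\dbfiles\backup-%DATE:/=-%.sql"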
Is there a way to stop Eclipse from publishing/deleting a project on the server if you really want to stop it partway through, rather than waiting an eternity for it to complete (as it does for me, at least)? The operation blocks every other modification in Eclipse, and if I press Cancel nothing actually happens to the upload process.
EDIT1:
Question in short: can I cancel publishing to Glassfish from the Eclipse Servers tab while it is in progress?
Answer criteria: I am happy with any insights on
what can be done when you want to stop it partway, and
what happens under the hood that forces Eclipse to wait for the operation to finish.
It seems it is not possible to stop Eclipse from deploying to Glassfish, but you can intervene manually before Glassfish starts the deployed application.
It is only a partial solution, but it cuts the waiting time in half, at least.
For manual intervention I use:
#!/bin/bash
# Path to the Glassfish asadmin executable (placeholder: set it for your installation)
ASADMIN="/path/to/glassfish/bin/asadmin"

function undeploy_all {
    for p in "$@"; do
        echo "Undeploying $p..."
        "$ASADMIN" undeploy "$p"
    done
}

# -t gives terse output; word splitting of $apps into arguments is intentional
apps=$("$ASADMIN" list-applications -t | awk '{print $1;}')
undeploy_all $apps
I run this from Cygwin. The moment to run it is when the logs show that the services and WARs have been copied to the server's generated folder and the app starts writing its startup output to the log file.
Original post about manual intervention:
[1] Undeploy all applications from Glassfish
We use an ETL process to pull data from Google Cloud Storage, but annoyingly it hangs every time Google releases updates to gsutil, because it sits at a prompt asking if you want to update the library. That's fine if you are running it manually, but not when it's part of an automated SSIS package: jobs don't finish for days and you keep wasting time on the same silly cause.
I thought I was going to be clever and add "python gsutil update -n" to the top of the bash script whose building/execution I'm automating in my SSIS package, in the hope of curbing this problem, but when I run this command from the prompt on either Windows Server 2008 R2 or Windows 7 I get the following:
C:\gsutil>python gsutil update -f -n
Copying gs://pub/gsutil.tar.gz...
OSError: The process cannot access the file because it is being used by another process.
Any help?
P.S. Also, Google engineers: can you PLEASE remove these prompts for all of us using these tools in automated processes? I have other things to work on instead of constantly coming back to issues like this every few days or weeks.
What version of gsutil are you running?
Also, to be clear: Are you talking about the fact that gsutil checks for available software updates periodically, and if it finds them it then prompts you whether you want to update? Or are you talking about the fact that the gsutil update command asks if you want to perform the update?
If the former: gsutil shouldn't be performing this check/prompt if you are running gsutil from a script not connected to a TTY. If that's not working correctly, we'd like to know.
And also, if that's the problem you're having, you can completely disable automated software update checks by setting software_update_check_period=0 in the [GSUtil] section of your .boto config file.
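For reference, the relevant .boto section would look something like this (leave the rest of the file as gsutil generated it):

[GSUtil]
software_update_check_period = 0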
I have a dev server that I often push code changes to over Git. After each push, I have to manually log into the server and restart the Supervisor processes.
Is there a way to have Supervisor monitor a filesystem directory for changes and reload the process(es) on changes?
You should be able to use an event listener which monitors the filesystem (perhaps with watchdog) and triggers a restart using the XML-RPC API. Check out the memmon listener from the superlance package for inspiration. It wouldn't need to be that complicated, and since watchdog would call your restart routine, you don't need to read the events using childutils.listener.wait.
Alternatively, git hooks might do the trick if the permissions are correct for the supervisord API to be accessed (socket permissions, HTTP passwords). A simpler but less-secure approach.
A simpler and even less secure approach would be to just issue a supervisorctl restart. The user running the hook has to match your push user (or git, or www, depending on how you have it set up). There are lots of ways for that to go wrong security-wise, but for development it might do fine.
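For the git-hook route, a minimal post-receive sketch on the server could be as small as the following; the program name myapp is a placeholder, and the user the hook runs as needs access to supervisord as discussed above:

#!/bin/sh
# .git/hooks/post-receive on the dev server: restart the supervised program after each push
supervisorctl restart myapp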
Related:
Supervisord: is there any way to touch-reload a child?
I also didn't find any solution, so I tried to make my own.
Here it is.
You can install the package by this command:
pip install git+https://github.com/stavinsky/supervisord-touch-reload.git
(I will add it to PyPI after adding some tests.)
An example of setting up Supervisor is located in the examples folder on GitHub. Documentation will follow very soon, I believe.
Basically, all you need to do to start using this module is add an event listener with a command like:
python -m touch_reload --socket unix:///tmp/supervisor.sock --file <path/to/file> --program <program name>
where file is the file that will be monitored (given as an absolute path, or a path relative to the working directory), socket is the socket from the supervisorctl section, and program is the program name from the [program:<name>] section definition.
Options --username and --password are also available, which you can use if you have a custom Supervisor configuration.
While this is not a solution that uses Supervisor itself, I typically solve this problem inside the supervised app. For instance, add the --reload flag to gunicorn and it will reload whenever your app changes.
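In Supervisor terms that might look like the sketch below; the module and program names are hypothetical, and --reload is meant for development rather than production:

[program:myapp]
command=gunicorn myapp.wsgi:application --reload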
I had the same problem and created Superfsmon, which can do what you want: https://github.com/timakro/superfsmon
pip install superfsmon
Here's a simple example from the README:
To restart your celery workers on changes in the /app/devops directory, your supervisord.conf could look like this:
[program:celery]
command=celery -A devops.celery worker --loglevel=INFO --concurrency=10
[program:superfsmon]
command=superfsmon /app/devops celery
Here is a one-liner solution with inotify-tools:
apt-get install -y inotify-tools
while true; do inotifywait -r src/ && service supervisor restart; done