Speed up slow WP CLI - wp-cli

No matter what command I execute, it takes about 1.3 seconds to complete, so there seems to be some kind of bottleneck. When I pass --skip-plugins it is faster, taking about 0.9 seconds.
What does --skip-plugins actually do? I can still pass it even when I run the plugin list command; I would assume it stops the plugins from loading, but the command still lists them.
Is there any way to speed up the execution? I have a batch of commands, and it takes a long time for them all to run.
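For reference, a quick harness to reproduce the timing gap described above. This is a minimal sketch assuming wp is on the PATH and the working directory is a WordPress install; plugin list and the global --skip-plugins flag are standard WP-CLI, the harness itself is just illustrative:

# Time the same WP-CLI command with and without --skip-plugins.
# Assumes "wp" is on PATH and we are inside a WordPress install.
import subprocess
import time

commands = [
    ["wp", "plugin", "list"],                    # loads all plugin code
    ["wp", "plugin", "list", "--skip-plugins"],  # skips loading plugin code
]

for cmd in commands:
    start = time.monotonic()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    print(f"{' '.join(cmd)}: {time.monotonic() - start:.2f}s")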

Related

Does the Overpass API installation's Area Creation process resume if stopped/started?

The Area Creation process can take up to 24 hours. If something happens during that time which causes the process to stop, will it resume when I run it again or does it start back over from the beginning?
We can assume for this question that the files in $DB_DIR remain in place throughout the running/stopping/starting process.
It will start over from the beginning, assuming you're using areas.osm3s to define the area creation rules. This file contains a number of queries that are executed to generate the areas. If you restart the process, it will execute those same queries again from the beginning.
For performance reasons, we use areas_delta.osm3s and the accompanying rules_delta_loop.sh script on the production servers. This way, we can limit the workload to those areas which have changed since the last area creation run.

Sensu Scheduler Oddness

I run fewer than 24 checks on my systems. The servers are not regularly under heavy load; load averages stay well under 1 during normal operation.
I have noticed a recurring issue where the check-cpu check starts reporting high load averages on systems where there is no organic cause for high load. Further investigation showed the high load reading was actually due to the check-cpu script running in parallel with other checks; outside of the moments when checks were executing, CPU load was fine.
I upgraded from sensu 0.20 to 0.23 and continued to observe the same issue.
We found that a restart of the sensu-server and sensu-client services would resolve the problem for a period of time (approximately 24 hours), and then it would return.
We theorized at that point that there must be some sort of time delay in the dispatch/execution of the checks on the host that eventually causes this overlap to occur.
All checks are set to run at an interval of 30 or 60 seconds.
I decided to set the interval of the check-cpu check to 83 seconds, and the issue has not occurred since, presumably because the check-cpu check no longer coincides with any others and so never observes the high CPU load from that brief moment of overlap.
Is this some sort of inherent scheduling issue with sensu? Is it supposed to know how to dispatch checks with adequate spacing, or is this something that should be controlled by the interval parameter?
Thanks!
I have noticed that the checks drift in execution time, i.e. they do not run exactly every 30 seconds but every 30.001s or so. The drift is presumably different for different checks, so eventually the checks sync up and all run at the same time, causing the problem. Running more checks at regular intervals (30s, 60s, etc.) will make this occur more often. If you want this fixed, you will have to report it to Sensu directly; I think they might fix it eventually, since they presumably want the system to be scalable.
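To see how a drift of a millisecond per cycle can line two checks up, here is a small simulation. The drift and offset values are invented for illustration; this is not Sensu's actual scheduler:

# Two checks configured for "every 30 seconds"; one actually fires every
# 30.001s (invented drift). They start 3s apart; watch the gap close.
period_a, period_b = 30.000, 30.001
t_a, t_b = 3.0, 0.0                  # initial 3s offset between the checks

for cycle in range(40000):
    if abs(t_a - t_b) < 0.1:         # close enough to overlap on the CPU
        print(f"checks collide after ~{t_a / 3600:.1f} hours (cycle {cycle})")
        break
    t_a += period_a
    t_b += period_b

With these made-up numbers the overlap arrives after roughly a day, which is the same ballpark as the ~24 hours observed between restarts.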

Matlab - batch jobs won't leave queued status

I've got some code where the amount of work to be processed grows by some percentage on each pass through a loop. The first few iterations take 4 seconds, but by the 100th they're taking minutes, and this is with a reduced selection of parameters, as I intend to do 350 iterations. Doing serious research with this would take enormous time, and it's really inconvenient that simply running a script ties Matlab's hands behind its back until it's all done; on top of that, it hardly ever uses more than one core at a time.
I understand that turning on a Parallel Pool will enable parallel processing. Even if I can't convert any of the for loops into parfor loops, I understand that running a script as a batch job sends that process into the background, and I can do other things with the Matlab interface and the other 7 processors while I wait for this one to finish.
However, though I have the local parallel pool up and running, and I've checked the syntax for starting a batch job, the job is not leaving the "queued" status. The first time, I typed batch('Script4') and hit Enter, then realized I needed a variable name for the job, so I did run1 = batch('Script4'). I typed get(run1,'State'), and also checked the Job Monitor, and both told me its state was "queued".
I did some googling before I came here, and while I found some Q&As describing similar experiences, they seemed to be solved by things like waiting for the pool to stop using the whole CPU as it starts up. But I started my pool a long time ago (and it is still running at this moment!), and when I entered the first batch command, my first clue that something was wrong was that Windows Task Manager showed all 8 cores at 0%.
Is there something I need to call or maybe adjust before it will start executing the queued jobs?
I'm using Matlab R2015a on Windows 7 Enterprise.
I think the problem here is that you're trying to run batch jobs while the parallel pool is open. (Unfortunately, this is a common misunderstanding.) Basically, the parallel pool and your batch job are both trying to consume local workers. However, because you opened the parallel pool first, it's consuming all the local workers, and the batch job cannot proceed. You should have seen a warning when you submitted the batch job, like this:
>> parpool('local');
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
>> j = batch(@rand, 1, {});
Warning: This job will remain queued until the Parallel Pool is closed.
There are two possible fixes. The first is simple:
delete(gcp('nocreate'))
will ensure no parallel pool is open, and your batch submissions should proceed. The second is more appropriate if your tasks are relatively short-lived - you can use parfeval to submit work to an open parallel pool:
f = parfeval(@rand, 1); % initiate 'rand' on the parallel pool workers
fetchOutputs(f); % wait for completion, and retrieve the result

Precise scheduling of scripts in powershell

I have a number of monitoring functions that I want to execute at dynamic frequencies, depending on the project. The frequencies range from once every second to once a day. Task Scheduler and ScheduledJob are not precise enough for this purpose.
Is any function scheduler such as this available?
Spinning up a powershell instance is not a trivial event. If you want monitoring at a frequency of less than a few minutes, you'll be much better off writing a script that stays resident and runs in an endless process-sleep-process loop.
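A minimal sketch of such a resident loop, written here in Python for compactness (in PowerShell the same shape is a while ($true) loop around Start-Sleep); the task names and intervals are placeholders for your monitoring functions:

# Stay-resident scheduler: each monitoring task carries its own interval.
# check_disk/check_http and the intervals are placeholders.
import time

def check_disk(): print("disk check")
def check_http(): print("http check")

tasks = [
    {"fn": check_disk, "interval": 1.0,  "next_run": 0.0},   # once a second
    {"fn": check_http, "interval": 60.0, "next_run": 0.0},   # once a minute
]

while True:
    now = time.monotonic()
    for task in tasks:
        if now >= task["next_run"]:
            task["fn"]()                               # process
            task["next_run"] = now + task["interval"]  # reschedule
    time.sleep(0.05)                                   # sleep, then loop again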
I created a script for this purpose, which you can find on GitHub.

Can I make Perl ithreads in Windows run concurrently?

I have a Perl script that I'm attempting to set up with Perl threads (use threads). When I run simple tests everything works, but in my actual script (which has the threads running multiple SQL*Plus sessions), the sessions run sequentially rather than concurrently (i.e., thread 1's SQL*Plus runs steps 1-5, then thread 2's SQL*Plus runs steps 6-11, etc.).
I thought I understood that threads would do concurrent processing, but something's amiss. Any ideas, or should I be doing some other Perl magic?
A few possible explanations:
Are you running this script on a multi-core or multi-processor machine? If you only have one CPU, only one thread can use it at any given time.
Are there transactions or locks involved with steps 1-6 that would prevent them from running concurrently?
Are you certain you are using multiple connections to the database and not sharing a single one between threads?
Actually, you have no way of guaranteeing the order in which threads will execute, so the behavior (even if not what you expect) is not really wrong.
I suspect you have some kind of synchronization going on here. Possibly SQL*Plus only lets itself be called once at a time? Some programs do that...
Other possibilities:
thread creation and process creation (you are creating subprocesses for SQL*Plus, aren't you?) take longer than the thread's actual work, so thread 1 finishes before thread 2 even starts
You are using transactions in your SQL scripts that force synchronization of database updates.
Check your database settings. You may find that it is set up in a conservative manner. That would cause even minor reads to block all access to that information.
You may also need to call threads::yield.
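To illustrate the shared-resource explanation from the answers above: when every thread funnels through one shared handle, the threads start concurrently but the work still runs one session at a time. A minimal sketch, in Python for brevity (the same reasoning applies to Perl ithreads sharing a single database handle):

# Threads serialized by one shared resource vs. running with their own.
import threading
import time

lock = threading.Lock()  # stand-in for a single shared SQL*Plus/DB handle

def work(name, shared, t0):
    if shared:
        with lock:
            time.sleep(1.0)   # the whole "session" holds the shared handle
    else:
        time.sleep(1.0)       # private handle: sessions overlap freely
    print(f"{name} finished at {time.monotonic() - t0:.1f}s")

for shared in (True, False):
    print("shared handle:" if shared else "own handles:")
    t0 = time.monotonic()
    threads = [threading.Thread(target=work, args=(f"thread-{i}", shared, t0))
               for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()
    # shared=True finishes at ~1s, ~2s, ~3s; shared=False all at ~1s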