File recurse is very slow in Puppet

file { '/opt/graphite/storage':
  ensure  => directory,
  recurse => true,
  owner   => 'www-data',
  group   => 'www-data',
}
I have about 50 GB of files in the '/opt/graphite/storage' directory, and it takes about 300 seconds for this Puppet code to finish.
Is there a way I can speed it up?
checksum => none didn't fix my problem...

Pretty sure there's no way around this. The bottom line is that Puppet is bad at recursively setting permissions/attributes on a large directory tree.
You might be better off setting these permissions in a cron job, or, if you need immediate updates, creating a separate exec to take care of this, as in the sketch below.
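A minimal sketch of the exec approach (the resource title and the unless guard are illustrative assumptions, not from the original answer); the guard keeps Puppet from chown-ing 50 GB on every run when nothing has drifted:

exec { 'fix graphite storage ownership':
  command => 'chown -R www-data:www-data /opt/graphite/storage',
  path    => ['/bin', '/usr/bin'],
  # Only run if at least one entry has the wrong owner or group;
  # -print -quit stops find at the first offender (GNU find).
  unless  => 'test -z "$(find /opt/graphite/storage \( ! -user www-data -o ! -group www-data \) -print -quit)"',
}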

waf: copy a file from the source tree to the build tree

I have the following snippet to copy a file as-is to the build dir:
for m in std_mibs:
    print("Copying", m)
    bld(name   = 'cpstdmib',
        rule   = 'cp -f ${SRC} ${TGT}',
        #source = m + '.mib',
        source = bld.path.make_node(m + '.mib'),  # <-- section 5.3.3 of the waf book
        target = bld.path.get_bld().make_node(m + '.mib')
    )
I see that this rule is hit (from the print), but the copy doesn't seem to happen!
I also changed the source to use make_node as shown, following an example in section 5.3.3 of the waf book, but still no luck. Am I missing something obvious here?
Also, I have some rules after this which rely on the copied files, and I tried adding an intervening
bld.add_group()
in the hope that the sequencing will work once this copy succeeds.
If you run the rule once, it will not be run again until the source is updated. This is true even if the target is deleted, for instance (which is probably how you were testing).
If you want to re-copy when the target is deleted, you will need always=True, or you'll need to check for the target's existence and set target.sig = None, as in the sketch below.
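A minimal sketch of both approaches inside an assumed wscript build function, reusing the names from the question:

def build(bld):
    for m in std_mibs:
        tgt = bld.path.get_bld().make_node(m + '.mib')

        # Option 1: if the target was deleted, drop its stored signature
        # so waf considers the copy task out of date again.
        if not tgt.exists():
            tgt.sig = None

        # Option 2 (instead of the signature trick): pass always=True
        # so the rule re-runs on every build.
        bld(rule   = 'cp -f ${SRC} ${TGT}',
            source = bld.path.make_node(m + '.mib'),
            target = tgt)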
Two alternatives:
features="subst" with is_copy=True:
bld(features='subst', source='wscript', target='wscript', is_copy=True)
waflib.extras.buildcopy like this:
from waflib.extras import buildcopy
#...
def build(bld):
    bld(features='buildcopy', buildcopy_source=['file'])
Note that cp is not platform-independent.
A task_gen object is created, which will later become a Task that is executed before process_sources. Don't expect an immediate effect.
Have a look in your out directory: there will be an out/${TGT} (not exactly, but the ${TGT} path relative to your top directory).
This is entirely expected behaviour, since you do not want to modify your source tree when building.

Eclipse cleanup - what are the ".index" files - can I safely delete them?

Trying to reduce the size of my (DB-synced) workspace, I realized that the folder
${workspace_loc}\.metadata\.plugins\org.eclipse.jdt.core
was taking ~35 MB. The contents of the folder are .index files (which take up most of the space) and some others (which amount to a couple of KB):
[0-9]*\.index
externalLibsTimeStamps
indexNamesMap.txt
invalidArchivesCache
javaLikeNames.txt
nonChainingJarsCache
participantsIndexNames.txt
savedIndexNames.txt
variablesAndContainers.dat
I can't seem to find docs on those. Can I safely delete them? Can you point me to some docs on the JDT plugin folders/files contained in the ${workspace_loc}\.metadata\ directory?
Is there any way via the GUI to clean up the caches (preferably periodically)?
PS: I'm on Kepler, if this makes a difference.
PS2: links to docs may be links to code comments and such.
Yes, you can safely delete them, but it is not very useful.
According to an answer to "How would you access Eclipse JDT index?", these files are the class index used by "Open Type..." (Navigate > Open Type..., or Ctrl+Shift+T). So if you delete them, the classes will be re-indexed the next time you open a class using "Open Type...".
Therefore, deleting them for the sake of saving space makes little sense, as they will be re-created. Deleting them is useful, however, if you think something in your index is messed up; it is a way to refresh it, as the referenced answer suggests.

Hypnotoad Logfile

Does Hypnotoad write any log file?
I can't find anything about that here: http://mojolicio.us/perldoc/Mojo/Server/Hypnotoad
The --help option says nothing about it either.
I understand that, application-wise, I need to use things like $self->app->log->error('aua!')... but does something like a server log (e.g. client requests, internal errors, etc.) not exist?
If the answer is just no, I'm fine; that would then mean I need to implement this in my application, I guess.
I can imagine that it makes sense to keep the server code small and clean here; maybe that is the reason for the lack of this functionality?
Or is it something I can enable?
If your application has a log folder, the log will be written there: http://mojolicio.us/perldoc/Mojolicious/Guides/Tutorial#Mode
I don't think so, but it's easy to set one up.
use Mojo::Log;
...
app->log( Mojo::Log->new( path => <filename>, level => 'debug' ) );
...
app->start;
app->log( Mojo::Log->new( path => <filename>, level => 'debug' ) );
Put this in your startup function.
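If what you're after is something like an access log, one way (a minimal sketch; the log path and line format are made up) is to log each request yourself from an after_dispatch hook:

use Mojolicious::Lite;
use Mojo::Log;

app->log( Mojo::Log->new( path => 'myapp.log', level => 'info' ) );

# Write one line per request, roughly like a server access log.
hook after_dispatch => sub {
    my $c = shift;
    app->log->info( sprintf '%s %s -> %s',
        $c->req->method, $c->req->url->path, $c->res->code // '-' );
};

get '/' => sub { shift->render( text => 'Hello' ) };

app->start;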

SharpSvn: Why was update of subfolder from Empty Depth Checkout skipped?

I'm having some trouble cherry-picking some folders out of a repo using SharpSvn (from C#). I did this:
client.CheckOut( uri, dir, new SvnCheckOutArgs() { Depth = SvnDepth.Empty } );
foreach( var folder in folders )
{
    client.Update( folder );
}
But my second call to Update didn't work: it reports that the action was SvnNotifyAction.Skip, and nothing gets written to the working copy.
uri is essentially something like svn://myserver/myrepo/mysdk, and dir is something like C:\Test\mysdk. (I've changed the exact names for the purposes of this question, but structurally it's identical.)
The 1st folder is C:\Test\mysdk\include (this one works).
The 2nd folder is C:\Test\mysdk\bin\v100\x86 (this one doesn't update).
Why would the first one work, but the 2nd folder (nested subfolders) doesn't update and is reported as skipped? I don't know how to figure out why.
It turns out that updating the nested subdirectory doesn't work because the parent directories don't exist yet, so the nested subdirectory's update is skipped. To fix this, I needed to add an argument to Update to indicate that it should create the parent directories.
(The equivalent svn command-line option would be --parents.)
client.Update( folder, new SvnUpdateArgs() { UpdateParents = true } );
I discovered this by trying to do it manually from the svn command line (and encountering the same problem). svn help co offered this tiny clue: "--parents : make intermediate directories". I'm assuming that UpdateParents and --parents are equivalent. So far, so good.
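Putting it together, a minimal sketch of the whole sequence (assuming uri, dir and folders are set up as in the question):

using (var client = new SvnClient())
{
    // Sparse checkout: materialize only the root directory, no children.
    client.CheckOut( uri, dir, new SvnCheckOutArgs() { Depth = SvnDepth.Empty } );

    // Equivalent to 'svn update --parents': create intermediate directories.
    var args = new SvnUpdateArgs() { UpdateParents = true };
    foreach( var folder in folders )
    {
        client.Update( folder, args );
    }
}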

How can I create RRD files in Perl?

I have a separate application printing logs every 10 seconds. I need to create RRD files from the log files; I need some Perl code to read the log files and create the RRDs only, without the graphs.
I have also gone through the available Perl modules on CPAN, i.e. RRD::Simple and RRD::Simple::Examples, but I still need help.
I'd start with RRD::Simple. There's some example code in the documentation. Since you don't need to create a graph, simply skip that section of the example.
Some of the examples read a single sample of data, call the update function once, and then exit. Those scripts are meant to be run periodically to collect data in real time. The example that's probably more pertinent to your needs is ApacheAccessLogActivity.pl, which reads an Apache log file, parses each line with a regular expression, does a bit of analysis to figure out what it just read, and then calls update, all in a loop. Note that that example uses the standalone functions rather than the object-oriented versions.
If you've already read the documentation for that module and need more information about how to use it, or if you've tried it and found that it has shortcomings that prevent you from using it, then please be more specific about what you need to do.
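For instance, a minimal sketch of the create-and-update loop with RRD::Simple, assuming a log format of one "<epoch> <value>" pair per line (the file names and the data source name here are made up):

use strict;
use warnings;
use RRD::Simple;

my $rrd = RRD::Simple->new( file => 'mydata.rrd' );

# Create the RRD once, with a single GAUGE data source.
$rrd->create( value => 'GAUGE' ) unless -f 'mydata.rrd';

open my $fh, '<', 'app.log' or die "Cannot open app.log: $!";
while ( my $line = <$fh> ) {
    my ( $ts, $value ) = split ' ', $line;
    # Timestamped update only; no graphing calls needed.
    $rrd->update( $ts, value => $value );
}
close $fh;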
RRDTool::OO also looks promising.
I'd recommend RRDTool::OO.
Excerpt from the perldoc:
$rrd->create( ... )
Creates a new round robin database (RRD). A RRD consists of one or
more data sources and one or more archives:

$rrd->create(
    step        => 60,
    data_source => { name => "mydatasource",
                     type => "GAUGE" },
    archive     => { rows => 5 });
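To get data in afterwards, you would then call update in a loop over your log; a one-line sketch (the data source name matches the excerpt above, the value is made up):

$rrd->update( time => time(), values => { mydatasource => 42 } );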