I'm new to KDB and I'm looking at it from a security standpoint.
Can I run a combination of a DB query and an OS command as a one liner?
Or, can I store the OS command's output to a DB object?
I've been playing around with KDB Q, but either it's not possible or
I haven't found the proper syntax.
Thank you
Yes, see below:
q)update res:system each cmd from ([] cmd:("uptime";"date";"uname -a"))
cmd res
----------------------------------------------------------------------------------------------------------------------
"uptime" " 21:01:03 up 31 days, 6:54, 8 users, load average: 0.00, 0.03, 0.00"
"date" "Fri 17 Mar 21:01:03 GMT 2017"
"uname -a" "Linux glyph01 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux"
Running a system command is more or less the same as running any other function in Kdb+.
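If you just want to capture a single command's output in a variable rather than a table column, the same system keyword applies; a minimal sketch (the variable name is arbitrary):
q)out:system "date"    / output is returned as a list of strings, one per line
q)first out            / the first (and here only) line of output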
I am trying to convert a .osm.pbf file to a .osm file.
I followed the instructions here: https://wiki.openstreetmap.org/wiki/Osmosis/Quick_Install_(Windows)
Installed Java Runtime
Downloaded osmosis and extracted it to a directory
Created a bat file containing "C:\Users\paul\Desktop\osmosis\bin\osmosis.bat"
In a DOS command prompt, with the directory containing the batch file I created as the working directory, I try:
osmosis --read-pbf c:\dir\somefile.osm.pbf --write-xml c:\dir\somefile.osm
It just runs really quickly, doesn't convert the file, and gives this output:
Nov 24, 2021 4:40:20 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version 0.48.3
Nov 24, 2021 4:40:22 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Nov 24, 2021 4:40:22 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Nov 24, 2021 4:40:22 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
Nov 24, 2021 4:40:22 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline complete.
Nov 24, 2021 4:40:22 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Total execution time: 2297 milliseconds.
Some sources provide a .osm.bz2 or .osm.zip file, which uses standard compression. You can use a program like 7-Zip to convert those files to a raw .osm file; this format is the easiest to convert to a raw .osm.
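For example, with 7-Zip's command-line tool (the file name here is just a placeholder, not one from the question):
7z x region-latest.osm.bz2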
However, if you need to convert a binary .pbf to a raw .osm, I would recommend the tool OSM Convert. Download the large-file-support version. Unfortunately, Osmosis has been unmaintained since September 2018, so try to use newer tools; there is a list of them kept on the OpenStreetMap Wiki.
With OSM Convert I've used this command with success on Windows 10 to convert us-latest.osm.pbf to us-latest.osm_01.osm:
osmconvert us-latest.osm.pbf --out-osm -o=us-latest.osm_01.osm
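Applied to the file names from the original question (same placeholder paths), that would look something like:
osmconvert c:\dir\somefile.osm.pbf --out-osm -o=c:\dir\somefile.osm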
I just upgraded 2 machines from Fedora 31 to 33, and with that upgrade, Perl went from 5.30.3 to 5.32.1.
The first thing I noticed is that GDBM_File.pm is no longer included in the Perl Core, but that was no problem.
The second thing I noticed is that GDBM write in fc33/perl5.32.1 is incredibly slow. That's a problem.
I noticed something amiss on the first machine, so I ran a little benchmark with fc31/perl5.30.3 on the second machine before doing the upgrade.
gdbm1.pl rebuilds a db file from an ASCII text file with about 33M entries. gdbm0.pl reads the same ASCII text file and does everything exactly the same as gdbm1.pl, except it does not execute the actual hash assignments "$db{...} = ...". That is the only difference. (The ASCII file is around 11GB.)
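For context, gdbm1.pl is essentially the following (a minimal sketch only; the file names, record format, and tab delimiter are placeholders, not the actual script):
use strict;
use warnings;
use GDBM_File;

# Rebuild the GDBM file from a key<TAB>value ASCII dump (format is an assumption).
tie my %db, 'GDBM_File', 'data.db', &GDBM_NEWDB, 0640
    or die "tie failed: $!";
open my $fh, '<', 'data.txt' or die "open failed: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($key, $val) = split /\t/, $line, 2;
    $db{$key} = $val;    # gdbm0.pl is identical except this assignment is skipped
}
close $fh;
untie %db;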
FC31/Perl5.30.3:
[259] time ./gdbm0.pl 16
real 4m51.593s
user 4m49.808s
sys 0m1.306s
[260] time ./gdbm1.pl 16
real 11m39.682s
user 6m30.619s
sys 3m19.260s
FC33/Perl5.32.1:
[287] time ./gdbm0.pl 16
real 5m10.379s
user 5m8.764s
sys 0m1.299s
[288] time ./gdbm1.pl 16
real 554m48.187s
user 7m49.315s
sys 433m42.435s
Obviously it takes longer to write the DB than not: I always expect gdbm0.pl to be faster than gdbm1.pl. But the only difference between gdbm0 and gdbm1 is writing the DB, so the time difference is entirely due to that. On fc31/perl5.30.3, that difference is under 7 minutes. On fc33/perl5.32.1, it is a staggering 550 minutes - over 9 HOURS, versus 7 MINUTES before.
I've done some web searches for anything about GDBM_File being slow in perl5.32.1 and found nothing. I don't even know if Perl is the problem; it might be fc33, or some combination of both.
Or it could be that some C lib is missing in fc33, and GDBM_File is doing everything in native perl. I don't know where to go from here.
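One quick way to see which pieces are actually in play is to check the installed gdbm C library and the GDBM_File module version (the Fedora package name here is an assumption):
rpm -q gdbm-libs
perl -MGDBM_File -E 'say $GDBM_File::VERSION'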
Update:
@davem: Okay, I have 3 machines: a, b, c. "a" is the oldest and slowest, "c" is the newest and fastest. Machine "a" runs Ubuntu, the other two both run Fedora:
Linux a 5.8.0-50-generic #56~20.04.1-Ubuntu SMP Mon Apr 12 21:46:35 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Linux b 5.11.18-200.fc33.x86_64 #1 SMP Mon May 3 15:05:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Linux c 5.11.18-200.fc33.x86_64 #1 SMP Mon May 3 15:05:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
I ran two benchmarks on each machine, first with an in-mem hash, then with a gdbm hash. The in-mem hash result gives a very rough idea of the relative single-thread performance of each machine:
[mem] time perl -e'my %h; $h{$_} = 1 for ("a" .. "zzzzz"); print "@{[scalar(keys(%h))]}\n";'
[gdbm] time perl -e'use GDBM_File; my ($h, %h); $h = "gdbm_write_test"; tie(%h, "GDBM_File", $h, GDBM_NEWDB, 0600); $h{$_} = 1 for ("a" .. "zzzzz"); print "@{[scalar(keys(%h))]}\n"; untie(%h);'
machine_a:
[mem] 12356630
real 0m29.051s
user 0m27.975s
sys 0m0.995s
[gdbm] 12356630
real 4m5.431s
user 2m2.033s
sys 1m36.209s
machine_b:
[mem] 12356630
real 0m12.101s
user 0m11.520s
sys 0m0.559s
[gdbm] 12356630
real 106m35.326s
user 1m0.607s
sys 103m48.518s
machine_c:
[mem] 12356630
real 0m9.498s
user 0m9.163s
sys 0m0.317s
[gdbm] 12356630
real 58m46.555s
user 0m39.566s
sys 48m16.447s
Update 2:
I spent a while fiddling with Perl-DB_File and Perl-BerkeleyDB as possible replacements for Perl-GDBM_File, because I was too lazy to try to figure out how to file a bug.
False laziness, of course. I finally filed a bug just 2 days ago, and there's already a fix checked in and pending release.
@davem was exactly right; the issue was not Perl itself, but the underlying gdbm library. From the fix commit comment:
"Commit 4fb2326a4a introduced pre-reading of memory mapped
regions. While speeding up searches, it has a negative impact
on write operations, since every remapping effectively re-reads
the entire database."
I am trying to convert the character string "Friday, March 24, 2017 4:39:46 PM" into a date, but it does not work: dat.1 results in NA. Can anyone tell me how to fix it?
dat <- c("Friday, March 24, 2017 4:39:46 PM")
dat.1 <- strptime(dat, format="%A, %B %d, %Y %I:%M:%S %p")
dat.1
Result
[1] NA
EDIT: K..pradeeep pointed out that the code runs on his PC. On mine it does not work. I use a MacBook Air, OS X 10.12.3. Version:
platform x86_64-apple-darwin15.6.0
arch x86_64
os darwin15.6.0
system x86_64, darwin15.6.0
status
major 3
minor 4.0
year 2017
month 04
day 21
svn rev 72570
language R
version.string R version 3.4.0 (2017-04-21)
nickname You Stupid Darkness
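Since the same code runs on another machine, one possible explanation (an assumption, not confirmed in the thread) is the LC_TIME locale: %A and %B expect day and month names in the current locale, which may not be English on this Mac. A minimal sketch to test that:
old <- Sys.getlocale("LC_TIME")   # remember the current time locale
Sys.setlocale("LC_TIME", "C")     # force English day/month names and AM/PM
dat <- "Friday, March 24, 2017 4:39:46 PM"
strptime(dat, format = "%A, %B %d, %Y %I:%M:%S %p")
Sys.setlocale("LC_TIME", old)     # restore the original locale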
I'm running a JMeter test plan from command line and it's currently outputting something along the lines of:
Created the tree successfully using C:\*****\TestPlan.jmx
Starting the test @ Thu Oct 11 10:20:43 EDT 2012 (1349965243947)
Waiting for possible shutdown message on port 4445
Tidying up ... @ Thu Oct 11 10:20:46 EDT 2012 (1349965246384)
... end of run
Is there any way to turn off this output and have the plan execute 'silently'?
Found a way to do this by following this article http://www.robvanderwoude.com/battech_redirection.php and appending > NUL to the command:
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL
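If some messages still appear (an assumption; part of the output may go to stderr rather than stdout), both streams can be redirected:
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL 2>&1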
I followed this guide to implementing build numbers in an Xcode iPhone project (guide).
I tried it and I am getting the wrong build number when NSLogging. It's not updating correctly and is always one or two numbers behind the info.plist. I need it to be the same number. Anyone know why this is happening?
i.e. "[[[NSBundle mainBundle] infoDictionary] objectForKey:@"CFBuildNumber"]" is not the same as the plist's CFBuildNumber.
The script is set to run first, before copy bundle resources and everything. This is the output and info.plist numbers I get:
Application Version: 1.0 Build No: 52 Build Date: Wed Nov 10 15:10:05 CET 2010
(info.plist is build number: 54 and date: Wed Nov 10 15:10:43 CET 2010)
Application Version: 1.0 Build No: 54 Build Date: Wed Nov 10 15:10:43 CET 2010
(info.plist is build number: 55 and date: Wed Nov 10 15:12:54 CET 2010)
Application Version: 1.0 Build No: 54 Build Date: Wed Nov 10 15:10:43 CET 2010
(info.plist is build number: 56 and date: Wed Nov 10 15:13:49 CET 2010)
Application Version: 1.0 Build No: 56 Build Date: Wed Nov 10 15:13:49 CET 2010
(info.plist is build number: 57 and date:Wed Nov 10 15:14:46 CET 2010)
It seems to follow this pattern throughout. So continuing, it would be 56 (real 58), 58 (real 59), 58 (real 60), 60 (real 61), 60 (real 62), 62 (real 63), etc.
The script (that is set to run before everything else) is:
#!/bin/bash
# Auto Increment Version Script
buildPlist="Project-Info.plist"
# Read the current build number, bump it, and write it back
CFBuildNumber=$(/usr/libexec/PlistBuddy -c "Print :CFBuildNumber" "$buildPlist")
CFBuildNumber=$((CFBuildNumber + 1))
/usr/libexec/PlistBuddy -c "Set :CFBuildNumber $CFBuildNumber" "$buildPlist"
# Record the build date
CFBuildDate=$(date)
/usr/libexec/PlistBuddy -c "Set :CFBuildDate $CFBuildDate" "$buildPlist"
This happens because the project's Info.plist is processed prior to the 'Run Script' phase. See the 'Build Results' window in Xcode. To resolve this you should:
1) Create a new target of type "Run script only" and configure it to update the version number.
2) Create a new target of type "Aggregate" and add both the "Version update" target and your product target to it.
So when you build the "Aggregate" target, the version is updated in the first step and your product is built in the second.
I ended up using the already-copied plist file, ${TARGET_BUILD_DIR}/${INFOPLIST_PATH}, and placing the "Copy Bundle Resources" phase before the script runs instead. That way the number will always be synchronized.
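A minimal sketch of that variant (assuming the same CFBuildNumber/CFBuildDate keys as in the script above):
#!/bin/bash
# Bump the build number in the plist that has already been copied into the build products,
# so the value read at runtime matches what the script just wrote.
buildPlist="${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
CFBuildNumber=$(/usr/libexec/PlistBuddy -c "Print :CFBuildNumber" "$buildPlist")
CFBuildNumber=$((CFBuildNumber + 1))
/usr/libexec/PlistBuddy -c "Set :CFBuildNumber $CFBuildNumber" "$buildPlist"
/usr/libexec/PlistBuddy -c "Set :CFBuildDate $(date)" "$buildPlist"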