Is the EFI shell flexible enough to loop over boot entries? - uefi

I'm trying to write an EFI shell script which deletes all boot entries (as given by bcfg dump boot) without knowing ahead of time how many exist.
The language provides a looping construct, patterned after the one in Microsoft's shells:
for %var in <set>
    ...
endfor
...but I'm unclear on whether there's a reasonable way to get the numeric identifiers of the boot entries from bcfg dump into the <set>.

As of this writing (UEFI Shell 2.1 and UEFI 2.50), there is no way to parse bcfg output from a UEFI Shell script.
The only supported parsing mechanism in UEFI Shell scripts is the parse command, which requires Standard-Format Output (SFO, a CSV-like format). Unfortunately, only seven commands can generate SFO via the -sfo flag: ls, map, memmap, date, dh, devices, and drivers.
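For comparison, this is what script-side parsing looks like when a command does support SFO (the volume and file names here are placeholders):

ls -sfo > fs0:\ls.sfo
parse fs0:\ls.sfo FileInfo 2

Since bcfg accepts no -sfo flag, there is no such table to feed into parse, and hence no clean way to fill the loop's <set> with entry numbers.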
Removing all boot options can instead be achieved by writing a simple C application that mimics bcfg's behavior. I managed to do that, and sample code can be found here.
Note that removing all boot options can be dangerous in some cases and can leave your hardware in an unrecoverable state. Make sure you know what you are doing.

Related

Take kernel dump on-demand from user-space without kernel debugging (Windows)

What would be the simplest and most portable way (in the sense of only having to copy a few files to the target machine, the way procdump works) to generate a kernel dump that has handle information?
procdump has the -mk option, which generates a limited dump file pertaining to the specified process. WinDbg reports it as:
Mini Kernel Dump File: Only registers and stack trace are available
Most of the commands I try (!handle, !process 0 0) fail to read the data.
It seems that, officially, windbg and kd are what generate such dumps (and they would require kernel debugging).
A weird solution I found is using livekd with -ml ("Generate live dump using native support (Windows 8.1 and above only)"). livekd still looks for kd.exe but does not actually use it :) so I can trick it with an empty file, and no kernel debugging is required. Any idea how that works?
LiveKD uses the undocumented NtSystemDebugControl API to capture the memory dump. While you can easily find information about that API online, the easiest thing to do is just use LiveKD.
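For reference, a typical invocation looks like this (assuming the -o switch, which LiveKD uses to write the dump to a file instead of launching a debugger; the path is a placeholder):

livekd -ml -o C:\dumps\live.dmp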

Perl local libraries - Sybase

I'm going to build an extremely small Perl script for dumping a Sybase database. The problem is that Perl doesn't come with Sybase support preinstalled. I don't have root access on the server, so I can't install any packages, and I can't reach the Perl folder. The server is not configured for internet access, so I have to deliver the packages "manually" through FTP.
So, my question is whether there is an easy way of doing this. The only library I need is DBD::Sybase or standalone Sybase (maybe I haven't done enough research and don't even need that much?), which means I would love to just be able to put the .pm file there, load it through
use localModule;
and then run my small script.
The solution has to work on both Red Hat and Solaris, if I understood my supervisor correctly.
Best regards
Since you are primarily concerned with dumping the database, not with data retrieval and manipulation, you can probably get by without DBD::Sybase or any other Perl module that is not preinstalled.
Without more details it's hard to be very specific, but here's the overview: your Perl script can execute SQL scripts which dump the databases.
You can either put the list of databases you wish to dump in a config file (or env file), or generate it dynamically by calling isql with the -b option to suppress headers and 'set nocount on' to suppress footers, storing the output in an array.
Once you have the list of databases, just loop over them, running another isql command to dump each one.
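A minimal Perl sketch of that approach, assuming isql is on the PATH and treating the credentials, server name, and backup directory as placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Placeholder credentials and server name -- adjust for your environment.
my @isql = ('isql', '-U', 'sa', '-P', 'secret', '-S', 'MYSERVER', '-b');

# Run a batch of SQL through isql and return its output lines.
sub run_sql {
    my ($sql) = @_;
    my ($fh, $file) = tempfile(UNLINK => 1);
    print {$fh} $sql;
    close $fh;
    return qx(@isql -i $file);
}

# -b suppresses column headers; 'set nocount on' suppresses the
# "(n rows affected)" footer, leaving one database name per line.
my @dbs = run_sql("set nocount on\nselect name from master..sysdatabases\ngo\n");
chomp @dbs;
s/^\s+|\s+$//g for @dbs;
@dbs = grep { length } @dbs;

# Dump each database to the placeholder backup directory.
for my $db (@dbs) {
    print "dumping $db\n";
    run_sql("dump database $db to '/backups/$db.dmp'\ngo\n");
}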

Waiting for mongodb to prealloc

Does MongoDB create a file I can poll for in order to determine when prealloc is done? Right now I have a script that runs rs.initiate(..config..), but I need to hold off triggering it until mongod is up and running.
Since tailing the log file (tail -f | grep .. | xargs ..) is a bit of a flaky hack, I wondered whether there is any other way to determine that mongod is done with prealloc.
We have the same problem when testing replica sets with the PHP driver. There we use the mongo shell's ReplSetTest() functionality to get around it. You can see how that works here:
https://github.com/mongodb/mongo-php-driver/blob/master/tests/utils/myconfig.js#L9
However, I am not sure how well this works for non-test environments, as the options you can set are rather limited (for example, you can't set a data dir properly because paths are hardcoded). All the functions and code for this are in JavaScript at https://github.com/mongodb/mongo/blob/master/src/mongo/shell/replsettest.js, which should give you an overview of how it works and allow you to rewrite it in your preferred language.
Try inotify (I am not sure it is an exact fit); for example, if you need to determine that the file has been closed after writing:
[maverick@mutabor ~]$ pyinotify -e IN_CLOSE_WRITE /tmp/testfile
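If you would rather stay in Perl, the same idea can be expressed with the Linux::Inotify2 CPAN module. A minimal sketch (Linux only; the path below is a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use Linux::Inotify2;

# Placeholder path -- point this at the file mongod is preallocating.
my $file = '/tmp/testfile';

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# IN_CLOSE_WRITE fires when a file that was open for writing is closed.
$inotify->watch($file, IN_CLOSE_WRITE)
    or die "watch creation failed: $!";

# read() blocks until at least one event arrives.
my @events = $inotify->read;
printf "%s was closed after writing\n", $_->fullname for @events;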

Customizing DB2 Command Line Processor

I have written a program in Java that reads .csv files and stores them in a database table, but the performance of the store operation is very slow. When I use the DB2 Command Line Processor instead there is a drastic change in performance: it's very fast. So I am trying to customize the DB2 Command Line Processor to my requirements. I searched Google but only found topics on how to use it. I would like to be clear on the following subjects before I start.
Is "DB2 Command Line Processor" open source?
Which programming language is used?
Is there an alternative to the DB2 Command Line Processor with open source code in Java?
Is there a way to call the DB2 Command Line Processor from a Java program?
It may be worth investigating the Java program first: the slow run times may be related to how often you are committing the data (i.e. you may be running in auto-commit mode, committing after every insert).
Committing after every 500 inserts may be a lot faster than committing after every record.
See DB2 autocommit for details on auto-commit.
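As a point of comparison, the CLP can run a whole script with auto-commit switched off (in CLP syntax the + prefix disables an option, so +c disables auto-commit; the script name is a placeholder) and commit once, via an explicit COMMIT as the last statement of the script:

db2 +c -tvf inserts.sql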
1) The DB2 CLP (command line processor) is part of DB2. It is not open source, and it is included in all editions (Express-C, Express, Workgroup, Enterprise) and in the Data Server Client. The latter is free to download and can be installed on all clients.
2) The best way to use the capabilities of the DB2 CLP is via scripts, such as bash scripts or Windows batch scripts.
You can also call the DB2 CLP from another program, such as a Java application (via Runtime).
3) There are database shells with open source licences; however, you are mixing two things: a shell, which is normally a black screen where you type commands, and a driver, which lets a program you develop yourself query the database.
4) Again, via Runtime: http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html
Finally, the best option is to use a JDBC driver, in order to do things directly and not through a lot of tiers. Check your Java code (probably the reading is not efficient), and also check the properties of the DB2 Java driver.
One more thing: if you want the fastest route, try using LOAD to insert the data into the database. It does not log the individual rows. You can call LOAD from a Java application (remember to load the DB2 environment before executing any command).
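A hedged example of that route (the database, file, schema, and table names are placeholders, and the profile path follows the usual instance layout):

. /home/db2inst1/sqllib/db2profile
db2 connect to MYDB
db2 "LOAD FROM /tmp/data.csv OF DEL INSERT INTO myschema.mytable"

Skipping per-row logging is where most of LOAD's speed advantage comes from.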

Is there a perl function similar to lsof command in linux?

I have a shell script that archives log files based on whether a process is running or not: if a log file is not being used by the process, I archive it. Until now I have been using lsof to find which log files are in use, but going forward I have decided to do this in Perl.
Is there a Perl module that can do what lsof does on Linux?
There is a Perl module that wraps lsof: see Unix::Lsof.
As I see it, the big problem with not using lsof is that one would need to work in a way that is independent of the operating system. Using lsof lets the Perl programmer work against a consistent interface, which provides that operating-system independence.
Having a Perl module developer rewrite lsof would, in effect, mean writing lsof as a library and then linking that into Perl, which is much more work than just using the existing binary.
One could also use the fuser command, which shows the process IDs holding a file open. There is also a module that seeks to implement the same functionality. Note this warning from its perldoc:
The way that this works is highly unlikely to work on any other OS
other than Linux and even then it may not work on other than 2.2.*
kernels.
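From the command line the equivalent check is a one-liner; fuser prints the PIDs that have the file open (the path is a placeholder):

fuser -v /var/log/myapp.log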
One might also try walking /proc/*/fd and checking whether any of the file descriptors there point to the file in question. If you know the process ID of the running process that would have the log file open, it is just as easy to look at that process alone. Note that this is how the fuser module works.
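For a script that only needs to run on Linux, here is a minimal Perl sketch of that /proc walk (the default path below is a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use Cwd qw(abs_path);

# Placeholder log file; pass the real path as the first argument.
my $target = abs_path($ARGV[0] // '/var/log/myapp.log')
    or die "target file does not exist\n";

my @pids;
for my $fd_dir (glob '/proc/[0-9]*/fd') {
    my ($pid) = $fd_dir =~ m{^/proc/(\d+)/fd$};
    # Every entry in /proc/<pid>/fd is a symlink to an open file.
    for my $fd (glob "$fd_dir/*") {
        my $dest = readlink $fd;   # fails for other users' processes unless root
        next unless defined $dest;
        if ($dest eq $target) {
            push @pids, $pid;
            last;
        }
    }
}

print @pids ? "in use by PID(s): @pids\n" : "not open by any process\n";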
That said, it should be asked: why do you want to move away from lsof?