How to specify a filepath for a kdb+ tickerplant to save data to at end of day

I'm wondering how to specify a filepath for my tick setup to save to when .u.endofday is sent from the tickerplant. Currently, when this message is sent, the RDB is saved to the working directory where the tick.q file is.
Is there a way to pass in a filepath so that it is saved to ../../HDB rather than ../../Tick?

In the vanilla r.q script, the tables are saved down using
.Q.hdpf[`$":",.u.x 1;`:.;x;`sym]
where the second parameter is the directory that the tables are saved to.
`:.
represents the current directory. You can change it to something else, for example `:/home/data/hdb
https://code.kx.com/q/ref/dotq/#qhdpf-save-tables
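For example, the call in .u.end could become the following (a sketch reusing the illustrative /home/data/hdb path from above):
/ save to the HDB directory instead of the current directory at end of day
.Q.hdpf[`$":",.u.x 1;`:/home/data/hdb;x;`sym]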
If you are using the plain r.q script, referring to
https://github.com/KxSystems/kdb-tick/blob/master/tick/r.q
There is a comment under .u.rep suggesting you modify the 'system cd' command, where you can specify any directory you like. This changes the working directory inside the r.q process, so when .Q.hdpf is called it saves the tables to that directory. The RDB calls .u.rep on startup.
.u.rep:{(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd ",1_-10_string first reverse y};
/ HARDCODE \cd if other than logdir/db
You could have
system "cd /home/data/hdb"
which will change the current directory to this location
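Putting that together, the modified line in r.q might read (a sketch; the HDB path is illustrative):
/ replay the log, then cd to the HDB directory rather than the log directory
.u.rep:{(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd /home/data/hdb"};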

Depending on your setup there are a couple of ways to do this.
I think the most efficient would be for you to look at the .u.end function that is called on your RDB and see which save-down function is used there.
Search for the place where .u.end is defined on the RDB and look at the save-down functions.
Look for .Q.dpft, which is the most likely candidate, or for a set command.
Documentation on .Q.dpft:
https://code.kx.com/q/ref/dotq/#qdpft-save-table
Its first argument is the directory path, so you could supply a directory there in the form of
hsym `$"/path/path/HDB"
which returns
`:/path/path/HDB
as a file symbol to pass into the function.
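A sketch of the resulting call (the table name `trade is hypothetical; `sym as the `p# field matches the vanilla setup):
/ save trade to the HDB under today's date partition, applying `p# to sym
.Q.dpft[hsym `$"/path/path/HDB";.z.d;`sym;`trade]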
Tables might be saved down in different ways, but that is the most likely one.
A directory can also be chosen via a par.txt file that is loaded in, so it is useful to check whether one is in use by calling the .Q.par function on the RDB.
.Q.par[`:.;.z.d;`]
If the answer is just
`:./2020.05.09/
then it is using the directory you launched the script in.
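For illustration, if a par.txt in the current directory listed segment directories such as (contents hypothetical)
/home/data/hdb0
/home/data/hdb1
then the same call would resolve the date partition into one of those segments instead, e.g.
q).Q.par[`:.;2020.05.09;`]
`:/home/data/hdb0/2020.05.09/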
Here you can find some more documentation on this:
https://code.kx.com/q/kb/partition/

In Scribble, how can I get the path to the current file being processed?

I'm using Scribble to pull in parts of files that are stored in other files (not written in Racket). Reading the files and getting the content in works fine, but I don't know how to determine the directory of the file in which the function is being invoked, so the only way to get it to work is to pass in a path relative to the root of the document, which is unpleasant.
i.e., I have a directory structure like:
...
hw/
  hwN/
    assignment.scrbl
    template.EXT
...
And in assignment.scrbl, I want to pull in parts of template.EXT, but currently I have to write hw/hwN/template.EXT. It would be a lot nicer to be able to write just template.EXT, as I can with #include-section, so that if I rearrange the directories I won't break all of these paths.
This is not really specific to Scribble. In Racket, you would do:
(require racket/runtime-path)
(define-runtime-path template.EXT "template.EXT")
then you can use template.EXT to refer to the file.
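A minimal sketch of how this could look inside assignment.scrbl, assuming you just want the template's raw text included (the binding name and the use of verbatim are illustrative):
#lang scribble/manual
@(require racket/runtime-path racket/file)
@; resolved relative to this .scrbl file, not the current directory
@(define-runtime-path template-file "template.EXT")
@verbatim[(file->string template-file)]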

How to create a script that uses a path list as a reference for copying files in PowerShell or a .bat script

I'm looking for a way to automate archiving: after I plug in my two external drives I want to copy all my resources. The problem is that I have different file structures on my laptop and on both external drives, so I need to select specific folders to be copied. That means I can't select one root folder and copy it straight over. I tried to find a way to declare more than one path in the cp command and in the copy command, without success. An example path:
/my_programming_stuff
  /folder1
  /folder2
  /folder3
  /folder4
I want to select only the first 3 folders and copy them to external drive 1 and external drive 2. The idea is to create a .bat file that copies everything at once (in the best-case scenario it will be copied to both external drives simultaneously, so it will be much faster). Another problem is that the script needs to bypass the Windows long-path limitation (max. 260 characters).
Flags that I want to use:
- Copy the files and directories and all of their attributes, including ownerships and permissions.
- Recursively copy directories and their contents.
- When copying files from one directory to another, only copy files that either don't exist or are newer than the existing corresponding files in the destination directory.
- Data verification (so it's certain that the copy was verified).
- Progress bar with ETA.
Until now I was using Total Commander to do this, but every day I need to pick only a few folders to be copied, which takes time and is inefficient.
I have experience with Bash and PowerShell but I am not sure how to handle this topic.
Create a static batch file with robocopy commands. I think /copyall is the only switch you need to specify for all of this; the other defaults should satisfy your requirements.
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
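A minimal sketch of such a batch file, reusing the folder names from the question (drive letters E: and F: stand in for the two external drives):
@echo off
:: /E recurse into subdirectories (including empty ones)
:: /COPYALL copy all file info (data, attributes, timestamps, ACLs, owner, auditing)
:: /XO skip files older than the existing destination copy
:: /ETA show estimated time of arrival for copied files
for %%D in (E: F:) do (
  for %%F in (folder1 folder2 folder3) do (
    robocopy "C:\my_programming_stuff\%%F" "%%D\%%F" /E /COPYALL /XO /ETA
  )
)
Note that /COPYALL needs an elevated prompt because it copies auditing information, and that robocopy handles paths longer than 260 characters by default, which covers the long-path requirement.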
I think your time will be better spent learning how to use either FastCopy or FreeFileSync. I used FreeFileSync some years ago but got fed up with the constantly changing format of the XML file it uses for starting a backup, so I switched to FastCopy. But it looks like FreeFileSync may be getting its act together, and I aim to do some experiments over the summer to see if I want to switch back to it.
Both can handle the long-filename issues, both can be executed from a batch file, and both seem to be of high quality, but FreeFileSync has more features - and is more bloated because of them. Speed-wise, though, I think FastCopy is probably one of the better products out there, and very streamlined in use and design.

How to automatically delete Dymola's build files after simulation?

Every time I simulate in Dymola, a number of "useless" (for me) files are created in the working directory - i.e. dsfinal.txt, dsin.txt, dslog.txt, dsmodel.c, dymosim.exe. I find it annoying as it messes up my directory.
Is there a way to select only the desired output files to be kept after the simulations, without the need of manually deleting the undesired ones?
Those are temporary, but necessary, files for Dymola. As far as I know there is no option to delete them automatically. Of course you could script that, but I don't see a real point to it, and those files are used by some functionality - e.g. dsfinal.txt is used when a simulation is continued.
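If you did want to script it, a sketch of a Dymola .mos script might look like this ("MyModel" is a placeholder model name; the file list is the one from the question):
// simulate, then remove the build artifacts from the working directory
simulateModel("MyModel");
Modelica.Utilities.Files.removeFile("dsfinal.txt");
Modelica.Utilities.Files.removeFile("dsin.txt");
Modelica.Utilities.Files.removeFile("dslog.txt");
Modelica.Utilities.Files.removeFile("dsmodel.c");
Modelica.Utilities.Files.removeFile("dymosim.exe");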
Some notes: those files are created in the working directory, which should contain temporary files only. The working directory can be set via the GUI under File -> Options -> Settings.
A rather common source of confusion is that Dymola has both an Open and a Load function.
As the description states, Load does not influence the working directory, whereas Open sets it to the directory from which a file is opened. The latter also happens when opening files e.g. via a double-click from Explorer. So usually it is better to go with Load.
My advice would be to separate the directories in which models/packages are stored from the working directory. This way the working directory's content can be fully deleted basically anytime...

Force overwrite or delete file in use (executable that currently runs)

I'm looking for a solution to delete or (preferably) directly overwrite the source of an exe file while it is running.
To explain further before you get it all wrong, I'll give an example:
I have an exe file on drive D:\ which I run (following a previously posted question's answer, setting the "Start in" folder to C:\Program Files\MyProgram\ so it finds its DLLs).
Now, while the file is running, I'd like to rewrite the file's byte stream (just like opening it in a hex editor...), or at least delete it so I can copy over a new exe file directly using the same name.
So far the solution I'm using is to trigger a format D: command for the whole D:\ drive (which, in my case, is a ramdisk or thumb drive; as I only have this exe on it, I copy it there as necessary), since that removes the file and lets me copy a new file there.
Trying to use del myProgram.exe, even with the -force flag, triggers an error that access to the file is denied. The same goes if I try to overwrite the contents of the file.
Is there any alternative that avoids the format command, as that requires a partition dedicated solely to this purpose?
Update: Note: MoveFileEx and similar techniques that require termination of the process or a system restart/reboot do not qualify as a solution. This should be done while the process is running, without further actions that could compromise the process's run state.
On a side note, when formatting the drive using PowerShell's format command, the file is gone, although when viewing the partition with a hex viewer the full binary (hex) content of the exe is still visible there and can be restored with a technique as simple as copy-paste. This is one of the reasons why overwriting the file contents would be preferable to deleting the file.
Please note: this is a knowledge- and skills-based question, so I would appreciate sparing the moral and security-concerning comments about such actions and behaviour.
For deleting/replacing/overwriting a file at least two conditions must be met:
The user performing the operation must have the required permissions to do so. This can be verified for instance via Get-Acl or icacls.
Windows must not have an open handle to the file. This can be checked for instance with tools like Process Explorer or handle. These tools can also be used to forcibly close open handles, although that's not recommended as it may cause data loss and/or damage to the files in question. I'm not sure, though, if it's actually possible to close handles to an executable without terminating the process.
Note that antivirus software is likely to interfere with this kind of operation.
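For instance, a quick check with PowerShell and the Sysinternals handle tool (assuming handle.exe is on the PATH; the file path is illustrative):
# show owner and access rules for the file
Get-Acl 'D:\myProgram.exe' | Format-List
# list open handles whose object name contains "myProgram.exe"
handle.exe myProgram.exe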
The basic problem here is that Windows loads pages from the .EXE on demand; it is not all read in at once.
If you destroy the original file, what happens when Windows tries to load a page that no longer exists?
If I had to write something of this sort, I would copy the .exe to a temporary location (beware that running code from the temp directory may be prohibited), run the new .exe, terminate the old one, and then do what I want with it.

Multiple pathdef files in Matlab?

Suppose two different Matlab users share a computer and they each want to be able to save and load their own Matlab paths. (Or, a single user wants to use different paths at different times.) What's the easiest way to handle this?
Should there be multiple pathdef files? Alternatively, should they each have a script that calls restoredefaultpath and then uses addpath to add new paths?
You can use the startup.m file for that.
When starting up, Matlab executes the file matlabrc.m, which is the master startup file and is common to all users. Among other things, this file:
- Sets the first entry of the path to the user folder of the current user, that is, the user that started Matlab (this is done by calling pathdef, which in turn calls userpath); and then
- Looks for a startup.m file on the path, and executes it if it exists.
Therefore you can place a user-specific startup.m file in each user's folder, and Matlab will run the appropriate file depending on which user started Matlab. In that file you can set the path on a per-user basis, and do other user-related stuff.
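A minimal sketch of such a per-user startup.m (the paths are illustrative):
% startup.m - placed in this user's userpath folder, runs automatically at startup
restoredefaultpath;                                % start from the factory default path
addpath('C:\Users\alice\Documents\MATLAB\tools');  % this user's own entries
addpath('C:\Users\alice\Documents\MATLAB\projects');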