IPython: How to wipe IPython's history selectively & securely?

I have been learning how to use the paramiko package, only to discover that I stored all my passwords in plain text in IPython's %hist. Not so good.
I therefore need to get rid of particular parts of what is stored in %hist. That said, I do not want to wipe the whole history, only the parts where I have been careless enough to type password = or similar chosen terms.
Thanks
Comments I don't need:
%clear only clears the session. It does not wipe the history.
Yes. I will only use RSA keys from now on.

History is stored in $(ipython locate)/profile_default/history.sqlite by default.
You can remove the file, and/or do any operation you want on it (secure erase, etc.).
It's an SQLite file, so you can load it with any SQLite program and run queries on it.
Check in $(ipython locate)/profile_default/ and the other $(ipython locate)/profile_xxx directories that you do not have any other history files. $(ipython locate) is usually ~/.ipython/ but can vary:
ls $(ipython locate)/profile_*/*.sqlite
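If you decide to remove the file entirely, a minimal Python sketch of the "secure erase" idea (overwrite with random bytes, then unlink) might look like the following. Note this is best-effort only: on journaling filesystems and SSDs the old blocks may survive elsewhere on disk, so a dedicated tool like shred or srm is preferable where real secrecy matters.

```python
import os
import secrets

def overwrite_and_remove(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before deleting it.

    Best-effort only: on journaling filesystems and SSDs the old
    blocks may still survive elsewhere on disk.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to disk
    os.remove(path)
```

You would call it as overwrite_and_remove(os.path.expanduser("~/.ipython/profile_default/history.sqlite")).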

One solution would be:
sqlite ~/.ipython/profile_default/history.sqlite
delete from history where source like '%password =%';

Combining @Ovidiu Ghinet's answer with @Matt's accepted answer:
sqlite $(ipython locate)/profile_*/*.sqlite || sqlite3 $(ipython locate)/profile_*/*.sqlite
Then from within sqlite console:
delete from history where source like '%password%';
select * from history; /* confirm that no passwords are left */
.quit /* forgot how to exit sqlite console? ;) */
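If you'd rather script this than use the sqlite console, the same delete can be done with Python's built-in sqlite3 module. The history table name and source column are taken from the queries above; the default path is assumed, so adjust it for your profile:

```python
import sqlite3

def scrub_history(db_path: str, pattern: str = "%password%") -> int:
    """Delete IPython history entries whose source matches `pattern`.

    Returns the number of rows removed. `db_path` is typically
    ~/.ipython/profile_default/history.sqlite (location may vary).
    """
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM history WHERE source LIKE ?", (pattern,)
        )
        conn.commit()
        return cur.rowcount  # number of rows deleted
    finally:
        conn.close()
```

Run it while no IPython session has the file open, and keep in mind the deleted rows may still be recoverable from the raw database file until it is vacuumed or securely erased.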

If you want to delete history from a particular session, run:
sqlite ~/.ipython/profile_default/history.sqlite
Note that on some systems, you will need to run sqlite3 rather than sqlite.
Inside SQLite:
select * from history;
-- session is the first column in history
delete from history where session=XXX;
Also, here is a SQLite query that will give you the id of your last session:
select session from history order by session desc limit 1;
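Both steps (find the most recent session, then delete its rows) can be combined in a short Python sketch using the sqlite3 module, assuming the schema implied by the queries above:

```python
import sqlite3

def delete_last_session(db_path: str) -> int:
    """Remove all history rows belonging to the most recent session.

    Returns the number of rows removed; 0 if the history is empty.
    """
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT session FROM history ORDER BY session DESC LIMIT 1"
        ).fetchone()
        if row is None:
            return 0
        cur = conn.execute(
            "DELETE FROM history WHERE session = ?", (row[0],)
        )
        conn.commit()
        return cur.rowcount
    finally:
        conn.close()
```

Pass a different session id to the DELETE if you want to target an older session instead of the latest one.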

I couldn't find a way to wipe IPython's history selectively, but I did find how to wipe data with a built-in magic, without touching sqlite:
%clear out  # clears the output history
By the way, you can also clear the input history with %clear in.

If you want to start completely from the beginning, just delete the following file manually (it worked for me):
/home/user/.ipython/profile_default/history.sqlite

Related

Can I have VS Code skip opening previous workspaces one time only?

I use many VS Code workspaces throughout the day. Most of them are backed by directories on NFS-mounted drives, which are only mounted while I'm VPN'd in to my employer's network. Opening VS Code while not VPN'd in will cause all of my windows to close, leaving me with blank/empty workspaces, and then I have to set them all back up again in the morning. It only takes a few minutes to do, but I'm lazy and it's not neat; I like things neat. I know that I can start VS Code without any workspaces using the -n option, which is great, but then the next time I start up the editor for real (i.e. for work purposes), all of my workspaces need to be reopened again (see previous statement re: I'm lazy and I like things neat).
Is there a way to indicate that I want to start VS Code without any project just this one time, and then the next time I start I want all of my old workspaces to reopen as normal? Alternately, does anyone know where the state information is stored and how to edit it? I have no qualms about saving it off and then restoring it after I'm done.
Absent any miracle solution, I've at least found the correct file to manipulate: the storage.json file, which on MacOS is found at:
~/Library/Application Support/Code/storage.json
I wrote a Perl script to do the manipulation. When I want to go "offline", it reads in the JSON file, loops through the opened windows, identifies the ones I don't want, removes them using jq, and then launches VS Code. When I'm ready to go back "online", it reads a backup of the original file, finds the windows I previously removed, adds them back in (also using jq), and then launches VS Code.
The Perl script is a bit too rough around the edges to be posted publicly, but people might find the jq commands helpful. To delete, you identify the windows to be removed as (zero-based) indexes in the array, and then delete them with the following:
jq '. | del(.windowsState.openedWindows[1,2,5])' '/Users/me/backups/online-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
If you want to add them back in at some point, you extract the full JSON bits from the backup file, and then use the following command to append them to the back of the array:
jq '.windowsState.openedWindows += [{"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}]' '/Users/me/backups/offline-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
The inserted JSON is elided for clarity; you'll want to include the full JSON strings, of course. I don't know what significance the ordering has, so pulling them out of the middle of the array and appending them to the end of the array will likely have some consequence; it's not significant for my purposes, but YMMV.
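If jq isn't available, the same delete-and-reappend dance can be sketched in Python with the standard json module. The windowsState.openedWindows key path is taken from the jq commands above; storage.json's exact schema may differ between VS Code versions, so treat this as a sketch rather than a supported API:

```python
import json

def remove_windows(storage_path, out_path, indexes):
    """Drop openedWindows entries at the given zero-based indexes.

    Returns the removed entries so they can be re-appended later
    when going back "online".
    """
    with open(storage_path) as f:
        state = json.load(f)
    windows = state["windowsState"]["openedWindows"]
    drop = set(indexes)
    removed = [w for i, w in enumerate(windows) if i in drop]
    state["windowsState"]["openedWindows"] = [
        w for i, w in enumerate(windows) if i not in drop
    ]
    with open(out_path, "w") as f:
        json.dump(state, f, indent=2)
    return removed

def restore_windows(storage_path, out_path, removed):
    """Append previously removed window entries back onto the array."""
    with open(storage_path) as f:
        state = json.load(f)
    state["windowsState"]["openedWindows"] += removed
    with open(out_path, "w") as f:
        json.dump(state, f, indent=2)
```

As with the jq approach, the restored windows end up at the back of the array rather than their original positions.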

How to specify filepath for KDB tickerplant to save data to at End of Day

I'm wondering how to specify a filepath for my tick setup to save to when .u.endofday is sent from the tickerplant. Currently, when this message is sent the RDB is saved to the working directory where the tick.q file is.
Is there a way to pass in a file path so that it is saved to ../../HDB rather than ../../Tick?
In the vanilla r.q script, the tables are saved down using
.Q.hdpf[`$":",.u.x 1;`:.;x;`sym]
where the second parameter is the directory that the tables are saved to.
`:.
represents the current directory. You can change it to something else, for example `:/home/data/hdb
https://code.kx.com/q/ref/dotq/#qhdpf-save-tables
If you are using the plain r.q script, referring to
https://github.com/KxSystems/kdb-tick/blob/master/tick/r.q
there is a comment under .u.rep suggesting that you modify the 'system cd' command, where you can specify any directory you like. This will change the working directory inside the r.q process, so when .Q.hdpf is called it will save the tables to that directory. The RDB calls .u.rep on startup.
.u.rep:{(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd ",1_-10_string first reverse y};
/ HARDCODE \cd if other than logdir/db
You could have
system "cd /home/data/hdb"
which will change the current directory to that location.
Depending on your setup there are a couple of ways to do this, but I think the most efficient would be to look at the .u.end function that is called in your RDB and see what save-down function is used there.
Find where .u.end is defined on the RDB and look at the save-down logic: .Q.dpft is most likely, though it may be a plain set command instead.
Documentation on the .Q.dpft:
https://code.kx.com/q/ref/dotq/#qdpft-save-table
The first argument that is fed in is the directory path, so you could supply a directory there in the form of
hsym `$"/path/path/HDB"
which returns
`:/path/path/HDB
as a symbol to be passed to the function.
Tables might be saved down in different ways, but this is the most likely one.
There are also other ways to choose a directory, e.g. with a par.txt file that is loaded in. It is worth checking whether a par.txt file is in use by calling .Q.par on the RDB:
.Q.par[`:.;.z.d;`]
If the answer is just:
`:./2020.05.09/
then it is using the directory you launched the script in.
Here you can find some more documentation on this:
https://code.kx.com/q/kb/partition/

How to load fish command history from file

Is there a way to load fish's command history from a file?
I like to clear my history periodically, but keep a set of useful commands always in history for easy access.
In bash this can be done via:
history -r file.txt
Can this be done in fish?
In my experience what you want to do isn't really necessary, since a) fish only remembers the most recent instance of a command, b) it generally does a really good job of using available context to provide the most appropriate entry from the command history, and c) it already trims old entries once the number of saved commands reaches a limit.
But, assuming you've saved your preferred history subset to ~/.local/share/fish/fish_history.save:
builtin history clear
cp ~/.local/share/fish/fish_history.save ~/.local/share/fish/fish_history
history merge
The builtin in the first command is there to avoid the prompt asking whether you really want to clear your history. Note that your saved history has to be valid YAML: it's a text file, but a little more complex than just one command per line.
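For reference, fish history entries are "- cmd:" lines each followed by an indented "when:" timestamp. A minimal Python sketch that generates such a file for simple commands could look like this; it handles only the plain case (commands containing newlines or backslashes need fish's own escaping, which is not implemented here):

```python
import time

def write_fish_history(path, commands):
    """Write commands in fish's history file layout: each entry is a
    "- cmd:" line followed by an indented "when:" timestamp line.

    Simple case only: commands with embedded newlines or backslashes
    would need fish's escaping, which this sketch does not handle.
    """
    now = int(time.time())
    with open(path, "w") as f:
        for offset, cmd in enumerate(commands):
            f.write(f"- cmd: {cmd}\n")
            # monotonically increasing timestamps keep ordering stable
            f.write(f"  when: {now + offset}\n")
```

You would write this to ~/.local/share/fish/fish_history.save (or straight to fish_history) and then run history merge as described above.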

Matlab - Force to Retain Breakpoints

I wish to know whether there is a way to force Matlab to retain all previously placed breakpoints (the red dots that enable code debugging) in the Matlab Editor/Debugger, inside functions, classes, etc., from one session to another, and without them being deleted by clear all commands.
It would ease debugging huge pieces of software while changes are introduced, especially because Matlab sometimes simply shuts down because of internal errors.
Thanks, fellows.
dbstop is the cleaner solution. Just insert it in the place where you want debugging to stop, and it will not be removed until you edit or comment out the line.
You need to save the breakpoints and reload them in the next session. You can use dbstatus to get a structure that contains information about all breakpoints and save it into a file:
s = dbstatus('-completenames');
save FILENAME s
and later retrieve them using dbstop:
load FILENAME
dbstop(s);
You can automate this by putting these commands in your startup.m and finish.m files (create them on the default user path if they don't exist).

ESS[SAS] Submit region not working as expected -- each submission starts a new session

Specs: SAS 9.3, GNU Emacs 23.4.1 (2012-06-05), and what I think is a rather old version of ESS (from sometime in 2001, maybe?). Unix -- I think SunOS 5.10.
I'm a fairly new emacs user and a very new user of ESS[SAS]. I have noticed that code which works when submitted as an entire file will not work when I submit it region by region. Here is an example:
libname mylib "/.";
data temp; set mylib.temp; run;
Assume that I have a directory as specified in libname and there is a SAS dataset called temp in it -- my code works properly when I submit the whole thing, but if I run just the first line and then the second, it tells me that "mylib" is not defined.
Based on the behavior of the log files -- the second submission creates a log file that overwrites the first -- I think what is going on is that it's starting a new SAS session with each submission. Why might it be doing this? (ESS does not do that with R code -- I can run a snippet to define a variable and then that variable stays defined.) I find I make fewer programming errors if I test things incrementally rather than running all my code in batch mode, and the GUI for SAS in unix leaves something to be desired, so I would really like to find a fix to this.
The only other clue I have is a pair of warnings I get on every submission:
NOTE: Unable to open SASUSER.REGSTRY. WORK.REGSTRY will be opened
instead.
NOTE: All registry changes will be lost at the end of the session.
WARNING: Unable to copy SASUSER registry to WORK registry. Because of
this, you will not see registry customizations during this session.
NOTE: Unable to open SASUSER.PROFILE. WORK.PROFILE will be opened
instead.
NOTE: All profile changes will be lost at the end of the session.
I googled those warnings separately and found they're commonly associated with a corrupt profile, but unfortunately the recommended fix did not fix anything (after deleting the iffy profile, it just restored itself and the error persisted). I am not sure whether this is related or not.
I realize this question might be out of scope for stackoverflow; if it is, a redirect to a more appropriate forum would be much appreciated.