context.storagePath is empty (except meta.json) but cached data is still available while debugging [REMOTE SSH] - visual-studio-code

I have a really spooky issue while debugging my extension. Since I am using workspaceState to cache information, I tried to figure out where that state is usually located.
ExtensionContext.storagePath resolves to the path I was expecting: /home/<user>/.vscode-server/data/User/workspaceStorage/a002010c26b7b33d865d62202553fe33/myname.myextension
This folder only contains a meta.json (no other hidden files or folders).
But the strange thing is that the cached data is still available. Any ideas where else it could be located?
I already removed the whole ".vscode-server" directory, and the cached data is still being loaded from somewhere!?

I found it out myself.
The cached data is stored on my local machine (not the remotely connected computer).
So even though ExtensionContext.storagePath refers to a path on the remote, the cached data may still be stored locally.
In my case it is a Windows machine and the storagePath was:
C:\Users\<user>\AppData\Roaming\Code\User\workspaceStorage\a002010c26b7b33d865d62202553fe33
I assume this may also have something to do with its ExtensionKind.
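For reference, here is a minimal sketch of how such an extension might use workspaceState while logging where storagePath points. The key name and cached value are made up for illustration, and the comments only restate the observation above, not documented behaviour:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // On a Remote SSH workspace this points at a path on the remote machine.
    console.log('storagePath:', context.storagePath);

    // workspaceState is a key/value store managed by VS Code. As observed above, its
    // backing data can live in the local workspaceStorage folder, so wiping
    // .vscode-server on the remote does not necessarily clear it.
    const cached = context.workspaceState.get<string>('myextension.cache');
    if (cached === undefined) {
        void context.workspaceState.update('myextension.cache', 'some expensive result');
    }
}

export function deactivate() {}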

Related

What can I do with => PATH/TO/REPOSITORY folder?

I have just found the folder REPOSITORY on my local disk at this path:
C:\PATH\TO\REPOSITORY
It is huge (4 GB), which is surprising considering how few repositories I have cloned.
Besides, the files inside are completely unknown to me, like:
notocjksc.tar (200.000 KB)
pst-geo.tar (139.567 KB)
cbgreek.tar
biber-darwin-x86_64.tar
To name a few; in fact there are more than 4000 files...
So I have no idea what those files are for, or whether I could delete them (or some of them) to free some space on my disk.
Thanks
If you do not need those files, you can delete them, but it will impact only the local disk, not the remote repository.
I would check the remote URL in each repository:
git remote -v
Then check on the remote side whether the same huge files were pushed there as well.
Then you would need to decide if those files should stay in those remote repositories or not.
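For example, if the repositories are subfolders of C:\PATH\TO\REPOSITORY (a hypothetical layout), a small Git Bash loop could list every remote in one go; folders that are not Git repositories will simply print an error:

cd /c/PATH/TO/REPOSITORY
for d in */ ; do (cd "$d" && echo "== $d" && git remote -v); done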

Docker - under Windows 10 Pro - Need to map volumes and have them work, not quietly fail

I ran various containers on two different Windows 10 Pro machines and thought I had the data drives mapped correctly, but now I'm finding out that it isn't writing the data there at all. One example was MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird and looked in /mongodb/database, and the directory is empty. Thankfully, the app is still in the development phase, and it isn't critical that the data was lost...
the line from the docker compose file:
volumes:
- /mongodb/database:/data/db
Different machine:
I installed the gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I thought everything was fine and pushed a repo up to it... and today I looked at \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
I also installed TeamCity, mapping that data.. nope, it has no logs, no data...
This feature just seems not to work at all. I found a reference from 2016 saying I need to look at the 'shared' tab (below General) and check C: to be shared, but no, that isn't a tab, so it isn't that.
There is no way someone would write a system that quietly wrote the data somewhere else, or didn't bother actually mapping it, without giving an error - that would be nuts.
So, there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS, the other one doesn't even support it as far as I know.
I think some of the images are Linux-based and some are Windows-based (TeamCity, I'm pretty sure, is Windows).
OK, this is interesting... If I look at the volumes, and enter one that is in use, I get this:
The Target looks about like the right path, but I'm not sure about the /backup and the /data on the last two lines; if those are supposed to be directories under it, they don't exist. But if I click on the Data tab, I can see the data: it is inside Docker, hidden and not shared, in spite of there being a 'Target' that points at the right directory... how do I get it to start writing this data correctly to that folder??
I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the path as 'c:/data/MongoDb/Database'. When I created the container using that as the host path, it worked, and I have data there now. I just need to go back and fix all these VMs so they store their data correctly...
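If that is the fix, the compose entry from above would become something like the following; the host path is only an example, the format is host-path:container-path, and the drive still has to be shared with Docker Desktop:

# docker-compose.yml
volumes:
  - c:/data/MongoDb/Database:/data/db

# or the equivalent docker run flag (image name as an example)
docker run -v c:/data/MongoDb/Database:/data/db mongo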

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on Firebase. We fail a security check for the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
Note: these files are being flagged because the scan is set to 'Paranoid'. The contents of the detected files have not been inspected to see if they contain any of the common Linux commands one might expect to see in a typical .bash_history file.
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Does anybody know how it can be solved?
If you are using Firebase Hosting, you should check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history or scripts/.bash_history in that public directory, then they will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory, and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
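For example, the default firebase.json written by firebase init already ignores dotfiles through the "**/.*" pattern, so a hosting block along these lines (the public directory name is whatever your project uses) keeps any .bash_history out of the deploy:

{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}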

Rescue data from mongodb instance

Recently I had some problems with my old hard drive and Windows, so I switched over to a new one. I can still access all the files on the old hard drive, and I still have an installation of MongoDB on there with some pretty important data (I should have done a backup at some point).
What would be the smartest way to get this data and transfer it to my new instance? Just copying the data files over? By the way, this "old" instance is not running, and it's not possible for me to start it again.
You could get the "old" instance running again by starting mongod on the new system and pointing its --dbpath at the data folder on the old hard drive.
It looks like copying the files over is a valid option though.
See https://docs.mongodb.com/manual/core/backups/#back-up-with-cp-or-rsync
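A rough sketch of both options, with placeholder paths (adjust them to wherever the old dbPath actually lives, and make sure no mongod is running against either directory while the files are copied):

REM Option 1: start a mongod directly against the old data files
mongod --dbpath "D:\old-drive\data\db"

REM Option 2: copy the files into the new, stopped instance's data directory, then start it
robocopy "D:\old-drive\data\db" "C:\data\db" /E
mongod --dbpath "C:\data\db"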

Talend: Using tfilelist to access files from a shared network path

I have a Talend job that searches a directory and then uploads it to our database.
It's something like this: dbconnection>twaitforfile>tfilelist>fileschema>tmap>db
I have an OnSubjobOk link that then commits the data into the table, iterates through the directory, and moves the files to another folder.
Recently I was instructed to change the directory to a shared network path, using the same components as before (I originally thought of changing the components to tFTPFileList, etc.).
My question is how to point it at the shared network path. I was able to get it to go through using double backslashes (\\), but it won't read any of the new files arriving.
Thanks!
I suppose that if you use tWaitForFile on the local filesystem, Talend/Java hooks into the folder somehow and gets notified when a new file is put into it.
Now that you are on a network drive, first of all this is out of reach of the component. Second, the OS behind the network drive could be different.
I understand your job is running all the time, listening. You could change the behaviour by putting a tLoop first, which would check the file system for new files and then proceed. There has to be some delta check so that only the new files get recognized.
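The delta check is the important part. This is not Talend code, just a small Node/TypeScript sketch of the idea (the UNC path and the polling interval are placeholders): remember which file names have already been seen and only hand off the ones that are new on each pass.

import * as fs from 'fs';

const shareDir = '\\\\fileserver\\shared\\incoming'; // hypothetical UNC path
const seen = new Set<string>();                      // file names already processed

setInterval(() => {
    for (const name of fs.readdirSync(shareDir)) {
        if (!seen.has(name)) {
            seen.add(name);
            console.log('new file detected:', name); // hand off to the load/move step here
        }
    }
}, 30_000); // poll every 30 seconds instead of relying on file-system notifications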