Can Sphinx accidentally 'cache' conf files?

I have an index that I rotate as I make changes to the table or the conf. The conf had an associated wordforms file in which I created, among other things, a wordform to map:
Ft > Fort
I ended up removing that mapping from the wordforms file and re-rotated successfully, yet when I search on FT I still find Fort. I then commented out the wordforms line
#wordforms = /home/...
and rotated successfully, and still a search on FT finds Fort.
Lastly I changed the name of the wordforms file and, you guessed it, after rotating it still finds Fort.
Is there some way the old index has been cached? No matter what I do, it finds the old wordform mapping and uses it.
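For context, the rebuild-and-rotate cycle described above would normally look something like the following. The index name myindex is a placeholder; wordforms are baked into an index at build time, so a change only takes effect after a full rebuild of every index that references the file:

```shell
# Rebuild the index and tell the running searchd to swap in the new
# files; "myindex" is a placeholder for the real index name.
indexer --rotate myindex

# If rotated results still look stale, stop searchd and rebuild cold,
# then start it again so only the freshly built files are served.
searchd --stop
indexer myindex
searchd
```

If FT still matches Fort after a cold rebuild, it's worth checking whether another index in the conf (for example a delta or distributed index) was built against the old wordforms file.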


Can I have VS Code skip opening previous workspaces one time only?

I use many VS Code workspaces throughout the day. Most of them are backed by directories on NFS-mounted drives, which are only mounted while I'm VPN'd in to my employer's network. Opening VS Code while not VPN'd in will cause all of my windows to close, leaving me with blank/empty workspaces, and then I have to set them all back up again in the morning. It only takes a few minutes to do, but I'm lazy and it's not neat; I like things neat. I know that I can start VS Code without any workspaces using the -n option, which is great, but then the next time I start up the editor for real (i.e. for work purposes), all of my workspaces need to be reopened again (see previous statement re: I'm lazy and I like things neat).
Is there a way to indicate that I want to start VS Code without any project just this one time, and then the next time I start I want all of my old workspaces to reopen as normal? Alternately, does anyone know where the state information is stored and how to edit it? I have no qualms about saving it off and then restoring it after I'm done.
Absent any miracle solution, I've at least found the correct file to manipulate: the storage.json file, which on macOS is found at:
~/Library/Application Support/Code/storage.json
I wrote a Perl script to do the manipulation. When I want to go "offline", it reads in the JSON file, loops through the opened windows, identifies the ones I don't want, removes them using jq, and then launches VS Code. When I'm ready to go back "online", it reads a backup of the original file looking for the windows that were previously removed, adds them back in (also using jq), and then launches VS Code.
The Perl script is a bit too rough around the edges to be posted publicly, but people might find the jq commands helpful. To delete, you want to identify the windows to be removed as (zero-based) indexes in the array, and then delete them with the following:
jq '. | del(.windowsState.openedWindows[1,2,5])' '/Users/me/backups/online-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
If you want to add them back in at some point, you extract the full JSON bits from the backup file, and then use the following command to append them to the back of the array:
jq '.windowsState.openedWindows += [{"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}]' '/Users/me/backups/offline-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
The inserted JSON is elided for clarity; you'll want to include the full JSON strings, of course. I don't know what significance the ordering has, so pulling them out of the middle of the array and appending them to the end of the array will likely have some consequence; it's not significant for my purposes, but YMMV.
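One caution if you script this yourself: both commands above overwrite the live storage.json, so take the backup before the first rewrite, and never redirect jq's output onto its own input file (the shell truncates the target before jq reads it). A minimal sketch of the backup step, using the same paths as the answer:

```shell
# Paths as used in the answer above.
STORAGE="$HOME/Library/Application Support/Code/storage.json"
BACKUP="$HOME/backups/online-storage.json"

# Snapshot the live state before touching it.
mkdir -p "$(dirname "$BACKUP")"
cp "$STORAGE" "$BACKUP"

# Write to a temp file and move it into place rather than redirecting
# jq's output onto a file it is still reading.
jq 'del(.windowsState.openedWindows[1,2,5])' "$BACKUP" > "$STORAGE.tmp" &&
  mv "$STORAGE.tmp" "$STORAGE"
```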

How to recover files that were moved to a single file?

I tried to move multiple files into a folder, but there was a mistake in my MATLAB code: I never created the folder. Now all the files have been moved onto a single file, which cannot be opened or edited. How can I recover these files?
Example of the mistake:
a=strcat('C:\Users\foldername'); % name and directory of the folder
fname=a;
% mkdir(fname); % so this command wasn't executed...
movefile('file1',fname);
movefile('file2',fname);
So now file1 and file2 were merged in file 'fname', instead of in the folder named 'fname'. How to get file1 and file2 back?
Thanks in advance!
Unfortunately, the odds may be stacked against you getting back any of the files except the last one. This is because movefile doesn't append to an existing destination file; it overwrites it. The following will give you back your last file (by simply renaming fname):
movefile(fname, 'file2');
If you're lucky, your operating system will have options for restoring previous versions of your files/folders. Your best bet may be to check whether the folder containing your original files has any previous versions you can open or restore to get back earlier copies of 'file1' and 'file2'. For example, on my Windows machine I can right-click on my default MATLAB folder, select "Properties", then select the "Previous Versions" tab. (Screenshot of the Previous Versions list omitted.)
There are a few versions I could open and copy files from if I'd inadvertently deleted or overwritten anything recently. Good luck!

Open multiple subfolders within a loop

I have a folder named "Photos" that is a subfolder of the current directory. Inside that folder, there are four subfolders with names "Order1", "Order2",
"Order3", "Order4". I am trying to open these subfolders using a loop.
The following code is not working.
for i=1:4
current_path=pwd;
cd(current_path');
cd('Photos\Order%d',i);
end
There are a lot of issues going on here at the same time.
1. The primary issue is that you are changing directories each time through the loop, but you're also getting the value of the current directory (pwd) each time. The directory doesn't automatically reset to where you were when execution goes back to the top of the loop. I think you expect current_path to be the folder you started in and to be the same for all iterations.
2. You need to use sprintf or something similar to create your "OrderN" folder names. cd doesn't know what to do with the format specifier you're trying to use.
3. You should always use fullfile when concatenating file paths. Period.
4. You should use absolute paths when possible to remove the dependence upon the current directory.
5. Do you really need to change the working directory? If you're trying to load files within these folders, please consider using absolute file paths to the files themselves rather than changing folders.
6. If you are going to do it this way, please be sure to reset the path back to where it was at the end of the loop. There is nothing worse than running code and ending up in a directory different from the one you were in when you called it.
To actually make your code work, we could do something like this. But given all of my points above (specifically, 4-5), I would strongly consider a different approach.
startpath = pwd;
for k = 1:4
    folder = fullfile(startpath, 'Photos', sprintf('Order%d', k));
    cd(folder)
end
% Set the current directory to what it was before we started
cd(startpath)

how to use perl Archive::Zip to recursively walk archive files?

I have a small perl script that I use to search archives for members matching a name. I'd like to enhance this so that if it finds any members in the archive that are also archives (zip, jar, etc) it will then recursively scan those, looking for the original desired pattern.
I've looked through the "Archive::Zip" documentation, and I thought I saw how to do this. I noticed the "fh()" and "readFromFileHandle()" methods. However, in my testing, it appears that the "fh()" call on an archive member returns the file handle for the containing archive, not the member. Perhaps I'm doing it wrong, but I would appreciate an example of how to do this.
You can't read the contents of any sort of archive member (whether it is text, picture, or another archive) without extracting it from the archive file.
Once you have identified a member that you want to view, you must call extractMember (or, more likely, extractMemberWithoutPaths if the file is to be temporary) to extract it to a disk file. Then you can create a new Archive::Zip object and read the new file while keeping the old one open.
You will presumably want to unlink the extracted temporary file once you have catalogued its contents.
Edit
I hadn't come across the Archive::Zip::MemberRead module before. It appears you were on the right track with readFromFileHandle. I would guess that it should work like this, but it would be awkward for me to test it at present.
my $zip = Archive::Zip->new;
$zip->read('myfile.zip');

# Open an inner archive member as a read handle
my $zipfh = Archive::Zip::MemberRead->new($zip, 'archive/path/to/member.zip');

my $newzip = Archive::Zip->new;
$newzip->readFromFileHandle($zipfh);

TFS - files ending up in wrong folder

I'm investigating this for someone else but I hope this explanation is correct:
We have a lot of files and a lot of folders in TFS source control, but two of them are these (made up names):
$/Root/Shared/...
$/Root/Solutions/...
5 files from the folder $/Root/Shared/Client/Main are now checked in, and when looking at the changeset they all say 'edit' in the change field.
But, when looking at the paths, 3 of them are checked into $/Root/Solutions/Client/Main instead of $/Root/Shared/Client/Main. The last two are at the expected location.
And it gets worse: there is no, and is not supposed to be, a $/Root/Shared/Client/Main folder. When browsing source control, this location does not exist. And the files are not at their original locations either; they are just gone, except for when viewing them in the changeset.
What could have happened here?? I do have the code, since I can see them in the changeset, but I don't want to lose the history by just creating them again and copying in the code.
Reading things more carefully, it sounds like you're describing files that have been renamed / moved over time.
TFS considers namespace information just as important as file contents. If you ask for an old version of $/Root, you don't just get the old version of those files, you get the old file & folder structure too, preserved exactly the way it was at that time. This design permeates the system, including the View Changeset dialog as you've seen.
The remaining question seems to be, where have my files gone? Quickest way to find out is to use a cmdlet from the Power Tools:
Get-TfsItemHistory -all .\rentest2\grand2\child2\parent\foo3.cs |
    Select-TfsItem |
    Format-Table -auto path, @{label="version"; expression={$_.versions[0].changesetid}}
Path version
---- -------
$/Test-ConchangoV2/rentest2/grand2/child2/parent/foo3.cs 10725
$/Test-ConchangoV2/rentest/grand2/child2/parent/foo3.cs 10148
$/Test-ConchangoV2/rentest/grand2/parent/foo3.cs 10147
$/Test-ConchangoV2/rentest/grand2/child2/foo3.cs 10146
$/Test-ConchangoV2/rentest/grand2/child/foo2.cs 10145
$/Test-ConchangoV2/rentest/grand2/parent/child/foo2.cs 10144
$/Test-ConchangoV2/rentest/grand/parent/child/foo2.cs 10143
$/Test-ConchangoV2/rentest/grand/parent/child/foo.cs 10142
I have a little GUI tool that does this and more but haven't had time to get it to a publicly usable state, unfortunately.
My guess is you have some screwy workspace mappings. Check for yourself in File -> Source Control -> Workspaces.
For best results (read: least amount of lost sleep on issues like this), create one single mapping at the highest level you need. If that's too broad, make this root mapping a "one level" mapping (using the * syntax; see docs) and then create additional recursive mappings with the same relative paths underneath it as needed.
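As a sketch, that mapping scheme could be set up with tf workfold along these lines. The workspace name and local paths here are made up for illustration:

```shell
# One-level mapping at the root: the trailing * maps only the items
# directly under $/Root, not its subfolders recursively.
tf workfold /map "$/Root/*" "C:\src\Root" /workspace:MyWorkspace

# Then explicit recursive mappings, with the same relative paths,
# for just the subtrees you actually need.
tf workfold /map "$/Root/Shared" "C:\src\Root\Shared" /workspace:MyWorkspace
tf workfold /map "$/Root/Solutions" "C:\src\Root\Solutions" /workspace:MyWorkspace
```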