Can I perform a find/replace driven by a CSV/Excel file?

I have to perform a find/replace across my project's files using a rename rule-set which I have in CSV format.
My rename CSV is simple and in the format "from value,to value":
foo,bar
car,dog
...
zip,zip
All from and to values are exact (so no need to do weird regex).
Is there any way (even w/ an extension) to feed this CSV into VS Code and have it perform the find and replace against all files in my project?
I can of course reformat this CSV to other formats (JSON, excel, etc.) fairly easily if that helps.

You could write a simple Python script to do the replacing for you.
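For example, here is a minimal sketch, assuming the rules live in a headerless renames.csv, the project files are UTF-8 text, and plain literal (non-regex) replacement is what you want; adjust the extension filter to match your project:

import csv
from pathlib import Path

# Load the from,to pairs from the rule-set.
with open("renames.csv", newline="", encoding="utf-8") as f:
    rules = [(row[0], row[1]) for row in csv.reader(f) if row]

# Apply every rule to every matching file under the project root.
for path in Path(".").rglob("*.txt"):
    if not path.is_file():
        continue
    original = path.read_text(encoding="utf-8")
    text = original
    for old, new in rules:
        text = text.replace(old, new)
    if text != original:
        path.write_text(text, encoding="utf-8")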

I ended up using the Batch Replace extension for VS Code:
https://marketplace.visualstudio.com/items?itemName=angelomollame.batch-replacer
Originally I had tried this extension, but it wasn't working. Then I had an aha moment about why (I have about 500 replace rules): I also use a local-history VS Code extension, which creates a (massive) local history in a .history folder in the workspace. Batch Replace was choking on the tens of thousands of files in there, since technically they are part of my workspace.
Once I excluded that folder, it worked, though it took about a minute to process all my files, and during that time there is no indication that it is running.

Related

How to create a script that uses a path list as a reference for copying files, in PowerShell or a .bat script

I'm looking for a way to automate archiving: after I plug in my two external drives, I want to copy all my resources. The problem is that I have different file structures on my laptop and on both external drives, so I need to select specific folders to be copied. That means I can't just select one root folder and copy it straight over. I tried to find a way to declare more than one path in the cp command and in the copy command, without success. An example layout:
/my_programming_stuff
/folder1
/folder2
/folder3
/folder4
I want to select only the first three folders and copy them to external drive 1 and external drive 2. The idea is to create a .bat file that copies everything at once (in the best-case scenario it would copy to both external drives simultaneously, which would be much faster). Another problem is that the script needs to bypass the NTFS long-path limitation (max. 260 characters).
Flags that I want to use:
Copy the files and directories and all of their attributes, including ownerships and permissions.
Recursively copy directories and their contents.
When copying files from one directory to another, only copy files that either don't exist, or are newer than the corresponding existing files, in the destination directory.
Data verification (so it's certain that the copy is correct).
A progress bar with an ETA.
Until now I have been using Total Commander for this, but having to pick out a few folders by hand every day takes time and is inefficient.
I have experience with Bash and PowerShell, but I am not sure how to approach this.
Create a static batch file with robocopy commands. I think /COPYALL is the only switch you need to specify for all of this; the other defaults should satisfy your requirements.
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
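A sketch of what that batch file could look like (drive letters and folder names are placeholders; /E recurses into subfolders, /COPYALL copies all file info including ownership and permissions, /XO skips files older than the copy already in the destination, /ETA shows progress with estimated times; note that robocopy has no built-in post-copy hash verification, so that one requirement would need a separate checksum pass):

@echo off
rem Placeholder paths -- adjust to your real folders and drive letters.
rem "start" launches each robocopy in its own window, so both drives copy simultaneously.
for %%F in (folder1 folder2 folder3) do (
    start "" robocopy "C:\my_programming_stuff\%%F" "E:\%%F" /E /COPYALL /XO /ETA
    start "" robocopy "C:\my_programming_stuff\%%F" "F:\%%F" /E /COPYALL /XO /ETA
)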
I think your time will be better spent learning how to use either FastCopy or FreeFileSync. I used FreeFileSync some years ago but got fed up with the constantly changing format of the XML file it uses for starting a backup, so I switched to FastCopy. But it looks like FreeFileSync may be getting its act together, and I aim to run some experiments over the summer to see whether I want to switch back.
Both can handle the long-filename issues, both can be executed from a batch file, and both seem solidly built, but FreeFileSync has more features (and is more bloated because of them). Speed-wise, though, I think FastCopy is one of the better products out there, and very streamlined in use and design.

Can I configure Jupyter Notebook to split source files and generated files?

I really like Jupyter Notebooks.
However, working with them is cumbersome in conjunction with a source control system like git, because an .ipynb file contains both the source code (what you actually write in the notebook) and the generated output text / HTML / images / metadata / ...
Merge conflicts, for example, are difficult to resolve, because everything is stored in one huge file full of generated data.
I wonder if I can configure Jupyter to store notebooks as
A source file: For example, I imagine this to be a Markdown file where everything surrounded by three backticks (```) is interpreted as a code cell. Diffs of that file would be meaningful and merge conflicts would be simple to resolve manually.
A generated file: This contains everything else. If there is a merge conflict within this file, it can be resolved by regenerating it.
Is this possible?
For reference: There is a slightly more general version of this question which lists various efforts at adapting IPython and Jupyter to this effect, and this answer proposes to solve the problem via Git. There is a Github project with a Git filter based on that answer, and (in its edit at the end) the answer links a few similar tools like nbstripout.
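As a small illustration of the stripping approach those tools take, here is a sketch using the nbformat library; it clears everything "generated" from the code cells, which is exactly the half that can be regenerated after a merge conflict (the filenames are placeholders):

import nbformat

# Load the notebook and drop everything that is generated rather than written.
nb = nbformat.read("notebook.ipynb", as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []           # generated text / HTML / images
        cell.execution_count = None
nbformat.write(nb, "notebook.stripped.ipynb")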

Use MATLAB to open a file with an outside program and execute 'Save As'

Alright, here's what I'm dealing with (you can skip to TLDR if all you need to see is what I want to run):
I'm having an issue with file formatting for a nasty conglomeration of several ancient programs I've strung together. I have some data in .CSV format, and I need to put it into .SPC format. I've tried a set of proprietary MATLAB programs called 'GS tools' for fast and easy conversion, but fast and easy doesn't look like it's going to happen here, since there are discrepancies between how .spc files are organized now and how they were organized back when my ancient programs were written.
If I could find the source code for the old programs I could probably alter the GS tools code to write my .spc files appropriately, but all I can find are broken links circa 2002 and earlier. Seeing as I don't know what my programs are looking for, I have no choice but to try resaving my data with other programs until one of them produces something workable.
I found my Cinderella program: if I open my data in a program called Spekwin and save the file with a .spc extension... voilà! Everything else runs on those files. The problem is that I have hundreds of these files, and I'd like to automate the conversion process.
I either need to extract the writing rubric Spekwin uses for .spc files (I believe that info is stored in a dll file within the program, but I'm not sure if that actually makes sense) and use it as a rule to write a file from my input data, or I need a piece of code that will open a file with Spekwin, tell Spekwin to save that file under the .spc extension, and terminate Spekwin.
TLDR: Need a command that tells the computer to open a file with a certain program, save that file under a different extension through that program (essentially open*.csv>save as>*.spc), then terminate the program.
Or: I need a way to tell MATLAB to write a file according to rules specified by a .dll, but I'm not sure I fully understand what that entails.
Of course I'm open to suggestions on other ways to handle this.
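For what it's worth, here is the shape of what I'm imagining on the MATLAB side, if Spekwin turned out to expose some command-line save switch (the /saveas flag below is purely hypothetical; I have no idea whether Spekwin actually has one):

% Hypothetical: depends entirely on Spekwin having a CLI conversion switch.
files = dir('*.csv');
for k = 1:numel(files)
    cmd = sprintf('spekwin.exe "%s" /saveas=spc', files(k).name);
    system(cmd);   % open in Spekwin, save as .spc, terminate
end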

Automating Localizable.strings?

So, in my project I have 10 languages, and 10 Localizable.strings files.
I just created the Localizable.strings files, one file per language. Right now they contain "key" = "value" pairs in which both keys and values are in English (the default language).
My translations are all done and live in Excel files.
The question is: how can I insert all my languages into those files faster than copying each word manually or writing a script for it?
Maybe there is an existing tool for this already?
Thanks.
I found an easy way to compose Localizable.strings files from Excel documents.
In the Excel document, I insert the " " = " " characters in dedicated columns around the keys and values. That is easy to do for all the rows by dragging a cell down from its corner, which copies the cell's contents to every cell you drag it over.
The document then contains the same symbols and words that Localizable.strings does.
Then I just copy everything into a text file, remove the tabs, and change the extension to .strings.
(No comments are preserved, unfortunately.)
EDIT:
You can copy the cells from Excel into Sublime Text, then Find & Replace any tabs, and copy the result into the proper Xcode .strings file.
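If you'd rather script the whole thing than drag cells around, here is a sketch using the openpyxl library; the column layout and filenames are assumptions (keys in column A, translated values in column B):

from openpyxl import load_workbook

# Assumes keys in column A and the translated values in column B.
wb = load_workbook("translations.xlsx")
ws = wb.active
with open("Localizable.strings", "w", encoding="utf-8") as out:
    for key, value in ws.iter_rows(max_col=2, values_only=True):
        if key is None or value is None:
            continue
        out.write('"%s" = "%s";\n' % (key, value))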
One application that can really save you a lot of time by automating and streamlining the localization procedure is Localization Suite. I do not know whether it supports importing from Excel (to save you time transferring your string pairs), but it's free and seems like a complete solution.
I had an internal script at work for doing that task on iOS and Android, and I've just open-sourced it as a gem. You can take a look at it here: http://github.com/mrmans0n/localio
It can open spreadsheets from Google Drive as well as local Excel files, as requested.
You would just have to install the gem:
gem install localio
and have a custom DSL file called Locfile in your project directory, with the info about your project and its localization files. In your case, where an Excel file is used, it could be as simple as:
platform :ios
source :xls, :path => 'YourExcelFileGoesInHere.xls'
output_path 'Resources/Localizables/'
The .xls file should follow a certain format, which is probably very similar to what you have right now. You just have to clone the contents of this one and fill it with your translations: https://docs.google.com/spreadsheet/ccc?key=0AmX_w4-5HkOgdFFoZ19iSUlRSERnQTJ4NVZiblo2UXc
Hope this helps.
Here are the steps I followed:
Change the extension of the .strings file to .txt (on Windows).
Open Excel and go to File > Open.
Choose the file to open; this should present an import wizard.
Follow the steps and specify = as the delimiting character.
You're done.
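The import can also be scripted; here is a sketch that pulls the pairs out with a regular expression and writes a CSV that Excel opens directly (filenames are placeholders; it ignores comments and any multi-line values):

import csv
import re

pair = re.compile(r'^\s*"(.+?)"\s*=\s*"(.*)";\s*$')

with open("Localizable.strings", encoding="utf-8") as src, \
     open("strings.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for line in src:
        match = pair.match(line)
        if match:
            writer.writerow([match.group(1), match.group(2)])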

Pipe multiple files into a zip file

I have several files in a GridFS document store, and what I'd like to do is pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all of them.
My question is: how can I give the files valid filenames inside the zip file? I think I need to emulate/fake a local file header containing the filename.
Any help is appreciated!
Thanks
I had problems when writing zip files with Node.js not long ago. I ended up doing something similar to what is described in Zip archives in node.js
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even though the description says it allows you to create zip files, the moment I read the source code (the documentation is nonexistent) I realized that isn't true: it only exposes methods for reading.
Using zip by spawning a process, as recommended in the linked answer, seems to be the best way. Something that would work is copying the files to a local folder with whatever names you want, calling the zip command, and then deleting the files afterwards (see the sketch below).
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, although it's better to just use npm). The reason I'm reluctant to use it is that there isn't much flexibility: it seems to have been written in a day and hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
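For what it's worth, a minimal sketch of that spawn-zip approach, assuming you have already read the GridFS files into memory and that the zip command is on the PATH (the entry names and contents here are made up):

const fs = require('fs');
const os = require('os');
const path = require('path');
const { execFile } = require('child_process');

// Hypothetical input: archive entry names mapped to contents read from GridFS.
const files = { 'first.txt': 'hello', 'second.txt': 'world' };

// Write each file under the name it should carry inside the archive...
const tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'zip-'));
for (const name of Object.keys(files)) {
  fs.writeFileSync(path.join(tmp, name), files[name]);
}

// ...let zip pick those names up, then clean up the staging folder.
execFile('zip', ['-r', path.join(process.cwd(), 'bundle.zip'), '.'], { cwd: tmp }, (err) => {
  if (err) throw err;
  fs.rmSync(tmp, { recursive: true, force: true });
});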
I swear, the day I have an entire free weekend with no work, I will write a freaking module that does this as completely as possible. It's silly that one doesn't exist; it shouldn't be this much of a struggle. End of rant.
Edit:
Not sure if it was there before, but I've now been using the node-compress module (together with gzippo). It works fine.