Pipe multiple files into a zip file - mongodb

I have several files in a GridFS document store, and I'd like to pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all of those files.
Now my question is: how can I give each file a valid filename inside the zip file? I assume I need to emulate/fake a file header containing the filename?
Any help is appreciated!
Thanks

I had problems when writing zip files with Node.js not long ago. I ended up doing something similar to what is described in Zip archives in node.js.
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even though its description says it can create zip files, the moment I read the source code (since documentation is nonexistent) I realized that's just not true: it only exposes methods for reading.
Using zip by spawning a process, as recommended in the link above, seems to be the best way. One approach that would work is copying the files to a local folder with whatever names you want, calling the zip command, and then deleting the files afterwards; a sketch of this follows below.
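As a minimal sketch of that approach, assuming the official mongodb driver's GridFSBucket API and a Unix-like system with zip on the PATH (connection string, database name, and paths are placeholders):

const { MongoClient, GridFSBucket } = require('mongodb');
const { spawn } = require('child_process');
const fs = require('fs');
const path = require('path');

async function zipGridFsFiles(names, outZip) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('mydb')); // placeholder db name
  const tmpDir = fs.mkdtempSync('gridfs-zip-');
  // Copy each GridFS file to disk under the filename it should have in the zip.
  for (const name of names) {
    await new Promise((resolve, reject) => {
      bucket.openDownloadStreamByName(name)
        .pipe(fs.createWriteStream(path.join(tmpDir, name)))
        .on('finish', resolve)
        .on('error', reject);
    });
  }
  // Let the system zip binary do the archiving (-j strips the temp dir paths).
  await new Promise((resolve, reject) => {
    const zip = spawn('zip', ['-j', outZip, ...names.map(n => path.join(tmpDir, n))]);
    zip.on('exit', code => code === 0 ? resolve() : reject(new Error('zip exited ' + code)));
  });
  // Delete the files afterwards, as described above.
  fs.rmSync(tmpDir, { recursive: true, force: true });
  await client.close();
}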
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, though you're better off installing it via npm). The reason I'm reluctant to use it is that it isn't very flexible: it seems to have been written in a day and hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
I swear, the day I have an entire free weekend with no work I will write a freaking module that does this as completely as possible. It's silly that there isn't one; it shouldn't be this much of a struggle. blablablarant.
Edit:
Not sure if it existed back then, but I've since been using the node-compress module (along with gzippo). It works fine.

Related

how to move file after reading the file in ibm datastage

I have one folder containing four files: sales_jan, sales_feb, debt_jan, and debt_feb. I created a specific job for each of sales and debt. The thing is, if I have already run the job for sales_jan and then sales_feb arrives afterwards, I don't want to read sales_jan again; I only want to read the newest file that hasn't been processed yet. To read the files I pass a pattern for the specific file type (e.g. sales_*), but with that pattern the stage will reprocess sales_jan even though it has already been read. I want to move files that have already been read into another folder. How exactly do I do that in IBM DataStage? If there's no way to do it, what would you suggest for my problem? Any ideas would be appreciated.
The easiest solution is to use an after-job subroutine (ExecSH on Linux/UNIX, ExecDOS on Windows) to move the file to a different location.
Since you're using wildcards for the Sequential File stage, you're going to have to be a bit more clever in handling a situation where your job processes only some of the files. I would prefer to write this using a loop in a sequence, processing one file at a time, so that the move can be handled per-file.
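For example, the ExecSH command in the after-job subroutine could be as simple as this (the paths are placeholders for your landing and archive folders):

mv /data/landing/sales_jan /data/processed/

or, when looping per-file in a sequence, move each file right after the job that read it completes.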
You might keep a flag for every file your job has already read. For example, record a max-date for each file: when an existing file's max date is less than that of a new file, read only the latest file. This can be done with a simple Linux command in a sequence or with a Transformer stage, just like Ray mentioned before.

Opening .py files with micropython on TI Nspire

I uploaded Fabian Vogt's micropython port to my TI Nspire CX CAS, together with a couple of *.py.tns files to try. I can't find a way to load/launch those files.
As this MicroPython port does not include the os module, I can't use os.chdir to change the current directory and load the *.py files from the Python shell. From the shell I tried open("documents/mydirectory/myfile") with the extensions .py and .py.tns, without success.
I don't think the Nspire has anything like a terminal command line either.
Thanks for your help,
There are 2 ways that you could do this, one easy way and one tedious way.
1. Map .py to micropython in your ndless.cfg
(ndless.cfg should be at /documents/ndless/ndless.cfg)
Like so:
ext.xxx=program-name
ext.xxx=program-name
ext.txt=nTxt
ext.py=micropython
ext.xxx=program-name
ext.xxx=program-name
You can edit this file either by copying it back and forth from your computer using TiLP or the official software, or by editing it on-calc using nTxt. (This requires a bit of fiddling: copy ndless.cfg to something like ndless.txt first, so that a mapping still exists to open the copy.)
Ndless should come with a standard ndless.cfg containing basic bindings for nTxt and a few popular emulators. If you don't have one, get the standard one here. It will scan all directories (at least /documents/*, AFAIK) for programs. I've found that removing lines related to programs not on your Nspire will decrease load time.
2. Proper way to run a file in Python
To run a file in Python, you should do something like this:
with open("/documents/helloworld.py.tns", "r") as file:
    exec(file.read())
This will properly close the file after executing, which I've noticed is quite important on the Nspire, as leaving files open has given me trouble before. Of course, if you'd like, you can do exec(open("...","r").read()) and then handle closing the file yourself, but be warned: bad things can happen if you forget.
Also, you must remember to add the leading / and the .tns extension, or else strange things will happen, especially with writing to files.
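If you run files often, a tiny helper (hypothetical, but it only uses plain Python) keeps those path rules in one place:

def run(path):
    # Normalize to what the Nspire expects: absolute path plus .tns suffix.
    if not path.startswith("/"):
        path = "/" + path
    if not path.endswith(".tns"):
        path = path + ".tns"
    with open(path, "r") as f:
        exec(f.read())

run("documents/mydirectory/helloworld.py")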
That's about it! Feel free to ask more questions if needed, I'll be watching the ti-nspire tag.
(Just realized this question is quite old, but I guess it still might be helpful for others who end up on empty questions months later while trying to figure something out :P)

use matlab to open a file with an outside program and execute 'save as'

Alright, here's what I'm dealing with (you can skip to TLDR if all you need to see is what I want to run):
I'm having an issue with file formatting for a nasty conglomeration of several ancient programs I've strung together. I have some data in .CSV format, and I need to put it into .SPC format. I've tried a set of proprietary MATLAB programs called 'GS tools' for fast and easy conversion, but fast and easy doesn't look like it's going to happen here, since there are discrepancies between how .spc files are organized now and how they were organized back when my ancient programs were written.
If I could find the source code for the old programs I could probably alter the GS tools code to write my .spc files appropriately, but all I can find are broken links circa 2002 and earlier. Seeing as I don't know what my programs are looking for, I have no choice but to try resaving my data with other programs until one of them produces something workable.
I found my Cinderella program: if I open my data in a program called Spekwin and save the file with a .spc extension... voilà! Everything else runs on those files. The problem is that I have hundreds of these files, and I'd like to automate the conversion process.
I either need to extract the writing rubric Spekwin uses for .spc files (I believe that info is stored in a DLL file within the program, but I'm not sure that actually makes sense) and use it as a rule to write a file from my input data, or I need a piece of code that will open a file with Spekwin, tell Spekwin to save that file under the .spc extension, and terminate Spekwin.
TLDR: Need a command that tells the computer to open a file with a certain program, save that file under a different extension through that program (essentially open *.csv > save as > *.spc), then terminate the program.
Or: I need a way to tell MATLAB to write a file according to rules specified by a .dll, but I'm not sure I fully understand what that entails.
Of course I'm open to suggestions on other ways to handle this.
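For what it's worth, if Spekwin turns out to have a command-line conversion switch, the MATLAB side is just a loop around system(); the spekwin.exe flags below are hypothetical placeholders, not its real options:

files = dir('*.csv');
for k = 1:numel(files)
    infile  = files(k).name;
    outfile = [infile(1:end-4) '.spc'];
    % Hypothetical syntax; substitute Spekwin's actual command-line options.
    status = system(sprintf('spekwin.exe "%s" --saveas "%s"', infile, outfile));
    if status ~= 0
        warning('Conversion failed for %s', infile);
    end
end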

Use Powershell to copy Long Filename files

Unfortunately for me, we have a legacy file system which has been around since the dinosaurs roamed. Over time, files have been moved/copied to a location which has generated a folder + file structure too long to simply copy from one location to another (long file names).
What I see as a solution is to use powershell to map a drive to each file location, copy it then recurse for each.
e.g.
net use w: \\newlocation\
net use x: "\\silly long file path\folder1\"
xcopy x:\file.txt w:\folder1
then use a foreach loop to recreate the same structure as the original.
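In PowerShell itself, that drive-mapping idea looks roughly like this (server and share names are placeholders):

# Map short drive letters over the deep paths, copy, then clean up.
New-PSDrive -Name W -PSProvider FileSystem -Root '\\server\newlocation' | Out-Null
New-PSDrive -Name X -PSProvider FileSystem -Root '\\server\silly long file path\folder1' | Out-Null
Copy-Item -Path 'X:\file.txt' -Destination 'W:\folder1\'
Remove-PSDrive -Name W, X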
I know it seems long-winded, but I cannot think of a better way forward. I've tried Robocopy, which is supposed to avoid this problem, but after much testing... nope, it still fails to copy.
Can you please suggest a good method for doing this?
Not sure if you are still looking for an answer but you can try the program called Beyond Compare which should work for you.
Honestly, I didn't spend too much time with it, as it has a bunch of options, but I know people who successfully use it every day for files with long file names.

How to replace a file inside a zip on iOS?

I need to replace a file inside a zip using iOS. I've tried many libraries with no results. The only one that kind of did the trick was zipzap (https://github.com/pixelglow/zipzap), but it's no good for me, because what it really does is re-zip the file with the change; besides that process being too slow for me, it also loads the whole file into memory and makes my application crash.
PS: If this is not possible or way too complicated, I can settle for renaming or deleting a specific file.
You need to find a framework that lets you modify how data is read and written. You would then use some form of mmap to read and write small chunks. Searching on NSData and mmap turned up this post; however, you can use mmap at the POSIX level too. P.S.: it will be slower than using pure memory, no way around that.
Got it WORKING!! JXZip (https://github.com/JanX2/JXZip) does exactly what I need. It links to libzip (http://www.nih.at/libzip/), a fully equipped library for working with ZIP files, and JXZip provides all the necessary Objective-C wrapper code. Thanks for all the replies.
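For reference, the underlying libzip call sequence for replacing a single entry looks roughly like this (plain C, callable from Objective-C; error handling trimmed, paths are placeholders):

#include <zip.h>

int replace_entry(const char *zipPath, const char *entryName, const char *newFilePath) {
    int err = 0;
    zip_t *archive = zip_open(zipPath, 0, &err);
    if (!archive) return -1;
    // Find the existing entry and stage the replacement from a file on disk.
    zip_int64_t index = zip_name_locate(archive, entryName, 0);
    zip_source_t *src = zip_source_file(archive, newFilePath, 0, -1);
    if (index < 0 || !src || zip_file_replace(archive, (zip_uint64_t)index, src, 0) < 0) {
        if (src) zip_source_free(src);
        zip_discard(archive);   // abandon changes
        return -1;
    }
    return zip_close(archive);  // changes are written out here
}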
For archival purposes, as the author of zipzap:
Actually zipzap does exactly what you want. If you replace an entry within a zip file, zipzap will do the minimum necessary to update it: it will skip writing all entries before the replaced entry, then write out the entry, then write out all entries after the replaced entry without recompressing. At the moment, it does require sufficient memory for the entries after the replaced entry though.