Mail reader that uses gzipped archives from HTTP instead of an NNTP server - mailing-list

Is there a mail client that can be configured to use gzipped archives of mailing lists directly from wherever the mailing list is hosted, rather than from a central NNTP server?
NNTP is either not free or slow, in my experience.

I guess I can open the unzipped .gz file as a text file in Emacs,
turn on rmail-mode,
run M-x undigestify-rmail-message,
and Rmail is ready to go...
Now I just need to write a download (wget or DownThemAll) and
unzip script, followed by
concatenating the mail files:
gzip -d *.gz ; cat *.txt > allinone.txt
Then I can view it in Emacs as above, or move it into a Thunderbird local directory for easy viewing/searching.
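For what it's worth, a rough sketch of that download-and-concatenate step (the Pipermail-style URL and the month names are just assumptions; adjust for wherever the list is actually hosted):

#!/bin/sh
# Hypothetical list archive location (Pipermail-style monthly .txt.gz files).
BASE=https://lists.example.org/pipermail/some-list
for month in 2023-January 2023-February 2023-March; do
    wget -nc "$BASE/$month.txt.gz"   # -nc: skip files already downloaded
done
gzip -d *.txt.gz                     # unpack to *.txt
cat *.txt > allinone.txt             # one big file for Emacs or Thunderbird
# In Emacs: open allinone.txt, turn on rmail-mode, then M-x undigestify-rmail-message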

Related

Using p4 zip and unzip to export files from one Perforce server to another

I was trying to export files, along with their revision history, inside my depot folder from a 2015.2 Perforce server to a 2019 one. I would also want Perforce to create a new user on my new server corresponding to each committer/submitter on my original 2015 repo.
Perforce replicate looked like overkill for my current task, and then I came across an article on Perforce's website that mentioned p4 zip.
It looked like it would solve my problem, but there are a few things in the article I could not understand.
Let's say I am moving data from server1_ip:port --> server2_ip:port
I am currently following these steps:
Making a zip of the folder to be copied, using p4 remote my_remote_spec and setting
Address: server1_ip:port
DepotMap: //depot/... //depot2/...
Then running p4 -p server1_ip:port zip -o test.zip -r my_remote_spec -A //depot/... But on this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
Also, when I did try with a super user, I could not find test.zip, even though I was not shown any errors.
Isn't the above command supposed to generate a zip file inside the directory I run it from?
Is the unzip command supposed to be run after a p4 login as a user of the second server?
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
On this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
This is expected:
C:\Perforce\test>p4 help zip
zip -- Package a set of files and their history for use by p4 unzip
...
The zip command requires super permission granted by p4 protect.
Isn't the above command supposed to generate a zip file inside the directory I run it from?
Similar to p4 admin checkpoint, the zip file is written to the server machine (relative to the server root, if you don't specify an absolute path), rather than being transferred to the local client directory. This is not explicitly stated in the documentation (which seems like an oversight), but if you look in the root directory of the server where you ran the zip, you should find your test.zip there.
Is the unzip command supposed to be run after a p4 login as a user of the second server?
Yes, any time you run a command against a particular server, you will need to be logged in to that server. In the case of p4 unzip you will need at least admin permission on the second server.
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
I'm pretty sure that's a typo; whoever wrote the article started off using ports 1666 and 1777, changed their mind halfway through, and didn't proofread. :)
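Putting those answers together, the overall flow would look roughly like this (a sketch only; the user names are placeholders, and the exact unzip flags are worth checking against p4 help unzip on your server release):

# As a user with super permission on server1; the zip is written on server1's
# machine, relative to its server root unless you give an absolute path.
p4 -p server1_ip:port -u super_user login
p4 -p server1_ip:port -u super_user zip -o test.zip -r my_remote_spec -A //depot/...

# Copy test.zip from server1's root to a machine that can reach server2, then,
# logged in as a user with at least admin permission on server2:
p4 -p server2_ip:port -u admin_user login
p4 -p server2_ip:port -u admin_user unzip -i test.zip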

How to clone/copy files whose names contain reserved characters from a server to local storage using Wget?

I cannot download some files from a server at my work because the file names contain reserved characters (an error not controlled by the company, caused by clients naming the attachments they upload badly), and for some reason I get a 404 error even though the files exist on the server. By the way, I use wget for this task.
This is the command line that starts the download (list.txt contains one URL per line pointing from the server to the file in question, for example: https://example.com/files/122301/8+.pdf):
wget.exe -x -i "C:\clon\list.txt" -P "C:\clon\destino" -nv -o "C:\clon\log.txt"
I do not know much about wget's parameters beyond the source/destination paths and the log, but some files contain '}' or '+' in their names, and therefore (I think) those missing files are not downloaded (I have downloaded 93% of all the files).
Examples of files including these characters:
/FC04-6198}+.pdf
/8+.pdf
/PT05+2236.pdf
I tried adding the parameters --content-disposition and --restrict-file-names, but nothing changed.
I am hoping for a way to handle the reserved characters so that I can download these files.
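One thing that may be worth trying (just a sketch, assuming the 404s really do come from the literal '+' and '}' in the paths): percent-encode those characters in a copy of list.txt before handing it to wget, since '}' is not a valid unencoded URL character and some servers mishandle a literal '+' in a path. For example, from a Unix-like shell such as Git Bash (sed here is just one way to do the substitution):

# '+' -> %2B and '}' -> %7D in every URL, then run the same wget job on the copy
sed -e 's/+/%2B/g' -e 's/}/%7D/g' list.txt > list_encoded.txt
wget -x -i list_encoded.txt -P destino -nv -o log.txt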

What am I screwing up trying to download particular file types with wget?

I am attempting to regularly archive a few file types hosted on a community website where our admin has been MIA for years, in case he dies or just stops paying for the hosting.
I am able to download all of the files I need using wget -r -np -nd -e robots=off -l 0 URL but this leaves me with about 60,000 extra files to waste time both downloading and deleting.
I am really only looking for files with the extensions "tbt" and "zip". When I add in -A tbt,zip to the input, wget then only downloads a single file, "index.html.tmp". It immediately deletes this file because it doesn't match the file type specified, and then the process stops entirely, with wget announcing that it is finished. It does not attempt to download any of the other files that it grabs when the -A flag is not included.
What am I doing wrong? Why does specifying file types in the way that I did cause it to finish after only looking at one file?
Possibly you're hitting the same problem I've hit when trying to do something similar. When using --accept, wget determines whether a link refers to a file or a directory based on whether or not it ends with a /.
For example, say I have a directory named files, and a web page that has:
<a href="/files">Lots o' files!</a>
If I were to request this with wget -r, then wget would happily GET /files, see that it was an HTML document containing a bunch of links, and continue to download those links.
However, if I add -A zip to my command line, and run wget with --debug, I see:
appending ‘http://localhost:8080/files’ to urlpos.
[...]
Deciding whether to enqueue "http://localhost:8080/files".
http://localhost:8080/files (files) does not match acc/rej rules.
Decided NOT to load it.
In other words, wget thinks this is a file (no trailing /) and it doesn't match our acceptance criteria, so it gets rejected.
If I modify the remote file so that it looks like...
<a href="/files/">Lots o' files!</a>
...then wget will follow the link and download files as desired.
I don't think there's a great solution to this problem if you need to use wget. As I mentioned in my comment, there are other tools available that may handle this situation more gracefully.
It's also possible you're experiencing a different issue; the output of adding --debug to your command line would clarify things in that case.
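For example, something like this (the URL and recursion flags are the ones from your question; sending the log to a file makes it easy to search for the "does not match acc/rej rules" lines):

wget -r -np -nd -e robots=off -l 0 -A tbt,zip --debug -o wget-debug.log URL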
I also experienced this issue, on a page where all the download links looked something like this: filedownload.ashx?name=file.mp3. The solution was to match both the linked file and the downloaded file, so my wget accept flag looked like this: -A 'ashx,mp3'. I also used the --trust-server-names flag. This catches all the .ashx files that are linked in the web page; then, when wget does the second check, all the .mp3 files that were downloaded will stay.
As an alternative to --trust-server-names, you may also find the --content-disposition flag helpful. Both flags help rename the file that gets downloaded from filedownload.ashx?name=file.mp3 to just file.mp3.
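For that kind of site, the recursive flags from the question combined with this answer's accept list would look something like the following (the URL is a placeholder, and the extension list of course depends on what the site actually links to and serves):

wget -r -np -nd -e robots=off -l 0 -A 'ashx,mp3' --trust-server-names https://example.com/downloads/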

How can I view output of tshark -V via Wireshark or similar?

Recently updated my Wireshark on a server, and lost the ability to use -R and -w from the CLI. Since I'm tracing SIP and RTP calls, I need to use -R and not -f.
I found that using -V is very useful (it shows the packet tree on screen), and then I can redirect the output to a file. Unfortunately, I'm not able to open that file in Wireshark to view it properly (it contains too much text to easily scroll through).
I tried using -x to add the hex dump (and removed -V), but that is still not openable in Wireshark after copying the text file to my PC.
Any ideas how I can trace using -R (with or without -V), copy the file to my PC and still be able to read it in Wireshark? I don't mind converting the file to a readable format... just need anything to view the files and share them :)
Thanks all,
//M
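One workflow that might help here (a sketch, assuming a reasonably recent tshark; the interface and file names are placeholders): capture everything to a pcap first, then apply the SIP/RTP display filter offline and write a second pcap, which opens in Wireshark like any other capture.

# capture raw packets first, with no display filter at capture time
tshark -i eth0 -w /tmp/all.pcap
# then filter offline; newer tshark uses -Y for display filters when reading,
# older releases use -R together with -2 (two-pass analysis)
tshark -r /tmp/all.pcap -Y "sip || rtp" -w /tmp/sip_rtp.pcap
# copy /tmp/sip_rtp.pcap to the PC and open it in Wireshark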

Zipped data getting lost when copying from FTP to Windows

I am trying to copy some zipped files from FTP to my local system (Windows). The transfer mode is the default mode (ASCII). The file is getting copied, and I am not getting any errors during the transfer.
The problem is that the size of the file on the FTP server and the size of the copy on my local system are different.
FTP_file_size -> 12,812,085
Copied_file_size->12,551
The two files should be the same.
Now I am not able to figure out what is going wrong with the transfer.
For the script I am using, please refer to:
Why am I getting "File not found" errors with this Perl script using Net::FTP?
You have to use the binary (type "I") mode to transfer. Otherwise the FTP client translates line-ending characters to the local convention (on Windows: CR-LF) which would corrupt the ZIP format.
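For completeness, a quick way to test the same fix from the command line (host, credentials and file name are placeholders; on Windows, the built-in ftp.exe takes the same commands from a file via -s:script.txt, and in the Net::FTP script from the linked question the equivalent is calling its binary() method before the get):

ftp -n ftp.example.com <<'EOF'
user myuser mypassword
binary
get archive.zip
bye
EOF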