Looking for failures of webhooks - ghe-webhook-logs does not accept the "after" time - GitHub

I was looking for a way to find out which webhook deliveries failed.
The question "Notification on failed GitHub WebHooks?" talks about the ghe-webhook-logs utility.
I tried it. However, the output of
ghe-webhook-logs -f -l 1000
ghe-webhook-logs -f -l 1000 -a 2022-06-28
ghe-webhook-logs -f -l 1000 -a '2022-06-28 04:50:17'
is identical.
It looks like it always takes time 00:00:00, regardless of whether I set the time or not.
In other words, it always gives me the first 1000 records from midnight of the given day, and I cannot figure out how to get the output from later webhooks.
Has anybody else encountered this problem?
Does anybody have an idea for a workaround?
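One possible workaround, untested and assuming each output line begins with an ISO-style "YYYY-MM-DD HH:MM:SS" delivery timestamp (I have not verified the exact output format of ghe-webhook-logs), is to over-fetch and post-filter on the admin shell:
# untested sketch: pull a large window for the day, then keep only lines at or after the wanted time
ghe-webhook-logs -f -l 10000 -a 2022-06-28 | awk '$0 >= "2022-06-28 04:50:17"'
The awk condition is a plain lexicographic string comparison, which works when ISO-ordered timestamps start the line.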

Related

Is there a way to tag a job in lsf when you create it, something you can search on later on?

Is there a way to tag a job in lsf with a user specified value... something I can search for later?
Let's say I create a job and I want to find it among all the other jobs I might have running. I don't know its job_id, I don't know its state, etc. But I do know that when I created it, I tagged it with a value that I was hoping I could search on. So in theory...
lsf bsub -q xyz -P abc -tag daves_job_mon_aug_31
You can name a job with -J:
bsub -q xyz -P abc -J daves_job_mon_aug_31 ...
which will show up in bjobs output under the JOB_NAME column. If the name is too long it'll get truncated in the output, but you can override that behavior using bjobs -w to show wide output.
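To find the job again later you should then be able to filter on that name with bjobs -J (wildcards should be accepted too, but treat the wildcard form as an assumption):
bjobs -J daves_job_mon_aug_31
bjobs -J "daves_*"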

There seems to be a bug with Tshark's -z conv,ip

I've tried a lot to list IP conversations in a .cap file with Tshark. I can do this easily with Wireshark -> Statistics -> Conversations -> "IPv4" label, so I guessed it would also be easy to do with Tshark:
tshark -n -r "d:\test\test.cap" -z conv,ip,"ip.len>50" -t ad
But after all the messages were printed, tshark crashed: "Tshark has stopped working."
Is there really a bug in tshark, or is it me?
You can use the option -q:
When reading a capture file, or when capturing and not saving to a file, don't print packet information; this is useful if you're using a -z option to calculate statistics and don't want the packet information printed, just the statistics.
tshark -r test2905a.pcap -q -z conv,ip,"ip.len>50"
See the man-page for more information.
I've found something! The problem is that I used the -t ad option:
ad absolute with date: The absolute date, displayed as YYYY-MM-DD, and time, as local time in your time zone, is the actual time and date
the packet was captured
When I change to the -t r option:
r relative: The relative time is the time elapsed between the first
packet and the current packet
tshark doesn't crash, but the relative time is a negative number, like "-6063.000000"!
So I guess -t ad is the culprit. However, when I use -z conv,tcp,[filter], Tshark doesn't crash at all.
So: -z conv,ip,[filter] + -t ad + negative time values = bug?!
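If the negative relative timestamps really are what triggers the crash, one untested workaround might be to shift the capture's timestamps with editcap before computing the statistics (the 7000-second offset below is only an illustration, chosen to be larger than the 6063 seconds seen above):
editcap -t 7000 "d:\test\test.cap" "d:\test\test_shifted.cap"
tshark -n -r "d:\test\test_shifted.cap" -q -z conv,ip,"ip.len>50" -t ad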

Defragmentation with TSHARK

I want to capture messages of Diameter protocol (over SCTP) by tshark on the screen, expanded.
First, I couldn't find what to write after switch '-f' to filter only diameter messages, but then I found the switch '-R' which accepted 'diameter'.
So, currently my command seems like:
tshark -i el0 -R diameter -V
This is all fine, at least as long as the packets are small enough.
However, for bigger packets, I get the error
[Unreassembled Packet: DIAMETER]
[Expert Info (Warn/Reassemble): Unreassembled Packet (Exception occurred)]
[Message: Unreassembled Packet (Exception occurred)],
and the packets are indeed not reassembled in the output.
I was googling for a solution and found that the modification below might do the defragmentation:
tshark -i el0 -R diameter -V -o ip.defragment:TRUE
But it just doesn't help.
Is there any simple solution for this problem? (It is also OK to do the defragmentation afterwards somehow.)
Finally I have found it!
In Wireshark there is a checkbox for several protocol-related options; in particular, for Diameter defragmentation you need to mark the checkbox
Reassemble fragmented SCTP user messages
to get the long Diameter messages properly displayed.
Each of these protocol options has a corresponding tshark parameter; here you have to use -o sctp.reassembly:TRUE.
(In general, look at the preferences file belonging to Wireshark.)
So, the method that finally worked is:
First, capture all (SCTP) messages regularly:
tshark -i EL0 -f sctp -w raw_capture.pcap
Then, when it is done, process the file with a further tshark command:
tshark -r raw_capture.pcap -R diameter -o sctp.reassembly:TRUE -V
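If you are not sure of the exact preference name on your tshark build (option keys can differ between versions), you can dump the known preferences and grep for the relevant one; the exact grep pattern here is just a sketch:
tshark -G defaultprefs | grep -iE 'sctp.*reassem|diameter'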

inserting system date into subject field of ssmtp email

I'd like to know the syntax for inserting the current date/time into the subject line of an email sent by ssmtp.
I've got a cronjob emailing the tail of my syslog whenever the system reboots.
Here is the cronjob:
@reboot tail -1000 /var/log/syslog | mail -s "the system rebooted, here's the syslog" address@gmail.com &> /dev/null
Is there a simple way of inserting the system date into the subject line field? I haven't found a way to add it.
You should be able to make it print the current date and time by inserting $(date) into your subject line string.
Try:
@reboot tail -1000 /var/log/syslog | mail -s "$(date): the system rebooted, here's the syslog" address@gmail.com &> /dev/null
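If you want a specific format rather than the default date output, date accepts a format string; the format below is just one choice. Note that % is special in crontab lines and must be escaped as \%:
@reboot tail -1000 /var/log/syslog | mail -s "$(date '+\%Y-\%m-\%d \%H:\%M'): the system rebooted, here's the syslog" address@gmail.com &> /dev/null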

Multiple simultaneous downloads using Wget?

I'm using wget to download website content, but wget downloads the files one by one.
How can I make wget download using 4 simultaneous connections?
Use aria2:
aria2c -x 16 [url]
# -x 16 -> the number of connections
http://aria2.sourceforge.net
Wget does not support multiple socket connections in order to speed up download of files.
I think we can do a bit better than gmarian's answer.
The correct way is to use aria2.
aria2c -x 16 -s 16 [url]
# -x 16, -s 16 -> the number of connections here
Official documentation:
-x, --max-connection-per-server=NUM: The maximum number of connections to one server for each download. Possible Values: 1-16 Default: 1
-s, --split=N: Download a file using N connections. If more than N URIs are given, first N URIs are used and remaining URLs are used for backup. If less than N URIs are given, those URLs are used more than once so that N connections total are made simultaneously. The number of connections to the same host is restricted by the --max-connection-per-server option. See also the --min-split-size option. Possible Values: 1-* Default: 5
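As a usage sketch (the directory and filename here are placeholders, not from the original answer), you can combine both flags and control where the download lands with -d and -o:
aria2c -x 16 -s 16 -d ./downloads -o big_file.iso "http://example.com/big_file.iso"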
Since GNU parallel was not mentioned yet, let me give another way:
cat url.list | parallel -j 8 wget -O {#}.html {}
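Note that -O {#}.html saves each download under its job sequence number; if you would rather keep the server-side filenames, something like this should also work (a sketch):
parallel -j 8 wget -nv {} :::: url.list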
I found (probably) a solution:
In the process of downloading a few thousand log files from one server to the next I suddenly had the need to do some serious multithreaded downloading in BSD, preferably with Wget as that was the simplest way I could think of handling this. A little looking around led me to this little nugget:
wget -r -np -N [url] &
wget -r -np -N [url] &
wget -r -np -N [url] &
wget -r -np -N [url]
Just repeat the wget -r -np -N [url] for as many threads as you need...
Now, this isn't pretty and there are surely better ways to do it, but if you want something quick and dirty it should do the trick...
Note: the option -N makes wget download only "newer" files, which means it won't overwrite or re-download files unless their timestamp changes on the server.
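If you put those backgrounded calls in a script, you may also want a wait at the end so the script only finishes once every download has completed:
wget -r -np -N [url] &
wget -r -np -N [url] &
wget -r -np -N [url] &
wait   # block until all backgrounded wget jobs are done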
Another program that can do this is axel.
axel -n <NUMBER_OF_CONNECTIONS> URL
For basic HTTP auth:
axel -n <NUMBER_OF_CONNECTIONS> "user:password@https://domain.tld/path/file.ext"
Ubuntu man page.
A new (but not yet released) tool is Mget.
It already has many options known from Wget and comes with a library that allows you to easily embed (recursive) downloading into your own application.
To answer your question:
mget --num-threads=4 [url]
UPDATE
Mget is now developed as Wget2 with many bugs fixed and more features (e.g. HTTP/2 support).
--num-threads is now --max-threads.
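So with Wget2 the equivalent call would presumably be:
wget2 --max-threads=4 [url]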
I strongly suggest using httrack.
For example: httrack -v -w http://example.com/
It will do a mirror with 8 simultaneous connections by default. httrack has tons of options to play with. Have a look.
As other posters have mentioned, I'd suggest you have a look at aria2. From the Ubuntu man page for version 1.16.1:
aria2 is a utility for downloading files. The supported protocols are HTTP(S), FTP, BitTorrent, and Metalink. aria2 can download a file from multiple sources/protocols and tries to utilize your maximum download bandwidth. It supports downloading a file from HTTP(S)/FTP and BitTorrent at the same time, while the data downloaded from HTTP(S)/FTP is uploaded to the BitTorrent swarm. Using Metalink's chunk checksums, aria2 automatically validates chunks of data while downloading a file like BitTorrent.
You can use the -x flag to specify the maximum number of connections per server (default: 1):
aria2c -x 16 [url]
If the same file is available from multiple locations, you can choose to download from all of them. Use the -j flag to specify the maximum number of parallel downloads for every static URI (default: 5).
aria2c -j 5 [url] [url2]
Have a look at http://aria2.sourceforge.net/ for more information. For usage information, the man page is really descriptive and has a section on the bottom with usage examples. An online version can be found at http://aria2.sourceforge.net/manual/en/html/README.html.
wget can't download over multiple connections; instead you can try another program like aria2.
Use
aria2c -x 10 -i websites.txt >/dev/null 2>/dev/null &
In websites.txt, put one URL per line, for example:
https://www.example.com/1.mp4
https://www.example.com/2.mp4
https://www.example.com/3.mp4
https://www.example.com/4.mp4
https://www.example.com/5.mp4
Try pcurl:
http://sourceforge.net/projects/pcurl/
It uses curl instead of wget and downloads in 10 segments in parallel.
They always say it depends, but when it comes to mirroring a website, the best tool is httrack. It is super fast and easy to work with. The only downside is its so-called support forum, but you can find your way using the official documentation. It has both a GUI and a CLI interface and it supports cookies; just read the docs. This is the best. (Be careful with this tool: you can download the whole web onto your hard drive.)
httrack -c8 [url]
By default the maximum number of simultaneous connections is limited to 8 to avoid server overload.
Use xargs to make wget work on multiple files in parallel:
#!/bin/bash
mywget()
{
    wget "$1"
}
export -f mywget

# run wget in parallel using 8 threads/connections
xargs -P 8 -n 1 -I {} bash -c "mywget '{}'" < list_urls.txt
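The exported function is not strictly needed for a plain call like this; a simpler equivalent (assuming one URL per line and no spaces or quotes in the URLs) would be:
xargs -P 8 -n 1 wget -nv < list_urls.txt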
aria2 options: the right way to work with files smaller than 20 MB
aria2c -k 2M -x 10 -s 10 [url]
-k 2M splits the file into 2 MB chunks
-k or --min-split-size has a default value of 20 MB; if you do not set this option and the file is under 20 MB, it will only run in a single connection no matter what value you give -x or -s.
You can use xargs
-P is the number of processes; for example, if you set -P 4, four links will be downloaded at the same time, and if you set -P 0, xargs will launch as many processes as possible and all of the links will be downloaded.
cat links.txt | xargs -P 4 -I{} wget {}
I'm using GNU parallel:
cat listoflinks.txt | parallel --bar -j ${MAX_PARALLEL:-$(nproc)} wget -nv {}
cat will pipe a list of line-separated URLs to parallel
the --bar flag shows parallel's execution progress bar
the MAX_PARALLEL env var sets the maximum number of parallel downloads; use it carefully, the default here is the current number of CPUs
Tip: use --dry-run to see what will happen when you execute the command.
cat listoflinks.txt | parallel --dry-run --bar -j ${MAX_PARALLEL} wget -nv {}
make can be parallelised easily (e.g., make -j 4). For example, here's a simple Makefile I'm using to download files in parallel using wget:
BASE=http://www.somewhere.com/path/to
FILES=$(shell awk '{printf "%s.ext\n", $$1}' filelist.txt)
LOG=download.log

all: $(FILES)
	echo $(FILES)

%.ext:
	wget -N -a $(LOG) $(BASE)/$@

.PHONY: all
default: all
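Then run it with the desired number of parallel jobs, e.g.:
make -j 4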
Consider using regular expressions or FTP globbing. That way you could start wget multiple times with different groups of filename starting characters, depending on their frequency of occurrence.
This is for example how I sync a folder between two NAS:
wget --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --bind-address=10.0.0.10 --user=<ftp_user> --password=<ftp_password> "ftp://10.0.0.100/foo/bar/[0-9a-hA-H]*" --directory-prefix=/volume1/foo &
wget --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --bind-address=10.0.0.11 --user=<ftp_user> --password=<ftp_password> "ftp://10.0.0.100/foo/bar/[!0-9a-hA-H]*" --directory-prefix=/volume1/foo &
The first wget syncs all files/folders starting with 0, 1, 2... F, G, H and the second thread syncs everything else.
This was the easiest way to sync between a NAS with one 10G ethernet port (10.0.0.100) and a NAS with two 1G ethernet ports (10.0.0.10 and 10.0.0.11). I bound the two wget threads through --bind-address to the different ethernet ports and ran them in parallel by putting & at the end of each line. That way I was able to copy huge files at 2x 100 MB/s = 200 MB/s in total.
Call Wget for each link and set it to run in background.
I tried this Python code:
with open('links.txt', 'r') as f1:       # Open links.txt file in read mode
    list_1 = f1.read().splitlines()      # Get every line in links.txt

for i in list_1:                         # Iterate over each link
    !wget "$i" -bq                       # Call wget in background mode
Parameters:
-b - run in the background
-q - quiet mode (no output)
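Note that the !wget syntax only works inside IPython/Jupyter; from a plain shell, the same idea (again just a sketch) looks like:
while read -r url; do
    wget -bq "$url"     # -b backgrounds each wget, -q keeps it quiet
done < links.txt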
If you are doing recursive downloads, where you don't know all of the URLs yet, wget is perfect.
If you already have a list of each URL you want to download, then skip down to cURL below.
Multiple Simultaneous Downloads Using Wget Recursively (unknown list of URLs)
# Multiple simultaneous downloads
URL=ftp://ftp.example.com
for i in {1..10}; do
    wget --no-clobber --recursive "${URL}" &
done
The above loop will start 10 wgets, each recursively downloading from the same website; however, they will not overlap or download the same file twice.
Using --no-clobber prevents each of the 10 wget processes from downloading the same file twice (including full relative URL path).
& forks each wget to the background, allowing you to run multiple simultaneous downloads from the same website using wget.
Multiple Simultaneous Downloads Using curl from a list of URLs
If you already have a list of URLs you want to download, curl -Z is parallelised curl, with a default of 50 downloads running at once.
However, for curl, the list has to be in this format:
url = https://example.com/1.html
-O
url = https://example.com/2.html
-O
So if you already have a list of URLs to download, simply format the list, and then run cURL
cat url_list.txt
#https://example.com/1.html
#https://example.com/2.html
touch url_list_formatted.txt
while read -r URL; do
    echo "url = ${URL}" >> url_list_formatted.txt
    echo "-O" >> url_list_formatted.txt
done < url_list.txt
Download in parallel using curl from list of URLs:
curl -Z --parallel-max 100 -K url_list_formatted.txt
For example,
$ curl -Z --parallel-max 100 -K url_list_formatted.txt
DL% UL% Dled Uled Xfers Live Qd Total Current Left Speed
100 -- 2512 0 2 0 0 0:00:01 0:00:01 --:--:-- 1973
$ ls
1.html 2.html url_list_formatted.txt url_list.txt