GitHub Enterprise Server comes with a utility:
ghe-logs-tail
This tails all GHE Server logs simultaneously and prints the combined stream to the console for the user to view.
When trying to grep through this stream, e.g. for the string "error", like so:
ghe-logs-tail | grep --line-buffered -i "error"
The console does not print the stream; instead, the output is interrupted by errors saying that some log files could not be opened.
$ ghe-logs-tail | grep --line-buffered -i "error"
/var/log/github/audit.log /var/log/github/auth.log /var/log/github/exceptions.log /var/log/github/gitauth.log /var/log/github/personal.log /var/log/github/production.log /var/log/github/resqued.log /var/log/github/unicorn.log /var/log/enterprise-manage/login_attempts.log /var/log/enterprise-manage/unicorn.log /var/log/github/auth.log /var/log/auth.log /var/log/nginx/alambic.assets.error.log /var/log/nginx/alambic.assets.log /var/log/nginx/alambic.avatars.error.log /var/log/nginx/alambic.avatars.log /var/log/nginx/alambic.error.log /var/log/nginx/alambic.log /var/log/nginx/avatars.error.log /var/log/nginx/avatars.log /var/log/nginx/credits.error.log /var/log/nginx/credits.log /var/log/nginx/enterprise-manage.error.log /var/log/nginx/enterprise-manage.log /var/log/nginx/error.log /var/log/nginx/gist.error.log /var/log/nginx/gist.log /var/log/nginx/github.error.log /var/log/nginx/github.log /var/log/nginx/pages.error.log /var/log/nginx/pages.log /var/log/nginx/raw.error.log /var/log/nginx/raw.log /var/log/nginx/render.error.log /var/log/nginx/render.log /var/log/nginx/static-maintenance.error.log /var/log/nginx/static-maintenance.log /var/log/nginx/storage.error.log /var/log/nginx/storage.log /data/user/common/ghe-config.log /var/log/syslog /var/log/dmesg /var/log/mysql/*.log /var/log/redis/redis.log /var/log/haproxy.log
==> /var/log/nginx/alambic.assets.error.log <==
==> /var/log/nginx/alambic.avatars.error.log <==
==> /var/log/nginx/alambic.error.log <==
==> /var/log/nginx/avatars.error.log <==
==> /var/log/nginx/credits.error.log <==
==> /var/log/nginx/enterprise-manage.error.log <==
==> /var/log/nginx/error.log <==
==> /var/log/nginx/gist.error.log <==
==> /var/log/nginx/github.error.log <==
tail: cannot open '/var/log/mysql/*.log' for reading==> /var/log/nginx/pages.error.log <==
==> /var/log/nginx/raw.error.log <==
: No such file or directory==> /var/log/nginx/render.error.log <==
==> /var/log/nginx/static-maintenance.error.log <==
==> /var/log/nginx/storage.error.log <==
tail: cannot open '/var/log/redis/redis.log' for reading: No such file or directory
What is the correct way to do this with a native tool like grep/awk?
I would obviously like to have all lines that contain the matching string printed out in real time to the console.
The "tail: cannot open" messages come from tail inside ghe-logs-tail, not from grep, so you can filter them out by routing ghe-logs-tail's stderr to the null device (blackhole):
ghe-logs-tail 2>/dev/null | grep --line-buffered -i "error"
(The ==> ... <== header lines still match because many of the log filenames themselves contain "error".)
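If you prefer awk for the filtering, a rough equivalent is the following sketch (assuming GNU awk; fflush() keeps the output real-time, like grep's --line-buffered):
ghe-logs-tail 2>/dev/null | awk 'tolower($0) ~ /error/ { print; fflush() }'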
I am really at a loss as to what I can do to fix this error.
I am running snakemake to perform some post alignment quality checks.
My code looks like this:
SAMPLES = ["Exome_Tumor_sorted_mrkdup_bqsr", "Exome_Norm_sorted_mrkdup_bqsr",
"WGS_Tumor_merged_sorted_mrkdup_bqsr", "WGS_Norm_merged_sorted_mrkdup_bqsr"]
rule all:
input:
expand("post-alignment-qc/flagstat/{sample}.txt", sample=SAMPLES),
expand("post-alignment-qc/CollectInsertSizeMetics/{sample}.txt", sample=SAMPLES),
expand("post-alignment-qc/CollectAlignmentSummaryMetrics/{sample}.txt", sample=SAMPLES),
expand("post-alignment-qc/CollectGcBiasMetrics/{sample}_summary.txt", samples=SAMPLES) # this is the problem causing line
rule flagstat:
input:
bam = "align/{sample}.bam"
output:
"post-alignment-qc/flagstat/{sample}.txt"
log:
err='post-alignment-qc/logs/flagstat/{sample}_stderr.err'
shell:
"samtools flagstat {input} > {output} 2> {log.err}"
rule CollectInsertSizeMetics:
input:
bam = "align/{sample}.bam"
output:
txt="post-alignment-qc/CollectInsertSizeMetics/{sample}.txt",
pdf="post-alignment-qc/CollectInsertSizeMetics/{sample}.pdf"
log:
err='post-alignment-qc/logs/CollectInsertSizeMetics/{sample}_stderr.err',
out='post-alignment-qc/logs/CollectInsertSizeMetics/{sample}_stdout.txt'
shell:
"gatk CollectInsertSizeMetrics -I {input} -O {output.txt} -H {output.pdf} 2> {log.err}"
rule CollectAlignmentSummaryMetrics:
input:
bam = "align/{sample}.bam",
genome= "references/genome/ref_genome.fa"
output:
txt="post-alignment-qc/CollectAlignmentSummaryMetrics/{sample}.txt",
log:
err='post-alignment-qc/logs/CollectAlignmentSummaryMetrics/{sample}_stderr.err',
out='post-alignment-qc/logs/CollectAlignmentSummaryMetrics/{sample}_stdout.txt'
shell:
"gatk CollectAlignmentSummaryMetrics -I {input.bam} -O {output.txt} -R {input.genome} 2> {log.err}"
rule CollectGcBiasMetrics:
input:
bam = "align/{sample}.bam",
genome= "references/genome/ref_genome.fa"
output:
txt="post-alignment-qc/CollectGcBiasMetrics/{sample}_metrics.txt",
CHART="post-alignment-qc/CollectGcBiasMetrics/{sample}_metrics.pdf",
S="post-alignment-qc/CollectGcBiasMetrics/{sample}_summary.txt"
log:
err='post-alignment-qc/logs/CollectGcBiasMetrics/{sample}_stderr.err',
out='post-alignment-qc/logs/CollectGcBiasMetrics/{sample}_stdout.txt'
    shell:
        "gatk CollectGcBiasMetrics -I {input.bam} -O {output.txt} -R {input.genome} -CHART {output.CHART} "
        "-S {output.S} 2> {log.err}"
The error message says the following:
WildcardError in line 9 of Snakefile:
No values given for wildcard 'sample'.
File "Snakefile", line 9, in <module>
In my code above I have indicated the problem-causing line. When I simply remove this line, everything runs perfectly. I am really confused, because I pretty much copy-and-pasted each rule, and this is the only rule that causes any problems.
If someone could point out what I did wrong, I would be very thankful!
Cheers!
It's a spelling mistake: in the highlighted line you write samples=SAMPLES, but the wildcard is called {sample}, without the "s".
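With the typo fixed, the offending line becomes:
        expand("post-alignment-qc/CollectGcBiasMetrics/{sample}_summary.txt", sample=SAMPLES)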
Facts:
Rootless podman works perfectly for uid 1480
Rootless podman fails for uid 2088
CentOS 7
Kernel 3.10.0-1062.1.2.el7.x86_64
podman version 1.4.4
Almost the entire environment has been removed between the two
The filesystem for /tmp is xfs
The capsh output of the two users is identical but for uid / username
Both UIDs have identical entries in /etc/sub{u,g}id files
The $HOME/.config/containers/storage.conf is the default and is identical between the two with the exception of the uids. The storage.conf is below for reference.
I wrote the following shell script to demonstrate just how similar an environment the two are operating in:
#!/bin/sh
for i in 1480 2088; do
    sudo chroot --userspec "$i":10 / env -i /bin/sh <<EOF
echo -------------- $i ----------------
/usr/sbin/capsh --print
grep "$i" /etc/subuid /etc/subgid
mkdir /tmp/"$i"
HOME=/tmp/"$i"
export HOME
podman --root=/tmp/"$i" info > /tmp/podman."$i"
podman run --rm --root=/tmp/"$i" docker.io/library/busybox printf "\tCOMPLETE\n"
echo -----------END $i END-------------
EOF
    sudo rm -rf /tmp/"$i"
done
Here's the output of the script:
$ sh /tmp/podman-fail.sh
[sudo] password for functional:
-------------- 1480 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1480(functional)
gid=10(wheel)
groups=0(root)
/etc/subuid:1480:100000:65536
/etc/subgid:1480:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH
COMPLETE
-----------END 1480 END-------------
-------------- 2088 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=2088(broken)
gid=10(wheel)
groups=0(root)
/etc/subuid:2088:100000:65536
/etc/subgid:2088:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Here's the storage.conf for the 1480 uid. It's identical except s/1480/2088/:
[storage]
driver = "vfs"
runroot = "/run/user/1480"
graphroot = "/tmp/1480/.local/share/containers/storage"
[storage.options]
size = ""
remap-uids = ""
remap-gids = ""
remap-user = ""
remap-group = ""
ostree_repo = ""
skip_mount_home = ""
mount_program = ""
mountopt = ""
[storage.options.thinpool]
autoextend_percent = ""
autoextend_threshold = ""
basesize = ""
blocksize = ""
directlvm_device = ""
directlvm_device_force = ""
fs = ""
log_level = ""
min_free_space = ""
mkfsarg = ""
mountopt = ""
use_deferred_deletion = ""
use_deferred_removal = ""
xfs_nospace_max_retries = ""
You can see there's basically no difference between the two podman info outputs for the users:
$ diff -u /tmp/podman.1480 /tmp/podman.2088
--- /tmp/podman.1480 2019-10-17 22:41:21.991573733 -0400
+++ /tmp/podman.2088 2019-10-17 22:41:26.182584536 -0400
@@ -7,7 +7,7 @@
Distribution:
distribution: '"centos"'
version: "7"
- MemFree: 45654056960
+ MemFree: 45652697088
MemTotal: 67306323968
OCIRuntime:
package: containerd.io-1.2.6-3.3.el7.x86_64
@@ -24,7 +24,7 @@
kernel: 3.10.0-1062.1.2.el7.x86_64
os: linux
rootless: true
- uptime: 30h 17m 50.23s (Approximately 1.25 days)
+ uptime: 30h 17m 54.42s (Approximately 1.25 days)
registries:
blocked: null
insecure: null
@@ -35,14 +35,14 @@
- quay.io
- registry.centos.org
store:
- ConfigFile: /tmp/1480/.config/containers/storage.conf
+ ConfigFile: /tmp/2088/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
- GraphRoot: /tmp/1480
+ GraphRoot: /tmp/2088
GraphStatus: {}
ImageStore:
number: 0
- RunRoot: /run/user/1480
- VolumePath: /tmp/1480/volumes
+ RunRoot: /run/user/2088
+ VolumePath: /tmp/2088/volumes
I refuse to believe there's an if (2088 == uid) { abort(); } or similar nonsense somewhere in podman's source code. What am I missing?
Does podman system migrate fix "there might not be enough IDs available in the namespace" for you?
It did for me and others:
https://github.com/containers/libpod/issues/3421
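If it's the same issue, the fix is simply to run the migration as the affected user (here uid 2088) and then retry the pull:
$ podman system migrate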
AFAICT, sub-UID and GID ranges should not overlap between users. For reference, here is what the useradd manpage has to say about the matter:
SUB_GID_MIN (number), SUB_GID_MAX (number), SUB_GID_COUNT (number)
If /etc/subuid exists, the commands useradd and newusers
(unless the user already have subordinate group IDs)
allocate SUB_GID_COUNT unused group IDs from the range
SUB_GID_MIN to SUB_GID_MAX for each new user.
The default values for SUB_GID_MIN, SUB_GID_MAX,
SUB_GID_COUNT are respectively 100000, 600100000 and 65536.
SUB_UID_MIN (number), SUB_UID_MAX (number), SUB_UID_COUNT (number)
If /etc/subuid exists, the commands useradd and newusers
(unless the user already have subordinate user IDs) allocate
SUB_UID_COUNT unused user IDs from the range SUB_UID_MIN to
SUB_UID_MAX for each new user.
The default values for SUB_UID_MIN, SUB_UID_MAX,
SUB_UID_COUNT are respectively 100000, 600100000 and 65536.
The key word is unused.
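For example (hypothetical ranges, purely to illustrate), disjoint allocations in /etc/subuid and /etc/subgid would look like this, with 2088's block starting where 1480's ends (100000 + 65536 = 165536):
1480:100000:65536
2088:165536:65536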
CentOS 7.6 does not support rootless buildah by default - see https://github.com/containers/buildah/pull/1166 and https://www.redhat.com/en/blog/preview-running-containers-without-root-rhel-76
I would like to extract all lines between INFO:root:id is and INFO:root:newId, plus one line after the INFO:root:newId match.
Can anyone advise how I can achieve this?
Currently I'm using
sed -n '/INFO:root:id is/,/INFO:root:newId/p' 1/python.log
and I'm trying to figure out how to print one line after the second pattern match.
INFO:root:id is
INFO:root:16836211
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): abc.hh.com
DEBUG:urllib3.connectionpool:https://abc.hh.com:443 "POST /api/v2/import/row.json HTTP/1.1" 201 4310
INFO:root:newId
INFO:root:35047536
INFO:root:id is
INFO:root:46836211
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): abc.hh.com
DEBUG:urllib3.connectionpool:https://abc.hh.com:443 "POST /api/v2/import/row.json HTTP/1.1" 201 4310
INFO:root:newId
INFO:root:55547536
If I am understanding the question correctly:
$ seq 10 | sed -n '/3/,/5/{/5/N;p;}'
3
4
5
6
/3/ is the starting regex and /5/ is the ending regex
/5/N gets an additional line for the ending regex
Tested on GNU sed; syntax might differ for other versions.
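Applied to the patterns from the question, that would be something like (same GNU sed caveat):
sed -n '/INFO:root:id is/,/INFO:root:newId/{/INFO:root:newId/N;p;}' 1/python.log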
With awk:
$ seq 10 | awk '/3/{f=1} f; /5/{f=0; if((getline a)>0) print a}'
3
4
5
6
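With the question's patterns, that becomes:
awk '/INFO:root:id is/{f=1} f; /INFO:root:newId/{f=0; if((getline a)>0) print a}' 1/python.log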
Unclear whether you want only the first set of lines after a match or all matches.
If you want the first set between the matching patterns, it is easy if you use /INFO:root:id/ for your end match as well and then use head -n -1 to print everything but the last line.
$ sed -n '/INFO:root:id is/,/INFO:root:id/p' test.txt | head -n -1
INFO:root:id is
INFO:root:16836211
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): abc.hh.com
DEBUG:urllib3.connectionpool:https://abc.hh.com:443 "POST /api/v2/import/row.json HTTP/1.1" 201 4310
INFO:root:newId
INFO:root:35047536
Just use flags to indicate when you've found the beginning and ending regexps and print accordingly:
$ seq 10 | awk 'e{print buf $0; buf=""; b=e=0} /3/{b=1} b{buf = buf $0 ORS; if (/5/) e=1}'
3
4
5
6
Note that this does not have the potential issue of printing lines when there's only the beginning or ending regexp present but not both. The other answers, including your currently accepted answer, have that problem:
$ seq 10 | sed -n '/3/,/27/{/27/N;p;}'
3
4
5
6
7
8
9
10
$ seq 10 | awk '/3/{f=1} f; /27/{f=0; if((getline a)>0) print a}'
3
4
5
6
7
8
9
10
$ seq 10 | awk 'e{print buf $0; buf=""; b=e=0} /3/{b=1} b{buf = buf $0 ORS; if (/27/) e=1}'
$
Note that the script I posted correctly didn't print anything because a block of text beginning with 3 and ending with 27 was not present in the input.
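For the question's actual patterns, the same approach would read:
awk 'e{print buf $0; buf=""; b=e=0} /INFO:root:id is/{b=1} b{buf = buf $0 ORS; if (/INFO:root:newId/) e=1}' 1/python.log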
I'm using log4perl and I want to record all FATAL events in a separate file.
Here is my script:
#!/usr/bin/perl
use strict;
use warnings FATAL => 'all';
use Log::Log4perl qw(get_logger);
Log::Log4perl::init('log4perl.conf');
my $l_aa = get_logger('AA');
$l_aa->fatal('fatal');
my $l_bb = get_logger('BB');
$l_bb->info('info');
And here is my config file:
## What to log
log4perl.logger = FATAL, FatalLog
log4perl.logger.BB = INFO, MainLog
## Logger MainLog
log4perl.appender.MainLog = Log::Log4perl::Appender::File
log4perl.appender.MainLog.filename = log4perl_main.log
log4perl.appender.MainLog.layout = PatternLayout
log4perl.appender.MainLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
## Logger FatalLog
log4perl.appender.FatalLog = Log::Log4perl::Appender::File
log4perl.appender.FatalLog.filename = log4perl_fatal.log
log4perl.appender.FatalLog.layout = PatternLayout
log4perl.appender.FatalLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
I'm expecting that with this setup the file log4perl_fatal.log will get only FATAL-level events. But here is what I get after running the script:
$ tail -f *log
==> log4perl_fatal.log <==
[2014-04-13 08:41:22] FATAL - AA - fatal
[2014-04-13 08:41:22] INFO - BB - info
==> log4perl_main.log <==
[2014-04-13 08:41:22] INFO - BB - info
Why am I getting an INFO-level event in log4perl_fatal.log?
How can I record only FATAL-level events in a separate file?
PS Here is a GitHub repo with this script & config.
Your conf file has the following line:
log4perl.logger = FATAL, FatalLog
What you need is the following:
log4perl.logger.AA = FATAL, FatalLog
Otherwise, FatalLog becomes a catch-all for both loggers, instead of being isolated to this logger instance:
my $l_aa = get_logger('AA');
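With that change, the "What to log" section would read (assuming you no longer want a root-level catch-all logger):
log4perl.logger.AA = FATAL, FatalLog
log4perl.logger.BB = INFO, MainLog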
This question is covered in the log4perl FAQ: https://metacpan.org/pod/Log::Log4perl::FAQ#How-can-I-collect-all-FATAL-messages-in-an-extra-log-file
In the example, log4perl_fatal.log gets INFO-level events because of appender additivity.
To fix it, add this line to the config file:
log4perl.appender.FatalLog.Threshold = FATAL
Then the files get the expected output:
$ tail log4perl*log
==> log4perl_fatal.log <==
[2014-05-04 20:00:39] FATAL - AA - fatal
==> log4perl_main.log <==
[2014-05-04 20:00:39] INFO - BB - info
I'm trying to automate interactive ssh calls (see note 1) as follows:
import os
import subprocess

SSHBINARY = '/usr/bin/ssh'
ASKPASS = '/home/mz0/checkHost/askpass.py'

def sshcmd(host, port, user, password, cmd):
    env0 = {'SSH_ASKPASS': ASKPASS, 'DISPLAY': ':9999'}
    ssh = subprocess.Popen([SSHBINARY, "-T", "-p %d" % port,
                            "-oStrictHostKeyChecking=no", "-oUserKnownHostsFile=/dev/null",
                            "%s@%s" % (user, host), cmd],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           env=env0,
                           preexec_fn=os.setsid
                           )
    error = ssh.stderr.readlines()
    result = ssh.stdout.readlines()
    return error, result

host = 'localhost'
port = 22
user = 'try1'
password = '1try'  # unused, hardcoded in ASKPASS
cmd = 'ls'
result1, error1 = sshcmd(host, port, user, password, cmd)
if result1: print "OUT: %s" % result1
if error1: print "ERR: %s" % error1
It turns out I'm doing something stupid, since I get this:
OUT: ["Warning: Permanently added 'localhost' (RSA) to the list of known hosts.\r\n"]
ERR: ['Desktop\n', ..., 'Videos\n']
Obviously stdout and stderr are swapped (note 2). Can you kindly point out my error?
Note 1: I'm well aware of password-less ssh, the dangers of ignoring host keys, etc., and hate requests about automating interactive ssh as much as you do.
Note 2: running the same command in a shell confirms that stdout and stderr are swapped in my code:
ssh -o 'UserKnownHostsFile /dev/null' -o 'StrictHostKeyChecking no' \
    try1@localhost ls > ssh-out 2> ssh-error
return error,result
and
result1, error1 = sshcmd(...)
Just swap either of those around.
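For example, keeping the names from the code above, either change the function's last line to
return result, error
or leave the function alone and swap the assignment instead:
error1, result1 = sshcmd(host, port, user, password, cmd)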