SwiftLint warning: Configuration contains invalid keys: ["function_level"]

I just updated Pods and I'm getting this warning:
warning: Configuration contains invalid keys: ["function_level"]
I changed .swiftlint.yml according to the CHANGELOG. Here is my .swiftlint.yml:
colon:
  severity: error
line_length:
  ignores_comments: true
  warning: 260
  error: 300
type_body_length:
  warning: 300
  error: 500
file_length:
  warning: 800
  error: 1000
function_parameter_count:
  warning: 20
  error: 30
function_body_length:
  warning: 120
  error: 150
cyclomatic_complexity:
  warning: 40
  error: 50
disabled_rules:
  - implicit_getter
  - redundant_string_enum_value
nesting:
  type_level:
    warning: 3
    error: 6
  function_level:
    warning: 5
    error: 10
#implicitly_unwrapped_optional:
#  severity: warning
force_unwrapping:
  severity: error
vertical_whitespace:
  severity: error
#optional_try:
#  severity: error
force_try:
  severity: error
type_name:
  min_length: 3
  max_length: 60
  error: 80
identifier_name:
  min_length: 1
  max_length: 60
  excluded:
    - id

# Disable rules from the default enabled set.
disabled_rules:
  - trailing_whitespace
  - implicit_getter
  - redundant_string_enum_value
  - switch_case_alignment

# Enable rules not from the default set.
opt_in_rules:
#  - function_default_parameter_at_end
  - empty_count
#  - index_at_zero
  - legacy_constant
#  - implicitly_unwrapped_optional
  - force_unwrapping

# Acts as a whitelist, only the rules specified in this list will be enabled. Can not be specified alongside disabled_rules or opt_in_rules.
only_rules:

# This is an entirely separate list of rules that are only run by the analyze command. All analyzer rules are opt-in, so this is the only configurable rule list (there is no disabled/whitelist equivalent).
analyzer_rules:

# This section is for defining custom rules
# custom_rules:
#   constant_zero:
#     included: ".*\\.swift"
#     excluded: ".*Test\\.swift"
#     name: "Constant Zero"
#     regex: "(^0$)"
#     match_kinds:
#       - number
#     message: "Use .zero instead of 0"
#     severity: warning
#   optional_try:
#     included: ".*\\.swift"
#     excluded: ".*Test\\.swift"
#     name: "Optional Try"
#     regex: ".*try?.*"
#     match_kinds:
#       - idiomatic
#     message: "Optional tries should be avoided."
#     severity: warning

# paths to ignore during linting. Takes precedence over `included`.
excluded:
  - Carthage
  - Pods
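
The invalid-key warning means the SwiftLint binary that actually runs during the build does not know the nesting rule's function_level option. As far as I can tell, older SwiftLint releases spelled that sub-rule statement_level, and it was only renamed to function_level in a later nesting-rule rework, so either the installed pod is still one of those older versions or a different (older) SwiftLint binary is being picked up. Under that assumption, a nesting block that an older version would accept looks like this (same thresholds as above, only the key spelling changes; the cleaner fix is to pin SwiftLint to a release whose CHANGELOG documents function_level):

nesting:
  type_level:
    warning: 3
    error: 6
  statement_level:  # older SwiftLint spelling of what later became function_level
    warning: 5
    error: 10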

Related

The "prometheusrules" is invalid: - Error

I am trying to apply these Prometheus rules from a YAML file and I am getting the error below. I have searched all over the web, but since the error is quite long and very specific to Elastic, I have not been able to find any documentation, help, or advice. I would really appreciate any guidance, or someone pointing me in the right direction.
kubectl apply -f prometheus-es-rules.yaml -n centralized-logging
The "prometheusrules" is invalid:
* : 43:11: group "elastic", rule 1, "elasticsearch_filesystem_data_used_percent": could not parse expression: 1:71: parse error: unexpected identifier "elasticsearch" in label matching, expected "," or "}"
* : 43:11: group "elastic", rule 2, "elasticsearch_filesystem_data_free_percent": could not parse expression: 1:72: parse error: unexpected identifier "elasticsearch" in label matching, expected "," or "}"
* : 43:11: group "elastic", rule 3, "ElasticsearchTooFewNodesRunning": could not parse expression: 1:68: parse error: unexpected identifier "elasticsearch" in label matching, expected "," or "}"
* : 43:11: group "elastic", rule 4, "ElasticsearchHeapTooHigh": could not parse expression: 1:59: parse error: unexpected identifier "elasticsearch" in label matching, expected "," or "}"
This is my prometheus-rules.yaml file:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    role: alert-rules
    app: elastic
  name: prometheus-rules
spec:
  groups:
  - name: elastic
    rules:
    - record: elasticsearch_filesystem_data_used_percent
      expr: |
        100 * (elasticsearch_filesystem_data_size_bytes{service="{{ template "elasticsearch-exporter.fullname" . }}"} - elasticsearch_filesystem_data_free_bytes{service="{{ template "elasticsearch-exporter.fullname" . }}"})
        / elasticsearch_filesystem_data_size_bytes{service="{{ template "elasticsearch-exporter.fullname" . }}"}
    - record: elasticsearch_filesystem_data_free_percent
      expr: 100 - elasticsearch_filesystem_data_used_percent{service="{{ template "elasticsearch-exporter.fullname" . }}"}
    - alert: ElasticsearchTooFewNodesRunning
      expr: elasticsearch_cluster_health_number_of_nodes{service="{{ template "elasticsearch-exporter.fullname" . }}"} < 3
      for: 5m
      labels:
        severity: critical
      annotations:
        description: There are only {{ "{{ $value }}" }} < 3 ElasticSearch nodes running
        summary: ElasticSearch running on less than 3 nodes
    - alert: ElasticsearchHeapTooHigh
      expr: |
        elasticsearch_jvm_memory_used_bytes{service="{{ template "elasticsearch-exporter.fullname" . }}", area="heap"} / elasticsearch_jvm_memory_max_bytes{service="{{ template "elasticsearch-exporter.fullname" . }}", area="heap"}
        > 0.9
      for: 15m
      labels:
        severity: critical
      annotations:
        description: The heap usage is over 90% for 15m
        summary: ElasticSearch node {{ "{{ $labels.node }}" }} heap usage is high
    - alert: ElasticNoAvailableSpace
      expr: es_fs_path_free_bytes * 100 / es_fs_path_total_bytes < 10
      for: 10m
      labels:
        severity: critical
      annotations:
        summary: Instance {{$labels.instance}}
        description: Elasticsearch reports that there are only {{ $value }}% left on {{ $labels.path }} at {{$labels.instance}}. Please check it
    - alert: NumberOfPendingTasks
      expr: es_cluster_pending_tasks_number > 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Instance {{ $labels.instance }}
        description: Number of pending tasks for 10 min. Cluster works slowly
I am not sure if this is an indentation problem.
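
A note on what the parser is complaining about: the expressions still contain Helm-style placeholders such as {{ template "elasticsearch-exporter.fullname" . }}, and kubectl apply does not render those, so the literal braces and the quotes inside them end up inside the label matcher and break the PromQL parse (hence "unexpected identifier "elasticsearch" in label matching"), which suggests it is not an indentation problem. As a sketch, the first recording rule with the placeholder replaced by a literal service label would look like this (the service name elasticsearch-exporter is only an illustrative stand-in for whatever the exporter's Service is actually called):

    - record: elasticsearch_filesystem_data_used_percent
      expr: |
        100 * (elasticsearch_filesystem_data_size_bytes{service="elasticsearch-exporter"}
             - elasticsearch_filesystem_data_free_bytes{service="elasticsearch-exporter"})
        / elasticsearch_filesystem_data_size_bytes{service="elasticsearch-exporter"}

If the file is meant to remain part of a Helm chart, it has to be rendered with helm template / helm install rather than applied directly with kubectl.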

Why Does Podman Report "Not enough IDs available in namespace" with different UIDs?

Facts:
- Rootless podman works perfectly for uid 1480
- Rootless podman fails for uid 2088
- CentOS 7
- Kernel 3.10.0-1062.1.2.el7.x86_64
- podman version 1.4.4
- Almost the entire environment has been removed between the two
- The filesystem for /tmp is xfs
- The capsh output of the two users is identical but for uid / username
- Both UIDs have identical entries in the /etc/sub{u,g}id files
- The $HOME/.config/containers/storage.conf is the default and is identical between the two with the exception of the uids. The storage.conf is below for reference.
I wrote the following shell script to demonstrate just how similar an environment the two are operating in:
#!/bin/sh
for i in 1480 2088; do
sudo chroot --userspec "$i":10 / env -i /bin/sh <<EOF
echo -------------- $i ----------------
/usr/sbin/capsh --print
grep "$i" /etc/subuid /etc/subgid
mkdir /tmp/"$i"
HOME=/tmp/"$i"
export HOME
podman --root=/tmp/"$i" info > /tmp/podman."$i"
podman run --rm --root=/tmp/"$i" docker.io/library/busybox printf "\tCOMPLETE\n"
echo -----------END $i END-------------
EOF
sudo rm -rf /tmp/"$i"
done
Here's the output of the script:
$ sh /tmp/podman-fail.sh
[sudo] password for functional:
-------------- 1480 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1480(functional)
gid=10(wheel)
groups=0(root)
/etc/subuid:1480:100000:65536
/etc/subgid:1480:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH
COMPLETE
-----------END 1480 END-------------
-------------- 2088 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=2088(broken)
gid=10(wheel)
groups=0(root)
/etc/subuid:2088:100000:65536
/etc/subgid:2088:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Here's the storage.conf for the 1480 uid. It's identical except s/1480/2088/:
[storage]
driver = "vfs"
runroot = "/run/user/1480"
graphroot = "/tmp/1480/.local/share/containers/storage"
[storage.options]
size = ""
remap-uids = ""
remap-gids = ""
remap-user = ""
remap-group = ""
ostree_repo = ""
skip_mount_home = ""
mount_program = ""
mountopt = ""
[storage.options.thinpool]
autoextend_percent = ""
autoextend_threshold = ""
basesize = ""
blocksize = ""
directlvm_device = ""
directlvm_device_force = ""
fs = ""
log_level = ""
min_free_space = ""
mkfsarg = ""
mountopt = ""
use_deferred_deletion = ""
use_deferred_removal = ""
xfs_nospace_max_retries = ""
You can see there's basically no difference between the two podman info outputs for the users:
$ diff -u /tmp/podman.1480 /tmp/podman.2088
--- /tmp/podman.1480 2019-10-17 22:41:21.991573733 -0400
+++ /tmp/podman.2088 2019-10-17 22:41:26.182584536 -0400
@@ -7,7 +7,7 @@
Distribution:
distribution: '"centos"'
version: "7"
- MemFree: 45654056960
+ MemFree: 45652697088
MemTotal: 67306323968
OCIRuntime:
package: containerd.io-1.2.6-3.3.el7.x86_64
@@ -24,7 +24,7 @@
kernel: 3.10.0-1062.1.2.el7.x86_64
os: linux
rootless: true
- uptime: 30h 17m 50.23s (Approximately 1.25 days)
+ uptime: 30h 17m 54.42s (Approximately 1.25 days)
registries:
blocked: null
insecure: null
@@ -35,14 +35,14 @@
- quay.io
- registry.centos.org
store:
- ConfigFile: /tmp/1480/.config/containers/storage.conf
+ ConfigFile: /tmp/2088/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
- GraphRoot: /tmp/1480
+ GraphRoot: /tmp/2088
GraphStatus: {}
ImageStore:
number: 0
- RunRoot: /run/user/1480
- VolumePath: /tmp/1480/volumes
+ RunRoot: /run/user/2088
+ VolumePath: /tmp/2088/volumes
I refuse to believe there's an if (2088 == uid) { abort(); } or similar nonsense somewhere in podman's source code. What am I missing?
Does podman system migrate fix "there might not be enough IDs available in the namespace" for you?
It did for me and others:
https://github.com/containers/libpod/issues/3421
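
For reference, that suggestion is a single command, run as the affected user (the uid 2088 account here); it resets podman's per-user state so that the current /etc/subuid and /etc/subgid entries are re-read:

# run as the user whose pulls fail
podman system migrate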
AFAICT, sub-UID and GID ranges should not overlap between users. For reference, here is what the useradd manpage has to say about the matter:

SUB_GID_MIN (number), SUB_GID_MAX (number), SUB_GID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless
    the user already have subordinate group IDs) allocate
    SUB_GID_COUNT unused group IDs from the range SUB_GID_MIN to
    SUB_GID_MAX for each new user.

    The default values for SUB_GID_MIN, SUB_GID_MAX, SUB_GID_COUNT
    are respectively 100000, 600100000 and 65536.

SUB_UID_MIN (number), SUB_UID_MAX (number), SUB_UID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless
    the user already have subordinate user IDs) allocate
    SUB_UID_COUNT unused user IDs from the range SUB_UID_MIN to
    SUB_UID_MAX for each new user.

    The default values for SUB_UID_MIN, SUB_UID_MAX, SUB_UID_COUNT
    are respectively 100000, 600100000 and 65536.

The key word is unused.
CentOS 7.6 does not support rootless buildah by default - see https://github.com/containers/buildah/pull/1166 and https://www.redhat.com/en/blog/preview-running-containers-without-root-rhel-76
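
If the overlapping ranges are indeed the culprit here (both users currently share 100000:65536), the fix would be to give each user its own unused block in /etc/subuid and /etc/subgid and then run podman system migrate as suggested above so the new mapping is picked up. A sketch of non-overlapping entries (the 165536 start offset is just an illustrative next free block, not something podman mandates):

# /etc/subuid and /etc/subgid: one disjoint 65536-ID block per user
1480:100000:65536
2088:165536:65536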

CHECK failed while writing custom loss layer in Caffe

I get the following error while running a test case for a custom-built loss function in Caffe. This loss function calls Reshape on the bottom blob of this layer (the call is made in the LayerSetUp() method of the custom loss function).
Error:
F0704 01:49:54.075613 16977 blob.cpp:145] Check failed: diff_
*** Check failure stack trace: ***
@ 0x7f627a1965cd  google::LogMessage::Fail()
@ 0x7f627a198433  google::LogMessage::SendToLog()
@ 0x7f627a19615b  google::LogMessage::Flush()
@ 0x7f627a198e1e  google::LogMessageFatal::~LogMessageFatal()
@ 0x7f62754dd96b  caffe::Blob<>::mutable_cpu_diff()
@ 0x500e8a  caffe::CustomLossLayerTest_TestRead_Test<>::TestBody()
@ 0x940693  testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x939caa  testing::Test::Run()
@ 0x939df8  testing::TestInfo::Run()
@ 0x939ed5  testing::TestCase::Run()
@ 0x93b1af  testing::internal::UnitTestImpl::RunAllTests()
@ 0x93b4d3  testing::UnitTest::Run()
@ 0x46f5fd  main
@ 0x7f6274811830  __libc_start_main
@ 0x477229  _start
@ (nil)  (unknown)
Makefile:532: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
What could be the possible reason?

Error : H5LTfind_dataset(file_id, dataset_name_) Failed to find HDF5 dataset label

I want to use an HDF5 file to feed my data and labels into my CNN.
I created the HDF5 file with MATLAB.
Here is my code:
h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/image',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/anno',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/label',[1 numFrames]);
h5write(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/image',images);
h5write(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/anno',anno);
h5write(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/label',label);
where images and anno are 4D uint8 arrays and label is a 1x85 uint16 vector.
When I display my .h5 file I get this:
HDF5 uNetDataSet.h5
Group '/'
    Group '/home'
        Group '/home/alexandra'
            Group '/home/alexandra/Documents'
                Group '/home/alexandra/Documents/my-u-net'
                    Group '/home/alexandra/Documents/my-u-net/warwick_dataset'
                        Group '/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset'
                            Dataset 'label'
                                Size:  1x85
                                MaxSize:  1x85
                                Datatype:   H5T_IEEE_F64LE (double)
                                ChunkSize:  []
                                Filters:  none
                                FillValue:  0.000000
                            Group '/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train'
                                Dataset 'anno'
                                    Size:  522x775x3x85
                                    MaxSize:  522x775x3x85
                                    Datatype:   H5T_IEEE_F64LE (double)
                                    ChunkSize:  []
                                    Filters:  none
                                    FillValue:  0.000000
                                Dataset 'image'
                                    Size:  522x775x3x85
                                    MaxSize:  522x775x3x85
                                    Datatype:   H5T_IEEE_F64LE (double)
                                    ChunkSize:  []
                                    Filters:  none
                                    FillValue:  0.000000
When I read the label dataset with h5read it works.
But when I try to train my network I get this error:
I0713 09:47:18.620510 4278 layer_factory.hpp:77] Creating layer loadMydata
I0713 09:47:18.620535 4278 net.cpp:91] Creating Layer loadMydata
I0713 09:47:18.620550 4278 net.cpp:399] loadMydata -> label
I0713 09:47:18.620580 4278 net.cpp:399] loadMydata -> anno
I0713 09:47:18.620600 4278 net.cpp:399] loadMydata -> image
I0713 09:47:18.620622 4278 hdf5_data_layer.cpp:79] Loading list of HDF5 filenames from: /home/alexandra/Documents/my-u-net/my_data.txt
I0713 09:47:18.620656 4278 hdf5_data_layer.cpp:93] Number of HDF5 files: 1
F0713 09:47:18.621317 4278 hdf5.cpp:14] Check failed: H5LTfind_dataset(file_id, dataset_name_) Failed to find HDF5 dataset label
*** Check failure stack trace: ***
@ 0x7f2edf557daa  (unknown)
@ 0x7f2edf557ce4  (unknown)
@ 0x7f2edf5576e6  (unknown)
@ 0x7f2edf55a687  (unknown)
@ 0x7f2edf908597  caffe::hdf5_load_nd_dataset_helper<>()
@ 0x7f2edf907365  caffe::hdf5_load_nd_dataset<>()
@ 0x7f2edf9579fe  caffe::HDF5DataLayer<>::LoadHDF5FileData()
@ 0x7f2edf956818  caffe::HDF5DataLayer<>::LayerSetUp()
@ 0x7f2edf94fcbc  caffe::Net<>::Init()
@ 0x7f2edf950b45  caffe::Net<>::Net()
@ 0x7f2edf91d08a  caffe::Solver<>::InitTrainNet()
@ 0x7f2edf91e18c  caffe::Solver<>::Init()
@ 0x7f2edf91e4ba  caffe::Solver<>::Solver()
@ 0x7f2edf930ed3  caffe::Creator_SGDSolver<>()
@ 0x40e67e  caffe::SolverRegistry<>::CreateSolver()
@ 0x40794b  train()
@ 0x40590c  main
@ 0x7f2ede865f45  (unknown)
@ 0x406041  (unknown)
@ (nil)  (unknown)
Aborted (core dumped)
In my .prototxt file:
layer {
  top: 'label'
  top: 'anno'
  top: 'image'
  name: 'loadMydata'
  type: "HDF5Data"
  hdf5_data_param { source: '/home/alexandra/Documents/my-u-net/my_data.txt' batch_size: 1 }
  include: { phase: TRAIN }
}
I don't know where I went wrong; if anyone could help me, it would be great!
Your HDF5 file 'uNetDataSet.h5' does not have a dataset named label in it.
What you have instead is '/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/label' - I hope you can spot the difference.
Try creating the dataset with
h5create(['uNetDataSet.h5'],'/image',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/anno',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/label',[1 numFrames]);
Please see this answer for more details. Also note that you might need to permute the input data before saving it to HDF5 using MATLAB.
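
Putting that together, here is a minimal MATLAB sketch, assuming the images, anno and label variables from the question are still in the workspace (sizes copied from the question; whether a permute is needed depends on the axis order the net expects, so it is left as a comment):

numFrames = size(images, 4);   % 85 in the question

% top-level dataset names that match the 'top:' names in the HDF5Data layer
h5create('uNetDataSet.h5', '/image', [522 775 3 numFrames]);
h5create('uNetDataSet.h5', '/anno',  [522 775 3 numFrames]);
h5create('uNetDataSet.h5', '/label', [1 numFrames]);

% the datasets are created as double (the default), so write double data
h5write('uNetDataSet.h5', '/image', double(images));
h5write('uNetDataSet.h5', '/anno',  double(anno));
h5write('uNetDataSet.h5', '/label', double(label));

% if training then misbehaves, permute images/anno before h5write so the
% axes match Caffe's num x channels x height x width blob layout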

How to record FATAL events to separate file with log4perl

I'm using log4perl and I want to record all FATAL events in a separate file.
Here is my script:
#!/usr/bin/perl
use strict;
use warnings FATAL => 'all';
use Log::Log4perl qw(get_logger);
Log::Log4perl::init('log4perl.conf');
my $l_aa = get_logger('AA');
$l_aa->fatal('fatal');
my $l_bb = get_logger('BB');
$l_bb->info('info');
And here is my config file:
## What to log
log4perl.logger = FATAL, FatalLog
log4perl.logger.BB = INFO, MainLog
## Logger MainLog
log4perl.appender.MainLog = Log::Log4perl::Appender::File
log4perl.appender.MainLog.filename = log4perl_main.log
log4perl.appender.MainLog.layout = PatternLayout
log4perl.appender.MainLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
## Logger FatalLog
log4perl.appender.FatalLog = Log::Log4perl::Appender::File
log4perl.appender.FatalLog.filename = log4perl_fatal.log
log4perl.appender.FatalLog.layout = PatternLayout
log4perl.appender.FatalLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
I'm expecting that with this setup the file log4perl_fatal.log will get only FATAL-level events. But here is what I get after running the script:
$ tail -f *log
==> log4perl_fatal.log <==
[2014-04-13 08:41:22] FATAL - AA - fatal
[2014-04-13 08:41:22] INFO - BB - info
==> log4perl_main.log <==
[2014-04-13 08:41:22] INFO - BB - info
Why am I getting an INFO-level event in log4perl_fatal.log?
How can I record only FATAL-level events in a separate file?
PS Here is a GitHub repo with this script & config.
Your conf file has the following line:
log4perl.logger = FATAL, FatalLog
What you need is the following:
log4perl.logger.AA = FATAL, FatalLog
Otherwise, the FatalLog becomes a catch-all for both loggers, instead of isolated to this instance:
my $l_aa = get_logger('AA');
This question is covered in the Log4perl FAQ: https://metacpan.org/pod/Log::Log4perl::FAQ#How-can-I-collect-all-FATAL-messages-in-an-extra-log-file
In the example, log4perl_fatal.log gets INFO-level events because of appender additivity.
To fix it, this line should be added to the config file:
log4perl.appender.FatalLog.Threshold = FATAL
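
For context, the FatalLog appender section from the question with that threshold in place would read as follows (only the Threshold line is new; everything else is unchanged from the original config):

## Logger FatalLog
log4perl.appender.FatalLog = Log::Log4perl::Appender::File
log4perl.appender.FatalLog.filename = log4perl_fatal.log
log4perl.appender.FatalLog.Threshold = FATAL
log4perl.appender.FatalLog.layout = PatternLayout
log4perl.appender.FatalLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n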
Then the output files get the expected output:
$ tail log4perl*log
==> log4perl_fatal.log <==
[2014-05-04 20:00:39] FATAL - AA - fatal
==> log4perl_main.log <==
[2014-05-04 20:00:39] INFO - BB - info