I have built the core-image-selinux image and flashed it onto a Raspberry Pi 3.
root@raspberrypi3:~# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: requested (insecure)
Max kernel policy version: 33
root@raspberrypi3:~# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
When I try to run my application, I get the following error:
type=SELINUX_ERR msg=audit(1661751160.368:117): op=security_compute_sid invalid_context="unconfined_u:unconfined_r:myapp_t:s0-s0:c0.c1023" scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=system_u:object_r:myapp_exec_t:s0 tclass=process
My policy code:
$ cat userapp.te
policy_module(userapp, 1.0.0)
require {
type unconfined_t;
class process transition;
}
type myapp_t;
type myapp_exec_t;
domain_type(myapp_t)
domain_entry_file(myapp_t, myapp_exec_t)
type_transition unconfined_t myapp_exec_t : process myapp_t;
$ cat userapp.fc
/usr/bin/userapp -- gen_context(system_u:object_r:myapp_exec_t,s0)
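For reference, the invalid_context in the audit message means the computed context unconfined_u:unconfined_r:myapp_t:s0-s0:c0.c1023 is being rejected, which usually indicates that the role unconfined_r is not associated with the new type myapp_t. As a rough sketch (reference-policy module syntax; adjust to your own policy), userapp.te would gain something like:

require {
    type unconfined_t;
    role unconfined_r;
    class process transition;
}

# authorize the unconfined_r role to operate in the myapp_t domain
role unconfined_r types myapp_t;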
After running ddev start I cannot run Magento commands from outside of the container.
% ddev magento
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/mnt/ddev_config/.global_commands/web/magento": permission denied: unknown
Failed to run magento : exit status 126
The above-mentioned path does exist inside the container.
ddev exec magento works.
ddev composer works.
name: myproject
type: magento2
docroot: pub
php_version: "7.4"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.3"
mysql_version: ""
use_dns_when_possible: true
composer_version: ""
web_environment: []
You don't mention your environment but I imagine you're on macOS with Docker and have enabled experimental settings. Please turn them off... they don't really work right yet. See macOS DDEV drush command Permission denied (Experimental docker settings)
I am trying to set up an LXC container (Debian) as a Kubernetes node.
I have gotten far enough that the only thing in the way is the kubeadm init script...
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.4.44-2-pve/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.4.44-2-pve\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
After some research I figured out that I probably need to add the following: linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
But adding this to /etc/pve/lxc/107.conf doesn't do anything.
Does anybody have a clue how to add the linux kernel modules?
To allow loading any module with modprobe inside a privileged Proxmox LXC container, you need to add these options to the container config:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
lxc.mount.entry: /lib/modules lib/modules none bind 0 0
Before that, you must first create the /lib/modules folder inside the container.
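Put together, a sketch of the additions for the container from the question (assuming its config is /etc/pve/lxc/107.conf, as stated above):

# run inside the container first
mkdir -p /lib/modules

# then append to /etc/pve/lxc/107.conf on the Proxmox host
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
lxc.mount.entry: /lib/modules lib/modules none bind 0 0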
I'm not sure what guide you are following but assuming that you have the required kernel modules on the host, this would do it:
lxc config set my-container linux.kernel_modules overlay
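Or, using the full module list mentioned in the question:

lxc config set my-container linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay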
You can follow this guide from K3s too. Basically:
lxc config edit k3s-lxc
and
config:
linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
raw.lxc: lxc.mount.auto=proc:rw sys:rw
security.privileged: "true"
security.nesting: "true"
✌️
To fix ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file, run from the host:
pct set $VMID --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
To fix [ERROR SystemVerification]: failed to parse kernel config, run from the host:
pct push $VMID /boot/config-$(uname -r) /boot/config-$(uname -r)
Where $VMID is your container id.
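For example, for the container from this question (VMID 107), that would be:

pct set 107 --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
pct push 107 /boot/config-$(uname -r) /boot/config-$(uname -r)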
The above error occurred when installing Kubernetes using Kubespray.
The installation fails, and through journalctl -xe I see the following:
node1 systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Dec 09 23:37:01 node1 dockerd[8296]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: lo
Dec 09 23:37:01 node1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 09 23:37:01 node1 systemd[1]: Failed to start Docker Application Container Engine.
How do I troubleshoot and fix the issue? Is there something that I am missing?
The JSON file is as follows:
[root@k8s-master01 kubespray]# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
The docker.yml file is as follows:
cat inventory/sample/group_vars/all/docker.yml
---
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
## Enable docker_container_storage_setup, it will configure devicemapper driver on Centos7 or RedHat7.
docker_container_storage_setup: false
## It must be define a disk path for docker_container_storage_setup_devs.
## Otherwise docker-storage-setup will be executed incorrectly.
# docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
docker_dns_servers_strict: false
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"
# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"
# define docker bin_dir
docker_bin_dir: "/usr/bin"
# keep docker packages after installation; speeds up repeated ansible provisioning runs when '1'
# kubespray deletes the docker package on each run, so caching the package makes sense
docker_rpm_keepcache: 0
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define 172.19.16.11 or mirror.registry.io
# docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registry,example China registry mirror.
# docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
## If non-empty will override default system MountFlags value.
## This option takes a mount propagation flag: shared, slave
## or private, which control whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
# docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
docker_options: >-
The setup.cfg file is as below:
[root@k8s-master01 kubespray]# cat setup.cfg
[metadata]
name = kubespray
summary = Ansible modules for installing Kubernetes
description-file =
README.md
author = Kubespray
author-email = smainklh@gmail.com
license = Apache License (2.0)
home-page = https://github.com/kubernetes-sigs/kubespray
classifier =
License :: OSI Approved :: Apache Software License
Development Status :: 4 - Beta
Intended Audience :: Developers
Intended Audience :: System Administrators
Intended Audience :: Information Technology
Topic :: Utilities
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
data_files =
usr/share/kubespray/playbooks/ =
cluster.yml
upgrade-cluster.yml
scale.yml
reset.yml
remove-node.yml
extra_playbooks/upgrade-only-k8s.yml
usr/share/kubespray/roles = roles/*
usr/share/kubespray/library = library/*
usr/share/doc/kubespray/ =
LICENSE
README.md
usr/share/doc/kubespray/inventory/ =
inventory/sample/inventory.ini
etc/kubespray/ =
ansible.cfg
etc/kubespray/inventory/sample/group_vars/ =
inventory/sample/group_vars/etcd.yml
etc/kubespray/inventory/sample/group_vars/all/ =
inventory/sample/group_vars/all/all.yml
inventory/sample/group_vars/all/azure.yml
inventory/sample/group_vars/all/coreos.yml
inventory/sample/group_vars/all/docker.yml
inventory/sample/group_vars/all/oci.yml
inventory/sample/group_vars/all/openstack.yml
[wheel]
universal = 1
[pbr]
skip_authors = True
skip_changelog = True
[bdist_rpm]
group = "System Environment/Libraries"
requires =
ansible
python-jinja2
python-netaddr
Take a look at the storage driver you have defined in the daemon.json file:
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
At the same time, in the docker.yml file you didn't enable the storage driver options:
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
Please uncomment the docker_storage_options: -s overlay2 line.
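With that change, the relevant block of inventory/sample/group_vars/all/docker.yml would read:

## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
docker_storage_options: -s overlay2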
Also make sure you have followed every step from this tutorial.
I am trying to deploy the Zalenium Helm chart in my newly deployed AKS Kubernetes (1.9.6) cluster in Azure, but I can't get it to work. The pod is giving the log below:
[bram#xforce zalenium]$ kubectl logs -f zalenium-zalenium-hub-6bbd86ff78-m25t2 Kubernetes service account found. Copying files for Dashboard... cp: cannot create regular file '/home/seluser/videos/index.html': Permission denied cp: cannot create directory '/home/seluser/videos/css': Permission denied cp: cannot create directory '/home/seluser/videos/js': Permission denied Starting Nginx reverse proxy... Starting Selenium Hub... ..........08:49:14.052 [main] INFO o.o.grid.selenium.GridLauncherV3 - Selenium build info: version: '3.12.0', revision: 'unknown' 08:49:14.120 [main] INFO o.o.grid.selenium.GridLauncherV3 - Launching Selenium Grid hub on port 4445 ...08:49:15.125 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - Initialising Kubernetes support ..08:49:15.650 [main] WARN d.z.e.z.c.k.KubernetesContainerClient - Error initialising Kubernetes support. io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [zalenium-zalenium-hub-6bbd86ff78-m25t2] in namespace: [default] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62) at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:206) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:162) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.(KubernetesContainerClient.java:87) at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:35) at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22) at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.(DockeredSeleniumStarter.java:59) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:74) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:62) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.lang.Class.newInstance(Class.java:442) at org.openqa.grid.web.Hub.(Hub.java:93) at org.openqa.grid.selenium.GridLauncherV3$2.launch(GridLauncherV3.java:291) at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:122) at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:82) Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified: certificate: sha256/OyzkRILuc6LAX4YnMAIGrRKLmVnDgLRvCasxGXDhSoc= DN: CN=client, O=system:masters subjectAltNames: [10.0.0.1] at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:308) at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:268) at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:160) at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:256) at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:134) at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:113) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) at 
okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:56) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:107) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) at okhttp3.RealCall.execute(RealCall.java:77) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:313) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:296) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:770) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:195) ... 16 common frames omitted 08:49:15.651 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - About to clean up any left over selenium pods created by Zalenium Usage: [options] Options: --debug, -debug : enables LogLevel.FINE. Default: false --version, -version Displays the version and exits. Default: false -browserTimeout in seconds : number of seconds a browser session is allowed to hang while a WebDriver command is running (example: driver.get(url)). If the timeout is reached while a WebDriver command is still processing, the session will quit. Minimum value is 60. An unspecified, zero, or negative value means wait indefinitely. -matcher, -capabilityMatcher class name : a class implementing the CapabilityMatcher interface. Specifies the logic the hub will follow to define whether a request can be assigned to a node. For example, if you want to have the matching process use regular expressions instead of exact match when specifying browser version. ALL nodes of a grid ecosystem would then use the same capabilityMatcher, as defined here. -cleanUpCycle in ms : specifies how often the hub will poll running proxies for timed-out (i.e. hung) threads. Must also specify "timeout" option -custom : comma separated key=value pairs for custom grid extensions. NOT RECOMMENDED -- may be deprecated in a future revision. Example: -custom myParamA=Value1,myParamB=Value2 -host IP or hostname : usually determined automatically. 
Most commonly useful in exotic network configurations (e.g. network with VPN) Default: 0.0.0.0 -hubConfig filename: a JSON file (following grid2 format), which defines the hub properties -jettyThreads, -jettyMaxThreads : max number of threads for Jetty. An unspecified, zero, or negative value means the Jetty default value (200) will be used. -log filename : the filename to use for logging. If omitted, will log to STDOUT -maxSession max number of tests that can run at the same time on the node, irrespective of the browser used -newSessionWaitTimeout in ms : The time after which a new test waiting for a node to become available will time out. When that happens, the test will throw an exception before attempting to start a browser. An unspecified, zero, or negative value means wait indefinitely. Default: 600000 -port : the port number the server will use. Default: 4445 -prioritizer class name : a class implementing the Prioritizer interface. Specify a custom Prioritizer if you want to sort the order in which new session requests are processed when there is a queue. Default to null ( no priority = FIFO ) -registry class name : a class implementing the GridRegistry interface. Specifies the registry the hub will use. Default: de.zalando.ep.zalenium.registry.ZaleniumRegistry -role options are [hub], [node], or [standalone]. Default: hub -servlet, -servlets : list of extra servlets the grid (hub or node) will make available. Specify multiple on the command line: -servlet tld.company.ServletA -servlet tld.company.ServletB. The servlet must exist in the path: /grid/admin/ServletA /grid/admin/ServletB -timeout, -sessionTimeout in seconds : Specifies the timeout before the server automatically kills a session that hasn't had any activity in the last X seconds. The test slot will then be released for another test to use. This is typically used to take care of client crashes. For grid hub/node roles, cleanUpCycle must also be set. -throwOnCapabilityNotPresent true or false : If true, the hub will reject all test requests if no compatible proxy is currently registered. If set to false, the request will queue until a node supporting the capability is registered with the grid. -withoutServlet, -withoutServlets : list of default (hub or node) servlets to disable. Advanced use cases only. Not all default servlets can be disabled. Specify multiple on the command line: -withoutServlet tld.company.ServletA -withoutServlet tld.company.ServletB org.openqa.grid.common.exception.GridConfigurationException: Error creating class with de.zalando.ep.zalenium.registry.ZaleniumRegistry : null at org.openqa.grid.web.Hub.(Hub.java:97) at org.openqa.grid.selenium.GridLauncherV3$2.launch(GridLauncherV3.java:291) at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:122) at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:82) Caused by: java.lang.ExceptionInInitializerError at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:74) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:62) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.lang.Class.newInstance(Class.java:442) at org.openqa.grid.web.Hub.(Hub.java:93) ... 
3 more Caused by: java.lang.NullPointerException at java.util.TreeMap.putAll(TreeMap.java:313) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:411) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:48) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.deleteSeleniumPods(KubernetesContainerClient.java:393) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.initialiseContainerEnvironment(KubernetesContainerClient.java:339) at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:38) at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22) at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.(DockeredSeleniumStarter.java:59) ... 11 more ...........................................................................................................................................................................................GridLauncher failed to start after 1 minute, failing... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 182 100 182 0 0 36103 0 --:--:-- --:--:-- --:--:-- 45500
A describe pod gives:
Warning Unhealthy 4m (x12 over 6m) kubelet, aks-agentpool-93668098-0 Readiness probe failed: HTTP probe failed with statuscode: 502
Zalenium Image Version(s):
dosel/zalenium:3
If using Kubernetes, specify your environment, and if relevant your manifests:
I use the templates as is from https://github.com/zalando/zalenium/tree/master/docs/k8s/helm
I guess it has something to do with RBAC because of this part:
"Error initialising Kubernetes support. io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [zalenium-zalenium-hub-6bbd86ff78-m25t2] in namespace: [default] failed. at "
I created a clusterrole and clusterrolebinding for the service account zalenium-zalenium that is automatically created by the Helm chart.
kubectl create clusterrole zalenium --verb=get,list,watch,update,delete,create,patch --resource=pods,deployments,secrets
kubectl create clusterrolebinding zalenium --clusterrole=zalnium --serviceaccount=zalenium-zalenium --namespace=default
The issue had to do with Azure's AKS and Kubernetes. It has been fixed.
See GitHub issue 399.
If the typo mentioned by Ignacio is not the issue, why don't you just do
kubectl create clusterrolebinding zalenium --clusterrole=cluster-admin --serviceaccount=default:zalenium-zalenium
Note: there is no need to specify a namespace when creating a cluster role binding, as it is cluster-wide (a role binding, by contrast, is namespaced).
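To verify the binding took effect, a quick check with kubectl's built-in authorization query (assuming the default namespace used above) can help:

kubectl auth can-i get pods --as=system:serviceaccount:default:zalenium-zalenium -n default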
What's wrong in my playbook that it stops journald from logging?
/var/log/messages is not updated any more!
I can reproduce the issue with this simple, isolated playbook:
- name: Reproduce journald hang
  hosts: test
  vars:
    - keys:
        - kp_XXX.pub
        - kp_YYY.pub
  tasks:
    - name: Verify journalctl before
      command: journalctl --verify
    - name: Add SSH keys
      authorized_key: user=cloud key="{{ item }}"
      with_file: keys
    - name: Verify journalctl after
      command: journalctl --verify
The key XXX will make journald hang, but not the YYY one.
Reproduced on CentOS 7.2 with Ansible 1.9.4.
File kp_XXX.pub:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCLd9k03Hvf3QVL8+dYd1KZY9p1ju/RkxHr+t6l6YbMcMfYcLHW6lsNIw2aLC7qpRopQPe/prQZkbXQBy8sYzNUcVtohPTD/V6wX7RXDCiVME9uUztY96Wust1Uc4Z28DhWyC55WFKhetGzfyxK+hMrtORnzdruo/bxHKmGu3rT5HYquB8SlPN/cSG/7itwy6QkXsqzmQUbEvaLPZNwU7qd9LiySFxsbhI2vJz+FiBS+CzkoTKOSZt60I0jRs4wIjXOZjQApcgddGa2ls3vq5HH39Xdr66+PnRU/rrRpaMTrcOTLPzzeWUQoF8VbkSiDXsI8ds+M842DKAT0DFVXnR kp_XXX
File kp_YYY.pub:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQBM9IuDxubRnbFh1e1dvFSKE91vrME5h/nQMsZo1Bmt8FXIQ7wJdNh+ANLYyQA7Q0tiXD1n97QQ9r89iwHFEUZVSXc7VM01AE27N45ybfLmLwtNm+kny6ncoPy7+MHcOQS9Ra56u6Bi6xXUc7vM4pL2iB/m0GnUSmECZZ5EVuOpMeJltf04/+PldQGOqxp9BzVF8XKEPlW5uc6UBesPCoHpR9lGA5UIIYq1sUDIGBy3T7FEXu8KhiHrtb1wuDGJU62SqR/fxgJDypjAtedm41TFcTZOMqTR29KYCKC4OjRaUTu7kf4rWq7/HWJViK2NLaeoy9xyG1BUUTqrGY6qRo85 kp_YYY
Although both keys are added, the third task fails and journald hangs:
TASK: [Verify journalctl after] ***********************************************
failed: [test] => {"changed": true, "cmd": ["journalctl", "--verify"], "delta": "0:00:00.008351", "end": "2016-03-31 12:49:58.669585", "rc": 1, "start": "2016-03-31 12:49:58.661234", "warnings": []}
stderr: 248668: invalid object
File corruption detected at /run/log/journal/cf9b563caf2bc11cab56d6a504ff6a29/system.journal:248668 (of 8388608 bytes, 28%).
FAIL: /run/log/journal/cf9b563caf2bc11cab56d6a504ff6a29/system.journal (Cannot assign requested address)
dmesg logs plenty of these lines:
[ 761.806277] systemd-journald[366]: Failed to write entry (27 items, 719 bytes), ignoring: Cannot assign requested address
[ 761.807514] systemd-journald[366]: Failed to write entry (23 items, 633 bytes), ignoring: Cannot assign requested address
[ 761.859245] systemd-journald[366]: Failed to write entry (24 items, 718 bytes), ignoring: Cannot assign requested address
When verifying journald files with journalctl --verify:
248668: invalid object
File corruption detected at /run/log/journal/cf9b563caf2bc11cab56d6a504ff6a29/system.journal:248668 (of 8388608 bytes, 28%).
FAIL: /run/log/journal/cf9b563caf2bc11cab56d6a504ff6a29/system.journal (Cannot assign requested address)
Is this an error on my part, in Ansible, or in CentOS?
How can it be fixed?
Found these 2 related links:
https://access.redhat.com/solutions/2117311?tour=6#comments
Claudio's blog
open the console!
get to be root
examine the output of "journalctl -xn" and "dmesg"
if you get this or something similar
[ 12.713167] systemd-journald[113]: Failed to write entry (9 items, 245 bytes), ignoring: Cannot assign requested address
verify the journals with "journalctl --verify"
check if the logs are within sizes "journalctl --disk-usage"
By then you have an idea of the corrupted logs: they show up in red, no way you can miss them.
Create a new folder and move these files into it.
Re-run journalctl --verify to make sure.
reboot
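As a rough command sequence for the steps above (the journal path is taken from the question; the holding folder name is just an example):

journalctl --verify
journalctl --disk-usage
mkdir /var/log/journal-corrupt
mv /run/log/journal/cf9b563caf2bc11cab56d6a504ff6a29/system.journal /var/log/journal-corrupt/
journalctl --verify
reboot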