I am able to mount the system partition, but I am not able to mount the root partition. I get an error like this:
1|console:/ # mount -o rw,remount /
[ 3640.420613] EXT4-fs (dm-0): couldn't mount RDWR because of unsupported optional features (4000)
[ 3640.434479] EXT4-fs (dm-0): couldn't mount RDWR because of unsupported optional features (4000)
'/dev/block/dm-0' is read-only
console:/ # [ 3903.028999] WLDEV-ERROR)
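For what it's worth, ro-compat feature flag 0x4000 is the ext4 shared_blocks feature, which Android uses on deduplicated system images and which forces a read-only mount. A hedged sketch of how one might confirm this, assuming tune2fs and e2fsck from e2fsprogs 1.44+ are available and the partition can be taken offline (dm-verity/AVB may also need to be disabled first; verify against your device before running anything):
# Inspect which optional features the filesystem advertises
tune2fs -l /dev/block/dm-0 | grep -i features
# If shared_blocks is listed, un-sharing the blocks rewrites the image so it can be mounted read-write
e2fsck -fy -E unshare_blocks /dev/block/dm-0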
I am trying to install Kubernetes with kubeadm on my laptop, which runs Ubuntu 16.04. I have disabled swap, since kubelet does not work with swap on. The command I used is:
swapoff -a
I also commented out the reference to swap in /etc/fstab.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none swap sw 0 0
I confirmed swap is turned off by running the following:
free -m
total used free shared buff/cache available
Mem: 15936 2108 9433 954 4394 12465
Swap: 0 0 0
When I start kubeadm, I get the following error:
kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I also tried restarting my laptop, but I get the same error. What could the reason be?
Below was the root cause:
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
You need to update the Docker cgroup driver. Follow the fix below:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
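To confirm the change actually took effect after the restart (just a sanity check; output wording may vary slightly by Docker version):
docker info | grep -i 'cgroup driver'
# Expected once the new daemon.json is in effect: Cgroup Driver: systemd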
You could try kubeadm reset, then kubeadm init --ignore-preflight-errors Swap.
First, try with sudo:
sudo swapoff -a
Then check if there's anything swapped:
cat /proc/swaps
and
free -h
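Putting the answers above together, a hedged end-to-end sequence (the sed pattern assumes a standard fstab layout with " swap " in the entry; adjust it to your file):
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # keep swap off across reboots by commenting out the fstab entry
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16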
I am trying to deploy a k8s cluster in OpenStack Rocky, but after a long time it fails. I've checked the orchestration stack and see that the kube_minions resource never completes. Checking the log output for all the created instances:
[ 196.817505] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 215.082433] random: crng init done
Fedora 27 (Atomic Host)
Kernel 4.14.18-300.fc27.x86_64 on an x86_64 (ttyS0)
host-10-0-0-3 login: [ 691.438618] bridge: filtering via
arp/ip/ip6tables is no longer available by default. Update your scripts
to load br_netfilter if you need this.
[ 691.516277] Bridge firewalling registered
[ 692.149217] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 701.932912] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Digging deeper into the instances, I've found that the master node cannot start the heat-container-agent service:
_prefix=docker.io/openstackmagnum/
atomic install --storage ostree --system --system-package no --set
REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-
agent docker.io/openstackmagnum/heat-container-agent:rocky-stable
systemctl start heat-container-agent
Failed to start heat-container-agent.service: Unit heat-container-
agent.service not found.
2019-04-04 14:57:40,238 - util.py[WARNING]: Failed running
/var/lib/cloud/instance/scripts/part-013 [5]
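If it helps narrow this down, a hedged diagnostic sketch for the master node (the unit name is taken from the output above; the cloud-init log path is an assumption about a standard Fedora Atomic/cloud-init setup):
# Was a systemd unit ever generated by the atomic install step?
systemctl list-unit-files | grep -i heat
# Anything the unit itself logged, if it exists at all
journalctl -u heat-container-agent --no-pager | tail -n 50
# What cloud-init recorded while running part-013
grep -i heat /var/log/cloud-init-output.log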
My job ID is:
2019-02-01_06_50_27-10838491598599390366
This is a Dataflow batch job created from a template. Here is the Dataflow error log:
2019-02-01 23:51:02.647 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.659 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.699 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.699 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.700 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.710 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:02.937 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:03.387 JST
EXT4-fs (dm-0): couldn't mount as ext3 due to feature incompatibilities
2019-02-01 23:51:10.509 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.511 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
{
insertId: "s=51b724ba020b4384acc382e634e62cbc;i=568;b=879cba75f5cd4eff82751e8f30ef312b;m=9a91b9;t=580d6461241e4;x=6549465094b7bc54"
jsonPayload: {…}
labels: {…}
logName: "projects/fluted-airline-109810/logs/dataflow.googleapis.com%2Fkubelet"
receiveTimestamp: "2019-02-01T14:51:18.883283433Z"
resource: {…}
severity: "ERROR"
timestamp: "2019-02-01T14:51:10.511494Z"
}
2019-02-01 23:51:10.560 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.577 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.580 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.608 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.645 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.646 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.694 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.749 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.751 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.775 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.777 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.785 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.809 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.811 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.816 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.857 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:10.929 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:10.931 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:10.966 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:11.214 JST
Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
2019-02-01 23:51:11.216 JST
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
2019-02-01 23:51:11.254 JST
[ContainerManager]: Fail to get rootfs information unable to find data for container /
2019-02-01 23:51:15.619 JST
PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
2019-02-01 23:51:15.793 JST
PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
2019-02-01 23:51:15.974 JST
PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
2019-02-01 23:51:16.264 JST
PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Is the gs:// bucket accessible by the service account for this job?
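In case it helps, a hedged way to check both sides of that question (BUCKET_NAME is a placeholder, not a value from this job; add --region if the job was not launched in the default region):
# Show the IAM bindings on the bucket and look for the Dataflow worker service account
gsutil iam get gs://BUCKET_NAME
# Inspect the job's metadata, including its current state and environment
gcloud dataflow jobs describe 2019-02-01_06_50_27-10838491598599390366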
Hi, I'm trying to resize a disk for a pod in my Kubernetes cluster, following the steps in the docs. I SSH into the instance node to follow the steps, but it gives me an error:
sudo growpart /dev/sdb 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/sdb
/dev/sdb: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8)
if this is unexpected, in order to avoid possible collisions
sfdisk: failed to dump partition table: Success
FAILED: failed to dump sfdisk info for /dev/sdb
I tried running the commands from inside the pod, but it doesn't even locate the disk, even though it's there:
root@rc-test-r2cfg:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 59G 2.5G 56G 5% /
/dev/sdb 49G 22G 25G 47% /var/lib/postgresql/data
root@rc-test-r2cfg:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 96G 0 disk /var/lib/postgresql/data
sda 8:0 0 60G 0 disk
└─sda1 8:1 0 60G 0 part /etc/hosts
root@rc-test-r2cfg:/# growpart /dev/sdb 1
FAILED: /dev/sdb: does not exist
where /dev/sdb is the disk location.
This can now easily be done by updating the storage specification of the PersistentVolumeClaim directly. See these posts for reference:
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
https://dev.to/bzon/resizing-persistent-volumes-in-kubernetes-like-magic-4f96 (GKE example)
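As a hedged illustration of what those posts describe (my-pvc, standard, and 100Gi are placeholder names/values, not ones from this question):
# The StorageClass backing the PVC must allow expansion
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
# Then raise the request on the PVC; the provisioner grows the disk and the filesystem is resized (a pod restart may be needed on older clusters)
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'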
I'm attempting a whole-planet OSM data import on an AWS EC2 instance. During, or possibly after, the "Ways" processing I receive the following message:
"Failed to read from node cache: Input/output error"
The EC2 has the following specs:
type: i3.xlarge
memory: 30.5 GB
vCPUs: 4
PostgreSQL: v9.5.6
PostGIS: 2.2
In addition to the root volume, I have mounted a 900 GB SSD and a 2 TB HDD (high throughput). The PostgreSQL data directory is on the HDD. I have configured osm2pgsql to write the flat-nodes file to the SSD.
Here is my osm2pgsql command:
osm2pgsql -c -d gis --number-processes 4 --slim -C 20000 --flat-nodes /data-cache/flat-node-cache/flat.nodes /data-postgres/planet-latest.osm.pbf
I run the above command as user renderaccount, which is a member of the following groups: renderaccount, ubuntu, postgres. The flat-nodes file appears to be created successfully at /data-cache/flat-node-cache/flat.nodes and has this profile:
ubuntu@ip-172-31-25-230:/data-cache/flat-node-cache$ ls -l
total 37281800
-rw------- 1 renderaccount renderaccount 38176555024 Apr 13 05:45 flat.nodes
Has anyone run into and/or resolved this? I suspect it may be a permissions issue? I notice that since this last osm2pgsql failure, the mounted SSD that is the destination of the flat-nodes file has been remounted as a "read-only" file system, which sounds like something that may happen when there are I/O errors on the mounted volume(?).
Also, does osm2pgsql write to a log from which I could acquire additional info?
UPDATE: dmesg output:
[ 6206.884412] blk_update_request: I/O error, dev nvme0n1, sector 66250752
[ 6206.890813] EXT4-fs warning (device nvme0n1): ext4_end_bio:329: I/O error -5 writing to inode 14024706 (offset 10871640064 size 8388608 starting block 8281600)
[ 6206.890817] Buffer I/O error on device nvme0n1, logical block 8281344
After researching the above output, it appears this might be a bug in Ubuntu 16.04: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1668129?comments=all
This was an error with Ubuntu 16.04 writing to the nvme0n1 volume. It was solved by the fix described here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1668129/comments/29
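For reference, a hedged sketch of the kind of workaround discussed in that bug thread, assuming it is the commonly recommended nvme_core.io_timeout kernel parameter for NVMe instance volumes on EC2 (verify the exact parameter and value against the linked comment before applying):
# Check the current NVMe I/O timeout
cat /sys/module/nvme_core/parameters/io_timeout
# Raise it at boot by adding the parameter to GRUB, then rebuild the config and reboot
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&nvme_core.io_timeout=4294967295 /' /etc/default/grub
sudo update-grub
sudo reboot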