How can I find out the cluster size of an exfat partition? - exfat

How can I find out the cluster size of an exfat partition?
It appears that fsutil only has a command for NTFS partitions.

This is how I do it on my Ubuntu system with the exfat-utils package installed.
$ sudo dumpexfat /dev/sdb1
Volume label
Volume serial number 0xb631210e
FS version 1.0
Sector size 512
Cluster size 32768
Sectors count 1953523120
Free sectors 1953276800
Clusters count 30520069
Free clusters 30519950
First sector 0
FAT first sector 128
FAT sectors count 238528
First cluster sector 238656
Root directory cluster 120
Volume state 0x0002
FATs count 1
Drive number 0x80
Allocated space 0%
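If your distribution ships exfatprogs instead of the older exfat-utils, the corresponding tool should be dump.exfat (an assumption on my part; check which package you actually have installed):
$ sudo dump.exfat /dev/sdb1
Either way, the reported values are consistent: a cluster size of 32768 bytes at a sector size of 512 bytes means 64 sectors per cluster.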

Related

CentOS Partition Resize

I'm struggling with resizing a CentOS partition on a server. I found some steps, but I'm not sure which circumstances I face or what the correct approach is, and I definitely cannot afford to mess this up.
The space should already be available, but the partition is not resized as far as I can tell.
The goal is to extend the partition /dev/sdb1 from 197 GB to 1 TB.
Below are the "lsblk", "df -h" and "fdisk -l" outputs, which show my current situation.
[ ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 3.7G 0 part [SWAP]
└─sda3 8:3 0 45.3G 0 part /
sdb 8:16 0 1T 0 disk
└─sdb1 8:17 0 1024G 0 part /var/www/vhosts
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 50G 0 part /var/lib/psa
sr0 11:0 1 680M 0 rom
[ ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 12M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda3 45G 7.0G 36G 17% /
/dev/sda1 976M 135M 775M 15% /boot
/dev/sdc1 50G 53M 47G 1% /var/lib/psa
/dev/sdb1 197G 126G 62G 68% /var/www/vhosts
tmpfs 1.6G 0 1.6G 0% /run/user/0
[ ~]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009c4b4
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 9910271 3905536 82 Linux swap / Solaris
/dev/sda3 9910272 104855551 47472640 83 Linux
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8e948ef1
Device Boot Start End Blocks Id System
/dev/sdb1 2048 2147483647 1073740800 83 Linux
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7677284e
Device Boot Start End Blocks Id System
/dev/sdc1 2048 104857599 52427776 83 Linux
I found this answer on an external page, but I'm not familiar with the commands and cannot tell if that's the right way to go (if allowed, I can paste the URL). The partition paths have not been updated to match mine.
There are three steps to make:
alter your partition table so sda2 ends at end of disk
reread the partition table (will require a reboot)
resize your LVM pv using pvresize
Step 1 - Partition table: Run fdisk /dev/sda. Issue p to print your current partition table and copy that
output to some safe place. Now issue d followed by 2 to remove the
second partition. Issue n to create a new second partition. Make sure
the start equals the start of the partition table you printed earlier.
Make sure the end is at the end of the disk (usually the default).
Issue t followed by 2 followed by 8e to toggle the partition type of
your new second partition to 8e (Linux LVM).
Issue p to review your new partition layout and make sure the start of
the new second partition is exactly where the old second partition
was.
If everything looks right, issue w to write the partition table to
disk. You will get an error message from partprobe that the partition
table couldn't be reread (because the disk is in use).
Step 2 - Reboot your system: This step is necessary so the partition table gets
re-read.
Step 3 - Resize the LVM PV: After your system has rebooted, invoke pvresize
/dev/sda2. Your physical LVM volume will now span the rest of the
drive and you can create or extend logical volumes into that space.
The question is: is that the right way to increase the partition size on a CentOS system without losing any data?
Thank you
As you can see, the partition
sdb 8:16 0 1T 0 disk
└─sdb1 8:17 0 1024G 0 part /var/www/vhosts
is already 1 TB, so you only need to extend the filesystem. If your filesystem is ext4, you can grow it with resize2fs (note that resize2fs takes the block device, not the mount point):
resize2fs /dev/sdb1
If your filesystem is XFS, use the mount point instead:
xfs_growfs /var/www/vhosts
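If you are not sure which filesystem is on the partition, check it first, e.g.:
$ df -T /var/www/vhosts
$ lsblk -f /dev/sdb1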

PostgreSQL cannot archive WAL after it grows beyond max_wal_size

I have deployed PostgreSQL (v13) using the Crunchy Data Kubernetes operator.
Currently max_wal_size is 1GB, which I found using:
DB=# show max_wal_size;
max_wal_size
--------------
1GB
(1 row)
But the current /pgdata/pg13_wal size is 4.9 GiB. Why can't PostgreSQL archive the WAL and reduce its size?
Logs
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
wal_level setting: logical
wal_log_hints setting: on
max_connections setting: 100
max_worker_processes setting: 8
max_wal_senders setting: 10
max_prepared_xacts setting: 0
max_locks_per_xact setting: 64
track_commit_timestamp setting: off
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float8 argument passing: by value
Data page checksum version: 1
max_wal_size is the maximum size to let the WAL grow during automatic checkpoints. This is a soft limit; WAL size can exceed max_wal_size under special circumstances, such as heavy load, a failing archive_command, or a high wal_keep_size setting. If this value is specified without units, it is taken as megabytes. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. This parameter can only be set in the postgresql.conf file or on the server command line.
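A failing archive_command (or an unconsumed replication slot) is the usual reason pg_wal keeps growing well past max_wal_size, so it is worth checking both. A minimal check, assuming you can reach the database with psql (in a Crunchy Data/Kubernetes setup you would run this from the appropriate pod):
$ psql -c "SELECT archived_count, failed_count, last_failed_wal, last_failed_time FROM pg_stat_archiver;"
$ psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
If failed_count keeps climbing or an inactive slot is pinning an old restart_lsn, that explains why the WAL directory is larger than max_wal_size.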

Rescue partition in Yocto image

I need to create an image using Yocto that includes a rescue partition. That is, another root partition which is selectable in the grub menu.
I am currently creating an image which I dd to our target Intel board's internal SSD chip after booting from USB. This is working, but I now need to duplicate the current root partition as a "rescue" partition in case something ever goes wrong with the root partition.
So currently I have a working wks (kickstart) file with 5 partitions defined as:
part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --ondisk sda --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext4 --label root --align 1024 --use-uuid --extra-space 1024
part /rescue --source rootfs --ondisk sda --fstype=ext4 --label rescue --align 1024 --use-uuid --extra-space 1024
part swap --ondisk sda --size 1024 --label swap --fstype=swap
part /home --ondisk sda --fstype=ext4 --label home --align 1024 --use-uuid --size=1024
bootloader --configfile="grub.cfg"
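(As a sanity check, the partition table that wic actually wrote can be inspected directly on the generated image; the path below is a placeholder for the real build output:)
$ fdisk -l tmp/deploy/images/<machine>/<image>.wic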
The grub configuration has two menu options and I am able to boot from the default first menu option.
menuentry 'boot'{
linux /bzImage root=/dev/sda2 3 console=ttyS2,115200n8 rootfstype=ext4
}
menuentry 'rescue'{
linux /bzImage root=/dev/sda3 3 console=ttyS2,115200n8 rootfstype=ext4
}
The grub configuration selects the second or the third partition as root; this relies on the fixed order of partitions in the wks file.
My problem is that if I select the 'rescue' partition in grub, the system starts to boot but the Linux kernel fails to find a root partition. Here are the last kernel messages before the kernel panic:
:<snip>
:
ata1.00: ATA-8: 32GB NANDrive, D A431F4, max UDMA/133
ata1.00: 62533296 sectors, multi 0: LBA48
ata1.00: configured for UDMA/133
scsi 0:0:0:0: Direct-Access ATA 32GB NANDrive 31F4 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 62533296 512-byte logical blocks: (32.0 GB/29.8 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: Attached scsi generic sg0 type 0
sda: sda1 sda2 sda3 sda4 < sda5 sda6 >
:
:<snip>
:
List of all partitions:
0100 16384 ram0
(driver?)
0101 16384 ram1
(driver?)
0102 16384 ram2
(driver?)
0103 16384 ram3
(driver?)
0800 31266648 sda
driver: sd
0801 24571 sda1 9f293dfb-01
0802 4151722 sda2 9f293dfb-02
0803 4151722 sda3 9f293dfb-03
0804 1 sda4
0805 1048576 sda5 9f293dfb-05
0806 1048576 sda6 9f293dfb-06
No filesystem could mount root, tried:
ext4
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,3)
:
:
I suspect Yocto does something when building the image that I need to control, but I am unable to find the source of this problem.
Any help is appreciated.

Partition's size in df -h is totally different than the size in /proc/partitions [closed]

I'm using Buildroot to build a custom Linux system for my Raspberry Pi A+.
Using genimage, I've created two partitions on a 1 GB SD card. The first partition is the boot partition; it's vfat and it is 32 MB. The second partition is ext4, it is the rootfs and it is 512 MB.
Once I boot the Raspberry Pi with the newly burned SD card and type df -h, I get this output:
Filesystem Size Used Available Use% Mounted on
/dev/root 17.1M 14.0M 1.8M 89% /
devtmpfs 200.6M 0 200.6M 0% /dev
tmpfs 200.7M 0 200.7M 0% /dev/shm
tmpfs 200.7M 0 200.7M 0% /tmp
tmpfs 200.7M 4.0K 200.7M 0% /run
As you can see, /dev/root is 17.1 MB instead of 512 MB.
Then, I issue cat /proc/partitions:
major minor #blocks name
1 0 4096 ram0
1 1 4096 ram1
1 2 4096 ram2
1 3 4096 ram3
1 4 4096 ram4
1 5 4096 ram5
1 6 4096 ram6
1 7 4096 ram7
1 8 4096 ram8
1 9 4096 ram9
1 10 4096 ram10
1 11 4096 ram11
1 12 4096 ram12
1 13 4096 ram13
1 14 4096 ram14
1 15 4096 ram15
179 0 969728 mmcblk0
179 1 32768 mmcblk0p1
179 2 524288 mmcblk0p2
We clearly see that the SD card (mmcblk0) is 1 GB, the boot partition (mmcblk0p1) is 32 MB and the rootfs partition (mmcblk0p2) is 512 MB.
So, to convince myself that the mmcblk0p2 partition may have been improperly mounted, I mount it again with mount -t ext4 -o rw /dev/mmcblk0p2 /mnt and then I issue df -h again:
Filesystem Size Used Available Use% Mounted on
/dev/root 17.1M 14.0M 1.8M 89% /
devtmpfs 200.6M 0 200.6M 0% /dev
tmpfs 200.7M 0 200.7M 0% /dev/shm
tmpfs 200.7M 0 200.7M 0% /tmp
tmpfs 200.7M 4.0K 200.7M 0% /run
/dev/mmcblk0p2 17.1M 14.0M 1.8M 89% /mnt
Again, I see that mmcblk0p2 size is 17.1 MB.
So, my question is: why does cat /proc/partitions report the expected size for my rootfs partition while df -h reports a totally different and bogus size?
TL;DR: set BR2_TARGET_ROOTFS_EXT2_BLOCKS to 524288.
You have to distinguish the partition from the filesystem on the partition.
The partition sizes and offsets are specified in the partition table, and you can view them with cat /proc/partitions. Partitions are created with a tool like fdisk (or, when you're using Buildroot, they are often created by genimage).
The filesystem size is specified in the filesystem superblock, a piece of metadata that specifies the size of the filesystem, any options (e.g. if journalling is used), cluster sizes, etc. This is created by a tool like mke2fs. When you use mke2fs directly on a partition, it will use the full space of the partition for the filesystem, which is typically what you want. However, when you create the filesystem before partitioning the SD card (as is often the case when you generate an image with e.g. Buildroot), you have to specify the size to mke2fs (cfr. the man page: the second argument is blocks-count).
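On the running system you can see both numbers side by side: the partition size comes from the partition table, while the filesystem size comes from the ext4 superblock (dumpe2fs is part of e2fsprogs and may not be present in a minimal Buildroot image):
$ cat /proc/partitions
$ dumpe2fs -h /dev/mmcblk0p2 | grep -iE 'block (count|size)'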
In Buildroot, you typically create an image as a file and don't write directly to the SD card. That is because the size of the SD card is not known a priori, and because you have to be root to be able to write the SD card. Therefore, there is no way for Buildroot to know how large the ext4 filesystem should be when you create the filesystem. Before the 2017.05 release of Buildroot, it would try to estimate how large the filesystem should be to fit everything, and create a filesystem of exactly that size. You are probably in that situation.
To fix this, you should set the configuration variable BR2_TARGET_ROOTFS_EXT2_BLOCKS to 524288 (= 512MB in 1024-byte blocks). Or if you use Buildroot more recent than the 2017.05 release, set BR2_TARGET_ROOTFS_EXT2_SIZE to 512M (the new option is in bytes but allows suffixes K, M, G).
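As a one-off workaround on a card that has already been written (not a substitute for fixing the Buildroot configuration), you can also grow the existing filesystem to fill its partition; ext4 supports online growing, so this should work even while the partition is mounted:
$ resize2fs /dev/mmcblk0p2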

Two PostgreSQL servers with the same configuration, different performance

I have two identical servers; on both, PostgreSQL server version 9.0.4 is installed with the same configuration. If I launch a .sql file that performs about 5k inserts, on the first one it takes a couple of seconds, on the second one it takes 1 minute and 30 seconds.
If I turn synchronous_commit off, the execution time drops dramatically (as expected), and the performance of the two servers is comparable. But with synchronous_commit on, the script's execution time increases by less than a second on one server, while on the other it increases enormously, as described above.
Any idea about this difference in performance? Am I missing some configuration?
Update: tried a simple disk test: time sh -c "dd if=/dev/zero of=ddfile bs=8k count=200000 && sync"
fast server output:
1638400000 bytes (1.6 GB) copied, 1.73537 seconds, 944 MB/s
real 0m32.009s
user 0m0.018s
sys 0m2.298s
slow server output:
1638400000 bytes (1.6 GB) copied, 4.85727 s, 337 MB/s
real 0m35.045s
user 0m0.019s
sys 0m2.221s
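(Aside: a plain dd like this largely measures the page cache, which is why the fast server reports 944 MB/s while the surrounding time shows 32 seconds, presumably spent in the trailing sync. Adding conv=fdatasync makes dd flush the data before reporting, which gives a more comparable figure:)
$ time dd if=/dev/zero of=ddfile bs=8k count=200000 conv=fdatasync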
Common features (both servers):
SATA, RAID1, controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller, distribution: Linux CentOS. mount -v output:
/dev/md2 on / type ext3 (rw)
proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw)
fast server: kernel 2.6.18-238.9.1.el5 #1 SMP
Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 3906 4209029 2102562 fd Linux raid autodetect
/dev/sda2 4209030 4739174 265072+ fd Linux raid autodetect
/dev/sda3 4739175 1465144064 730202445 fd Linux raid autodetect
slow server: kernel 2.6.32-71.29.1.el6.x86_64 #1 SMP
Disk /dev/sda: 750.2 GB, 750156374016 bytes
64 heads, 32 sectors/track, 715404 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006ffc4
Device Boot Start End Blocks Id System
/dev/sda1 2048 4194303 2096128 fd Linux raid autodetect
/dev/sda2 4194304 5242879 524288 fd Linux raid autodetect
/dev/sda3 5242880 1465147391 729952256 fd Linux raid autodetect
Could this information be useful for addressing the performance issue?
I suppose your slow server with the newer kernel has working write barriers. This is good, as otherwise you could lose data in case of a power failure. But it is of course slower than running with the write cache enabled and without barriers, aka running with scissors.
You can check whether barriers are enabled with mount -v; look for barrier=1 in the output. You can disable barriers for your filesystem (mount -o remount,barrier=0 /) to speed things up, but then you risk data corruption.
Try to do your 5k inserts in one transaction; Postgres then won't have to write to disk on every row inserted. The theoretical limit for the number of transactions per second would be comparable to the disk's rotational speed (a 7200 rpm disk ≈ 7200/60 = 120 tps), as a disk can only write to a given sector once per rotation.
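For example, if the 5k inserts are in a plain .sql file fed to psql, wrapping them in a single transaction only takes psql's --single-transaction switch (the file and database names here are made up):
$ psql --single-transaction -f inserts.sql mydb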
To me this sounds like the "fast" server has a write cache enabled for the hard disk(s), whereas on the slow server the hard disk(s) are really writing the data when PG writes it (by calling fsync).
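One way to check whether the drive write cache differs between the two machines is hdparm on the SATA disks shown in the fdisk output above; -W without a value just reports the current setting:
$ hdparm -W /dev/sda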