How can I list files in an exFAT file system from an image file?

I have created a disk image of a USB stick with an exFAT partition using FTK Imager. I want to make a list of all folders and files on the image.
My exFAT partition starts at LBA=6146048 with a sector size of 512 bytes, which gives offset = 6146048 * 512 = 3146776576. The boot sector layout described at https://www.ntfs.com/exfat-boot-sector.htm places the RootDirectoryCluster field at offset 96 (0x60), 4 bytes long.
[Screenshot: exFAT Main Boot Region, with the RootDirectoryCluster field highlighted in red]
So from here I get this:
firstClusterOfRootDirectory = 4 (stored little-endian as bytes 04 00 00 00, i.e. big-endian 0x00000004)
However, when I look at the offset 4 clusters past the partition start, I only see zeros. My offset is then (6146048 * 512) + (4 * 512) = 3146778624.

How to convert an OS bin file (with a custom bootloader) to an ISO file which can be burnt to a CD or booted from USB?

I have finished writing my simple operating system and I want to test it on real hardware (a PC), not Bochs or QEMU. My OS has a custom bootloader and a kernel, and I used cat to concatenate them into one single bin file. I have spent hours trying to convert the bin file to a bootable ISO file, and failed each time. According to OSDev.org, I think I need to use genisoimage (mkisofs) for the conversion, but I don't know exactly how this command works. I finally produced an ISO file, but it doesn't boot. (I think I used the wrong command; can someone explain a little more to me?)
Other approaches I tried:
Directly burning the bin file to a CD. Error: Missing Operating System.
Converting the bin file to an ISO using winbin2iso and other Windows software. Error: could not boot, not even in QEMU.
Also, what is El-Torito?
When a CD is booted, the firmware checks the CD for a "boot catalogue" in the CD's metadata. This is a list of entries, with (up to) one entry for each kind of computer, so that it's possible to create a single CD that works for 80x86 BIOS and 80x86 UEFI (and PowerPC and Sparc and ...).
For 80x86 there are 4 different types of entries:
UEFI. The rest of the entry tells the firmware the starting sector and size of a FAT file system image. UEFI figures out which file it wants from the FAT file system based on converting the architecture into a file name (e.g. for 80x86 it'll probably want the file \EFI\BOOT\BOOTX64.EFI).
"No emulation, 80x86 BIOS". The rest of the entry tells the firmware the starting sector and size of a boot loader. The boot loader can be any size you like (up to about 639 KiB); and the "device number" the BIOS tells you (in dl) will be for the CD drive itself, so you can load more stuff from the same disk using it.
"Hard disk emulation, 80x86 BIOS". The rest of the entry tells the firmware the starting sector and size of a disk image. The disk image should have an MBR with a BPB and partition/s, with an active partition pointing to where the operating system's boot loader is at the start of its partition. In this case the BIOS will create a fake "device 0x80" (from info in the BPB in the MBR) and mess up the device numbers for any real hard drives (e.g. the first hard drive which would've been "device 0x80" will become "device 0x81" instead, etc). Note that this is inefficient because sectors on a CD are 2048 bytes but the BIOS will emulate 512 byte sectors, so every time you try to read a 512 byte sector the BIOS will actually read 2048 bytes and throw the "wrong" 75% of the data away. It's also very annoying (trying to squeeze anything good in 512 bytes is impossible). It's mostly only used for obsolete junk (e.g. MS-DOS).
"Floppy disk emulation, 80x86 BIOS". The rest of the entry tells the firmware the starting sector and size of a disk image. The disk image should have a boot loader in the first sector with a BPB. In this case the BIOS will create a fake "device 0x00" (from info in the BPB in the MBR) and mess up the device numbers for any real floppy drives. Just like hard disk emulation, this is inefficient, even more "very annoying" (because it also limits the OS to the size of a floppy which is nowhere near enough space), and only intended for obsolete junk.
The best way to deal with CDs is to write 2 new boot loaders (one for "no emulation, 80x86 BIOS" and another for UEFI); then use any tool you like to create the appropriate structures on the CD (e.g. genisoimage with the -no-emul-boot option for your "no emulation, 80x86 BIOS" boot loader plus some other option for UEFI that doesn't seem to exist!?).
Note that it's easy to write your own utility that is "more clever". You'd want to create a FAT file system (for UEFI) and an ISO9660 file system (for the rest of the operating system's files - help/docs, drivers, etc), but most tools won't create the FAT file system for you; and it's possible for files in both file systems (FAT and ISO9660) to use the same sectors, so that the same files appear in both file systems without costing twice as much disk space. Something like this would probably only take you a week to write yourself (and you'll learn a lot about CDs and ISO9660 that you're going to have to learn eventually anyway). The relevant documentation (for booting from CD, ISO9660 file systems, FAT file systems, and UEFI) is all easily obtained online.
Also, what is El-Torito?
El-Torito is another name for the "Bootable CD-ROM Specification" that describes the structures needed on a CD to make it bootable.

Firestore: Finding large files or directories

I have a few hundred folders in my Firebase Storage bucket, each containing a couple of small images. However, my storage size is almost 80 GB!
Is there a way to find the culprit files or folders? I can't seem to find a way to view folder sizes or get a list of the top largest files without causing a huge number of reads.
Using the gsutil tool, you could try the du command:
gsutil du -sh YOUR_BUCKET/YOUR_DIRECTORY
The -s flag will give you only the total size of the directory; if you remove it, you will also see the size of each file inside.
The -h flag prints object sizes in a human-readable format (e.g. 1 KiB, 234 MiB, 2 GiB, etc.).
You can then find out which files are the biggest.

How to speed up write using MMC in U-Boot?

I'm trying to use U-Boot to copy a big (2 GiB) image from the network to the SD card. This image is a filesystem; hence I'm using the mmc subsystem.
I created many chunks of this image, 64 MiB each, so the process goes like this:
1. Download the next chunk using TFTP
2. Write it to the SD card in sub-chunks using mmc
3. Go to step 1
The problem is that writing to the SD card is really slow: it takes several minutes for a chunk of 4 MiB. I have tried different sizes and it is all the same -- pretty slow.
I'm using a Raspberry Pi 2 and Samsung micro SD cards (Class 10).
The command I use for writing is like:
mmc write 0x1600000 0xFF000 0x02
For me, this means: take 0x02 blocks of 512 bytes from memory address 0x1600000 and write them to the SD card starting at block 0xFF000.
Am I using the wrong command? Is there a way to speed up the process? Or is the U-Boot driver just slow?
Note: last night I copied an image of 1.3 GiB; it took 16 hours.
Edit:
Git repository git://git.denx.de/u-boot.git
commit ae765f3a8243faa39d4a32ba2baede638e40c768
Compilation:
make rpi_2_defconfig
make all
As of this writing, the current release of U-Boot (v2016.03) has the dcache disabled on the RPi 2, so things are in fact just slow. There are currently patches being reviewed which would enable the dcache and speed this up. At least one more version of these patches is expected, due to a problem with the LCD one, but more testers are welcome and encouraged. You can get the current series (v2) here:
https://patchwork.ozlabs.org/project/uboot/list/?submitter=1212&state=7&q=v2&delegate=3651 -- note that patch 0/5 is the fix for the LCD issue, which is why I'm expecting a clean v3 to be submitted.
I am hopeful that the changes will be able to be merged for the v2016.05 release.

Matlab not able to read in large file?

I have a data file (6.3GB) that I'm attempting to work on in MATLAB, but I'm unable to get it to load, and I think it may be a memory issue. I've tried loading in a smaller "sample" file (39MB) and that seems to work, but my actual file won't load at all. Here's my code:
filename = 'C://Users/Andrew/Documents/filename.mat';
load(filename);
??? Error using ==> load
Can't read file C://Users/Andrew/Documents/filename.mat.
EDU>> exist(filename)
ans = 2
Well, at least the file exists (exist returns 2 for a file on the path). When I check the memory...
memory
Maximum possible array: 2046 MB (2.146e+009 bytes) *
Memory available for all arrays: 3442 MB (3.609e+009 bytes) **
Memory used by MATLAB: 296 MB (3.103e+008 bytes)
Physical Memory (RAM): 8175 MB (8.572e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
So since I have enough RAM, do I need to increase the maximum possible array size? If so, how can I do that without adding more RAM?
System specifics: I'm running 64-bit Windows, 8GB of RAM, MATLAB Version 7.10.0.499 (R2010a). I think I can't update to a newer version since I'm on a student license.
As the size might be the issue, you could try load('fileName.mat', 'var1'); load('fileName.mat', 'var2'); etc. For this, you'll have to know the variable names, though (see the sketch below for discovering them).
An option would be to use the matfile object to load/index directly into the file instead of loading it into RAM.
doc matfile
But one limitation is that you cannot index directly into a struct. So you would need a friend with a machine big enough to load the file, pull the variables out of the struct, and save it with the version option:
save(filename, variables, '-v7.3')
Maybe you can load your data part by part, loading only some of the variables from the mat file at a time. You must have MATLAB 7.3 or newer.
From your file path I can see you are using Windows. MATLAB is only 32-bit for Windows and Linux (there is no 64-bit version for these OSes, at least for older releases; please see my edit), which means you are limited to <4 GB of RAM total for a single application, no matter how much your system has. This is a 32-bit application issue, so there is nothing you can do to remedy it. Interestingly, the Mac version is 64-bit and can use as much RAM as you want (in my computer vision class we often used my Mac for our big video projects, because the Windows machines would just say "out of memory").
As you can see from your memory output, you can only have ~3.4 GB total for matrix storage, which is far less than the 6.3 GB file. You'll also notice you can only use ~2 GB for one particular matrix (that number changes as you use more memory).
Typically when working with large files you can read the file line by line rather than loading it entirely into memory, but since this is a .mat file that likely won't work. If the file contains multiple variables, maybe separate each into its own individual file that is small enough to load.
The take-home message is that you can't read the entire file at once unless you hop onto a Mac with enough RAM; and even then, the limit for a single matrix is still likely less than 6.3 GB.
EDIT
Current MATLAB student versions can be purchased in 64-bit for all OSes as of 2014 (see here), so a newer release of MATLAB might allow you to read the entire file at once. I should also add that there were 64-bit versions before 2014, just not for the student license.

How to check if there is enough free space inside directory on Linux

I want to check the available space in directory A using stat. Then I want to check the size of directory B using du, and if directory A has enough free space, I want to copy B into A.
The question is what arguments I need to pass to stat and du so that they return their output in the same units (inodes, bytes, etc.).
On Linux a directory doesn't have space of its own: there is no limit on how many files a directory can contain, and free space is a property of the filesystem the directory lives on. This can all be found in the Linux manpages.
If the device that A is on is different from the one B is on, you may be curious about how much space is left on A's device. For that you use:
stat --file-system A B