Why are there junk "files" in my FAT32 SD root dir?

I have been building a program to access files on a FAT32 micro SD card. However, when I run my program or open the card's root directory in a hex viewer, I can see my files, but there is also a bunch of junk mixed in. Why is that?

Directory data is organized in 32-byte records. This is nice, because any sector holds exactly 16 records, and no directory record will ever cross a sector boundary. There are four types of 32-byte directory records:
• Normal record with short filename - Attrib is normal
• Long filename text - Attrib has all four type bits set
• Unused - first byte is 0xE5
• End of directory - first byte is zero
Unused directory records are a result of deleting files. The first byte is overwritten
with 0xE5, and later when a new file is created it can be reused. At the end of the directory is a record that begins with zero. All other records will be non-zero in their
first byte, so this is an easy way to determine when you have reached the end of the
directory.
Records that do not begin with 0xE5 or zero are actual directory data, and the format
can be determined by checking the Attrib byte. For now, we are only going to be
concerned with the normal directory records that have the old 8.3 short filename
format. In FAT32, all files and subdirectories have short names, even if the user gave
the file a longer name, so you can access all files without needing to decode the long
filename records (as long as your code simply ignores them).
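As a sketch of how that plays out in code (Python here purely for illustration; `sector` is assumed to be one 512-byte sector already read from the directory's data region):

ATTR_LONG_NAME = 0x0F  # all four "type" bits set marks a long-filename record

def list_short_names(sector):
    names = []
    for offset in range(0, 512, 32):
        record = sector[offset:offset + 32]
        first = record[0]
        if first == 0x00:          # end of directory: stop scanning
            break
        if first == 0xE5:          # unused (deleted) record: skip it
            continue
        attrib = record[11]        # the Attrib byte
        if (attrib & ATTR_LONG_NAME) == ATTR_LONG_NAME:
            continue               # long-filename text: safe to ignore
        # Normal record: bytes 0-7 hold the name, bytes 8-10 the extension.
        name = record[0:8].decode("ascii", "replace").rstrip()
        ext = record[8:11].decode("ascii", "replace").rstrip()
        names.append(name + "." + ext if ext else name)
    return names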

Related

How to hash a filename down to a small number or digit for output processing

I am not a Perl programmer, but I've inherited existing code that goes to a directory, finds all files in that folder and its subfolders (usually JPG or Office files), and then converts them into a single file used to load into a SQL Server database. The customer has about 500,000 of these files.
It takes about 45 mins to create the file and then another 45 mins for SQL to load the data. Crudely, it's doing about 150 files per second, which is reasonable, but time is the issue for the job. There are many reasons I don't want to use other techniques, so please don't suggest other options unless they are closely aligned to this process.
What I was considering is to improve speed by running something like 10 processes concurrently. Each process would be passed a different argument (0-9). Each process would go to the directory and find all files as it currently does, but for each file found it would hash or kludge the filename down to a single digit (0-9), and if that matched the supplied argument, the process would process that file and write it out to its own unique file stream.
Then I would have 10 output files at the end. I doubt that the SQL Server side could be improved as I would have to load to separate tables and then merge in the database and as these are BLOB objects, will not be fast.
So I am looking for some basic code or clues on what functions to use in Perl to take a variable (the file name $File) and generate a single 0 to 9 value based on it. It could probably be done by getting the ASCII value of each char and adding these together to get a long number, then adding that number's individual digits together, and so on until you eventually get a single-digit answer.
Any clues or suggested techniques?
Here's an easy one to implement, suggested in the unpack function documentation:
sub string_to_code {
    # convert an arbitrary string to a digit from 0-9
    my ($string) = @_;
    return unpack("%32W*", $string) % 10;
}
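For what it's worth, the %32 prefix in that template asks unpack for a 32-bit checksum of all the character values in the string (that's what W* covers), so taking it modulo 10 gives the same 0-9 digit for a given filename every time, which is exactly what you need to split the work across 10 processes.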

Where are Filenames and other File Properties stored in OS's?

I was wondering some time ago: where are filenames and modification dates stored in an operating system?
For instance, when you create a text file in Windows and give it a name, then look at its binary form using a tool like Frhed, there won't be anything there besides the text content.
Is there a folder with all files names and dates?
Supposing your friend sends you a text file, how do you get the filename (and other file properties) in your computer?
A complete description of what you are asking cannot be covered in a single SO answer; if you really want to understand the details, I suggest you pick a good operating system book and read the file management section.
A very simple and general description is as follows.
At the very basic level the operating system (file system to be specific) will use two types of data structures to store your file.
• Data structure to store information related to file (meta data)
• Data structure to store the actual data of the file that you see (text, image, sound)
In the UNIX world the first data structure is called an inode; it contains information related to the file such as owner, permissions, time created, time modified, size, and pointers to the data blocks that store the actual data of the file.
Every file has its own inode which contains the data associated with that file. Note that the inode doesn't contain the actual file data; the actual file data is stored in data blocks.
So in summary for every file you create, operating system will create a data structure which will contain all the related data.
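To see for yourself that this metadata lives apart from the file's contents, you can ask the operating system for it directly. A small Python illustration (the filename "example.txt" is just a placeholder):

import os, time

# The file itself contains only its data; the size, timestamps and
# (on UNIX-like systems) the inode number come from file system metadata.
info = os.stat("example.txt")
print("inode:   ", info.st_ino)
print("size:    ", info.st_size, "bytes")
print("modified:", time.ctime(info.st_mtime))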
The operating system stores the attributes of the file on the disk. The actual disk structure depends upon the operating system.
Is there a folder with all files names and dates?
The Windows on-disk structure is NTFS. It has a master file table with information about all the files on the disk.
There are effectively two structures that work cooperatively. Directories define the tree structure holding files. The master file table describes all the files. It is not a folder with all the files but rather an internal data structure. Generally users cannot see the MFT.
If the disk gets hosed, recovery software will go to the master file table. That allows restoring the files but not their location within the directory structure.
Supposing your friend sends you a text file, how do you get the filename (and other file properties) in your computer?
That is something entirely different from the first question. Email messages encode the file name of an attachment. Your mail program uses that name to create a local copy of the file.

Hash Calculation of a single file within disk image

I have got an assignment to calculate the hash of a file from its disk image and then match it against the hash of the plain PDF version. I have calculated the starting and ending addresses of the file from the data section of the FAT32 volume by following the FAT table's linked list. Now, is there any utility or software available to which I can input the disk image file and the starting and ending addresses, and it outputs the hash value of the specified data?
A hex editor's "select block" option worked for me.
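If you'd rather script it than click around in a hex editor, a minimal sketch like the one below does the same thing, assuming start and end are byte offsets into the image (end exclusive):

import hashlib

def hash_range(image_path, start, end, algorithm="md5"):
    # Hash the bytes from offset `start` up to (but not including) `end`.
    h = hashlib.new(algorithm)
    with open(image_path, "rb") as img:
        img.seek(start)
        remaining = end - start
        while remaining > 0:
            chunk = img.read(min(remaining, 1 << 20))  # at most 1 MiB per read
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

# e.g. hash_range("disk.img", 0x4000, 0x5A00)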

Reading the Superblock

I know that in Unix (specifically, Mac OS X) the superblock stores information about the layout of data on the disk, including the disk addresses at which the inodes begin and end. I want to scan the list of inodes in my program to look for deleted files. How can I find the disk address at which the inodes begin? I have looked at the statfs command but it does not provide this information.
Since you mention Mac OS X, let's assume you mean to do this for HFS+ only. The Wikipedia page provides some information about possible ways to start, for instance it says this about the on-disk layout:
Sectors 0 and 1 of the volume are HFS boot blocks. These are identical to the boot blocks in an HFS volume. They are part of the HFS wrapper.
Sector 2 contains the Volume Header equivalent to the Master Directory Block in an HFS volume. The Volume Header stores a wide variety of data about the volume itself, for example the size of allocation blocks, a timestamp that indicates when the volume was created or the location of other volume structures such as the Catalog File or Extent Overflow File. The Volume Header is always located in the same place.
The Allocation File keeps track of which allocation blocks are free and which are in use. It is similar to the Volume Bitmap in HFS: each allocation block is represented by one bit. A zero means the block is free and a one means the block is in use. The main difference with the HFS Volume Bitmap is that the Allocation File is stored as a regular file; it does not occupy a special reserved space near the beginning of the volume. The Allocation File can also change size and does not have to be stored contiguously within a volume.
It becomes more complicated after that. Read up on B* trees, for instance.
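As a concrete starting point, here is a rough sketch that pulls a few fields out of the Volume Header mentioned above. It assumes a raw HFS+ volume or image with the header 1024 bytes in, fields laid out big-endian as in the published HFSPlusVolumeHeader structure, and a device path ("/dev/rdisk1s2" below) that is only an example:

import struct

def read_volume_header(device_path):
    with open(device_path, "rb") as dev:
        dev.seek(1024)              # the Volume Header starts 1024 bytes into the volume
        header = dev.read(512)
    signature = header[0:2]         # b'H+' for HFS+, b'HX' for case-sensitive HFSX
    file_count, folder_count = struct.unpack(">II", header[32:40])
    block_size, total_blocks = struct.unpack(">II", header[40:48])
    return {
        "signature": signature,
        "file_count": file_count,
        "folder_count": folder_count,
        "allocation_block_size": block_size,
        "total_blocks": total_blocks,
    }

# e.g. read_volume_header("/dev/rdisk1s2")  -- requires read access to the raw device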
I'm no Mac OS user, but it would surprise me if there weren't already tools written to scan for deleted files, perhaps some are open source and could provide a more concrete starting point?
You'll have quite some trouble finding deleted files, because there's not much left on the disk to find once a file has been deleted.
If you delete a file on a FAT (or UDF) file system, its directory entry simply gets marked as "deleted", with most of the dir entry still intact.
On HFS volumes, due to their use of B-trees, deleted entries must be removed from the directory, or else searching for items would no longer work efficiently (well, this argument may be a bit weak, but the fact is that deleted entries get removed and overwritten).
So, unless the deletion took place by writing over a directory sector by accident, or by re-initializing the volume, you'll not find much.

How do you deal with lots of small files?

A product that I am working on collects several thousand readings a day and stores them as 64k binary files on an NTFS partition (Windows XP). After a year in production there are over 300,000 files in a single directory, and the number keeps growing. This has made accessing the parent/ancestor directories from Windows Explorer very time consuming.
I have tried turning off the indexing service but that made no difference. I have also contemplated moving the file content into a database/zip files/tarballs but it is beneficial for us to access the files individually; basically, the files are still needed for research purposes and the researchers are not willing to deal with anything else.
Is there a way to optimize NTFS or Windows so that it can work with all these small files?
NTFS actually will perform fine with many more than 10,000 files in a directory as long as you tell it to stop creating alternative file names compatible with 16-bit Windows platforms. By default NTFS automatically creates an '8 dot 3' file name for every file that is created. This becomes a problem when there are many files in a directory because Windows looks at the files in the directory to make sure the name it is creating isn't already in use. You can disable '8 dot 3' naming by setting the NtfsDisable8dot3NameCreation registry value to 1. The value is found in the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem registry path. It is safe to make this change, as '8 dot 3' names are only required by programs written for very old versions of Windows.
A reboot is required before this setting will take effect.
NTFS performance severely degrades after 10,000 files in a directory. What you do is create an additional level in the directory hierarchy, with each subdirectory having 10,000 files.
For what it's worth, this is the approach that the SVN folks took in version 1.5. They used 1,000 files as the default threshold.
The performance issue is being caused by the huge amount of files in a single directory: once you eliminate that, you should be fine. This isn't a NTFS-specific problem: in fact, it's commonly encountered with user home/mail files on large UNIX systems.
One obvious way to resolve this issue, is moving the files to folders with a name based on the file name. Assuming all your files have file names of similar length, e.g. ABCDEFGHI.db, ABCEFGHIJ.db, etc, create a directory structure like this:
ABC\
    DEF\
        ABCDEFGHI.db
    EFG\
        ABCEFGHIJ.db
Using this structure, you can quickly locate a file based on its name. If the file names have variable lengths, pick a maximum length, and prepend zeroes (or any other character) in order to determine the directory the file belongs in.
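A small sketch of that path computation, assuming three-character pieces and a nine-character maximum (both arbitrary choices), with padding applied to shorter names as suggested above:

import os

def nested_path(filename, piece=3, width=9, pad="0"):
    # Map e.g. 'ABCDEFGHI.db' to 'ABC/DEF/ABCDEFGHI.db' ('ABC\DEF\...' on Windows).
    stem, _ = os.path.splitext(filename)
    stem = stem.rjust(width, pad)                        # pad short names to a fixed length
    pieces = [stem[i:i + piece] for i in range(0, width, piece)]
    return os.path.join(*pieces[:-1], filename)          # all pieces but the last become folders

print(nested_path("ABCDEFGHI.db"))    # ABC/DEF/ABCDEFGHI.db
print(nested_path("ABCEFGHIJ.db"))    # ABC/EFG/ABCEFGHIJ.db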
I have seen vast improvements in the past from splitting the files up into a nested hierarchy of directories by, e.g., first then second letter of filename; then each directory does not contain an excessive number of files. Manipulating the whole database is still slow, however.
I have run into this problem lots of times in the past. We tried storing by date, zipping files below the date so you don't have lots of small files, etc. All of them were bandaids to the real problem of storing the data as lots of small files on NTFS.
You can go to ZFS or some other file system that handles small files better, but still stop and ask if you NEED to store the small files.
In our case we eventually went to a system where all of the small files for a certain date were appended in a TAR type of fashion with simple delimiters to parse them. The disk files went from 1.2 million to under a few thousand. They actually loaded faster because NTFS can't handle the small files very well, and the drive was better able to cache a 1MB file anyway. In our case the access and parse time to find the right part of the file was minimal compared to the actual storage and maintenance of stored files.
You could try using something like Solid File System.
This gives you a virtual file system that applications can mount as if it were a physical disk. Your application sees lots of small files, but just one file sits on your hard drive.
http://www.eldos.com/solfsdrv/
If you can calculate the names of files, you might be able to sort them into folders by date, so that each folder only has files for a particular date. You might also want to create month and year hierarchies.
Also, could you move files older than say, a year, to a different (but still accessible) location?
Finally (and again, this requires you to be able to calculate names), you'll find that directly accessing a file is much faster than trying to open it via Explorer. For example, running
notepad.exe "P:\ath\to\your\filen.ame"
from the command line should actually be pretty quick, assuming you know the path of the file you need without having to get a directory listing.
One common trick is to simply create a handful of subdirectories and divvy up the files.
For instance, Doxygen, an automated code documentation program which can produce tons of HTML pages, has an option for creating a two-level-deep directory hierarchy. The files are then evenly distributed across the bottom directories.
Aside from placing the files in sub-directories:
Personally, I would develop an application that keeps the interface to that folder the same, i.e. all files are presented as individual files. Then, in the background, the application would actually take these files and combine them into larger files (and since the sizes are always 64k, getting the data you need should be relatively easy) to get rid of the mess you have.
That way you can still make it easy for them to access the files they want, but it also gives you more control over how everything is structured.
Having hundreds of thousands of files in a single directory will indeed cripple NTFS, and there is not really much you can do about that. You should reconsider storing the data in a more practical format, like one big tarball or in a database.
If you really need a separate file for each reading, you should sort them into several sub directories instead of having all of them in the same directory. You can do this by creating a hierarchy of directories and put the files in different ones depending on the file name. This way you can still store and load your files knowing just the file name.
The method we use is to take the last few letters of the file name, reverse them, and create one-letter directories from them. Consider the following files for example:
1.xml
24.xml
12331.xml
2304252.xml
you can sort them into directories like so:
data/1.xml
data/24.xml
data/1/3/3/12331.xml
data/2/5/2/4/0/2304252.xml
This scheme will ensure that you will never have more than 100 files in each directory.
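A rough sketch of that mapping (to reproduce the examples above, every character of the name after the first two becomes a one-character directory, taken in reverse order):

import os

def reversed_suffix_path(filename, root="data"):
    # Map '12331.xml' to 'data/1/3/3/12331.xml' and '24.xml' to 'data/24.xml'.
    stem, _ = os.path.splitext(filename)
    # Everything after the first two characters becomes a one-character directory,
    # last character first, so at most 100 files end up sharing any leaf directory.
    dirs = list(reversed(stem[2:]))
    return os.path.join(root, *dirs, filename)

for name in ["1.xml", "24.xml", "12331.xml", "2304252.xml"]:
    print(reversed_suffix_path(name))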
Consider pushing them to another server that uses a filesystem friendlier to massive quantities of small files (Solaris w/ZFS for example)?
If there are any meaningful, categorical, aspects of the data you could nest them in a directory tree. I believe the slowdown is due to the number of files in one directory, not the sheer number of files itself.
The most obvious, general grouping is by date, and gives you a three-tiered nesting structure (year, month, day) with a relatively safe bound on the number of files in each leaf directory (1-3k).
Even if you are able to improve the filesystem/file browser performance, it sounds like this is a problem you will run into in another 2 years, or 3 years... just looking at a list of 0.3-1mil files is going to incur a cost, so it may be better in the long-term to find ways to only look at smaller subsets of the files.
Using tools like 'find' (under cygwin, or mingw) can make the presence of the subdirectory tree a non-issue when browsing files.
Rename the folder each day with a time stamp.
If the application is saving the files into C:\Readings, then set up a scheduled task to rename Readings at midnight and create a new empty folder.
Then you will get one folder for each day, each containing several thousand files.
You can extend the method further to group by month. For example, C:\Readings becomes C:\Archive\September\22.
You have to be careful with your timing to ensure you are not trying to rename the folder while the product is saving to it.
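A minimal sketch of what such a scheduled task might run (the C:\Readings and C:\Archive paths are just the ones used above, and this simple version names the archive folder after yesterday's month and day):

import os
from datetime import date, timedelta

readings = r"C:\Readings"
yesterday = date.today() - timedelta(days=1)
archive = os.path.join(r"C:\Archive", yesterday.strftime("%B"), yesterday.strftime("%d"))

# Move yesterday's folder into the archive, then recreate an empty one
# for the product to keep writing into. Run this while the product is idle.
os.makedirs(os.path.dirname(archive), exist_ok=True)
os.rename(readings, archive)
os.makedirs(readings)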
To create a folder structure that will scale to a large unknown number of files, I like the following system:
Split the filename into fixed length pieces, and then create nested folders for each piece except the last.
The advantage of this system is that the depth of the folder structure only grows as deep as the length of the filename. So if your files are automatically generated in a numeric sequence, the structure is only as deep as it needs to be.
12.jpg -> 12.jpg
123.jpg -> 12\123.jpg
123456.jpg -> 12\34\123456.jpg
This approach does mean that folders contain files and sub-folders, but I think it's a reasonable trade off.
And here's a beautiful PowerShell one-liner to get you going!
$s = '123456'
-join (( $s -replace '(..)(?!$)', '$1\' -replace '[^\\]*$','' ), $s )