Is there a standard for flash memory's real capacity?

For example, a 16 GB USB flash drive's real capacity usually comes out to roughly 14.8~15.4 GiB, and the exact figure differs between manufacturers and models.
Given that, how can I estimate the real minimum capacity (in GiB or sectors)?
Is there a standard, or at least a de facto one?

Hard-drive (and flash-drive) capacities have always been advertised in gigabytes (10^9 bytes) rather than gibibytes (2^30 bytes, i.e. 1024 mebibytes). That accounts for the roughly 7.4% size difference when comparing a gigabyte with a gibibyte.
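A minimal Python sketch of that unit conversion (the 16 GB figure is just the example from the question):

```python
# Convert an advertised (decimal) capacity into binary GiB.
advertised_gb = 16                     # marketing figure: 1 GB = 10**9 bytes
total_bytes = advertised_gb * 10**9
gib = total_bytes / 2**30              # 1 GiB = 2**30 bytes

print(f"{advertised_gb} GB = {gib:.2f} GiB")   # 16 GB = 14.90 GiB
print(f"1 GiB / 1 GB = {2**30 / 10**9:.3f}")   # ~1.074, i.e. a GiB is ~7.4% larger
```

The remaining gap down to figures like 14.8 GiB typically comes from controller-reserved blocks and filesystem overhead, which differ between vendors and models.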

How far is it safe to regularly write to the ESP32 flash, considering the flash MTBF?

What would be the best practice for writing to the flash on a regular basis?
Considering the hardware I am working on is supposed to have a 10 to 20 year lifespan, what would be your recommendation? For example, is it OK if I write some state variables every 15 minutes through Preferences?
That depends on:
- the number of erase cycles your Flash supports,
- the size of the NVS partition where you store data, and
- the size and structure of the data that you store.
Erase cycles mean how many times a single sector of Flash can be erased before it's no longer guaranteed to work. The number is found in the datasheet of the Flash chip that you use. It's usually 10K or 100K.
The Preferences library uses the ESP-IDF NVS library. This requires an NVS partition to store data, the size of which determines how many Flash sectors get reserved for this purpose. Every time you store a value, NVS writes the data together with its own overhead (a total of 32 bytes for primitive data types like ints and floats, more for strings and blobs) into the current Flash sector. When the current sector is full, it erases the next sector and proceeds to write there, thereby using up sectors in a round-robin fashion as write requests come in.
If we assume that your Flash has 100K erase cycles, your NVS partition is 128 KiB and you store a set of 8 primitive values (any int or float) every 15 minutes:
- Each store operation uses 8 * 32 = 256 bytes (32 B per data value).
- You can repeat that operation 131072 / 256 = 512 times before you've written to every sector of your 128 KiB NVS partition (i.e. erased every sector once).
- You can repeat that cycle 100K times, so you can do 512 * 100000 = 51200000, or roughly 51.2M, store operations before you've erased every sector its permitted maximum number of times.
- Since the 15-minute interval produces 365 * 24 * 4 = 35040 operations per year, you'd have 51200000 / 35040 ≈ 1461 years until the Flash is dead.
Obviously, if your Flash chip is rated at 10K erase cycles, it drops to only 146 years.
There's probably some NVS overhead in there somewhere that I didn't account for, and the Flash erase cycle ratings are not 100% reliable so I'd cut it in half for good measure - I would expect 700 or 70 years in real life.
If you store non-primitive values (strings, blobs) then the estimate changes based on the length of that data. I don't know how to calculate the exact Flash space used by those, but I'd guess roughly 32 B plus the length of your data, with about 10% of NVS overhead on top. Plug in the numbers and see for yourself.
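For reference, here is a small Python sketch of the lifetime arithmetic above, using the same assumed numbers (swap in the constants for your own chip and partition):

```python
# Rough flash lifetime estimate for periodic writes through NVS,
# using the example assumptions from the answer above.
ERASE_CYCLES    = 100_000      # per-sector rating from the flash datasheet (10K or 100K)
NVS_PARTITION   = 128 * 1024   # NVS partition size in bytes
ENTRY_SIZE      = 32           # bytes per primitive value, including NVS overhead
VALUES_PER_OP   = 8            # values stored per write operation
WRITES_PER_HOUR = 4            # one write every 15 minutes

bytes_per_op = VALUES_PER_OP * ENTRY_SIZE      # 256 B per store operation
ops_per_pass = NVS_PARTITION // bytes_per_op   # 512 writes fill the partition once
total_ops    = ops_per_pass * ERASE_CYCLES     # ~51.2M writes before wear-out
ops_per_year = 365 * 24 * WRITES_PER_HOUR      # 35040 writes per year

print(f"~{total_ops / ops_per_year:,.0f} years until the partition wears out")  # ~1,461
```

Halve the result (as suggested above) if you want a safety margin, and rerun with 10_000 erase cycles for the cheaper chips.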

If the system supports a 64-bit word, how many words can be stored in the RAM? Also, how many bits will be required to address each word uniquely?

a) Suppose there is a RAM of size 4 GB installed in an Intel Core i5 computer. If the system supports a 64-bit word, then how many words can be stored in the RAM? Also, how many bits will be required to address each word uniquely?
There are a lot of variables to this question. How big are the words? How much RAM does Microsoft let Word use before it starts swapping data in a pagefile? Is any compression used?
If we assume that Word has access to all 4 GB of RAM (the OS and Word aren't using any in this situation), that there is no pagefile to dump to, and no compression going on in the background, we can do some simple math to find out. We know that typically a single ASCII character takes up 1 byte of memory, and that there are 1024 bytes in a KB, 1024 KB in a MB, and so on. So, we just multiply until we get to 4 GB worth of bytes: 1024 bytes × 1024 KB × 1024 MB × 4 GB = 4,294,967,296 ASCII characters.
Coincidentally, that's the maximum number of possible 32-bit addresses, and why 32-bit systems were unable to support more than 4 GB of RAM without Physical Address Extension (PAE). With so little RAM, it doesn't matter that the program is 64-bit; it just means it can handle addresses and integers larger than that if the RAM is available.
I hope that helps you in some way! I'm kind of concerned that you are having to worry about the maximum number of words that can be stored in 4GB.
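Taking the question literally (64-bit machine words rather than Microsoft Word), here is a quick sketch of the arithmetic it is actually asking for:

```python
# How many 64-bit (8-byte) words fit in 4 GiB of RAM,
# and how many address bits are needed to index each word uniquely.
ram_bytes  = 4 * 2**30                 # 4 GiB
word_bytes = 64 // 8                   # 8 bytes per 64-bit word

num_words = ram_bytes // word_bytes            # 536,870,912 = 2**29 words
addr_bits = (num_words - 1).bit_length()       # 29 bits for word addressing

print(num_words, addr_bits)                    # 536870912 29
```

If the memory is byte-addressable instead, you are back to the 32 bits implied by the 4,294,967,296 figure above.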

For a 2GBytes memory, suppose its memory width is 8 bits: what is the address space of the memory?

For a 2GBytes memory, suppose its memory width is 8 bits….
what is the address space of the memory?
what is the address width of the memory?
I’m not looking for the answer to the question; I’m just trying to understand the process of how to get there.
EDIT: all instances of Gb replaced with GB
The address space is the same as the memory size. This was not true (for example) in 32-bit operating systems that had more than 2^32 bytes of memory installed. Since the number of bits used for addressing is not specified in your question, one can only assume it to be sufficient to address the installed memory. To contrast, while you could install more than 4 GB in a 32-bit system, you couldn't access more than 4 GB, since (2^32)-1 is the location of the last byte you could access. Bear in mind that this address space must include video memory and any/all BIOSes in the system. This meant that in 32-bit Windows XP, Microsoft limited the amount of user-accessible memory to a figure considerably less than 4 GB.
Since the memory width is 8 bits, each address will point to 1 byte. Since you've got 2 GB, you need enough address bits to be able to point to any one of those bytes.
Spoiler:
Your address space is 2 GB and you need 31-bit-wide addresses to use it all.
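The same reasoning as a tiny Python check (2 GiB of byte-addressable memory, since the width is 8 bits):

```python
import math

# Address width for a byte-addressable memory: enough bits to name every byte.
mem_bytes = 2 * 2**30                        # 2 GiB, one byte per address
addr_bits = math.ceil(math.log2(mem_bytes))  # 31

print(addr_bits)   # 31 -> addresses run from 0 to 2**31 - 1
```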

Minimum memory for RAR decompression

What's the minimum memory needed to run a RAR decompression algorithm?
I want to port a RAR decompression algorithm to mobiles (iPhone, Android and BlackBerry) and want to know if there's a bare minimum of memory needed before starting. I've heard that RAR decompression requires much more memory than ZIP decompression.
Quite a lot. The maximum size of the dictionary is 4 MB, but at least the official unrar library (which is built from the same source as WinRAR) takes over 24 MB in some decompression algorithms.
(As to the last statement: note that t is at least 1 MB (uint t=SASize << 20;), but it can be more because SASize may be more than 1.)
Can't give you a concrete number, but I remember using WinRAR back in 2001 on my PocketPC with only 64 MB of RAM, about half of which was shared for storage -- so I'm pretty sure a modern phone should suffice.
There are plenty of comic viewers on the iPhone that support .cbr, so it's doable.

How big can a memory-mapped file be?

What limits the size of a memory-mapped file? I know it can't be bigger than the largest continuous chunk of unallocated address space, and that there should be enough free disk space. But are there other limits?
You're being too conservative: A memory-mapped file can be larger than the address space. The view of the memory-mapped file is limited by OS memory constraints, but that's only the part of the file you're looking at at one time. (And I guess technically you could map multiple views of discontinuous parts of the file at once, so aside from overhead and page length constraints, it's only the total # of bytes you're looking at that poses a limit. You could look at bytes [0 to 1024] and bytes [2^40 to 2^40 + 1024] with two separate views.)
In MS Windows, look at the MapViewOfFile function. It effectively takes a 64-bit file offset and a 32-bit length.
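The same idea sketched in Python with the standard mmap module (the file name and offset below are made-up example values): map a small window at a large offset rather than the whole file at once.

```python
import mmap
import os

FILENAME = "big.bin"                   # hypothetical large file
WINDOW   = 1024 * 1024                 # map 1 MiB at a time
# Offsets must be a multiple of the allocation granularity (64 KiB on Windows).
OFFSET   = 1024 * mmap.ALLOCATIONGRANULARITY

with open(FILENAME, "rb") as f:
    remaining = os.path.getsize(FILENAME) - OFFSET
    length = min(WINDOW, remaining)    # don't map past the end of the file
    with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ, offset=OFFSET) as view:
        print(view[:16])               # first 16 bytes of that window
```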
This has been my experience when using memory-mapped files under Win32:
If you map the entire file into one segment, it normally taps out at around 750 MB, because it can't find a bigger contiguous block of memory. If you split it up into smaller segments, say 100 MB each, you can get around 1500 MB to 1800 MB depending on what else is running.
If you use the /3GB switch you can get more than 2 GB, up to about 2700 MB, but OS performance is penalized.
I'm not sure about 64-bit, I've never tried it but I presume the max file size is then limited only by the amount of physical memory you have.
Under Windows: "The size of a file view is limited to the largest available contiguous block of unreserved virtual memory. This is at most 2 GB minus the virtual memory already reserved by the process. "
From MSDN.
I'm not sure about Linux/OS X/whatever else, but it's probably also related to address space.
Yes, there are limits to memory-mapped files. The most surprising one:
Memory-mapped files cannot be larger than 2GB on 32-bit systems.
When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes.
Even on my 64-bit, 32GB RAM system, I get the following error if I try to read in one big numpy memory-mapped file instead of taking portions of it using byte-offsets:
OverflowError: memory mapped size must be positive
Big datasets are really a pain to work with.
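For what it's worth, the byte-offset approach mentioned above looks roughly like this with numpy.memmap (the file name, dtype and chunk sizes are made-up examples):

```python
import numpy as np

# Map only a window of a huge binary file instead of the whole thing.
FILENAME    = "huge_array.dat"         # hypothetical file of raw float32 values
DTYPE       = np.float32
CHUNK_ELEMS = 1_000_000                # elements per window
CHUNK_INDEX = 5                        # which window of the file to look at

offset = CHUNK_INDEX * CHUNK_ELEMS * np.dtype(DTYPE).itemsize
chunk = np.memmap(FILENAME, dtype=DTYPE, mode="r",
                  offset=offset, shape=(CHUNK_ELEMS,))
print(chunk.mean())                    # work on just this slice of the file
```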
The limit of the virtual address space is more than 16 terabytes on 64-bit Windows systems. The issue discussed here is most probably related to mixing DWORD with SIZE_T.
There should be no other limits. Aren't those enough? ;-)