SD card write limit - data logging - raspberry-pi

I want to track/record when my system (a Raspberry Pi) was shut down, usually due to abrupt power loss.
I want to do it by recording a heartbeat every 10 minutes to an SD card, so every 10 minutes it would write the current time/date to a file on the SD. Would that damage the SD in the long run?
If the card only has 100k write cycles, it would have a bad block within a couple of years. But I've read there is circuitry to prevent this. Would it prevent the bad block? Would it be safer to distribute the log across several blocks?
Thanks
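For reference, this is roughly the heartbeat logger I have in mind, run every 10 minutes from cron or a loop (the path is just a placeholder):

    /* Minimal heartbeat logger sketch; appends one timestamp per run. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        FILE *f = fopen("/home/pi/heartbeat.log", "a");   /* example path only */
        if (f == NULL)
            return 1;

        char stamp[32];
        time_t now = time(NULL);
        strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));
        fprintf(f, "%s\n", stamp);

        fclose(f);   /* fclose() flushes; the last line marks the last heartbeat before power loss */
        return 0;
    }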

The general answer to this question is a strong "it depends". (The practical answer is what you already have: if your file system parameters are not badly chosen, you have a large margin in this case.) It depends on the following:
SD card type (SLC/MLC)
SD card controller (wear levelling)
SD card size
file system
luck
If we take a look at a flash chip, it is organised into sectors. A sector is an area which can be completely erased (actually reset to a state with only 1's), typically 128 KiB for SD cards. Zeros can be written bit-by-bit, but the only way to write ones is to erase the sector.
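In other words, an already-programmed byte can be overwritten in place only if the new value never turns a 0 bit back into a 1. A tiny sketch of that constraint (just an illustration, not how any real controller exposes it):

    #include <stdio.h>

    /* Programming can only clear bits (1 -> 0); setting a bit back to 1 needs a sector erase. */
    static int writable_without_erase(unsigned char old_val, unsigned char new_val)
    {
        return (old_val & new_val) == new_val;
    }

    int main(void)
    {
        printf("%d\n", writable_without_erase(0xFF, 0xA5)); /* 1: freshly erased byte, any value fits */
        printf("%d\n", writable_without_erase(0xA5, 0xA4)); /* 1: only clears one more bit */
        printf("%d\n", writable_without_erase(0xA4, 0xA5)); /* 0: needs a 0 -> 1, i.e. an erase */
        return 0;
    }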
The number of sector erases is limited. The erase operation will take longer each time it is performed on the same sector, and there is more uncertainty in the values written to each cell. The write limit given to a card is really the number of erases for a single sector.
In order to avoid reaching this limit too fast, the SD card has a controller which takes care of wear levelling. The basic idea is that, transparently to the user, the card changes which sectors are used. If you request the same memory position, it may be mapped to different sectors at different times. In the simplest form, the card keeps a list of empty sectors, and whenever one is needed, it takes the one which has been used the least.
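As a toy illustration of that "take the least-used empty sector" idea (nothing like a real controller, just the principle):

    #include <stdio.h>

    #define NUM_SECTORS 8

    static unsigned erase_count[NUM_SECTORS];   /* erases performed on each physical sector */

    /* Pick the physical sector that has been erased the fewest times so far. */
    static int pick_least_worn(void)
    {
        int best = 0;
        for (int i = 1; i < NUM_SECTORS; i++)
            if (erase_count[i] < erase_count[best])
                best = i;
        return best;
    }

    int main(void)
    {
        /* Simulate 100 writes that each need a freshly erased sector:
         * the wear ends up spread evenly instead of hammering one sector. */
        for (int w = 0; w < 100; w++)
            erase_count[pick_least_worn()]++;

        for (int i = 0; i < NUM_SECTORS; i++)
            printf("sector %d erased %u times\n", i, erase_count[i]);
        return 0;
    }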
There are other algorithms, as well. The controller may track sector erase times or errors occurring on a sector. Unfortunately, the card manufacturers do not usually tell too much about the exact algorithms, but for an overview, see:
http://en.wikipedia.org/wiki/Wear_leveling
There are different types of flash chips available. SLC chips store only one bit per memory cell (it is either 0 or 1), MLC cells store two or three bits. Naturally, MLC chips are more sensitive to ageing. Three-bit (eight-level) cells may not endure more than 1000 writes. So, if you need reliability, take an SLC card despite its higher price.
As the wear levelling distributes the wear across the card, bigger cards endure more sector erases than small cards, as they have more sectors. In principle, a 4 GiB card with 100 000 write cycles will be able to carry 400 TB of data during its lifetime.
But to make things more complicated, the file system has a lot to do with this. When a small piece of data is written onto a disk, a lot of different things happen. At least the data is appended to the file, and the associated directory information (file size) is changed. With a typical file system this means at least two 4 KiB block writes, of which one may be just an append (no requirement for an erase). But a lot of other things may happen: write to a journal, a block becoming full, etc.
There are file systems which have been tuned to be used with flash devices (JFFS2 being the most common). They are all, as far as I know, optimised for raw flash and take care of wear levelling and use bit- or octet-level atomic operations. I am not aware of any file systems optimised for SD cards. (Maybe someone with academic interests could create one taking the wear levelling systems of the cards into account. That would result in a nice paper or even a few.) Fortunately, the usual file systems can be tuned to be more compatible (faster, less wear and tear) with the SD card by tweaking file system parameters.
Now that there are these two layers on top of the physical disk, it is almost impossible to track how many erases have been performed. One of the layers is very complicated (file system), the other (wear levelling) completely non-transparent.
So, we can just make some rough estimates. Let's guess that a small write invalidates two 4 KiB blocks on average. This way, logging every 10 minutes consumes a 128 KiB erase sector every 160 minutes. If the card is an 8 GiB card, it has around 64k sectors, so the whole card is cycled through once every 20 years. If the card endures 1000 write cycles, it will be good for 20 000 years...
The calculation above assumes perfect wear levelling and a very efficient file system. However, a safety factor of 1 000 should be enough.
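For what it's worth, the arithmetic behind that estimate can be written out directly (the two 4 KiB blocks per log entry are the assumed figure from above):

    #include <stdio.h>

    int main(void)
    {
        const double write_bytes  = 2 * 4 * 1024;               /* blocks touched per log entry (assumption) */
        const double sector_bytes = 128 * 1024;                 /* one erase sector */
        const double card_bytes   = 8.0 * 1024 * 1024 * 1024;   /* 8 GiB card */
        const double erase_cycles = 1000;                       /* pessimistic endurance */

        double minutes_per_sector = sector_bytes / write_bytes * 10;   /* logging every 10 minutes */
        double sectors            = card_bytes / sector_bytes;
        double years_per_pass     = minutes_per_sector * sectors / 60 / 24 / 365;

        printf("one erase sector consumed every %.0f minutes\n", minutes_per_sector);   /* 160 */
        printf("%.0f sectors on the card\n", sectors);                                  /* 65536 */
        printf("one full pass over the card every %.1f years\n", years_per_pass);       /* ~20 */
        printf("worn out after roughly %.0f years\n", years_per_pass * erase_cycles);   /* ~20 000 */
        return 0;
    }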
Of course, this can be spoiled quite easily. One of the easiest ways is to forget to mount the disk with the noatime attribute. Then the file system will update file access times, which may result in a write every time a file is accessed (even read). Or the OS is swapping virtual memory onto the card.
Last but not least of the factors is luck. Modern SD cards have the unfortunate tendency to die from other causes. The number of lemons with even quite well-known manufacturers is not very small. If you kill a card, it is not necessarily because of the wear limit. If the card is worn out, it is still readable. If it is completely dead, it has died of something else (static electricity, small fracture somewhere).

Related

About RAM & Secondary Storage

Why is RAM size always smaller than secondary storage (HDD/SSD)? If you look at any device you will see the same pattern.
The primary reason is price. For example (depending a lot on type, etc.), currently RAM is around $4 per GiB and "rotating disk" HDD is $0.04 per GiB, so RAM costs about 100 times as much per GiB.
Another reason is that HDD/SSD is persistent (the data remains when you turn the power off), and the amount of data you want to keep when the power is turned off is typically much larger than the amount of data you don't need to keep. A special case is putting a computer into a "hibernate" state, where the OS stores everything that is in RAM onto persistent storage and turns the power off, and then, when power is turned on again, loads everything back into RAM so that it looks the same; for this the amount of persistent storage needs to be larger than the amount of RAM.
Another (much smaller) reason is speed. It's not enough to be able to store data, you have to be able to access it too, and the speed of accessing data gets worse as the amount of storage increases. This holds true for all kinds of storage for different reasons (and is why you also have L1, L2, L3 caches ranging from "very small and very fast" to "larger and slower"). For RAM it's caused by the number of address lines and the size of "row select" circuitry. For HDD it's caused by seek times. For humans getting the milk out of a refrigerator it's "search time + movement speed" (faster to get the milk out of a tiny bar fridge than to walk around inside a large industrial walk-in refrigerator).
However, there are special cases (there are always special cases). For example, you might have a computer that boots from the network and then uses the network for persistent storage, where there's literally no secondary storage in the computer at all. Another special case is small embedded systems, where RAM is often larger than persistent storage.

Why do registers exist and how do they work together with the CPU?

So I am currently learning Operating Systems and Programming.
I want to know how registers work in detail.
All I know is that there is main memory and our CPU, which takes addresses and instructions from main memory with the help of the address bus.
And there is also the MCC (Memory Controller Chip), which helps with fetching memory locations from RAM.
On the internet it says a register is temporary storage and that data can be accessed faster from registers than from RAM.
But I want to really understand the deep-down process of how they work. They also come in sizes like 32 bits and 16 bits, something like that. I am really confused!
I'm not a native English speaker, so pardon some perhaps incorrect terminology. I hope this will be a little bit helpful.
Why do registers exist
When a user program is running on the CPU, it works in a 'dynamic' sense. That is, we have to store incoming source data and any intermediate data, and do specific calculations on them. Memory devices are needed for this, and we have a choice among flip-flops, on-chip RAM/ROM, and off-chip RAM/ROM.
A register in the programmer's model is actually a D flip-flop in the physical circuit, which is a memory device that can hold a single bit. An IC design consists of a standard-cell part (including the registers mentioned before, and AND/OR/etc. gates) and hard macros (like SRAM). As the technology node advances, the standard cells' delays get smaller and smaller. The auto place-and-route tool will place a register and the related surrounding logic close together, to make sure the logic can run at the specified 3.0/4.0 GHz speed target. For some practical reasons (which I'm not quite sure about, because I don't do layout), we tend to place hard macros around the edges, leading to much longer metal wires. Because of this, plus SRAM's own characteristics, on-chip SRAM is normally slower than D flip-flops. If the memory device is off the chip, say an external flash chip or KGD (known good die), it will be slower still, since the signals have to traverse two more I/O devices, which have much larger delays.
How they work together with the CPU
Each register is assigned a different 'address' (which may not be visible to the programmer). That is implemented by adding address decode logic. For instance, when the CPU is going to execute an instruction like mov R1, 0x12, the address decode logic sees the binary code of R1 and selects only those flip-flops corresponding to R1. Then the data 0x12 is stored (written) into those flip-flops. The same goes for the read process.
Regarding "they are also of 32 bits and 16 bits, something like that": the bit width is not a problem. Both flip-flops and a word in RAM can have a bit width of N, as long as the same address can select N flip-flops or N bits in RAM at one time.
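A very rough software analogy of that selection (real hardware uses flip-flops and decode logic, not an array lookup, but the addressing idea is the same):

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_REGS 16

    static uint32_t regfile[NUM_REGS];   /* each element stands for N flip-flops, here N = 32 */

    /* The register number plays the role of the decoded 'address': it selects
     * which group of flip-flops gets written or read. */
    static void     reg_write(unsigned n, uint32_t value) { regfile[n] = value; }
    static uint32_t reg_read(unsigned n)                   { return regfile[n]; }

    int main(void)
    {
        reg_write(1, 0x12);                                   /* models: mov R1, 0x12 */
        printf("R1 = 0x%02X\n", (unsigned)reg_read(1));
        return 0;
    }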
Registers are small memories which reside inside the processor (what you called the CPU). Their role is to hold the operands for fast processor calculations and to store the results. A register is usually designated by a name (AL, BX, ECX, RDX, cr3, RIP, R0, R8, R15, etc.) and has a size, which is the number of bits it can store (4, 8, 16, 32, 64, 128 bits). Other registers have special meanings, and their bits control the state of the processor or provide information about it.
There are not many registers (because they are very expensive). All of them together hold only a few kilobytes, so they can't store all the code and data of your program, which can run to gigabytes. That is the role of the central memory (what you call RAM). This big memory can hold gigabytes of data, and each byte has its own address. However, it only holds data while the computer is turned on. The RAM resides outside of the CPU chip and interacts with it via the memory controller chip, which acts as the interface between the CPU and the RAM.
On top of that, there is the hard drive that stores your data when you turn off your computer.
That is a very simple view to get you started.

SD card lifetime optimization

Simple question:
Which approach is best in terms of prolonging the life expectancy of an SD card?
Writing 10-minute files of 10 Hz data lines (~700 kB each):
1) directly to the SD card, or
2) to the internal memory of the device, then moving the file to the SD card?
The amount of data being written to the SD card remains the same. The question is simply whether a lot of tiny file operations (6000 lines written in the course of ten minutes, 100 ms apart) or one file operation moving the entire file containing the 6000 lines onto the card at once is better. Or does it even matter? Of course the card specifications are hugely important as well, but let's leave that out of the discussion.
1) You should only write in whole flash pages, aligned to page boundaries, as discussed here:
https://electronics.stackexchange.com/questions/227686/sd-card-sector-size
2) Keeping fault-tolerant track of how much data has been written, and where, also needs to be written somewhere itself. That counts as a write hit on the FAT etc. as well, on a page that gets more traffic than others. If possible, avoid techniques (i.e. fdup/fclose/fopen-append) which cause buffered data and cached directory data to be flushed. But I would still flush every minute or so, so you never lose more than a minute of data on a crash or accidental removal (see the sketch after this list).
3) OS-supported wear leveling will solve the above, if properly implemented. I have read horror stories about flash memories being destroyed in days.
4) Calculate the expected life using the total wear-levelled lifetime-writes spec of that memory, usually given in TB. If the resulting lifetime comes out in decades, don't bother doing more than (1).
5) Which OS and file-system you are using matters somewhat. For example EXT3 is supposedly faster than EXT2 due to less drive access at a slightly higher risk ratio. Since your question doesn't ask about OS/FS you use, I'll leave the rest of that up to you.
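As a user-space sketch of option 2 (assuming a POSIX system; the file name, buffer size and flush interval are made up): buffer the lines in RAM and push them to the card only once a minute.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define LINES_PER_FLUSH 600              /* 10 Hz x 60 s: lose at most ~1 minute on a crash */

    int main(void)
    {
        FILE *f = fopen("/mnt/sd/log_000.csv", "a");
        if (f == NULL)
            return 1;

        /* Big user-space buffer so the C library does not push every tiny line to the card. */
        static char buf[1 << 20];
        setvbuf(f, buf, _IOFBF, sizeof buf);

        for (unsigned line = 0; ; line++) {
            fprintf(f, "%ld,sample data here\n", (long)time(NULL));
            if ((line + 1) % LINES_PER_FLUSH == 0) {
                fflush(f);                   /* one large write instead of 600 tiny ones */
                fsync(fileno(f));            /* make sure the data actually reaches the card */
            }
            usleep(100 * 1000);              /* 100 ms between samples (10 Hz) */
        }
        /* never reached */
    }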

When writing to SD card, does it matter if I write zeros or ones?

As I understand it, a flash cell is 'flashed' (erased) by setting all bits to one. Afterwards, the actual value is then written by setting bits to zero.
Does that mean that, if I have a file and I update certain bits from one to zero, I can use the card for longer than if I write bits from zero to one? Or is there firmware getting in the way (e.g. wear leveling) that would nullify this? Does the filesystem choice influence this?
Does that mean that, if I have a file and I update certain bits from one to zero, I can use the card for longer than if I write bits from zero to one?
No, you can't use the card for longer than if you write bits from zero to one.
Wear levelling is handled by the file system you choose to use on the SD card. For example, JFFS2 (the most commonly used flash file system) will take care of wear levelling on the SD card.
On the SD card side, the card's microcontroller implements an FTL (Flash Translation Layer) that takes disk-like block accesses and translates them into meaningful NAND operations, as well as performing wear levelling and block sparing.

Why do we need to specify the number of flash wait cycles?

Especially when working with "faster" devices like the STM32F4xx/F7xx, we need to specify the number of flash wait cycles, based on the supply voltage and the system clock frequency.
When the CPU fetches instructions or constants, this is done over the FLITF (the embedded flash interface). Am I right in assuming that the FLITF holds a CPU request for as long as it takes to provide the requested data, making it impossible for other bus masters to access the flash in the meantime?
If that is true, why would any interface need to know the number of flash wait cycles? The cache preloads instructions anyway, regardless of whether it knows how long to wait, no?
Because the flash interface isn't magic.
It has to meet the necessary setup and hold times for addressing and reading out the flash cells, which will vary somewhat depending on voltage. Taking the STM32F411 as an example (because I have that TRM handy), doing some maths with the voltage/frequency/wait-state table implies that a read from flash on one of those takes in the order of ~30ns above 2.7V, down to ~60ns below 2.1V.
Since the flash interface doesn't have its own asynchronous nanosecond-precision timekeeping ability (because that would be needlessly complicated, power-hungry, and silly), that translates to asserting its signals for n clock cycles, after which it can assume the data signals from the cells are stable enough to read back*. How does it know what the clock frequency is, and therefore what n should be? Simple: you, as the programmer who set the clock, tell it. Some hardware things are just infinitely easier to let software deal with.
* and then going through the further shenanigans of extracting the relevant 8, 16 or 32 bits out of the 128-bit line it's read, to finally spit that out the other side onto the AHB bus to the waiting CPU, obviously.
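For completeness, the programmer-visible part of this is only a couple of lines. A sketch for an STM32F4 running at 168 MHz from about 3.3 V (which the reference manual's table puts at 5 wait states), assuming the CMSIS device header; check the voltage/frequency table for your own part and supply:

    #include "stm32f4xx.h"   /* CMSIS device header providing FLASH and FLASH_ACR_* definitions */

    /* Tell the flash interface how many HCLK cycles one flash access takes.
     * This must be done before raising the system clock to the higher frequency. */
    void flash_set_latency_for_168mhz(void)
    {
        FLASH->ACR = (FLASH->ACR & ~FLASH_ACR_LATENCY) | FLASH_ACR_LATENCY_5WS;

        /* Read the register back, as ST recommends, to be sure the new latency is in effect. */
        while ((FLASH->ACR & FLASH_ACR_LATENCY) != FLASH_ACR_LATENCY_5WS) {
            /* wait */
        }
    }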