Does anyone have a timing diagram for SD card multiple block read in SPI mode? - sd-card

I am trying to read data from an SD card using the multiple block read command. In between sectors I see multiple junk characters; otherwise the data is fine. I am reading out the two CRC bytes in between sectors. I have tried putting a delay between two sector reads, but it did not help.
Has anybody faced this before? If somebody has a packet flow diagram, even that would be okay.
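Not a timing diagram, but the framing may explain the junk: in SPI mode, each data block of a CMD18 (READ_MULTIPLE_BLOCK) transfer begins with a 0xFE start token, followed by the data (usually 512 bytes) and a 2-byte CRC, and between blocks the card clocks out 0xFF filler bytes until the next block is ready. A host that assumes blocks arrive back-to-back will read those fillers as junk. A minimal Python sketch of a parser for that framing (function name and structure are illustrative):

```python
# Sketch of SD SPI CMD18 framing: [0xFF fillers...] 0xFE, 512 data bytes, 2 CRC bytes,
# repeated per block. The fix for "junk between sectors" is to skip fillers and wait
# for the 0xFE start token instead of assuming the next block starts immediately.

START_TOKEN = 0xFE
BLOCK_SIZE = 512

def parse_blocks(stream):
    """Extract data blocks from a raw SPI byte stream (an iterable of ints)."""
    blocks = []
    it = iter(stream)
    for byte in it:
        if byte != START_TOKEN:            # skip 0xFF filler until the start token
            continue
        data = bytes(next(it) for _ in range(BLOCK_SIZE))
        crc = (next(it), next(it))         # read (and here discard) the 16-bit CRC
        blocks.append(data)
    return blocks
```

On real hardware the same loop shape applies: after each CRC, keep clocking out 0xFF and discard received bytes until 0xFE appears, then read the next 512-byte block.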

Related

Can RoCC read a large chunk of data from dcache at once? How about write?

I am new to Rocket Chip. I want to design a coprocessor to accelerate data processing, and I have a question about how to do a large chunk of data exchange between the core and the accelerator for one custom instruction. I am wondering if the dcache can be used for this data exchange. I went to check LazyRoCC.scala and found that io.mem.resp.bits.data is only xLen bits wide. Does that mean the amount of data exchanged between the dcache and RoCC per request is limited to xLen bits? Is there any other way to do this data exchange? Thank you in advance!

SD card lifetime optimization

Simple question:
Which approach is best in terms of prolonging the life expectancy of an SD card?
Writing 10-minute files with 10 Hz lines of data input (~700 kB each)
1) directly to the SD card
or
2) to the internal memory of the device, then moving the file to the SD card
?
The amount of data being written to the SD card remains the same. The question is simply whether a lot of tiny file operations (6000 lines written in the course of ten minutes, 100 ms apart) or one file operation moving the entire file containing the 6000 lines onto the card at once is better. Or does it even matter? Of course the card specifications are hugely important as well, but let's leave that out of the discussion.
1) You should only write in chunks that fill flash page boundaries, as discussed here:
https://electronics.stackexchange.com/questions/227686/sd-card-sector-size
2) Keeping fault-tolerant track of how much data is written where also needs to be written; that counts as a write hit on the FAT etc. as well, on a page that gets more traffic than others. Avoid, if possible, techniques (i.e. fdup/fclose/fopen-append) which cause buffered data and cached directory data to be flushed. But I would use this trick every minute or so, so that you never lose more than a minute of data on a crash or accidental removal.
3) OS-supported wear leveling will solve the above, if properly implemented. I have read horror stories about flash memories being destroyed in days.
4) Calculate the expected life using the total wear-levelled lifetime-writes spec of that memory, usually given in TB. If you see numbers in the decades, don't bother doing more than (1).
5) Which OS and file system you are using matters somewhat. For example, EXT3 is supposedly faster than EXT2 due to fewer drive accesses, at a slightly higher risk. Since your question doesn't say which OS/FS you use, I'll leave the rest of that up to you.
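The periodic-flush trick from point 2 can be sketched as follows; `BatchedLogger` and its parameters are illustrative names, not from any particular library. Samples accumulate in RAM and hit the card in one batch, so the FAT/directory pages take one metadata update per batch instead of one per sample:

```python
# Toy sketch of batched logging: buffer samples in RAM, write them in one
# open/append/close cycle per batch. At 10 Hz, flush_every=600 means one
# flush per minute, bounding data loss on a crash to about a minute.

class BatchedLogger:
    def __init__(self, path, flush_every=600):
        self.path = path
        self.flush_every = flush_every
        self.buffer = []

    def log(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        with open(self.path, "a") as f:        # one metadata hit per batch
            f.write("\n".join(self.buffer) + "\n")
        self.buffer = []
```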

comport read in Matlab every time specific characters are transmitted

I have an instrument transmitting data in packets of 40 characters each time.
Each packet includes a packet of interest (32 characters in the middle of it) that is bounded by four additional predefined characters before and after it, used to locate it.
The data is transmitted to the PC into a COM-port buffer. I want to read the data from it using Matlab:
Is there a way to scan the content of a buffer continuously in the background without blocking the command line?
Is it possible to read data from the buffer as soon as a packet exists in it by finding the first and last characters that define each packet?
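The background part is MATLAB-specific (the serialport interface can fire a callback when data arrives), but the framing logic itself is language-agnostic. A sketch of it in Python, where `HEAD` and `TAIL` stand in for the four predefined characters on each side (illustrative placeholders, not the instrument's actual delimiters):

```python
# Sketch of delimiter-based packet framing: scan a rolling buffer for
# HEAD + 32-byte payload + TAIL, return complete payloads plus the
# unconsumed remainder to prepend to the next read.

HEAD = b"ABCD"          # placeholder for the 4 predefined leading characters
TAIL = b"WXYZ"          # placeholder for the 4 predefined trailing characters
PAYLOAD_LEN = 32

def extract_packets(buffer):
    """Return (payloads, leftover) extracted from the raw byte buffer."""
    payloads = []
    while True:
        start = buffer.find(HEAD)
        if start < 0:
            return payloads, buffer
        end = start + len(HEAD) + PAYLOAD_LEN
        if len(buffer) < end + len(TAIL):
            return payloads, buffer[start:]      # incomplete packet: keep the tail
        if buffer[end:end + len(TAIL)] == TAIL:
            payloads.append(buffer[start + len(HEAD):end])
            buffer = buffer[end + len(TAIL):]
        else:
            buffer = buffer[start + 1:]          # false header match: resync
```

In MATLAB the equivalent would run inside the serialport BytesAvailable callback, carrying `leftover` between invocations.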

SD card write limit - data logging

I want to track/register when my system (a Raspberry Pi) was shut down, usually due to abrupt power loss.
I want to do it by recording a heartbeat every 10 minutes to an SD card: every 10 minutes it would write the current time/date to a file on the SD card. Would that damage the SD card in the long run?
If there are only 100k write cycles, it would develop a bad block in a couple of years. But I've read there's circuitry to prevent it - would it prevent the bad block? Would it be safer to distribute the log across several blocks?
Thanks
The general answer to this question is a strong "it depends". (The practical answer is the one you already have: if your file system parameters are not wrong, you have a large margin in this case.) It depends on the following:
SD card type (SLC/MLC)
SD card controller (wear levelling)
SD card size
file system
luck
If we take a look at a flash chip, it is organised into sectors. A sector is an area which can be completely erased (actually reset to a state with only 1's), typically 128 KiB for SD cards. Zeros can be written bit-by-bit, but the only way to write ones is to erase the sector.
The number of sector erases is limited. The erase operation will take longer each time it is performed on the same sector, and there is more uncertainty in the values written to each cell. The write limit given to a card is really the number of erases for a single sector.
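This write/erase asymmetry can be shown with a two-line toy model: programming can only clear bits, so restoring any 1 requires erasing the whole sector. A sketch (not modelling any particular chip):

```python
# Toy model of flash programming: a program operation can only AND bits
# away (1 -> 0); the only way back to 1 is a full sector erase to 0xFF.

ERASED = 0xFF                        # an erased byte: all ones

def program(cell, value):
    """Programming can only clear bits, never set them."""
    return cell & value

cell = ERASED                        # freshly erased: 0b11111111
cell = program(cell, 0b10110101)     # fine: only clears bits
cell = program(cell, 0b11111111)     # no-op: cleared bits stay cleared
```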
In order to avoid reaching this limit too fast, the SD card has a controller which takes care of wear levelling. The basic idea is that, transparently to the user, the card changes which physical sectors are used: if you request the same memory position, it may be mapped to different sectors at different times. A simple scheme is that the card keeps a list of empty sectors and, whenever one is needed, takes the one which has been used least.
There are other algorithms as well. The controller may track sector erase times or errors occurring on a sector. Unfortunately, card manufacturers do not usually reveal much about the exact algorithms, but for an overview, see:
http://en.wikipedia.org/wiki/Wear_leveling
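The "take the least-used free sector" idea can be illustrated with a toy model (this is not any vendor's actual algorithm; real controllers are proprietary and track far more state):

```python
# Toy wear leveller: each logical write is redirected to the free physical
# sector with the fewest erases, so repeated writes to one logical address
# spread their wear across the whole card.

class WearLeveler:
    def __init__(self, n_sectors):
        self.erase_count = [0] * n_sectors   # erases per physical sector
        self.mapping = {}                    # logical -> physical

    def write(self, logical):
        # Retire the old physical sector (it rejoins the free pool,
        # keeping its erase count), then map to the least-worn free one.
        self.mapping.pop(logical, None)
        used = set(self.mapping.values())
        free = [p for p in range(len(self.erase_count)) if p not in used]
        target = min(free, key=lambda p: self.erase_count[p])
        self.erase_count[target] += 1
        self.mapping[logical] = target
        return target
```

Hammering a single logical sector then cycles through every physical sector in turn, which is exactly why the heartbeat log does not burn one spot on the card.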
There are different types of flash chips available. SLC chips store only one bit per memory cell (it is either 0 or 1); MLC cells store two or three bits. Naturally, MLC chips are more sensitive to ageing; three-bit (eight-level) cells may not endure more than 1000 writes. So, if you need reliability, take an SLC card despite its higher price.
As the wear levelling distributes the wear across the card, bigger cards endure more sector erases than small cards, as they have more sectors. In principle, a 4 GiB card with 100 000 write cycles will be able to carry 400 TB of data during its lifetime.
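Spelled out as arithmetic (perfect wear levelling assumed; the size and cycle count are the figures from the text, and the result rounds down to the ~400 TB quoted):

```python
# Lifetime data volume of a 4 GiB card rated for 100 000 erase cycles,
# assuming ideal wear levelling and no write amplification.

card_bytes = 4 * 1024**3            # 4 GiB
write_cycles = 100_000

lifetime_bytes = card_bytes * write_cycles
lifetime_tb = lifetime_bytes / 1e12  # about 429 TB (decimal terabytes)
```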
But to make things more complicated, the file system has a lot to do with this. When a small piece of data is written onto a disk, a lot of different things happen. At least the data is appended to the file, and the associated directory information (file size) is changed. With a typical file system this means at least two 4 KiB block writes, of which one may be just an append (no requirement for an erase). But a lot of other things may happen: write to a journal, a block becoming full, etc.
There are file systems which have been tuned for use with flash devices (JFFS2 being the most common). As far as I know, they are all optimised for raw flash: they take care of wear levelling themselves and use bit- or octet-level atomic operations. I am not aware of any file systems optimised for SD cards. (Maybe someone with academic interests could create one that takes the cards' wear-levelling systems into account; that would result in a nice paper or even a few.) Fortunately, the usual file systems can be tuned to be more compatible (faster, less wear and tear) with the SD card by tweaking file system parameters.
Now that there are these two layers on top of the physical disk, it is almost impossible to track how many erases have been performed. One of the layers is very complicated (file system), the other (wear levelling) completely non-transparent.
So, we can just make some rough estimates. Let's guess that a small write invalidates two 4 KiB blocks on average. Then logging every 10 minutes consumes a 128 KiB erase sector every 160 minutes. If the card is an 8 GiB card, it has around 64k sectors, so the card is cycled through once every 20 years. If the card endures 1000 write cycles, it will be good for 20 000 years...
The calculation above assumes perfect wear levelling and a very efficient file system. However, a safety factor of 1 000 should be enough.
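The same estimate, step by step ("two 4 KiB blocks per log write" is the guess from the text; perfect wear levelling is assumed):

```python
# Rough lifetime estimate for a 10-minute heartbeat log on an 8 GiB card.

sector = 128 * 1024                      # erase sector size in bytes
invalidated_per_write = 2 * 4 * 1024     # guessed: two 4 KiB blocks per log write

writes_per_sector = sector // invalidated_per_write       # 16 writes fill a sector
minutes_per_sector = writes_per_sector * 10               # 160 min per sector erase

sectors = 8 * 1024**3 // sector                           # ~64k sectors on 8 GiB
minutes_per_pass = sectors * minutes_per_sector           # one full pass of the card
years_per_pass = minutes_per_pass / (60 * 24 * 365)       # ~20 years per pass
years_total = years_per_pass * 1000                       # ~20 000 years at 1000 cycles
```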
Of course, this can be spoiled quite easily. One of the easiest ways is to forget to mount the disk with the noatime option. Then the file system will update file access times, which may result in a write every time a file is accessed (even for reading). Or the OS may be swapping virtual memory onto the card.
Last but not least of the factors is luck. Modern SD cards have an unfortunate tendency to die from other causes, and the number of lemons, even from quite well-known manufacturers, is not very small. If you kill a card, it is not necessarily because of the wear limit. A worn-out card is still readable; if a card is completely dead, it has died of something else (static electricity, a small fracture somewhere).

Direct control of Floppy drive

I'm trying to extract data from 3.5" floppy disks formatted on a +D interface for a ZX Spectrum. The format is close to, but not exactly the same as, a PC's. I've written software to do this in the past using the BIOS to access a floppy.
However, some disks are old and have bad sectors, so I am trying to create a floppy drive controller that reads a disk at the bit level to recover as much data as possible. I'm fully aware of how difficult this might be. I have, however, written a disk utility program in Z80 assembly that interfaced with the +D at machine-code level on the original Spectrum, emulating MS-DOS to access and write files on FAT12 floppy disks. The original computer that accessed these disks did so with a 3.4 MHz processor, so the Raspberry Pi I'm thinking of using should be more than fast enough. I might even be able to run it from Linux, but if not, I have figured out how to access the GPIO port, screen, keyboard and SD card using assembly language, which would not need any kernel to run. I've read up on how a floppy drive reads and writes data and have seen some basic examples of how to operate the floppy disk drive (not just the stepper motor).
I've done some research but have a few questions I can't seem to find answers to, and wonder if people here might know.
1) The read data pin (30). Does this return a logic high/low value of what's under the read head (rounded up or down to logic high or low), or is it analog? I ask because if it's analog, getting any input back would let me better try to recover corrupt sectors, but it would make the interface circuit harder to build and, depending on the ADC used, make interfacing with the GPIO harder and slower.
2) I know the molex power of +5V and +12V. But what current would a floppy expect?
3) I assume that the control pins from the ribbon cable on the floppy work at 0 or +5V, but that people seem to be able to run them at +3.3V. Does anyone know what they should be running at, and what their current tolerance are: what voltage and current the inputs expect, and what current/voltage the outputs deliver?
Many thanks for any information/knowledge that you might have on this.
A little late, but if someone else is interested:
1) The data output of the floppy is open-collector, so you can pull it up to your 3.3 volts and will be fine.
2) 600 mA @ 12 V, 500 mA @ 5 V should be safe.
3) Think of a TTL input, which expects 2.4 volts for HIGH (2.5 V according to the NEC 3.5" floppy drive).
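On question 1: since the line is digital (open-collector, as noted above), bit-level recovery comes down to timing the intervals between pulses on the read-data pin. For a double-density 3.5" disk (250 kbit/s MFM) the legal pulse spacings are nominally 4, 6 and 8 µs; intervals that fall between buckets hint at damaged regions worth re-reading. A sketch of just that classification step (the tolerance value is an assumption, and this is not a full MFM decoder):

```python
# Classify the interval between two flux transitions against the nominal
# MFM spacings for a 250 kbit/s double-density disk. Returning None flags
# an interval that matches no legal timing, i.e. a suspect region.

NOMINAL_US = (4.0, 6.0, 8.0)     # legal MFM transition spacings, microseconds

def classify(interval_us, tolerance=1.0):
    """Return the nearest nominal spacing, or None if out of tolerance."""
    nearest = min(NOMINAL_US, key=lambda n: abs(interval_us - n))
    return nearest if abs(interval_us - nearest) <= tolerance else None
```

A recovery tool would re-read tracks whose interval histogram shows many `None` results, keeping the best pass per sector.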