In this article it is written:
If data block 3 pointed to data block 10, but the FAT entry for 10 was -1, this means the file points to a free block, which is an error. The file system is inconsistent.
My understanding of a data block is that it is a chunk of contiguous bits. But how can one group of bits point to another group of bits? They are just bits in memory, right? I am having trouble understanding this, please help me. Or is the author trying to say that a data block is one that is in the FAT table?
Please don't get annoyed if this is a stupid question; I couldn't find any additional simply-explained resource to understand this, and my understanding of the FAT table is very vague.
The article is confusing. When they say "data block 3 points at", they actually mean "the FAT entry for data block 3 points at", but they were being sloppy. The Wikipedia article has a lot more detail, or just google for "FAT filesystem".
BTW, the article is also wrong -- generally FAT filesystems use 0 for a free block, and an all-ones value for the last cluster in a file. FAT entries are generally described as unsigned numbers, so they can never be negative.
It's not the data blocks that point to each other, it's the file system storing bookkeeping information about how the blocks are connected.
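To make that concrete, here's a toy sketch (a made-up miniature, not real FAT code), using 0 for "free" and an all-ones value for "last block" as noted above. The point is that the "next block" pointers live in a separate table, the FAT, not inside the data blocks themselves:

    #include <cstdint>
    #include <cstdio>

    // One entry per data block. The entry for block i is NOT stored in
    // block i's data; it lives in this table and names the block that
    // follows i in the same file.
    constexpr uint16_t FREE = 0x0000;   // block is unallocated
    constexpr uint16_t LAST = 0xFFFF;   // block ends its file

    uint16_t fat[32];                   // the bookkeeping, separate from the data

    void walk_file(uint16_t first_block) {  // first block comes from the directory entry
        uint16_t b = first_block;
        while (fat[b] != LAST) {
            uint16_t next = fat[b];
            if (fat[next] == FREE) {
                // The article's inconsistency: b's entry points at a
                // block that the table says is free.
                std::printf("block %u points at free block %u -- error\n",
                            (unsigned)b, (unsigned)next);
                return;
            }
            std::printf("block %u -> block %u\n", (unsigned)b, (unsigned)next);
            b = next;
        }
        std::printf("block %u is the last block of the file\n", (unsigned)b);
    }

So "data block 3 pointed to data block 10" really means fat[3] == 10, and the error case is fat[10] == FREE.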
I need to know how to create a list of numbers in 8085 assembler and store the list in successive memory locations.
Most assemblers will have a set of define statements and a way to specify different bases.
For example, the values zero, one and forty-two, along with a short nul-terminated string, may be created with something like:
some_vals: db 0, 1, 2Ah, 'hello', 0    ; 2Ah = 42 decimal; the final 0 terminates the string
How your assembler does it is probably in the documentation somewhere. Without more specific details on which assembler you're using, there's not much more help I can give.
Pseudo-ops like data definition (db, dw, ds), address specification (org) or label setting (mylabel:) do not generally form part of the processor documentation itself; rather, they're a function of the assembler.
See, for example, chapter 4 of this document. I particularly love the fact that we used to buy these 200-page books for $3.95 whereas now you'll be shelling out a hundred bucks for a digital copy with no incremental cost of production :-)
I'm working on IBM MVS (z/OS) and trying to get Window Services working.
For the function CSREVW, I don't understand the purpose of the parameter pfcount.
According to the documentation, this asks window services to read more than one block after my program references a block that is not in my window.
But how is window services supposed to know that I tried to reference data that is not in my window? I mean, it can't know that I'm reading data outside my window if I don't call CSREVW or CSRVIEW again.
Maybe my main issue is that I have trouble understanding English, but that's how it reads to me...
Here is the link to the documentation; this is explained on pages 23-24:
http://publibz.boulder.ibm.com/epubs/pdf/iea3c102.pdf
I know this is a very specific problem about an IBM service and I apologize about that.
Thank you !
Tim
I think the problem you're having is that you need to understand a little bit about how the underlying objects behind the windowing service work in virtual storage.
At the core, the various windowing services work to give you what amounts to a "private" page dataset. You allocate and reference storage, but the objects in that virtual space aren't really in memory - the system's page fault mechanism brings them in as you reference them. So yes, you're accessing data within a "window", but in reality, the data you expect to see may not be "paged in" at that moment.
Going a little deeper, when you first allocate the object, the virtual storage it's mapped to has all of the pages marked "invalid" in the underlying page table entries. That means that as soon as you touch this storage, a page fault interrupt occurs. At this point, the operating system steps in and resolves the page fault by bringing the necessary data into memory, then your program continues, oblivious to all of this processing on your behalf. You're correct that you're just referencing data within the window, but there's a lot going on under the covers to support this.
This is where PFCOUNT comes in...
Let's say you have structures that are 64K long inside your virtual window. It would be sloppy and slow to reference each page of this structure and cause a page fault each time. Much better would be to use PFCOUNT to cause the page you reference and all 15 other pages needed by your object to be paged in with a single operation. Conversely, if your data is small and you access it highly randomly, PFCOUNT isn't going to help you: the next page you reference could be anywhere, and it's actually wasteful to have a large PFCOUNT since you end up bringing in a lot of data you never use.
Hope that makes sense - if you'd like a challenge, take yourself a system dump and examine the system trace entries as you reference data...you'll see a very distinct pattern of page faults, I/O and resumption of your program, and hopefully it will all make sense to you.
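If it helps, here's a toy model (plain C++, nothing to do with the real CSREVW interface) of the effect described above: each fault brings in the referenced block plus pfcount more, so a sequential scan of a 16-block (64K) structure costs one fault instead of sixteen:

    #include <cstdio>
    #include <set>

    struct Window {
        std::set<int> resident;   // blocks currently paged in
        int pfcount = 0;          // extra blocks brought in per fault
        int faults  = 0;

        void reference(int block) {
            if (resident.count(block)) return;      // already in the window: no fault
            ++faults;                               // page fault: the OS steps in
            for (int b = block; b <= block + pfcount; ++b)
                resident.insert(b);                 // 1 + pfcount blocks arrive
        }
    };

    int main() {
        Window w;  w.pfcount = 15;                  // e.g. a 64K structure in 4K blocks
        for (int b = 0; b < 16; ++b) w.reference(b);
        std::printf("sequential scan, pfcount=15: %d fault(s)\n", w.faults);  // 1

        Window w2;                                  // pfcount = 0
        for (int b = 0; b < 16; ++b) w2.reference(b);
        std::printf("sequential scan, pfcount=0:  %d faults\n", w2.faults);   // 16
    }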
From the manual:

,pfcount
    Specifies the number of additional blocks you want window services to bring into the window each time your program references data that is not already in the window. The number you specify is added to the minimum of one block that window services always brings in. That is, if you specify a value of 20, window services brings in up to 21. The number of additional blocks ranges from zero through 255.
Note that you get 1 block without asking.
I have a product that is basically a USB flash drive based on an NXP LPC18xx microcontroller. I'm using a library provided from the manufacturer (LPCOpen) that handles the USB MSC and the SD card media (which is where I store data).
Here is the problem: internally, the LPC18xx has a 64 kB buffer (limited by hardware) used to cache reads/writes, which means it can only cache up to 128 blocks (512 B each). The SCSI WRITE(10) command has a total-blocks field that can be up to 256 blocks (128 kB). When originally testing the product on Windows 7 it never wrote more than 128 blocks at a time, but when tested on Linux it sometimes writes more than 128 blocks, which causes the microcontroller to crash.
Is there a way to tell the host OS not to request more than 128 blocks? I see references [1] to a READ BLOCK LIMITS command (05h), but it doesn't seem to be widely supported. Also, what sense key should I return on the WRITE(10) command to tell Linux the write is too large? I also see references to a Block Limits VPD page in some device spec sheets but cannot find much documentation about how it is implemented.
[1]https://en.wikipedia.org/wiki/SCSI_command
Let me offer a disclaimer up front that this is what you SHOULD do, but none of this may work. A cursory search of the Linux SCSI driver didn't show me what I wanted to see. So, I'm not at all sure that "doing the right thing" will get you the results you want.
Going by the book, you've got to do two things: implement the Block Limits VPD page and handle too-large transfer sizes in READ and WRITE.
First, implement the Block Limits VPD page, which you can find in late revisions of SBC-3 floating around on the Internet (like this one: http://www.13thmonkey.org/documentation/SCSI/sbc3r25.pdf). It's probably worth going to the t10.org site, registering, and then downloading the last revision (http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r36.pdf).
The Block Limits VPD page has a maximum transfer length field that specifies the maximum number of blocks that can be transferred by all the READ and WRITE commands, and basically anything else that reads or writes data. Of course the downside of implementing this page is that you have to make sure that all the other fields you return are correct!
Second, when handling READ and WRITE, if the command's transfer length exceeds your maximum, respond with an ILLEGAL REQUEST key, and set the additional sense code to INVALID FIELD IN CDB. This behavior is indicated by a table in the section that describes the Block Limits VPD, but only in late revisions of SBC-3 (I'm looking at 35h).
You might just start with returning INVALID FIELD IN CDB, since it's the easiest course of action. See if that's enough?
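For what it's worth, here's a hedged sketch of both pieces. The byte offsets are from SBC-3/SPC-4; MAX_BLOCKS matches the 128-block cache from the question, and the function names and surrounding framework are made up -- you'd wire this into whatever your MSC stack expects:

    #include <cstdint>
    #include <cstring>

    constexpr uint32_t MAX_BLOCKS = 128;   // 64 kB cache / 512 B blocks

    // Block Limits VPD page (page code B0h). MAXIMUM TRANSFER LENGTH is
    // a big-endian block count at bytes 8..11.
    size_t build_block_limits_vpd(uint8_t *buf) {
        std::memset(buf, 0, 64);
        buf[0] = 0x00;                     // peripheral device type: direct access (SBC)
        buf[1] = 0xB0;                     // page code: Block Limits
        buf[2] = 0x00; buf[3] = 0x3C;      // page length (60 more bytes)
        buf[8]  = (MAX_BLOCKS >> 24) & 0xFF;
        buf[9]  = (MAX_BLOCKS >> 16) & 0xFF;
        buf[10] = (MAX_BLOCKS >>  8) & 0xFF;
        buf[11] =  MAX_BLOCKS        & 0xFF;
        return 64;
    }

    // Fixed-format sense data for ILLEGAL REQUEST / INVALID FIELD IN CDB.
    size_t build_invalid_cdb_sense(uint8_t *sense) {
        std::memset(sense, 0, 18);
        sense[0]  = 0x70;                  // current error, fixed format
        sense[2]  = 0x05;                  // sense key: ILLEGAL REQUEST
        sense[7]  = 0x0A;                  // additional sense length
        sense[12] = 0x24;                  // ASC:  INVALID FIELD IN CDB
        sense[13] = 0x00;                  // ASCQ
        return 18;
    }

    // In the WRITE(10) handler: the transfer length is CDB bytes 7..8.
    bool write10_length_ok(const uint8_t *cdb) {
        uint32_t nblocks = ((uint32_t)cdb[7] << 8) | cdb[8];
        return nblocks <= MAX_BLOCKS;
    }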
I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without doing any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each run, the simulation would choose new random values and thus not be the same as the run before. Maybe I am missing something here... hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
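A minimal sketch of point (1), in C++: every random decision flows through one seeded engine, so re-running with the same seed replays the run. (One caveat: the standard specifies engines like std::mt19937_64 exactly, but distributions only statistically, so for cross-platform repeatability you may want to hand-roll the distribution too.)

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <random>

    int main(int argc, char **argv) {
        // The seed is the only input that may vary between runs.
        uint64_t seed = (argc > 1) ? std::strtoull(argv[1], nullptr, 10) : 42;
        std::mt19937_64 rng(seed);              // the sole source of randomness
        std::uniform_real_distribution<double> u(0.0, 1.0);

        for (int packet = 0; packet < 5; ++packet) {
            bool dropped = u(rng) < 0.1;        // simulated 10% packet loss
            std::printf("packet %d: %s\n", packet,
                        dropped ? "dropped" : "delivered");
        }
        // Same seed in => same "random" packet loss out, every time.
    }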
In the sims I've built the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
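The C++ analogue of that advice, for what it's worth: std::unordered_map's iteration order is unspecified and can differ across platforms and library versions, so a simulation that visits entries in that order isn't repeatable. Use an ordered container (or a vector in insertion order) instead:

    #include <cstdio>
    #include <map>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<std::string, int> risky {{"a",1},{"b",2},{"c",3}};
        std::map<std::string, int>           safe  {{"a",1},{"b",2},{"c",3}};

        for (auto &kv : risky) std::printf("%s ", kv.first.c_str()); // order unspecified
        std::printf("\n");
        for (auto &kv : safe)  std::printf("%s ", kv.first.c_str()); // always: a b c
        std::printf("\n");
    }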
I've read this article about the idea of consistent hashing: http://n00tc0d3r.blogspot.com/ but I'm confused about how the method works across multiple machines.
The basic process is:
Insert
Hash an input long URL into a single integer;
Locate a server on the ring and store the key-to-longUrl mapping on that server;
Compute the short URL using base conversion (from base 10 to base 62) and return it to the user. (How does this step work? On a single machine there is an auto-incremented id to compute the short URL from, but what value do you use on multiple machines? There is no auto-incremented id.)
Retrieve
Convert the short URL back to the key using base conversion (from base 62 to base 10);
Locate the server containing that key and return the longUrl. (And how can we locate the server containing the key?)
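For reference, here is my understanding of the base-conversion step itself, which seems mechanical (a sketch assuming the usual 62-character alphabet [0-9a-zA-Z]):

    #include <cstdint>
    #include <string>

    static const std::string ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Numeric key -> short URL path component.
    std::string encode62(uint64_t id) {
        if (id == 0) return "0";
        std::string s;
        while (id > 0) { s = ALPHABET[id % 62] + s; id /= 62; }
        return s;
    }

    // Short URL path component -> numeric key.
    uint64_t decode62(const std::string &s) {
        uint64_t id = 0;
        for (char c : s) id = id * 62 + ALPHABET.find(c);
        return id;
    }

What I can't see is where the numeric key comes from in the first place when there are multiple machines.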
I don't see any clear answer on that page for how the author intended it. I think this is basically an exercise for the reader. Here's some ideas:
Implement it as described, with hash-table style collision resolution. That is, when creating the URL, if it already matches something, deal with that in some way. Rehashing or arithmetic transformation (eg, add 1) are both possibilities. This means, naively, a theoretical worst case of having to hit a server n times trying to find an available key.
There's a lot of ways to take that basic idea and smarten it up, eg, searching for another available key on the same server by rehashing iteratively until you find one that's on the server.
Allow servers to talk to each other, and coordinate on the autoincrement id.
This is probably not a great solution, but it might work well in some situations: give each server (or set of servers) a separate namespace, eg, the first 16 bits select a server. On creation, randomly choose one. Then you just need to figure out how you want that namespace to map. The namespaces only really matter for who is allowed to create which IDs, so if you want to add nodes or rebalance later, it is no big deal.
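A sketch of that last idea, with made-up field widths (16-bit server id up top, a 48-bit per-server counter below), just to show that no coordination is needed at create time and lookup is trivial:

    #include <cstdint>

    // Each server mints keys in its own namespace: the high 16 bits name
    // the server, the low 48 bits are that server's private auto-increment.
    uint64_t make_key(uint16_t server_id, uint64_t &local_counter) {
        return ((uint64_t)server_id << 48) | (local_counter++ & 0xFFFFFFFFFFFFULL);
    }

    // Retrieval: which server owns this key?
    uint16_t owner_of(uint64_t key) {
        return (uint16_t)(key >> 48);
    }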
Let me know if you want more elaboration. I think there's a lot of ways that this one could go. It is annoying that the author didn't elaborate on this point; my experience with these sorts of algorithms is that collision resolution and similar problems tend to be at the very heart of a practical implementation of a distributed system.