What are the data structures used to implement the Process Control Block in Unix? - operating-system

I am taking an operating systems course and we have talked about what a process control block is, what is stored in it, and what purpose it serves, and I understand all of that, but we didn't really touch on what data structure is actually used to implement it. After googling it, I've come across two structures: either a linked list or an array. I realize that the structure could possibly differ based on the operating system, but I was wondering exactly what data structures are used to create one, specifically in the Unix operating system (since I am using a Unix machine).

The doubly-linked list is the data structure generally used to implement the Process Control Block, and UNIX follows that convention: its PCBs are kept on a doubly-linked list.
If your OS (talking about a custom OS) is lightweight, you can get away with a simpler data structure such as an array. In general, though, a PCB is a very large structure, and a doubly-linked list is advised because it can accommodate any number of processes and hold all the information a process needs.
Also, check this answer of mine, where I mention the same point in the last line...
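For intuition, here is a minimal sketch of such a list in Python (the PCB and ProcessList names are made up for illustration; a real kernel such as Linux does this in C, keeping its task_struct entries on intrusive doubly-linked lists):

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        """A drastically simplified Process Control Block."""
        pid: int
        state: str = "ready"             # e.g. ready / running / blocked
        registers: dict = field(default_factory=dict)
        prev: "PCB | None" = None        # links for the doubly-linked list
        next: "PCB | None" = None

    class ProcessList:
        """Doubly-linked list of PCBs with O(1) insert and unlink."""
        def __init__(self):
            self.head = None

        def insert(self, pcb: PCB):
            pcb.next = self.head
            if self.head:
                self.head.prev = pcb
            self.head = pcb

        def remove(self, pcb: PCB):
            # Unlinking needs no traversal, which is one reason kernels
            # favor linked lists over arrays for the process table.
            if pcb.prev:
                pcb.prev.next = pcb.next
            else:
                self.head = pcb.next
            if pcb.next:
                pcb.next.prev = pcb.prev

Inserting and removing PCBs stays O(1) no matter how many processes exist, which an array-based table can't offer without extra bookkeeping.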

Related

How does on-demand pruning of kernel work in context of Unikernel?

In a unikernel computing environment, the kernel can sometimes be pruned to target the needs of a specific user application.
How does this process work? Is it manual, or can it be automated?
There is no 'pruning' of the kernel. To be clear, unikernel implementations don't use Linux; they have written their own kernels (whether they want to admit to that or not).
There are well over 10 different unikernel implementations out there today, and some are very focused on providing the absolute bare minimum needed to work. Others, such as Nanos, which I work on, are in the 'posix' category, meaning that if it runs on Linux it'll probably run here.
Whenever you see people talking about 'only including what you need' and picking certain libraries to use, keep in mind that most of these systems are not much more than a hello world, which is simple enough to do in ~20 lines of assembly. When you start trying to run real-world applications, especially ones you use every day (did you write your own database? did you write your own webserver?), and compare things like performance, you start running into actual operating system components that you can't just pick and choose.
Having said that, supporting the set of syscalls that Linux has is only a portion of the battle. There is a lot of code in a general-purpose operating system like Linux (north of 20M lines of code, and that's a 'small' kernel), Windows, or macOS that is not readily recognizable by its end users. The easy things to point at are picking which network driver to use depending on which hypervisor you want to run on (xen/kvm/hyper-v).
The harder, less clear things are choices such as: APIC or x2APIC? How many levels do your page tables have? Do you support SMP or not? If so, how?
You can't just 'prune' this type of stuff away. It needs to be consciously added, and even if you did have a Linux that you were 'unikernelizing', you wouldn't be able to prune it because it just touches too much code. As a thought exercise, try removing support for multiple processes (no unikernel supports this). Now you are touching multiple schedulers, shared memory, message passing, user privileges (are there users?), and so on; the list goes on forever.
Your question is actually a very common one, and it highlights a misunderstanding of how these things work in real life.
So, to answer your question: there is no work that I'm aware of where people are trying to automatically 'prune' the kernel, and even for the projects that let you select functionality via a config file or the like, I would not expect much progress, for the aforementioned reasons.

What is a MongoDB driver in layman's terms?

I've been scouring the MongoDB documentation, Google, Stackoverflow and YouTube... but I still can't seem to understand what a driver is used for in MongoDB.
I do know that different programming languages can have one or many different drivers, but why do I need one?
You don't, strictly speaking, need one, but the alternative is building network packets manually, scattered around your code base... The term 'driver' is a bit irritating, because most people expect some kernel-level program that talks to hardware.
The MongoDB driver is more like an SDK or a helper library that helps you with a number of tasks that you'll almost certainly need to solve when you want to use MongoDB.
In essence, the MongoDB driver does these things:
It implements the MongoDB wire protocol used to talk to the database, i.e. it knows what 'messages' the database expects, it knows the relevant constants, etc. 'It implements the MongoDB API', if you will.
It also comes with helpers to manage the actual TCP/IP sockets, creating them, resolving replica set addresses, implementing connection pooling, etc.
Next, the drivers contain helpers that make it easier to work with the BSON datatypes from your language, since there normally isn't a 1:1 mapping of types. A MongoDB array, for example, could be mapped to an array or some kind of list or set container in most languages; ObjectId and ISODate might need a wrapper, and so on.
Lastly, the driver implements a serializer, that is, a piece of software that can create a copy of an instance 'from the outside', without you having to implement a Serialize() method on each and every class (or whatever concept your language supports) you want to store. Together with the BSON helpers above, this writes the BSON representation of your data.
Serialization in itself isn't trivial, because one quickly has to cope with cyclical references, so a recursive algorithm over a set of unknown properties is required. If that doesn't sound complicated enough, the de-serialization (or hydration) of objects is even more painful, so it's not exactly the type of code that is super rewarding to write, unless it's highly reusable.
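To see why, here is a minimal sketch of a cycle-aware serializer in Python (to_document and the id()-based path tracking are made up for illustration; real drivers handle far more cases):

    def to_document(obj, _path=None):
        """Recursively convert an object graph to plain dicts and lists,
        raising on cyclical references instead of looping forever."""
        if _path is None:
            _path = set()
        if isinstance(obj, (str, int, float, bool, type(None))):
            return obj                        # primitives map 1:1
        if id(obj) in _path:                  # back on the current path: a cycle
            raise ValueError("cyclical reference; cannot serialize")
        _path.add(id(obj))
        try:
            if isinstance(obj, (list, tuple)):
                return [to_document(item, _path) for item in obj]
            if isinstance(obj, dict):
                return {k: to_document(v, _path) for k, v in obj.items()}
            # Fall back to the object's attributes (the "unknown properties").
            return {k: to_document(v, _path) for k, v in vars(obj).items()}
        finally:
            _path.discard(id(obj))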
I'm sure I forgot something else the drivers do, but I think these are the key pain points they solve. As far as I know, their exact feature set varies from language to language and in some languages, the individual problems might be less or more pronounced, but they generally exist everywhere.
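For comparison, here is roughly what all of that buys you in Python with the official pymongo driver (assuming a server on localhost; the testdb/users names are made up). The wire protocol, socket pooling, and BSON conversion all happen behind these few calls:

    from pymongo import MongoClient

    # The driver parses the URI, opens pooled TCP sockets, and speaks
    # the MongoDB wire protocol on our behalf.
    client = MongoClient("mongodb://localhost:27017")
    collection = client.testdb.users

    # The plain dict is serialized to BSON by the driver; the generated
    # _id comes back as a driver-provided ObjectId wrapper type.
    result = collection.insert_one({"name": "Ada", "age": 36})
    print(result.inserted_id)

    # find_one de-serializes ("hydrates") the BSON document into a dict.
    print(collection.find_one({"name": "Ada"}))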

How does logical indexing work?

In some high-level languages like Matlab, you can use "logical indexing" to select a whole set of entries in an array to operate on.
I understand what logical indexing is and how to use it.
Instead, I am asking:
How does it work ("behind the scenes")?
Does it not boil down to just a for-loop?
If so, why is it so much faster than for-looping?
Interpreted languages can be thought of as a variation on assembler running on an emulated core. They have stacks and commands that work the way assembler does without actually being assembler. They are a virtual machine.
A for loop can be thought of as telling the system: set a value, run a sequence of tasks, and when you are done, come back and check that value; if it is not at a threshold, change it in a prescribed way and repeat those tasks. In assembler this runs screaming fast, but in the "VM", not so much. Consider the demonstration between 13:50 and 15:30 of this link: (link)
This means that what appears to be a for loop isn't actually a for loop. It is operating system interrupts and virtualized memory. It is virus scans in the background and megasloth bloatware.
If you had a virtual system, could you make a shortcut for addressing memory that didn't go through the virtualized for loop and was still reasonably efficient? MatLab tries to major on data processing, so it has to have very efficient ways of storing, sorting, and selecting data within its virtual machine.
MathWorks is not going to make the details of this accessible to the public. If it has a great idea, it doesn't want that idea implemented in Python and R tomorrow. If it has a mediocre idea, it doesn't want to be beaten in execution by Python and R tomorrow. Either way, making the nuts and bolts of that particular approach accessible to the public without an NDA is likely a losing proposition for them.
Bottom lines:
it's not a real "for", even for a for loop, because it's running virtually
they are opening up some of the internals of their data handling to improve usability
they aren't likely to disclose actual code because of negative business consequences
It is worth noting that vectorized code can outperform for loops while doing the same thing. This suggests they are applying more of those internals to the execution of the "sequence of tasks" to get the performance improvement.
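As a rough illustration in Python with NumPy, whose boolean indexing is analogous to Matlab's logical indexing (the timings are indicative only and say nothing about Matlab's actual internals):

    import time
    import numpy as np

    data = np.random.rand(1_000_000)

    # Interpreted for-loop: every iteration pays interpreter overhead.
    start = time.perf_counter()
    total_loop = 0.0
    for x in data:
        if x > 0.5:
            total_loop += x
    loop_secs = time.perf_counter() - start

    # Logical (boolean) indexing: the comparison and the selection both
    # run as compiled loops over contiguous memory.
    start = time.perf_counter()
    total_vec = data[data > 0.5].sum()
    vec_secs = time.perf_counter() - start

    print(f"loop: {loop_secs:.3f}s  vectorized: {vec_secs:.4f}s")
    print("same result:", np.isclose(total_loop, total_vec))

The mask version dispatches one tight native loop instead of interpreting each iteration, which is where the large speedups come from.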

why does memcached not support "multi set"

Can anyone explain why the memcached folks decided to support multi-get but not multi-set?
By multi I mean operations involving more than one key (see the protocol at http://code.google.com/p/memcached/wiki/NewCommands).
So you can get multiple keys in one shot (the basic advantage being the standard saving from fewer round trips), but why can't you do bulk sets?
My theory is that the design intent was to do fewer sets, and to do them individually (e.g., on a cache read and miss). But I still do not see how multi-set really conflicts with the general philosophy of memcached.
I looked at the client features at http://code.google.com/p/memcached/wiki/NewCommonFeatures and it seems that some clients potentially do support "Multi-Set" (why only in the binary protocol?). I am using the Java spymemcached client, btw.
It's not supported in the text protocol because it would be very, very complicated to express, no clients would support it, and it would provide very little beyond what you can already do with the text protocol.
It's supported in the binary protocol because it's a trivial use case of binary operations.
spymemcached supports it implicitly -- just do a bunch of sets and magic happens:
http://dustin.github.com/2009/09/23/spymemcached-optimizations.html
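The same pattern in Python, assuming the python-memcached client and a server on localhost; note that set_multi is a client-side convenience that pipelines individual set commands, while get_multi maps to a single protocol-level request:

    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])

    # set_multi pipelines one 'set' command per key over the socket;
    # the text protocol itself has no bulk-set command.
    failed = mc.set_multi({"a": 1, "b": 2, "c": 3})
    print("keys that failed to store:", failed)

    # get_multi, by contrast, maps to a single 'get a b c' request,
    # saving a round trip per key on the read path.
    print(mc.get_multi(["a", "b", "c"]))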
I don't know a lot about memcached internals, but I assume writes have to be blocking, atomic operations. I assume that by allowing multiple set operations to be batched, you could block all reads for a long time (or risk a get occurring while only half of a batched write had been applied). Forcing writes to be done individually allows them to be interleaved fairly with gets.
I would imagine that the restriction against multi-sets is to avoid collisions when writing cached values to memcached.
For an object cache, I can't foresee a case where you would need transactional-style writes. That use case seems less suited to a caching layer and better suited to the underlying database.
If sets come in interleaved from different clients, it is most likely the case that, for any one key, the last one wins, or is at least close enough, until the cache is invalidated and a newer value is written.
As Gian mentions, there don't seem to be any good reasons to block reads from the cache while several or many writes to the cache happen.

Do any common OS file systems use hashes to avoid storing the same content data more than once?

Many file storage systems use hashes to avoid duplication of the same file content data (among other reasons), e.g., Git (which addresses content by SHA-1) and Dropbox (which hashes content with SHA-256). The file names and dates can be different, but as long as the content produces the same hash, it never gets stored more than once.
It seems this would be a sensible thing to do in an OS file system in order to save space. Are there any file systems for Windows or *nix that do this, or is there a good reason why none of them do?
This would, for the most part, eliminate the need for duplicate-file-finder utilities, because at that point the only space you would be saving would be the file entry in the file system, which for most users is not enough to matter.
Edit: Arguably this could go on serverfault, but I feel developers are more likely to understand the issues and trade-offs involved.
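For reference, the whole-file check those utilities perform boils down to something like this Python sketch (the function names are made up; the chunked SHA-256 read is the standard approach):

    import hashlib
    from pathlib import Path

    def content_digest(path: Path) -> str:
        """SHA-256 of a file's contents, read in 1 MiB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def find_duplicates(root: Path) -> dict[str, list[Path]]:
        """Group files under root by content hash; groups of 2+ are duplicates."""
        by_digest: dict[str, list[Path]] = {}
        for path in root.rglob("*"):
            if path.is_file():
                by_digest.setdefault(content_digest(path), []).append(path)
        return {d: paths for d, paths in by_digest.items() if len(paths) > 1}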
ZFS supports deduplication since last month: http://blogs.oracle.com/bonwick/en_US/entry/zfs_dedup
Though I wouldn't call this a "common" filesystem (afaik, it is currently only available on Solaris/OpenSolaris and as a FreeBSD port), it is definitely one worth looking at.
It would save space, but the time cost is prohibitive. The products you mention are already I/O bound, so for them the computational cost of hashing is not the bottleneck. If you hashed at the filesystem level, every I/O operation, already the slow path, would get worse.
NTFS has single instance storage.
NetApp has supported deduplication (that's what it's called in the storage industry) in the WAFL filesystem (yeah, not your common filesystem) for a few years now. It is one of the most important features found in enterprise filesystems today, and NetApp stands out because it supports deduplication on primary storage, whereas other comparable products support it only on backup or secondary storage, being too slow for primary storage.
The amount of duplicate data in a large enterprise with thousands of users is staggering. A lot of those users store the same documents, source code, etc. in their home directories. Reports of 50-70% of data deduplicated are common, saving lots of space and tons of money for large enterprises.
All of this means that if you create any common filesystem on a LUN exported by a NetApp filer, you get deduplication for free, no matter what filesystem is created in that LUN. Cheers. Find out how it works here and here.
btrfs supports de-duplication of data at the block level on a mounted filesystem, but an external tool is needed to drive it; I'd recommend duperemove.
It would require a fair amount of work to make this work in a file system. First of all, a user might create a copy of a file planning to edit one copy while the other remains intact, so when you eliminate the duplication, the hard link you created that way would have to get copy-on-write (COW) semantics.
Second, the permissions on a file are often based on the directory in which that file's name is placed. You'd have to ensure that when you create your hidden hard link, the permissions are applied based on the link, not just on the location of the actual content.
Third, users are likely to be upset if they make (say) three copies of a file on physically separate media to guard against data loss from hardware failure, then find out there was really only one copy of the file, so when that hardware failed, all three copies disappeared.
This strikes me as a bit of a second-system effect: a solution to a problem long after the problem ceased to exist (or at least to matter). With hard drives currently running at less than US$100 per terabyte, I find it hard to believe this would save most people a whole dollar's worth of hard drive space. At that point, it's hard to imagine most people caring much.
There are file systems that do deduplication, which is sort of like this, but still noticeably different. In particular, deduplication is typically done on relatively small blocks of a file, not on complete files. Under such a system, a "file" basically becomes a collection of pointers to de-duplicated blocks. Along with the data, each block typically has some metadata of its own, separate from the metadata of the file(s) that refer to it (e.g., at least a reference count). Any block with a reference count greater than 1 is treated as copy-on-write: an attempt to write to it creates a copy, writes to the copy, then stores the copy back to the pool (so if the result comes out identical to some other block, deduplication coalesces them).
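Here is a toy sketch of that bookkeeping in Python (fixed-size blocks, a hash-keyed pool, reference counts; a real filesystem adds copy-on-write, persistent indexes, and hash-collision handling):

    import hashlib

    BLOCK_SIZE = 4096

    class BlockPool:
        """Content-addressed block store with reference counting."""
        def __init__(self):
            self.blocks = {}     # digest -> block bytes
            self.refcount = {}   # digest -> number of referring files/blocks

        def put(self, block: bytes) -> str:
            digest = hashlib.sha256(block).hexdigest()
            if digest in self.blocks:
                self.refcount[digest] += 1    # duplicate block: share it
            else:
                self.blocks[digest] = block
                self.refcount[digest] = 1
            return digest

    def store_file(pool: BlockPool, data: bytes) -> list[str]:
        """A 'file' becomes just the ordered list of its block digests."""
        return [pool.put(data[i:i + BLOCK_SIZE])
                for i in range(0, len(data), BLOCK_SIZE)]

    pool = BlockPool()
    f1 = store_file(pool, b"A" * 8192 + b"unique tail")
    f2 = store_file(pool, b"A" * 8192)           # shares its blocks with f1
    print(len(pool.blocks), "unique blocks stored for 2 files")  # 2, not 5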
Many of the same considerations still apply, though: most people don't have enough duplication to start with for deduplication to help a lot.
At the same time, especially on servers, deduplication at the block level can serve a real purpose. One really common case is dealing with multiple VM images, each running one of only a few choices of operating system. If we look at a VM image as a whole, each is usually unique, so file-level deduplication would do no good. But the images still devote a large chunk of data to storing the operating system for that VM, and it's pretty common to have many VMs running only a few operating systems. With block-level deduplication, we can eliminate most of that redundancy. For a cloud server system like AWS or Azure, this can produce really serious savings.