iSCSI port virtualization techniques - iscsi

I need to create a scaled-up iSCSI setup for testing (around 1,024 ports), but I only have limited hardware. My requirement is to create a large number of iSCSI ports, each with a unique IQN, that can be discovered by a storage controller as separate physical entities.
In FC I could do this with NPIV, which lets a single physical port present multiple WWNs, but I can't find an equivalent solution for iSCSI.
Any suggestions?
I don't have any simulators like SANBLAZE handy, so I am trying to explore options that can be done at the operating-system level.

You can use software iSCSI to set up as many targets as you like (and have backing store for).
For example, you can use openSUSE (disclaimer: I work for SUSE) running something like Leap 42.2 and use targetcli to set up the targets. The man page for targetcli(1) is pretty clear on examples. You can, for example, set up a separate 1 GiB file for each target -- that is, if you have 1 TB of storage. Scale down the size so 1024 of them fit on your disk.
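A rough sketch of what that can look like with targetcli is below; the IQN base, file locations, and target count are placeholders, and the exact parameter names can differ a little between targetcli versions:

#!/bin/sh
# Create N iSCSI targets, each with its own IQN and a sparse 1 GiB file as backing store.
N=1024
mkdir -p /var/lib/iscsi_disks
for i in $(seq 0 $((N - 1))); do
    # file-backed backing store (sparse, so it only consumes space as it is written)
    targetcli /backstores/fileio create name=disk$i \
        file_or_dev=/var/lib/iscsi_disks/disk$i.img size=1G sparse=true
    # one iSCSI target per backing store; targetcli creates tpg1 automatically
    targetcli /iscsi create iqn.2017-01.com.example:target$i
    # export the backing store as a LUN of that target
    targetcli /iscsi/iqn.2017-01.com.example:target$i/tpg1/luns create /backstores/fileio/disk$i
done
targetcli saveconfig

Running a separate targetcli process per command is slow at this scale; feeding a generated command list into one targetcli session (or using its underlying rtslib Python API) does the same thing faster.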

Related

What constitutes a minimum bare-metal hypervisor, and does an open-source one exist?

I know there are a number of full-featured (type 1) hypervisors in existence, including Xen, KVM and VMWare. I am, however, curious what constitutes a bare minimum for a bare-metal hypervisor, and whether something quite small LOC-wise exists for hacking purposes, or whether something of the sort would be difficult to implement (with unoptimized drivers). Thanks.
It seems there is a paper and source code on the internet for a hypervisor called Nova which claims to contain around 9k lines of code. This is significantly smaller than Xen and combines the notion of a microkernel with a hypervisor.

Why does the Kubernetes revisionHistoryLimit default to 2?

Kubernetes' .spec.revisionHistoryLimit is used to keep track of changes to a Deployment definition. Since these definitions are small YAML files, it doesn't seem like much of a burden (given the typical resources of a modern cloud server) to keep hundreds or more of these definitions around if necessary.
According to the documentation, the default is going to be set to 2 (instead of unlimited). Why is this?
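For what it's worth, assuming you have a cluster handy, the API documentation for the field can be read directly with:

kubectl explain deployment.spec.revisionHistoryLimit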
Keeping it unlimited would eventually clog up etcd, and as I understand it, etcd isn't designed for big-data usage. The Kubernetes control plane is also syncing and downloading these objects regularly, which means there would be a lot of unnecessary data to process. Just imagine a service with daily deployments that runs for over a year.
Setting this to a high-ish number like 100 also seems arbitrary to me. Why not 1000 or 9001? Besides that, I cannot imagine anyone wanting to roll something back a hundred versions.
Anyway, we are only talking about a default setting, so you can set it to a very high number if your use case requires it.
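For example, here is a hedged sketch of overriding the default on an existing Deployment (the name "my-app" is just a placeholder):

# keep the last 100 old ReplicaSets (and thus rollback targets) instead of the default
kubectl patch deployment my-app --type=merge -p '{"spec":{"revisionHistoryLimit":100}}'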

ARM11/ARMv6 cache flushing on VM mapping changes?

I'm writing a toy operating system for the Raspberry Pi, which is based around an ARM11/ARMv6. I want to use basic memory mapping features, mainly so I can swap code in and out of a particular virtual address. I'm intending to use the 1MB sections because they look pretty simple and they're big enough that I only need to change one at a time.
However, there are two things I haven't been able to figure out yet --- the ARM ARM is nigh impenetrable...
When changing a mapping by updating a translation table entry, do I need to invalidate that region of virtual address space in the caches? Some of the diagrams I've seen indicate that the caches are attached to physical memory, which suggests no; but the caching behaviour is controlled by flags on the translation table entry, which suggests yes.
If I have two regions of virtual memory pointing at the same physical location, are they cache coherent? Can I write to one and then assume that data is immediately readable from the other? It'd make life loads easier if it were...
Does anyone know the answers for sure?

MongoDB: Can different databases be placed on separate drives?

I am working on an application in which there is a pretty dramatic difference in usage patterns between "hot" data and other data. We have selected MongoDB as our data repository, and in most ways it seems to be a fantastic match for the kind of application we're building.
Here's the problem. There will be a central document repository, which must be searched and accessed fairly often: its size is about 2 GB now and will grow to 4 GB over the next couple of years. To increase performance, we will be placing that DB on a server-class mirrored SSD array, and given the total size of the data, we don't imagine that memory will become a problem.
The system will also be keeping record versions, audit trails, customer interactions, notification records, and the like, which will be referenced only rarely and could grow quite large. We would like to place this data on more traditional spinning disks, since it would be accessed rarely (we're guessing that a typical record might be accessed four or five times per year, and will be needed only to satisfy research and customer-service inquiries).
I haven't found any reference material that indicates whether MongoDB would allow us to place different databases on different disks (we're running mongod under Windows, but that doesn't have to be the case when we go into production).
Sorry about all the detail here, but these are the primary factors we have to think about as we plan for deployment. Given Mongo's proclivity to grab all available memory, and that it'll be running on a machine that maxes out at 24 GB of memory, we're trying to work out the best production configuration for our database(s).
So here are what our options seem to be:
Single instance of Mongo with multiple databases. This seems to have the advantage of simplicity, but I still haven't found any definitive answer on how to split databases across different physical drives on the machine.
Two instances of Mongo, one for the "hot" data, and the other for the archival stuff. I'm not sure how well Mongo will handle two instances of mongod contending for resources, but we were thinking that, since the 32-bit version of the server is limited to 2GB of memory, we could use that for the archival stuff without having it overwhelm the resources of the machine. For the "hot" data, we could then easily configure a 64-bit instance of the database engine to use an SSD array, and given the relatively small size of our data, the whole DB and indexes could be directly memory mapped without page faults.
Two instances of Mongo in two separate virtual machines. We could use VMWare, or something similar, to create two Linux machines which could host Mongo separately. While it might up the administrative burden a bit, this seems to me to provide the most fine-grained control of system resource usage, while still leaving the Windows Server host enough memory to run IIS and its own processes.
But all this is speculation, as none of us have ever done significant MongoDB deployments before, so we don't have a great experience base to draw upon.
My actual question is whether there are options to have two databases in the same mongod server instance utilize entirely separate drives. But any insight into the advantages and drawbacks of our three identified deployment options would be welcome as well.
That's actually a pretty easy thing to do when using Linux:
Activate the directoryPerDB config option
Create the databases you need.
Shut down the instance.
Copy over the data from the individual database directories to the different block devices (disks, RAID arrays, logical volumes, iSCSI targets, and the like).
Mount the respective block devices at their corresponding positions beneath the dbPath directory (don't forget to add the matching lines to /etc/fstab!)
Restart mongod.
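A minimal sketch of those steps on a systemd-based Linux host, assuming a database named "archive" that should live on a slower disk; the device name, filesystem, and paths are examples only:

# /etc/mongod.conf needs:
#   storage:
#     dbPath: /var/lib/mongodb
#     directoryPerDB: true
systemctl stop mongod

# copy the existing files of the "archive" database onto the new disk
mount /dev/sdb1 /mnt
cp -a /var/lib/mongodb/archive/. /mnt/
umount /mnt

# mount the disk over that database's directory beneath dbPath, permanently
rm -rf /var/lib/mongodb/archive/*
mount /dev/sdb1 /var/lib/mongodb/archive
echo '/dev/sdb1 /var/lib/mongodb/archive ext4 defaults 0 2' >> /etc/fstab
chown -R mongodb:mongodb /var/lib/mongodb/archive   # or whatever user mongod runs as

systemctl start mongod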
Edit: As a side note, I would like to add that you should not use Windows as the OS for a production MongoDB deployment. The available filesystems, NTFS and ReFS, perform horribly compared to ext4 or XFS (the latter being the suggested filesystem for production; see the MongoDB production notes for details). For this reason alone, I would suggest Linux. Another reason is the RAM used by rather unnecessary subsystems of Windows, like the GUI.

mongodb single node configuration

I am going to configure mongodb on a small number of cloud servers.
I am coming from MySQL, and I remember that if I needed to change settings like RAM, etc., I would have to modify the "my.cnf" file. This came in useful when resizing each cloud server.
Now, how can I check or modify how much RAM or disk space the database is going to take for each node?
Thank you in advance.
I don't think there are any built-in, broad-stroke limitation tools or flags in MongoDB per se, and that is most likely because this is something you should be doing at the operating-system level.
Most modern multi-user operating systems have built-in ways to set quotas on disk space, etc., per user, so you could probably set up a mongo user and place the limits on it if you really wanted to. MongoDB works best when it has enough memory to hold the working set of data and indexes in memory, and it does a good job of managing that on its own.
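As a rough sketch of that OS-level approach on Linux (assuming a dedicated mongodb user and a data filesystem mounted at /data with user quotas enabled; the 50 GiB cap is only an example):

mount -o remount,usrquota /data            # enable user quotas on the data filesystem
quotacheck -cu /data                       # create/update the quota file
quotaon /data
setquota -u mongodb 0 52428800 0 0 /data   # hard block limit of 50 GiB (units are 1 KiB blocks)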
However, if you want to get more granular, you can take a look at the output of mongod --help.
I see the following options that you could tweak:
--nssize arg (=16) .ns file size (in MB) for new databases
--quota limits each database to a certain number of files (8 default)
--quotaFiles arg number of files allowed per db, requires --quota
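So, as a hedged example (these are legacy MMAPv1-era options, so check that the mongod version you run still supports them), a node could be started with something like:

# cap each database at 4 data files; --nssize 16 just keeps the default .ns size
mongod --dbpath /data/db --quota --quotaFiles 4 --nssize 16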