How to read/write to a raw device with PowerShell?

I have to read and write data (up to 512 bytes) to/from raw disks (first sector on disk, and first sector on partitions).
I'd like to use PowerShell for that, but I can't find any reference on accessing raw disks and raw partitions.
What is the way (or ways) to do that?

You can do in PowerShell most of the things you can do in .NET (with C# or another language). You will find the way to do it in C# in CCS LABS C#: Low Level Disk Access, but I'm really not sure it's a good idea to do that from a scripting language.
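The C# approach presumably wraps the same Win32 calls underneath; as a rough illustration (not the article's code), here is a minimal C sketch that reads the first sector of the first physical disk. It must run with administrator rights, and reads on a raw device have to be a multiple of the sector size. A PowerShell or C# version would P/Invoke the same CreateFile/ReadFile calls.

```c
/* Minimal sketch: read the first 512-byte sector of the first physical disk
 * with plain Win32 calls. Run as Administrator. Writing works the same way
 * with GENERIC_WRITE and WriteFile, but newer Windows versions may require
 * locking/dismounting the volume first. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0",   /* or \\.\C: for a partition */
                           GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    BYTE sector[512];
    DWORD bytesRead = 0;
    if (!ReadFile(h, sector, sizeof sector, &bytesRead, NULL)) {
        fprintf(stderr, "ReadFile failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }
    printf("Read %lu bytes; boot signature: %02X %02X\n",
           bytesRead, sector[510], sector[511]);
    CloseHandle(h);
    return 0;
}
```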

Related

Append-only file write with fsync, on emmc/SSD/sdcard, ext4 or f2fs?

I am building two operating systems for IoT. The Libertas OS and Hornet OS.
The data APIs are designed as append-only time series; fsync() is required after each appended block of bytes to ensure data safety.
The storage could be eMMC, SSD, or SD card. The question is: which filesystem is a better fit for the different storage types?
I understand f2fs is designed around append-only writes, but what about ext4? I couldn't easily find information about it.
Theoretically, at least for file content, an append should keep writing within the current underlying block to minimize wear. Since the file size changes after an append, the file metadata also has to be updated, ideally through an append-only log.
I also don't know the details of the internal controllers of SD cards and eMMC; will the controller honor such block-level appends?
Any insight will be greatly appreciated!
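For reference, the append-then-fsync pattern the question describes looks like this in C; a minimal sketch, with the file name and block size being placeholders:

```c
/* Append one block and force it to stable storage, as described in the
 * question. Path and block size are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int append_block(int fd, const void *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);  /* O_APPEND: always writes at EOF */
    if (n != (ssize_t)len)
        return -1;
    return fsync(fd);                 /* flush data *and* metadata (file size) */
}

int main(void)
{
    int fd = open("series.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char block[512];
    memset(block, 0xAB, sizeof block);
    if (append_block(fd, block, sizeof block) != 0) { perror("append"); return 1; }

    close(fd);
    return 0;
}
```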

Memory usage of zfs for mapped files

I read the following on https://blogs.oracle.com/roch/entry/does_zfs_really_use_more
There is one peculiar workload that does lead ZFS to consume more memory: writing (using syscalls) to pages that are also mmaped. ZFS does not use the regular paging system to manage data that passes through read and write syscalls. However, mmaped I/O, which is closely tied to the Virtual Memory subsystem, still goes through the regular paging code. So syscall writing to mmaped pages means we will keep 2 copies of the associated data, at least until we manage to get the data to disk. We don't expect that type of load to commonly use large amounts of RAM.
What does this mean exactly? Does this mean that ZFS will "uselessly" double-cache any memory region that is backed by a memory-mapped file? Or does "using syscalls" mean writing using some other method of writing that I am not familiar with?
If so, am I better off keeping the working directories of files written this way on a UFS partition?
Does this mean that ZFS will "uselessly" double-cache any memory region that is backed by a memory-mapped file?
Hopefully, no.
Or does "using syscalls" mean writing using some other method of writing that I am not familiar with?
That method is just the regular low-level write(fd, buf, nbytes) system call and its relatives, not what memory-mapped files are designed to support: accessing file content simply by reading/writing memory through pointers, treating the file data as a byte array or whatever.
If so, am I better off keeping the working directories of files written this way on a UFS partition?
No, unless memory mapped files that are also written to using system calls sum to a significant part of your RAM workload, which is quite unlikely to happen.
PS: Note that this blog is almost ten years old. There might have been changes in the implementation since that time.
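To make the distinction concrete, here is a minimal C sketch of the two ways of touching the same file that the quoted paragraph contrasts (the file name is a placeholder): a plain write() syscall versus storing through a pointer into an mmap()ed region. Only when both are mixed on the same pages does the double buffering described above occur.

```c
/* The two I/O paths contrasted above: write() syscalls vs. stores through
 * an mmap()ed pointer. The file name is a placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    /* Path 1: a regular syscall write -- on ZFS this is managed outside the
     * normal paging system. */
    const char msg[] = "written with write(2)";
    pwrite(fd, msg, sizeof msg, 0);

    /* Path 2: mmap the file and store through a pointer -- this goes through
     * the regular page cache / VM subsystem. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(p + 2048, "written through an mmap'ed pointer");
    msync(p, 4096, MS_SYNC);

    munmap(p, 4096);
    close(fd);
    return 0;
}
```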

How does the OS decide what data goes in each page?

I have a comma-separated data file; let's assume each record is of fixed length.
How does the OS (Linux) determine which data parts are kept in one page on the hard disk?
Does it simply look at the file and organize the records one after the other (sequentially) in one page? Is it possible to set this programmatically, or does the OS take care of it automatically?
Your question is quite general - you didn't specify which OS or filesystem - so the answer will be too.
Generally speaking the OS does not examine the data being written to a file. It simply writes the data to enough disk sectors to contain the data. If the sector size is 4K, then bytes 0-4095 are written to the first sector, bytes 4096-8191 are written to the second sector, etc. The OS does this automatically.
Very few programs wish to manage their disk sector allocation. One exception is high-performance database management systems, which often implement their own filesystem in order to have low-level control of the file-data-to-sector mapping.
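As an illustration of that mapping, a small C sketch (the record length and sector size below are made-up example values): the application only computes a byte offset into the file; the filesystem decides which sector holds those bytes.

```c
/* Illustration: with fixed-length records the application works in byte
 * offsets; the filesystem maps those bytes onto sectors on its own.
 * Record length and sector size are arbitrary example values. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

enum { RECORD_LEN = 128, SECTOR_SIZE = 4096 };

int main(void)
{
    long n = 100;                              /* read the 100th record */
    off_t offset = (off_t)n * RECORD_LEN;

    printf("record %ld starts at byte %lld, i.e. inside file block %lld\n",
           n, (long long)offset, (long long)(offset / SECTOR_SIZE));

    char record[RECORD_LEN];
    int fd = open("records.csv", O_RDONLY);
    if (fd >= 0) {
        pread(fd, record, RECORD_LEN, offset); /* the OS finds the sector(s) */
        close(fd);
    }
    return 0;
}
```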

Writing to hard disk from contiguous physical memory

I have an ARM-based device running Linux, which is connected to a camera, and I'm trying to store captured frames to the HD efficiently.
I'm developing in user space, but can modify drivers at will.
I'm coding in C.
Frames are written into memory using DMA, and I have their physical memory pointer.
I am able to control the whole frame-capturing flow, and I can tell when the frame buffers are stable (dequeued from the video4linux driver).
The Linux version is 3.0.35.
I'm familiar with the kernel source code; I'm not an expert, but I'm able to find my way in it and figure things out, as long as I get some hints...
I believe I have 2 alternatives:
Find the optimal configuration for my filesystem for opening the file and writing into it. I'm currently using ext4 and the standard fopen()/fwrite() functions. I understand I can also use mmap(), or add the O_DIRECT flag when calling open(), but I haven't tried it yet.
Find a way to pass the physical address of the buffer (I can get it from my Video4Linux driver) directly to the filesystem/hard-drive driver, so the data will be transferred directly from there.
I found method 1 to be slow, with memory transactions as my bottleneck, since fwrite() involves copying data from user space to kernel space, then again into some sort of cache, and then on to DMA. Too many memory transactions for a simple store...
Regarding method 2 - I don't know if that's possible, but if I was the one designing this system from scratch, this is what I would do.
Any thoughts?
Regarding method 1 (using open() and write(), mmap() and/or O_DIRECT), can you recommend optimal settings for my purpose?
Is method 2 (storing to the HD directly from an existing DMA buffer) possible? If so, can you point me to an example?
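For the O_DIRECT variant of method 1 mentioned above, the user buffer, file offset and length generally have to be aligned to the filesystem block size; a minimal sketch (the file name, frame size and 4096-byte alignment are assumptions, not values from the question):

```c
/* Sketch of method 1 with O_DIRECT: the page cache is bypassed, so the
 * buffer, offset and length must be suitably aligned (4096 is an assumed
 * value; query the real one for your filesystem). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALIGN      4096
#define FRAME_SIZE (ALIGN * 1024)     /* 4 MiB placeholder frame */

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, ALIGN, FRAME_SIZE) != 0) { perror("posix_memalign"); return 1; }
    memset(buf, 0x42, FRAME_SIZE);    /* stand-in for a captured frame */

    int fd = open("frame.raw", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = write(fd, buf, FRAME_SIZE);  /* DMA'd from this buffer, no page cache copy */
    if (n != (ssize_t)FRAME_SIZE) perror("write");

    close(fd);
    free(buf);
    return 0;
}
```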
The only problem with writing into a file via mmap on UNIX systems is that you either have to deal with signals in case of out-of-disk-space, or you have to make certain that the file is not sparse, and thus all needed disk space is already allocated.
I think an up-to-date G++ provides a way of converting signals into C++ exception handling, but I'm not certain how well supported this is on systems other than macOS.
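A minimal sketch of the mmap approach described above, preallocating the file so it is not sparse before writing through the mapping (the file name and frame size are placeholders):

```c
/* Sketch of the mmap approach: allocate all blocks up front so running out
 * of space mid-write cannot happen, then copy the frame through the mapping. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FRAME_SIZE (4 * 1024 * 1024)   /* placeholder frame size */

int main(void)
{
    int fd = open("frame.raw", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve the disk space now, so the file is not sparse. */
    int err = posix_fallocate(fd, 0, FRAME_SIZE);
    if (err != 0) { fprintf(stderr, "posix_fallocate: %s\n", strerror(err)); return 1; }

    void *dst = mmap(NULL, FRAME_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (dst == MAP_FAILED) { perror("mmap"); return 1; }

    static unsigned char frame[FRAME_SIZE];   /* stand-in for the V4L2 buffer */
    memcpy(dst, frame, FRAME_SIZE);           /* one copy: user space -> page cache */
    msync(dst, FRAME_SIZE, MS_SYNC);

    munmap(dst, FRAME_SIZE);
    close(fd);
    return 0;
}
```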

Is Cassandra good for storing files?

I'm developing a PHP platform that will make heavy use of images, documents, and any other file format that comes to mind, so I was wondering whether Cassandra is a good choice for my needs.
If not, can you tell me how I should store files? I'd like to keep using Cassandra because it's fault-tolerant and auto-replicates among nodes.
Thanks for the help.
From the Cassandra wiki:
Cassandra's public API is based on Thrift, which offers no streaming abilities; any value written or fetched has to fit in memory. This is inherent to Thrift's design and is therefore unlikely to change. So adding large object support to Cassandra would need a special API that manually split the large objects up into pieces. A potential approach is described in http://issues.apache.org/jira/browse/CASSANDRA-265. As a workaround in the meantime, you can manually split files into chunks of whatever size you are comfortable with -- at least one person is using 64MB -- and making a file correspond to a row, with the chunks as column values.
So if your files are < 10MB you should be fine; just make sure to limit the file size, or break large files up into chunks.
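A sketch of the chunking workaround the quote describes, splitting a file into fixed-size pieces keyed by file and chunk index (the 1 MB chunk size and the key naming are assumptions, and handing each chunk to your Cassandra client is deliberately left out):

```c
/* Split a file into fixed-size chunks: one row key per file, one column per
 * chunk, as the wiki workaround describes. Chunk size and key naming are
 * assumptions; storing each chunk with a Cassandra client is omitted. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE (1024 * 1024)   /* 1 MB per column value (assumed) */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }

    unsigned char *buf = malloc(CHUNK_SIZE);
    size_t n;
    unsigned chunk = 0;

    while ((n = fread(buf, 1, CHUNK_SIZE, in)) > 0) {
        char column[64];
        snprintf(column, sizeof column, "chunk-%05u", chunk++);
        /* Here you would insert: row key = argv[1], column name = column,
         * value = buf[0..n-1], using whatever Cassandra client you use. */
        printf("%s: %zu bytes\n", column, n);
    }

    free(buf);
    fclose(in);
    return 0;
}
```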
You should be OK with files of 10MB. In fact, DataStax Brisk puts a filesystem on top of Cassandra if I'm not mistaken: http://www.datastax.com/products/enterprise.
(I'm not associated with them in any way; this isn't an ad.)
As more recent information, Netflix provides utilities for their Cassandra client, Astyanax, for storing files as chunked object stores. A description and examples can be found here. It can be a good starting point to write some tests using Astyanax and to evaluate Cassandra as file storage.