Hi everybody,
First of all, thanks for your support and help. Now I want to know how to calculate the load on a computer system. I have heard about load sensors and other utilities, like top and System Monitor, which mostly offer options such as finding the temperature of the hard disk or the system, but I don't want to use them.
All I want is, simply, the load on the system: the CPU load (e.g. the CPU is 40% free), the memory load (total memory usage or the amount of free memory), free disk space, and so on.
I don't need a software tool for finding these things.
All I want to know is: is there any program in C (or another language or script) which can find out these things, one by one or all at once?
Are there any commands to find this?
Or can anybody explain how System Monitor works?
Waiting for your help.
That depends on your operating system. On Unix/Linux there is the /proc directory, which contains many files and directories with system information, such as /proc/loadavg or /proc/cpuinfo.
These aren't real files on your disk, but virtual files containing system information. You can simply open and read them with standard file functions, so this works from any programming language, from C to Python.
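As a rough illustration of that approach, here is a minimal C sketch that reads the load averages from /proc/loadavg, the total and free memory from the first two lines of /proc/meminfo, and the free space of the root filesystem via statvfs(). The exact layout of the /proc files can vary between kernels, so treat this as a starting point rather than a robust parser; a figure like "the CPU is 40% free" additionally requires sampling the counters in /proc/stat twice and comparing them.

/* Minimal sketch: read basic load figures from /proc and statvfs().
 * Field layouts below are the common Linux ones; adjust for your kernel. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    /* CPU load averages over 1, 5 and 15 minutes */
    double load1, load5, load15;
    FILE *f = fopen("/proc/loadavg", "r");
    if (f && fscanf(f, "%lf %lf %lf", &load1, &load5, &load15) == 3)
        printf("load average: %.2f %.2f %.2f\n", load1, load5, load15);
    if (f) fclose(f);

    /* Total and free memory in kB (first two lines of /proc/meminfo) */
    long total_kb = 0, free_kb = 0;
    f = fopen("/proc/meminfo", "r");
    if (f && fscanf(f, "MemTotal: %ld kB MemFree: %ld kB", &total_kb, &free_kb) == 2)
        printf("memory: %ld kB total, %ld kB free\n", total_kb, free_kb);
    if (f) fclose(f);

    /* Free disk space on the filesystem containing "/" */
    struct statvfs vfs;
    if (statvfs("/", &vfs) == 0)
        printf("free disk space: %llu bytes\n",
               (unsigned long long)vfs.f_bavail * vfs.f_frsize);

    return 0;
}

On Linux with glibc there is also getloadavg(3) if all you need are the load averages.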
I recently got this question as homework and I'm having trouble figuring it out. I tried searching online, but I can't seem to find any answers.
" Some file systems use two block sizes for disk storage allocation,
for example, 4- Kbyte and 512-byte blocks. Thus, a 6 Kbytes file can
be allocated with a single 4- Kbyte block and four 512-byte blocks.
Discuss the advantage of this scheme compared to the file systems that
use one block size for disk storage allocation. "
So are more blocks better?
Any help? Thanks in advance.
You can't have a large number of different block sizes; that would be hell to implement and manage. I also think some hardware limitations restrict which sizes you can use.
Now, the thing is, unless the amount of data you wish to store fits exactly into the blocks you are using, some space is going to be wasted in the last block.
For example, if your block were one gigabyte long (hypothetically speaking) and you wanted to store a file of one or two bytes, you would have just wasted nearly a gigabyte of disk space. All information is stored as whole blocks; you can't store half a block.
Large blocks make for better performance, though, since the disk can spend more time fetching data from one block before proceeding to the next. There are also fewer blocks to track and manage.
Linux is a fun operating system to play with because it can work with so many different file systems (as far as I remember, Windows only gives you some variations of FAT and NTFS). You can read more about file systems at this link:
Linux System Administrators Guide: Chapter 5. Using Disks and Other Storage Media
See section 5.10.5 for more information on the advantages and disadvantages of small and large block sizes.
So, back to your question: having two block sizes like that allows you to optimize storage. You can minimize wasted space by switching to smaller blocks towards the end of the file, while using as few blocks as possible to reduce I/O time.
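To make the numbers concrete, here is a small, purely illustrative C snippet (the 4-Kbyte and 512-byte sizes are just the values from the question, not from any particular file system) that compares the block count and the wasted space for the 6-Kbyte file under a single 4-Kbyte block size versus the mixed scheme:

/* Illustration only: compare the block count and wasted space for a file
 * under (a) a single 4 KB block size and (b) the mixed 4 KB + 512 B scheme
 * described in the question. */
#include <stdio.h>

#define BIG   4096u   /* 4 KB block  */
#define SMALL 512u    /* 512 B block */

int main(void)
{
    unsigned file_size = 6 * 1024;   /* the 6 KB file from the question */

    /* (a) only 4 KB blocks: round the size up to a multiple of 4 KB */
    unsigned big_only_blocks = (file_size + BIG - 1) / BIG;
    unsigned big_only_waste  = big_only_blocks * BIG - file_size;

    /* (b) 4 KB blocks for the bulk of the file, 512 B blocks for the tail */
    unsigned big_blocks   = file_size / BIG;
    unsigned tail         = file_size % BIG;
    unsigned small_blocks = (tail + SMALL - 1) / SMALL;
    unsigned mixed_waste  = (big_blocks * BIG + small_blocks * SMALL) - file_size;

    printf("4 KB only : %u blocks, %u bytes wasted\n",
           big_only_blocks, big_only_waste);
    printf("4 KB + 512: %u big + %u small blocks, %u bytes wasted\n",
           big_blocks, small_blocks, mixed_waste);
    return 0;
}

It prints two 4-Kbyte blocks with 2048 bytes wasted for the single-size scheme, versus one 4-Kbyte block plus four 512-byte blocks with nothing wasted for the mixed scheme, at the cost of tracking more blocks.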
I have an ARM-based device running Linux which is connected to a camera, and I'm trying to store captured frames to the hard disk efficiently.
I'm developing in user space, but can modify drivers at will.
I'm coding in C
Frames are written into memory using DMA, and I have their physical memory pointer.
I am able to control the whole frame-capturing flow, and I can tell when the frame buffers are stable (dequeued from the video4linux driver).
The Linux version is 3.0.35.
I'm familiar with the kernel source code. I'm not an expert, but I'm able to find my way in it and figure things out, as long as I get some hints...
I believe I have 2 alternatives:
Find the optimal configuration for my filesystem for opening the file and writing into it. I'm currently using ext4 and the standard fopen()/fwrite() functions. I understand I could also use mmap, or add the O_DIRECT flag when calling open(), but I haven't tried that yet.
Find a way to pass the physical address of the buffer (I can get it from my Video4Linux driver) directly to the filesystem/hard drive driver, so that the data is transferred directly from there.
I found method 1 to be slow, with memory transactions as my bottleneck, since fwrite involves copying data from user space to kernel space, then again into some sort of cache, and then on to DMA. Too many memory transactions for a simple store...
Regarding method 2: I don't know whether that's possible, but if I were designing this system from scratch, this is what I would do.
Any thoughts?
Regarding method 1 (using open() and write(), mmap(), and/or O_DIRECT): can you recommend optimal settings for my purpose?
Is method 2 (storing to the HD directly from an existing DMA buffer) possible? If so, can you point me to an example?
The only problem with writing into a file via mmap on Unix systems is that you either have to deal with signals in case you run out of disk space, or you have to make certain that the file is not sparse, so that all the needed disk space is already allocated.
I think an up-to-date g++ provides a way to convert signals into C++ exceptions, but I'm not certain how well that is supported on systems other than Mac OS.
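As a minimal sketch of that idea (assuming Linux/ext4 and that one captured frame fits in a single mapping; the function name and parameters are made up for illustration), you could preallocate the file with posix_fallocate() so it is not sparse, then copy the frame into an mmap() of the file:

/* Minimal sketch (assumptions: Linux/ext4, frame fits in one mapping):
 * preallocate the file so it is not sparse, then copy the frame into a
 * mapping of the file. With the space already allocated, a write through
 * the mapping should not fail with SIGBUS because of a full disk. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int store_frame(const char *path, const void *frame, size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return -1; }

    /* Reserve the disk space up front so the file is not sparse. */
    int err = posix_fallocate(fd, 0, (off_t)len);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return -1;
    }

    void *dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (dst == MAP_FAILED) { perror("mmap"); close(fd); return -1; }

    memcpy(dst, frame, len);          /* one copy: user buffer -> page cache */
    msync(dst, len, MS_SYNC);         /* optionally force it out to disk     */

    munmap(dst, len);
    close(fd);
    return 0;
}

This still costs one copy from the user-space buffer into the page cache; the usual way to skip the page cache is O_DIRECT with a suitably aligned buffer, at the price of strict alignment and size requirements.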
I was studying operating system concepts from Galvin's sixth edition and I have some questions about the flow of execution of a program. A figure explains the processing of a user program as follows:
We get an executable binary file when we reach the linkage editor stage. As the book says, "The program must be brought into memory and placed within a process for it to be executed." Now, some of my (probably stupid) questions are:
Before the program is loaded into memory, the binary executable file generated by the linkage editor is stored on the hard disk. Is the address where the binary executable file is stored on the hard disk the logical address as generated by the CPU?
If the previous answer is yes, why does the CPU have to generate the logical address? I mean, the executable file is stored somewhere on the hard disk, which corresponds to an address; why does the CPU have to do this separately? The CPU's main aim is processing, after all!
Why does the executable file need to be in physical memory, i.e. RAM, and why can it not be executed from the hard disk? Is that due to speed issues?
I know I am being stupid in asking these questions, but trust me, I can't find the answers! :|
1) The (logical) address where the binary file is stored on the hard disk is determined by the file system, the operating system component that manages files on the disks.
2) & 3) The Hard Disk is not a) fast enough b) does not support word addressing. The hard disks are addressed in sectors blocks. Usually the sector size is 512 bytes. The CPU need to be able to address each machine word in a program to execute it. So, the program is stored in the hard disk, that retains its content even being powered off (in contrast to the RAM that losts its content when it is powered off). Then the program is loaded into RAM to be executed. After program finished and possibly stored the result of its execution in the hard disk, the memory is freed for running another programs. The Compiler and the Linkage Editor in your sample are also programs. They are kept in the hard disk. The compiler get its input (the source text of your program) from the file in the hard disk. Then it stores the object file. The linkage editor, or linker for short does the same: it reads the object file and necessary library files and then produces a file with a binary representation of your program.
The Matlab program is installed on hard drive C together with Windows, whereas the scripts and the data to be loaded are saved on hard drive D. Could that be a cause of slower loading of data and slower execution of scripts?
Until someone provides hard evidence to the contrary, I don't think this is something you need to be concerned about. If locating data and Matlab on different disks has any impact on execution speed, it will be unnoticeably small.
Once the Matlab program is loaded (from drive C in your case), it will sit in memory, ready and waiting for your commands. It's possible that some non-core functionality will be read from disk on demand, but you are unlikely to notice, and will find it very difficult to measure, the time this takes. Whether you then read data and programs from C or D is immaterial.
I look forward to the data that proves me wrong.
I compared loading a 140 MB .mat file from an external USB 2.0 drive and from an internal (IDE or SATA) drive.
Loading time from the external drive: > 15 minutes
Loading time from the internal drive: a few seconds
However, sometimes loading from the external drive is fast as well.
I have a Perl script which monitors the usage of several Windows network share drives. It currently monitors the free space of several network drives using the Cygwin df command. I'd like to add the individual drive usages as well. When I use the
du -s | grep total
command, it takes forever. I need to look at the shared drive usage because there are several network drives that are shared from a single drive on the server. Thus, filling one network drive fills them all (yes, I know, not the best solution; not my choice).
So, I'd like to know if there is a quicker way to get the folder usage that doesn't take forever.
du -s works by recursively querying the size of every directory and file. If your filesystem implementation doesn't store this total somewhere, that is the only way to determine disk usage. Therefore, you should investigate which filesystem and drivers you are using and see whether there is a way to query this data directly. Otherwise, you're probably SOL and will have to accept the time it takes to run du.
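For illustration, this is roughly what du -s has to do under the hood (a simplified C sketch using nftw(), not the actual du source): stat every entry in the tree and add up the allocated blocks, so the run time grows with the number of files, and over a network share each stat is a round trip.

/* Rough sketch of what `du -s` has to do: walk every file and directory
 * under a path and add up their sizes. Over a network share, each of these
 * stat calls is a round trip, which is why it is so slow. */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static long long total_bytes = 0;

static int add_entry(const char *path, const struct stat *sb,
                     int typeflag, struct FTW *ftwbuf)
{
    (void)path; (void)typeflag; (void)ftwbuf;
    total_bytes += (long long)sb->st_blocks * 512;  /* allocated space */
    return 0;                                       /* keep walking    */
}

int main(int argc, char **argv)
{
    const char *root = argc > 1 ? argv[1] : ".";
    if (nftw(root, add_entry, 64, FTW_PHYS) != 0) {
        perror("nftw");
        return 1;
    }
    printf("%s: %lld bytes used\n", root, total_bytes);
    return 0;
}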
1) The problem possibly lies in the fact that these are network drives; a local du is acceptably fast in most cases. Are you running du on the very server where the disk is housed? If not, try approaching the problem from a different angle: run an agent on every server hosting the drives which calculates the local du summaries and then reports the totals to a central process (either via IPC or, heck, by writing a report into a file on that same shared filesystem).
2) If one of the drives takes up a significantly larger share of the space (on average) than the rest, you can optimize by running du on all but the "biggest" one and then calculating the biggest one's usage by subtracting the sum of the others from the df result (see the sketch below).
3) Also, to be perfectly honest, it sounds like a suboptimal solution from a design standpoint. While you indicated that it's not your choice, I'd strongly recommend posting a question about how to improve the design within the parameters you were given (on the ServerFault website, not SO).
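A tiny illustration of point 2 (all names and numbers below are made up): take the used-space figure that df reports for the server's disk and subtract the du totals of the smaller shares to estimate the biggest one.

/* Illustration of point 2 (numbers are made up): derive the usage of the
 * biggest share from df's figure for the whole server disk minus the
 * du totals of the smaller shares. */
#include <stdio.h>

int main(void)
{
    long long disk_used_kb = 900000000;              /* from `df` on the server disk  */
    long long small_share_du_kb[] = { 120000000,     /* `du -s` of the smaller shares */
                                      80000000,
                                      50000000 };
    long long sum_small = 0;
    for (int i = 0; i < 3; i++)
        sum_small += small_share_du_kb[i];

    long long biggest_share_kb = disk_used_kb - sum_small;
    printf("estimated usage of the biggest share: %lld kB\n", biggest_share_kb);
    return 0;
}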