How does my operating system get information about disk size, RAM size, CPU frequency, etc.?

I can see information about my hard disk, RAM and CPU from my OS, but I've never told my OS this information.
How does my OS know it?
Is there some place in the hard disk or CPU or RAM that stores this kind of information?
Is there some standard about the format of this kind of information?

SMBIOS (formerly known as DMI) contains much of this information. SMBIOS is a data structure/API that is part of the BIOS/UEFI firmware and contains information such as the brand and model of the computer, etc.
The rest is gathered by the OS querying hardware directly.
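For example, on Linux the kernel parses the SMBIOS tables and re-exports the strings under /sys/class/dmi/id/, so a normal program can read them without touching the firmware itself. A minimal C sketch, assuming a Linux system where those sysfs entries exist:

    /* Minimal sketch: read a few of the SMBIOS/DMI strings that the Linux
       kernel exposes under /sys/class/dmi/id/. Assumes a Linux system where
       these sysfs entries exist; availability varies with the firmware. */
    #include <stdio.h>
    #include <string.h>

    static void print_dmi(const char *name)
    {
        char path[256], value[256];
        snprintf(path, sizeof path, "/sys/class/dmi/id/%s", name);

        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%-16s (not available)\n", name);
            return;
        }
        if (fgets(value, sizeof value, f)) {
            value[strcspn(value, "\n")] = '\0';   /* strip trailing newline */
            printf("%-16s %s\n", name, value);
        }
        fclose(f);
    }

    int main(void)
    {
        /* These entries come from the SMBIOS "System" and "BIOS" structures. */
        print_dmi("sys_vendor");
        print_dmi("product_name");
        print_dmi("bios_vendor");
        print_dmi("bios_version");
        return 0;
    }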

Answer grabbed from superuser by Mokubai.
You don't need to tell it because each device already knows (or has a way) to identify itself.
If you start from the idea that every device is accessed via address and data lines (and in some cases only data lines), then you come to the realisation that on those data lines you need some kind of "protocol" that determines just how you talk to those devices.
In amongst that protocol you have commands that say "read this" and "send that" or "put this over there". It is also relatively easy to have a command that says "identify yourself" which, rather than reading a block of disk or memory or painting a pixel a particular colour, will return a premade string or set of strings that tell the driver or operating system what that device is. Using a series of identity commands you could discover a device type, its capabilities and what driver might be able to work with it.
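PCI devices are a concrete example of this: each one reports a vendor ID and device ID from its configuration space, and on Linux the kernel re-exports those IDs through sysfs. A small sketch, assuming a Linux system with /sys/bus/pci/devices present:

    /* Sketch: list the vendor/device IDs that each PCI device reports about
       itself. Assumes a Linux system, where the kernel re-exports these
       configuration-space IDs under /sys/bus/pci/devices/<addr>/{vendor,device}. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    static void read_id(const char *dev, const char *file, char *out, size_t len)
    {
        char path[512];
        snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", dev, file);
        out[0] = '\0';
        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(out, (int)len, f))
                out[strcspn(out, "\n")] = '\0';
            fclose(f);
        }
    }

    int main(void)
    {
        DIR *d = opendir("/sys/bus/pci/devices");
        if (!d)
            return 1;

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char vendor[16], device[16];
            read_id(e->d_name, "vendor", vendor, sizeof vendor);
            read_id(e->d_name, "device", device, sizeof device);
            printf("%s  vendor=%s  device=%s\n", e->d_name, vendor, device);
        }
        closedir(d);
        return 0;
    }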
You don't need to tell a device what it is, because it already knows. And you don't need to tell the operating system what it is because it can ask the device itself.
You don't tell people what they're called and how they talk, you ask them.
Each device has its own protocol for these messages, and devices don't store the details of other devices, because to do so would be insane and near useless given that you can remove any device at any time. Your hard drive doesn't need to store information about your memory or graphics card, except for the driver that the operating system uses to talk to it.
The PC UEFI specification would define a core set of system specifications that every computer has, allowing the processor to be powered up and a program stored in an EEPROM to begin the absolute basic system probing necessary to determine the processor, set up the RAM, find a disk and display, and thus continue to boot the computer.
From there the UEFI system would hand over to the operating system which would have more detailed probing and identification procedures, but it all starts at the most basic "I have a processor, what is around me?" situation.

Related

What is the difference between File System and File Management in an operating system?

I’ve found explanations of file management, explanations of file systems, and “the file system is part of file management” explanations. But I am wondering whether they are the same thing or two different things, because I cannot seem to find an article about them.
A modern Operating System, to be portable, must be file system independent; i.e. it should not matter what type of storage format a given media device contains. At the same time, a media device must contain a specific type of storage format to hold files and folders, yet be Operating System independent.
For example, an OS should be able to handle any file locally, allowing the actual transfer of these files from physical media to the OS (and vice versa) to be managed by the file system manager. Therefore, an OS can be completely independent of how the file was stored on the media.
With this in mind, there are at least two layers, usually more, of management between the file being viewed and the file on the physical media. Here is a (simple) list of layers that might be used from top down.
1. OS App viewing the file
2. OS File Manager
3. OS File System Manager (allowing multiple file systems)
4. Specific File System Driver
5. Media Device Driver
When a call to read a file is made, the app (1) calls the OS File manager (2), which in turn--due to the opening of the file--calls the correct OS File System Manager (3), which then calls the Specific File System Driver (4), which then calls the Media Device Driver (5) for the actual access.
Please note that any or all of these layers could have a working cache manager, which means calls are processed and returned without calling lower layers; e.g. each layer might read more than requested in anticipation of another read.
By having multiple layers like this, you can have any (physical) file system and/or media device you wish and the OS would be none the wiser. All you need is a media driver for the specific physical device and a file system manager for the physical format of the contents of the media. As long as these layers all support the common service calls, any format of media and content on that media will be allowed by the OS.
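To make the layering concrete, here is a rough C sketch of the idea; the structure names and functions are purely illustrative and do not correspond to any real kernel's API:

    /* Illustrative sketch of the layering described above (not any real
       kernel's API): the file-system manager calls through an abstract
       interface, each specific file-system driver provides its own
       implementation, and that in turn talks to a media device driver. */
    #include <stddef.h>
    #include <stdio.h>

    /* Layer 5: media device driver interface (e.g. a disk). */
    struct block_device {
        int (*read_block)(struct block_device *dev, size_t lba, void *buf);
    };

    /* Layer 4: specific file-system driver interface. */
    struct fs_driver {
        const char *name;
        long (*read_file)(struct fs_driver *fs, const char *path,
                          void *buf, size_t len);
        struct block_device *dev;
    };

    /* Layer 3: file-system manager - dispatches to the mounted driver. */
    static long fs_manager_read(struct fs_driver *mounted_fs, const char *path,
                                void *buf, size_t len)
    {
        return mounted_fs->read_file(mounted_fs, path, buf, len);
    }

    /* A toy "file system" whose read just asks the device for block 0. */
    static long toyfs_read(struct fs_driver *fs, const char *path,
                           void *buf, size_t len)
    {
        (void)path; (void)len;
        return fs->dev->read_block(fs->dev, 0, buf) == 0 ? 512 : -1;
    }

    static int ram_disk_read(struct block_device *dev, size_t lba, void *buf)
    {
        (void)dev; (void)lba; (void)buf;   /* pretend we copied 512 bytes */
        return 0;
    }

    int main(void)
    {
        struct block_device disk = { .read_block = ram_disk_read };
        struct fs_driver toyfs = { "toyfs", toyfs_read, &disk };
        char buf[512];

        /* Layers 1-2 (application + OS file manager) boil down to this call. */
        long n = fs_manager_read(&toyfs, "/readme.txt", buf, sizeof buf);
        printf("read %ld bytes via %s\n", n, toyfs.name);
        return 0;
    }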

How is a program loaded in OS

I am reading about logical and physical addressing. I am confused that when a binary file is run, does it pass first through the CPU where the logical address are generated or is it directly copied to the physical memory ?
when a binary file is run, does it pass first through the CPU where the logical address are generated or is it directly copied to the physical memory ?
Typically some code somewhere loads the executable file's headers into memory, and then uses information from the headers to figure out where various pieces of the file (sections - e.g. .text, .data, etc) should be in virtual memory and what each virtual page's virtual permissions should be (if writes are allowed, if execution is allowed).
After this, areas of the virtual address space are set up. Often this is done by memory mapping the relevant parts of the file into the virtual address space, without actually loading them into physical memory. In this case each page's actual permissions don't reflect the page's virtual permissions (e.g. a "read/write" page might be "not present" initially, and when software tries to read from the page you'll get a page fault and the page fault handler might fetch the page from disk and change the page to "present, read only"; then later, when software tries to write to the page, you might get a second page fault, and the page fault handler might do a "copy on write" so that anything else using the same physical page isn't affected, and then make the new copy "read/write" so that it matches the original virtual permissions).
While this is happening; the OS could (depending on amount of free physical RAM and whether storage devices have more important data to transfer) be prefetching the file data from disk (e.g. into VFS cache), and could be "opportunistically" updating the process' page tables to avoid the overhead of page faults for pages that have been prefetched.
However, if the OS knows that the file was on unreliable and/or removable media, it may decide that using memory-mapped files is a bad idea and may actually load the needed executable sections into memory before executing it; and an OS could have other features that cause the file to be loaded into RAM before it's executed (e.g. if the OS checks that an executable file's digital signature is correct before allowing the file to be executed, then the entire file probably needs to be loaded into memory so that the digital signature can be checked, and in that case the entire file is likely to still be in memory when the virtual address space is set up).
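You can observe the memory-mapping half of this from user space on a POSIX system: mmap() establishes the virtual mapping up front, and the data is only pulled in from the file when pages are first touched. A minimal sketch, assuming a POSIX system and a file path passed on the command line:

    /* Minimal POSIX sketch: map a file into the virtual address space.
       mmap() only sets up the mapping; physical pages are typically faulted
       in on first access (demand paging), much as described above. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) { perror("fstat"); return 1; }

        /* MAP_PRIVATE gives copy-on-write semantics if we later wrote to it. */
        unsigned char *p = mmap(NULL, (size_t)st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching the first byte of each page triggers the page faults that
           actually pull the data in from the file. */
        long pagesize = sysconf(_SC_PAGESIZE);
        unsigned long sum = 0;
        for (off_t off = 0; off < st.st_size; off += pagesize)
            sum += p[off];

        printf("mapped %lld bytes, checksum of first byte per page: %lu\n",
               (long long)st.st_size, sum);

        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }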
You need to read an entire book on these topics and spend several weeks on that.
Operating Systems: Three Easy Pieces is a good book, and it is freely downloadable.
Once you have read it, perhaps also look into osdev.org for practical things. And don't forget free software OSes such as Linux, e.g.
https://kernelnewbies.org/
Be aware of copy-on-write and virtual address space...
An executable file is generally an interpreted program itself that is executed by the loader. The executable contains instructions that tell the loader how the program should exist in VIRTUAL memory. By that, I mean the instructions in the executable define the initial VIRTUAL representation of the process address space.
So when the executable starts, there is only a virtual representation of the address space in secondary storage. As the program executes, it starts page faulting repeatedly to load pages into memory. After the initial load, the page fault rate dies down.
The executable NORMALLY only contains logical addresses.
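On Linux/ELF, for instance, those "instructions to the loader" are the program headers: each PT_LOAD entry names a file offset, a virtual (logical) address, a size and permissions. A small C sketch that prints them for a 64-bit ELF executable, assuming a system that provides elf.h:

    /* Sketch: print the PT_LOAD program headers of a 64-bit ELF executable.
       Each entry tells the loader which part of the file goes at which
       VIRTUAL address and with which permissions. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1 ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
            eh.e_ident[EI_CLASS] != ELFCLASS64) {
            fprintf(stderr, "not a 64-bit ELF file\n");
            return 1;
        }

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            if (ph.p_type != PT_LOAD) continue;

            printf("LOAD: file offset 0x%llx -> vaddr 0x%llx, filesz 0x%llx, "
                   "memsz 0x%llx, flags %c%c%c\n",
                   (unsigned long long)ph.p_offset,
                   (unsigned long long)ph.p_vaddr,
                   (unsigned long long)ph.p_filesz,
                   (unsigned long long)ph.p_memsz,
                   (ph.p_flags & PF_R) ? 'r' : '-',
                   (ph.p_flags & PF_W) ? 'w' : '-',
                   (ph.p_flags & PF_X) ? 'x' : '-');
        }
        fclose(f);
        return 0;
    }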

Bootloader and Firmware Common Usage and Firmware Upgrade

There are two cases to consider when working on an embedded system.
Embedded systems have limited resources, for example an ARM Cortex-M0 microcontroller with 12 KB of flash.
Case 1:
Common function/module usage for bootloader and firmware:
The bootloader and the firmware may need to use the same modules and functions to prevent code duplication. Otherwise, the same code will be included twice, once in the bootloader and once in the firmware.
We can prevent this by fixing the function addresses and calling those functions by address. This is one possible solution.
Is there any smart method to provide common function usage?
Case 2:
Sometimes we need to upgrade the firmware. One of the duties of the bootloader is firmware upgrades. We can easily upgrade the firmware by overwriting the old one.
As we saw, the two cases can be implemented separately. But when we merge them, some problems appear.
Question:
Bootloaders are generally static objects, but firmware can be modified. Therefore, common functions are generally located in the bootloader. But when we need to update a common module/function, how can we do it?
What are the general or smart approaches for embedded systems structured as bootloader plus firmware, especially with limited resources?
To separate out common modules/functions, can one or more additional areas solve this problem?
Firmware, bootloader and library (a new area)?
I want to learn general approaches. Is there any paper, book or other source about advanced firmware management?
Thanks
If you share code between your bootloader and your mainline firmware application, then your bootloader will be using this code when it flashes the application space. To prevent this condition you must sacrifice the ability to update the common code; otherwise your bootloader will crash.
With only 12 KB of flash, it's pretty ambitious to expect a bootloader and mainline application to fit. You might consider writing the bootloader in assembly (gasp!). Some Cortex-M0 parts (such as the NXP LPC11xx family) have an additional boot ROM which provides some useful functions and helps alleviate some of the memory constraints.
Your question states the problem correctly - you cannot have your cake and eat it. Either:
1. You go for a small memory footprint and do not include firmware upgrade logic in the bootloader (i.e. the bootloader might just validate the application image CRC, etc., but nothing more complicated). Here you could share functions to save space. OR
2. The bootloader has firmware upgrade functionality. Here you have to have the shared functions compiled into both the app and the bootloader. The shared functions should be small - probably not a huge overhead, but you need the space that this would take - and if you don't have it then you need to go for more memory.
There is no way to share functionality and do firmware upgrade from bootloader reliably.
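That said, if you do decide to share code and accept that the shared part can never be updated, one common pattern is to expose the bootloader's functions through a table of function pointers at a fixed flash address. The sketch below illustrates the idea; the address, section name and example functions are assumptions for illustration, not any vendor's API:

    /* Sketch of sharing code via a bootloader-resident function table.
       The table sits at a fixed flash address; the application only knows
       that address and the struct layout. The address, the ".rom_api"
       section name and the stub bodies are illustrative. */
    #include <stdint.h>

    struct rom_api {
        uint32_t version;   /* layout is an ABI: bump this if it ever changes */
        uint32_t (*crc32)(const void *data, uint32_t len);
        void     (*uart_putc)(char c);
    };

    #ifdef BOOTLOADER_BUILD
    /* Bootloader side: implementations live here; the table is pinned by the
       linker script into a ".rom_api" section at a known address. */
    static uint32_t bl_crc32(const void *data, uint32_t len)
    {
        const uint8_t *p = data;
        uint32_t crc = 0xFFFFFFFFu;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }
    static void bl_uart_putc(char c) { (void)c; /* write to UART data register */ }

    __attribute__((section(".rom_api"), used))
    const struct rom_api g_rom_api = { 1, bl_crc32, bl_uart_putc };

    #else
    /* Application side: reach the table through its fixed address
       (0x08000100 is just an example; match your linker script). */
    #define ROM_API ((const struct rom_api *)0x08000100u)

    uint32_t app_image_checksum(const void *image, uint32_t len)
    {
        /* Only call functions the table version says exist. */
        return (ROM_API->version >= 1) ? ROM_API->crc32(image, len) : 0;
    }
    #endif

The table layout then becomes a contract between the two images, which is exactly the limitation described above: you can replace the implementations only by replacing the bootloader itself.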
In light of the current discussion about security in the firmware update process, I would like to add the following for clarification:
Sharing code between the bootloader and the app will open yet another door for the potential attack, so you really want to avoid that.
The bootloader part is the part you actually do not want to change ever, this should be as static as possible. If the bootloader is broken, in-the-field-updates become nearly impossible or at least insecure.
Having said that, you might want to use a different approach.
You could create a maintenance mode for your device.
This mode opens the JTAG interface and allows direct access to memory. Then the service technician can apply the update.
Now you "only" need to secure the activation of the maintenance mode.
The following could work:
Use a UART interface to communicate the activation.
Maintenance system sends its own id and requests maintenance mode via UART
The id of the maintenance system, a random number and a unique system id are sent back to the maintenance system.
The maintenance system sends this id-sequence to your certification server.
If the unique system id and the maintenance system's id are correct, the server will create a signature of the information received and send it back to the maintenance system.
Your system now receives the signature via UART.
Your system verifies the signature against the previously sent id string with a public key stored during production.
On successful verification, maintenance mode is entered.
To add security, you definitely want to put some effort into the maintenance system's id, following a similar scheme. The id should basically depend on the MAC address or another unique hardware id, plus a signature of the same. The id should be created in a secure environment during the production process of the maintenance system. The unique hardware id should be something visible to the outside world, so the server can actually verify whether the id received matches the maintenance system communicating with the server.
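A rough device-side sketch of that exchange is shown below; every helper it calls (uart_read, uart_write, rng_fill, sig_verify, enter_maintenance_mode) is a hypothetical placeholder for whatever UART driver, hardware RNG and signature-verification routine your platform actually provides:

    /* Device-side sketch of the challenge/response described above.
       All helpers declared here are hypothetical placeholders for your own
       board-support code and crypto library. */
    #include <stdint.h>
    #include <string.h>

    #define NONCE_LEN 16
    #define ID_LEN    16
    #define SIG_LEN   256   /* e.g. an RSA-2048 signature */

    extern const uint8_t g_system_id[ID_LEN];   /* unique id, set in production */
    extern const uint8_t g_server_pubkey[];     /* public key stored in production */

    /* Hypothetical board-support helpers. */
    void uart_read(void *buf, uint32_t len);
    void uart_write(const void *buf, uint32_t len);
    void rng_fill(void *buf, uint32_t len);
    int  sig_verify(const uint8_t *pubkey, const uint8_t *msg, uint32_t msg_len,
                    const uint8_t *sig, uint32_t sig_len);   /* 0 on success */
    void enter_maintenance_mode(void);

    void maintenance_request_handler(void)
    {
        uint8_t maint_id[ID_LEN], nonce[NONCE_LEN], sig[SIG_LEN];
        uint8_t challenge[ID_LEN + NONCE_LEN + ID_LEN];

        /* 1. Maintenance system sends its own id. */
        uart_read(maint_id, sizeof maint_id);

        /* 2. Reply with maintenance id + fresh random nonce + our system id. */
        rng_fill(nonce, sizeof nonce);
        memcpy(challenge,                      maint_id,    ID_LEN);
        memcpy(challenge + ID_LEN,             nonce,       NONCE_LEN);
        memcpy(challenge + ID_LEN + NONCE_LEN, g_system_id, ID_LEN);
        uart_write(challenge, sizeof challenge);

        /* 3-5. The maintenance system gets this signed by the server and
           returns the signature; we verify it against the exact bytes we
           sent, using the public key burned in at production time. */
        uart_read(sig, sizeof sig);
        if (sig_verify(g_server_pubkey, challenge, sizeof challenge,
                       sig, sizeof sig) == 0) {
            enter_maintenance_mode();   /* e.g. unlock JTAG/SWD access */
        }
    }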
This whole setup would give you a secure firmware update without a bootloader.
To have secure firmware updates, the common understanding is that you need an authentication system based on asymmetric encryption, such as RSA. If you need the verification code anyway, the above replaces a bootloader capable of accepting updates with a simple UART interface, saving some resources in the process.
Is this something you were looking for?
A commercial bootloader in my experience uses between 4 and 8 KB of flash memory, depending on the flash algorithm and a couple of other things. I have been sticking with the same vendor throughout my career, so this might vary from your experience.
A digital signature system optimized for embedded systems uses approximately 4.5 KB of flash memory (for an example, see https://www.segger.com/emlib-emsecure.html) and no more RAM than the stack.
You can see that 12 KB is really, really low in terms of having a system which can be updated securely in the field, and even more so if you want the system to be updated using a bootloader.

How to determine the minimum JRE version and system requirements for my Java application

I have written an application in Java using the Eclipse IDE and I now need to know the minimum JRE version that is required to run the application. I know that certain methods are only available under later JREs, but I was wondering what the easiest way to find out the highest requirement of my application would be, so any suggestions would be appreciated...
Also, whilst I am on the topic of requirements, I would appreciate any advice or methods for determining the minimum system requirements for my software in general - i.e. the minimum amount of RAM...
Thanks in advance
Method 1: For minimum JRE version, that's going to be tough. The easiest way is to simply require the same version that you're building against, or later, e.g. JRE 6.x.x or higher.
Method 2: Install multiple JDKs, making them available in Eclipse, and just change the version you're building against, running your app's test suite each time and making sure the tests all pass. The earliest version of the JDK that allows all your tests to pass is the lowest JRE it can run against. Simply having your app compile successfully isn't enough, because previous versions of the JRE/JDK might have bugs that allow for successful compilation but don't allow for proper program execution.
Method 3: Always require the latest on the client side, because Oracle is constantly patching security holes, and ultimately, it may be best to require the latest versions, if you have that kind of control, on the client side.
As far as RAM goes, that's easy. When the JVM starts it sets a 'maximum' heap size (I believe the default may be 128 MB), and that's a hard limit that your application cannot exceed without crashing. Profile your app over time, tweaking the memory settings on the JVM, and find out what the minimum amount of RAM is that you'll need for your app to run both (a) with acceptable performance, and (b) without throwing an OutOfMemoryError, and you're done.
Ref: How to configure JVM options and memory?
For other requirements such as CPU req., things get a little fuzzier. There are a lot of CPUs out there, and the throughput that a given system produces can vary not just based on CPU speed, but the speed of the hard drive, the amount of RAM installed in the system, the speed of the network interface (if you're writing a network app), and other things. For requirements such as that, you'll want to just test it on a variety of systems and sort of draw a line somewhere, and say, "You can expect acceptable performance if you have hardware that is at least as powerful as X, Y, Z".
The other thing you could do is build in a benchmark, or some kind of performance logging, and have that performance data sent back to you. Lots of apps do this. You know that "May we send anonymous usage data back to the mothership?" question you get when installing some software? Well, common among that data are system-specific details such as RAM, CPU, hard drive model, and other hardware details (whatever data you determine is relevant to your app), along with performance logging data. By taking that kind of approach, what you get is a lot of performance data from lots of different system configurations without needing to have a huge number of differently configured machines in-house.
You can do the same thing for program crashes and bugs - have the stack traces, system info, and other relevant data dumped to a log file that is sent back to you - but of course, only if your users have said it's okay to send that data back to you.

What code to write for a dongle attached system to provide better security?

I have developed a piece of software (with C and Python) which I want to protect with a dongle so that copying and reverse engineering become hard enough. My dongle device comes with an API which provides these functions:
Check dongle existence
Check proper dongle
Write into a memory location in dongle
Read from a memory location in dongle, etc. (I think the rest aren't that good..)
What can I do in the source code so that it becomes harder to crack? The dongle provider suggested that I should check for proper dongle existence in a loop or after an event, or that I should use the dongle memory in an efficient way. But how? I have no idea how crackers crack. Please shed some light. Thanks in advance.
P.S: Please don't suggest obfuscating. I have already done that.
First of all, realize that the dongle will only provide a little bit of an obstacle. Someone who knows what they're doing will just remove the call to the dongle and put in a 'true' for whatever result it was supposed to return. Everyone will tell you this. But there are roadblocks you can add!
I would find a key portion of your code, something that's difficult or hard to know, something that requires domain knowledge. Then put that knowledge onto the key. One example of this would be shader routines. Shader routines are text files that are sent to a graphics card to achieve particular effects; a very simple brightness/contrast filter would take less than 500 characters to implement, and you can store that in the user space on most dongles. Then you put that information on the key, and only use information from the key in order to show images. That way, if someone tries to simply remove your dongle, all the images in your program will be blacked out. Cracking it would take someone having a copy of your program, grabbing the text file from the key, modifying your program to include that text file, and then knowing that that particular file is the 'right' way to display images. Particulars of implementation depend on your deployment platform. If you're running a program in WPF, for instance, you might be able to store a DirectX routine on your key, and then load that routine from the key and apply the effect to all the images in your app. The cracker then has to be able to intercept that DirectX routine and apply it properly.
Another possibility is to use the key's random number generation routines to develop UIDs. As soon as someone removes the dongle functionality, all generated UIDs will be zeroed.
The best thing to do, though, is to put a domain specific function onto the dongle (such as the entire UID generation routine). Different manufacturers will have different capabilities in this regard.
How much of a roadblock will these clevernesses get you? Realistically, it depends on the popularity of your program. The more popular your program, the more likely someone will want to crack it, and will devote their time to doing so. In that scenario, you might have a few days if you're particularly good at dongle coding. If your program is not that popular (only a few hundred customers, say), then just the presence of a dongle could be deterrent enough without having to do anything clever.
Crackers will crack by sniffing the traffic between your app and the dongle and either disabling any code that tests for dongle presence or writing code to emulate the dongle (e.g. by replaying recorded traffic), whichever looks easier.
Obfuscation of the testing code, and many scattered pieces of code that perform tests in different ways, as well as separating spatially and temporally the effect of the test (disabling/degrading functionality, displaying a warning etc.) from the test itself make the former method harder.
Mutating the content of the dongle with each test based on some random nonce created each run or possibly even preserved between runs, so that naively recording and replaying the traffic does not work, will make the latter method harder.
However, with the system as described, it is still straightforward to emulate the dongle, so sooner or later someone will do it.
If you have the ability to execute code inside the dongle, you could move code that performs functions critical to your application there, which would mean that the crackers must either rederive the code or break the dongle's physical security - a much more expensive proposal (though still feasible; realise that there is no such thing as perfect security).
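To illustrate the "scatter the checks and separate the effect from the test" advice, here is a small sketch; the dongle_present/dongle_read calls are hypothetical stand-ins for your vendor's SDK, and the point is that the code never branches on "licensed" right next to the dongle query:

    /* Sketch: one of many scattered dongle checks. The dongle_* functions
       are hypothetical stand-ins for the vendor SDK. Instead of branching on
       the result where the dongle is queried, the check stores a value that
       unrelated code quietly depends on much later. */
    #include <stdint.h>

    /* Hypothetical vendor API. */
    int dongle_present(void);
    int dongle_read(uint32_t offset, void *buf, uint32_t len);

    /* A constant the image code needs; the correct value lives only on the
       dongle (expected: 0x00010000, i.e. 1.0 in 16.16 fixed point). */
    static uint32_t g_scale_factor;   /* stays 0 if the check failed */

    void background_license_check(void)
    {
        uint32_t value = 0;
        if (dongle_present())
            dongle_read(0x10, &value, sizeof value);

        /* No "if licensed" branch here: just remember what we read. */
        g_scale_factor = value;
    }

    /* Called much later, from a completely different feature. Without the
       dongle the factor is 0 and every scaled pixel silently becomes black. */
    uint32_t scale_pixel(uint32_t pixel)
    {
        return (uint32_t)(((uint64_t)pixel * g_scale_factor) >> 16);
    }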
How to maximize protection with a simple dongle?
Use the API together with an Enveloper if an enveloper exists for your resulting file format. This is a very basic rule, because the enveloper is already equipped with some anti-debugging and obfuscating methods that make common newbie hackers give up on hacking the program. Using only an enveloper is also not recommended, because once a hacker can break the enveloper protection in another program, they can also break yours.
Call dongle APIs in a LOT of places in your application. For example when first starting up, when opening a file, when a dialog box opens, and before processing any information. Also maybe do some random checking even when there's nothing being done at all.
Use more than one function to protect the program. Do not just use the find function to look for a plugged-in dongle.
Use multiple DLLs/libraries (if applicable) to call dongle functions. If one DLL is hacked, there are still other parts of the software that use the functions from another DLL. For example, copy sdx.dll to print.dll, open.dll, and other names, then define the function calls from each DLL with different names.
If you use a DLL file to call dongle functions, bind it together with the executable. There are quite a few programs capable of doing this; for example, PEBundle.
I found this article on PRLog quite useful for maximizing protection with a simple dongle. Maybe this link will help you:
Maximizing Protection with a Simple Dongle for your Software
You can implement many check points in your application.
I don't know if you use HASP, but unfortunately, dongles can be emulated.
You may want to look into using Dinkey Dongles for your copy protection.
It seems a very secure system and the documentation gives you tips for improving your overall security using the system.
http://www.microcosm.co.uk/dongles.php
Ironically, the thing you want to discourage is not piracy by users, but theft by vendors. The internet has become such a lawless place that vendors can steal and resell your software at will. You have legal recourse in some cases, and not in others.
Nothing is fool-proof, as previously stated. Also, the more complex your security is, the more likely it is to cause headaches or problems for legitimate users.
I'd say the most secure application is always the one tied closest to the server. Sadly, then users worry about it being spyware.
If you make a lot of different calls to your dongle, then maybe the cracker will just emulate your dongle -- or find a single point of failure (it is quite common that changing one or two bytes renders all your calls useless). It is a no-win situation.
As the author of PECompact, I always tell customers that they can not rely on anything to protect their software -- as it can and will be cracked if a dedicated cracker goes after it. The harder you make it, the more of a challenge (fun) it is to them.
I personally use very minimal protection techniques on my software, knowing these facts.
Use a smartcard and encrypt/decrypt working files through a secret function stored in the card. Then the software can be pirated, but it will not be able to open properly encrypted working files.
I would say that if someone wants to crack your software protection, they will do so. When you say 'hard enough' - how should 'enough' be interpreted?
A dongle will perhaps prevent your average user from copying your software - so in that sense it is already 'enough'. But anyone who feels the need and is able to circumvent the dongle will likely be able to get past any other scheme that you engineer.