Is Raspbian a realtime operating system? - raspberry-pi

I use Raspbian for embedded systems in the way one would use microcontrollers: I read several sensors and display their values on an LCD. Is Raspbian also a real-time operating system?

Raspbian is a distribution for the Raspberry Pi built on top of Debian Linux, which is a general-purpose operating system, as opposed to a real-time operating system. There are ways to run an RTOS on the RPi, but that is rather a waste of a powerful board like the RPi. A more suitable way to achieve real-time behaviour would be to use the PREEMPT_RT patch for the Linux kernel.
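For a quick check, here is a minimal sketch of how to tell whether a PREEMPT_RT kernel is running and how to give a task a real-time scheduling policy (the ./sensor_loop binary is just a placeholder for your own program):

# A PREEMPT_RT kernel reports "PREEMPT_RT" in its version string:
uname -v

# On PREEMPT_RT kernels this file exists and contains "1":
cat /sys/kernel/realtime

# Run a task under the SCHED_FIFO real-time policy at priority 80
# (this works on a stock kernel too, just with weaker latency guarantees):
sudo chrt -f 80 ./sensor_loop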

Related

Trying to run a Raspberry-Pi image under QEMU, but VM memory is limited to 256MB

I want to build a time-consuming package (mediapipe) on my Raspberry-Pi buster image under QEMU. So far, I've gotten the image to load and run (including with network connectivity); however, I'm limited to 256MB of memory, which just isn't enough to do much - especially to build mediapipe. Can someone explain why Raspbian images running under QEMU seem to be limited to 256MB?
I've seen some posts about people running with 512MB and even one with 1GB, but they don't seem to be very successful. Can anyone explain the reason for the restriction, and a potential fix?
The problem here is that a lot of people claim to be running "Raspberry Pi emulation in QEMU" when they're actually just running the Raspbian userspace on top of a kernel for a different machine emulation. So it's easy to be confused if you look at several different tutorials that are really describing entirely different emulation setups. Look at what machine type they pass to QEMU.
The "versatilepb" machine type gets used in a lot of tutorials, especially older ones, because it has been in QEMU a long time and it is possible to get it to work with the 1176 CPU that the classic Raspberry Pi boards used. This specific machine has a 256MB maximum memory size, because the real hardware it's emulating has that restriction (it's imposed by the way the physical memory address space is designed). This machine type will never be able to support more RAM, so if you need more then you should ignore any tutorial or setup that uses it.
More recent versions of QEMU really do emulate the actual Raspberry Pi hardware; these are the raspi0, raspi1ap, raspi2b, raspi3ap, raspi3b machine types. These will have the same amount of RAM as the real raspi hardware they're emulating (either 512MB or 1GB). The downside of these board models is that some of the device emulation is lacking features -- so older QEMU will often not correctly boot a newer kernel, and sometimes devices you would like to use are not present. Also, because the raspi boards hang their ethernet device off the USB controller, the only way to get ethernet on these QEMU models would also be to use a USB ethernet device, e.g. with:
-device usb-net,netdev=eth0 -netdev user,id=eth0
This probably needs a recent QEMU version to get a working USB controller.
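Putting that together, a raspi3b invocation might look roughly like this sketch. The kernel, DTB, and image file names are assumptions taken from a typical Raspbian boot partition, and recent QEMU versions insist that the SD image be a power-of-two size (e.g. run qemu-img resize raspbian.img 8G first):

qemu-system-aarch64 \
  -machine raspi3b \
  -kernel kernel8.img \
  -dtb bcm2710-rpi-3-b.dtb \
  -drive file=raspbian.img,format=raw,if=sd \
  -append "root=/dev/mmcblk0p2 rootwait console=ttyAMA0" \
  -device usb-net,netdev=eth0 -netdev user,id=eth0 \
  -nographic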
I don't know if there are any tutorials/recipes for running Raspbian on top of the QEMU "virt" board. If there are, this would probably be the best experience, because the virt board permits lots of memory, PCI devices, virtio devices, and is well maintained.
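For reference, a virt-board setup would look roughly like the sketch below. The likely catch (and perhaps why recipes are scarce) is that the stock Raspbian kernel typically lacks virtio drivers, so you would boot a generic arm64 kernel and initrd (the vmlinuz/initrd.img names here are assumptions) against the Raspbian root filesystem:

qemu-system-aarch64 \
  -machine virt -cpu cortex-a72 -m 4G -smp 4 \
  -kernel vmlinuz -initrd initrd.img \
  -append "root=/dev/vda2 rootwait console=ttyAMA0" \
  -drive file=raspbian.img,format=raw,if=none,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=net0 -netdev user,id=net0 \
  -nographic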

Is running Windows on a Mac the same or very close to running Windows traditionally?

It's been my understanding that the OS sits on top of the hardware. Is it more or less the same to run Windows from a MacBook? When installing SQL on a Windows partition, does this install similarly to an all-Windows setup?
I've heard the kernel is the main connector between the hardware and the basic OS, so would the Mac kernel cause potential differences in operation?
Would installing a Linux OS also adhere to these rules?
Thanks, and sorry for the simple question.
Generally, you are correct to say that installing different operating systems on the same hardware would be the same. You will be able to install both Windows and Linux on the same laptop (whether that is an Asus laptop, or HP, or whatever). Once you install an OS on some hardware, and the OS is able to recognize the hardware and utilize it, then you are in the clear. What's important is to install an OS that is compatible with the architecture of the computer. So if you get a Linux distro that supports the x86 architecture, then you would have to install it on hardware with an x86 architecture.
Side note: Modern OSes are very smart and have a wide range of architecture support (list of Linux architecture support, Windows support for ARM; Apple also has a wide range of architecture support).
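If you're unsure which architecture a machine has, a quick terminal check (standard commands on Linux and macOS):

# Report the machine architecture the kernel is running on:
uname -m    # e.g. x86_64, arm64, aarch64

# On macOS, also show the CPU model string:
sysctl -n machdep.cpu.brand_string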
Since you are asking about a MacBook and Windows, the short answer is: there won't be a problem installing Windows on your Mac. Apple even gives you Boot Camp to do this easily (there are also quite a few recent tutorials on this topic).
So the end experience would be almost the same as having Windows on any other machine.
I've heard the kernel is the main connector between the hardware and the basic OS, so would the Mac kernel cause potential differences in operation?
This is true. The kernel is the heart of any OS, but once you have your Windows running, it will be using its own heart and won't touch the macOS kernel. So if you remove macOS and install only Windows, then only the Windows kernel will be in control of the Mac hardware. But if you boot into macOS, then the macOS kernel will be running and operating the hardware.
Will Windows run faster on Mac hardware than macOS on its hardware? It's debatable and I would assume not a lot of studies have been made in that sphere. But, at least, it will run.
But what about dual-booting your MacBook with Linux? Technically, it is possible (and the principle is the same), but Apple has added restrictions to its firmware, limiting the option of having both macOS and a Linux distro at the same time. What's so different here compared to Windows? Mainly that the firmware of the MacBook (the software embedded in the laptop's hardware) doesn't allow Linux to be installed. Maybe things have changed, but this is the (not so recent anymore) news I know about (I guess there are still ways of installing Linux on Mac hardware).

Are emulation and hardware-assisted virtualization synonyms?

What is the distinction between emulation and full virtualization, also called hardware-assisted virtualization (HVM)?
From this source, it is not clear what the relationship is.
Full virtualization or hardware-assisted virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as a Xen Project HVM guest. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.
Source: Xen Project Wiki
In the following book these terms are considered synonymous.
At one extreme you have full virtualization, or emulation, in which the virtual machine is a software simulation of hardware, real or fictional - as long as there's a driver, it doesn't matter much. Products in this category include VMware and QEMU.
Source: The book of Xen
The following excerpts are from an article describing the actual difference between emulation and HVM. However, the only distinction I can see is that virtualization makes it possible to create more than one computing environment.
If emulation takes such a toll, why bother? Because we might want to do one of the following:
Run an OS on a hardware platform for which it was not designed.
Run an application on a device other than the one it was developed for (e.g., run a Windows program on a Mac).
Read data that was written onto storage media by a device we no longer have or that no longer works.
Source: Russell Kay
Virtual machines offer the following advantages:
They're compatible with all Intel x86 computers.
They're isolated from one another, just as if they were physically separate.
Each is a complete, encapsulated computing environment.
They're essentially independent of the underlying hardware.
They're created using existing hardware.
Source: Russell Kay
There is another article, which only supports my hypothesis.
Emulation, in short, involves making one system imitate another. For example, if a piece of software runs on system A and not on system B, we make system B "emulate" the working of system A. The software then runs on an emulation of system A.
In this same example, virtualization would involve taking system A and splitting it into two servers, B and C.
So let's consider B=C, and then we have emulation, don't we?
Please note that virtualization is achieved by emulating the hardware components (network adapters, USB, hard disks, CD drives, etc.) in software. Thus emulation actually helps achieve virtualization.
Full virtualization is the technique of virtualization in which the guest OS runs unmodified; that is, the guest is not aware of whether it is running in a virtual machine environment or on a physical machine. Initially, binary translation of the guest code was done in order to achieve full virtualization, but it wasn't good from a performance perspective.
Paravirtualization is a technique which requires modifications to the guest operating system in order to gain better performance.
Hardware-assisted virtualization is a full virtualization technique, as the guest operating system runs unmodified. It is called hardware-assisted because this type of virtualization utilizes virtualization-specific extensions in the host hardware, like Intel VT-x, AMD-V, etc. This technique not only offers full virtualization (the guest OS does not require modification) but also has performance benefits, and major vendors like Intel and AMD provide extensions in hardware to support virtualization.
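The distinction is easy to see on the command line. As a rough sketch (guest.img is a placeholder disk image), the same QEMU binary can run a guest either by pure software emulation or with hardware assistance via KVM:

# Hardware-assisted virtualization is available if the CPU advertises
# the Intel VT-x (vmx) or AMD-V (svm) feature flag:
grep -cE 'vmx|svm' /proc/cpuinfo

# Pure emulation: QEMU's TCG translates every guest instruction in
# software (works even across architectures, but slowly):
qemu-system-x86_64 -accel tcg -m 2G -drive file=guest.img,format=qcow2

# Hardware-assisted virtualization: guest code runs directly on the
# host CPU via KVM, while QEMU still emulates the peripherals:
qemu-system-x86_64 -accel kvm -m 2G -drive file=guest.img,format=qcow2

This mirrors the Xen quote above: even under HVM, the peripherals are still emulated; the hardware extensions only take the CPU itself out of the emulation loop.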

PowerPC 970 Based Macs, Why Is Hypervisor Mode Unavailable?

I recently acquired an Apple G5 computer (PPC 970) and am interested in learning more about the PowerPC architecture (most of my systems-programming knowledge comes from x86 and my own hobby kernel).
After using the computer a while and getting used to PowerPC assembly (RISC), I noticed that low-level CPU virtualization is not possible on PowerPC 970 based Macs. The CPU documentation (PowerPC 64) suggests hypervisor mode is supported, but it has been noted that it is not usable, supposedly due to Open Firmware.
Do all operating systems which are loaded from Open Firmware on PowerPC 970 series Macs load in hypervisor mode, making "nested" virtualization impossible? If this is true, why does Open Firmware load all operating systems in hypervisor mode? Is this in order to provide a secure layer for communication between the operating system and Open Firmware (on x86, using firmware for everything except ACPI and memory discovery during boot is considered unsafe, since it requires a transition into "real mode")?
Also, if the operating system were using hypercalls to facilitate a secure transition to firmware-based routines, wouldn't this impose a large penalty, just as syscalls do?
I'm not privy to Apple's hardware designs, but I've heard that HV mode (i.e., HV=1 in the Machine State Register) was disabled, in hardware, on the CPUs used in the G5 machines.
If this is the case, then it's not up to the system firmware to enable/disable HV mode - it's simply not available.
At the time that these machines were available, other Power hardware designs had a small amount of firmware running in HV=1 mode, and only exposed HV=0 to the kernel. However, the G5 wasn't one of these.

RPi vs. Carambola --- root file system size

I am using the following distribution for the Raspberry Pi.
http://www.raspberrypi.org/downloads
http://downloads.raspberrypi.org/images/raspbian/2013-02-09-wheezy-raspbian/2013-02-09-wheezy-raspbian.zip
For the RPi it is recommended to use a >2GB card.
Also, when I install it on my memory card, the root file system takes about 1.4GB.
But don't you think that is too large just for a root filesystem in an embedded system?
Is it possible to make a Linux distribution for the RPi with a small root file system?
Most embedded systems do not have this much storage; for example, the Carambola has 8MB flash and 32MB RAM.
http://8devices.com/carambola
In that case the Carambola root filesystem (OpenWrt) fits in 8MB of flash. How is that possible?
Raspbian is a full-on general-purpose operating system, with the ability to run the X Window System, host a development environment, watch movies, play games, etc.
It also includes support for a great many devices you might want to connect via USB, such as networking devices, webcams, keyboards, mice, etc.
Many embedded systems are purpose-built, with no options for adding/removing devices, nor options for running arbitrary programs. OpenWrt is a routing platform running on typical router hardware, and as such can be MUCH smaller.
You could check out Buildroot for this, which is also what OpenWrt's build system was originally based on (OpenWrt is used to create the rootfs for the linked Carambola). Buildroot also has support for the Raspberry Pi, so you can easily create a pretty small rootfs with just a few installed packages, as in the sketch below.
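A minimal sketch of that workflow, assuming a current Buildroot checkout (defconfig names can vary between Buildroot releases):

# Fetch Buildroot and start from the Raspberry Pi reference config:
git clone https://git.buildroot.net/buildroot
cd buildroot
make raspberrypi_defconfig

# Optionally trim or extend the package selection:
make menuconfig

# Build everything; the SD card image ends up under output/images/:
make
ls output/images/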