I recently learned that a GPU can perform many of the same operations a CPU can, although there is a tradeoff in operation latency. Latency aside, can a GPU run an OS on its own?
Imagine a computer with no persistent storage, no user input, no networking and no sense of time/date. For this hypothetical computer, there's no way for anything in the real world (e.g. a user) to influence its behavior, and no way to tell if it has an OS or is merely showing a predetermined slide show.
In theory, you could write an OS for this hypothetical computer, but in practice you wouldn't, because there's no possible reason to bother. This describes a typical "GPU + video card memory" scenario (where a host OS running on the CPU is the only means of influencing the work done by the GPU).
Now imagine a "GPU" with various devices attached to it (some persistent storage, some networking, something for user input, a graphics card, ...) so that the "GPU" is able to manage (and abstract, and maybe even allow multiple programs to securely share) these devices. In this case, the "GPU" has become a central unit that does processing (a CPU!). It'll probably be a relatively awful CPU (with poor flow control and over-powered SIMD), but it's no longer a GPU.
Can we run an OS on a GPU-only device?
I don't know. If a tree falls in the woods, but nobody is around to hear it, does it make a sound?
Related
I have these related questions:
Does anybody know how an OS gets to know all the hardware connected to the motherboard? (I guess this is called "hardware enumeration".)
How does it determine what kind of hardware resides at a specific I/O address (i.e., a serial, parallel, or other controller)?
How would you code a system module to do this job (assuming no OS is loaded yet, just the BIOS)?
I know the BIOS is just a validation step and a user-friendly interface for configuring hardware at boot time, with no real use after that for most modern OSes (Windows, Linux, etc.). I also know that finding all the hardware should not be difficult for the BIOS, because it is specifically tuned by the board manufacturer (who knows everything about the board!). But for an OS or an application above the BIOS, that is a completely different story. Right?
Pre-PCI this was much more difficult: you needed a trick for each product, and even then it was difficult to figure everything out. With USB and PCI you can scan the buses to find a vendor and product ID, and from there you go into product-specific discovery (which, like the old days, can be difficult). Sometimes the details for a board are protected by NDA, or worse, you simply don't get to know them unless you work there on the right team.
The general approach is detection-based: from the USB/PCIe/etc. vendor and product IDs, you either load a driver or write a driver for that family of products based on its documentation. Since you mentioned BIOS, Windows, and Linux, that implies x86 or a PC, and that pretty much covers what is auto-detectable. Beyond that, you rely on the user, who knows what hardware was installed in the system along with the driver that came with it, or you ask them in some way what is installed (a specific printer at the end of a cable, or out on the network if not auto-detectable, is an easy-to-understand example).
In short, you take decades of experience in trying to succeed at this and apply it, and you still fail from time to time, since you are not in 100% control of all the hardware in the system; there are hundreds of vendors out there, each doing their own thing.
The BIOS enumerates PCI(e) on an x86 PC; on other platforms the OS might do it. The enumeration includes allocating address space for each device based on PCI-compliant rules, but within that address space you have to know how to program the specific board from the vendor's documentation, if it is available.
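To make the vendor/product ID scan concrete, here is a rough sketch using legacy PCI configuration mechanism #1 (I/O ports 0xCF8/0xCFC on x86). It assumes ring-0 port access; the inline-assembly port helpers are the usual osdev-style wrappers, and for brevity it only probes function 0 of each slot, leaving driver matching as a comment.

```c
#include <stdint.h>

static inline void outl(uint16_t port, uint32_t value) {
    __asm__ volatile ("outl %0, %1" : : "a"(value), "Nd"(port));
}
static inline uint32_t inl(uint16_t port) {
    uint32_t value;
    __asm__ volatile ("inl %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Read a 32-bit config-space register for (bus, device, function). */
static uint32_t pci_config_read(uint8_t bus, uint8_t dev,
                                uint8_t func, uint8_t offset) {
    uint32_t address = (1u << 31)             /* enable bit */
                     | ((uint32_t)bus  << 16)
                     | ((uint32_t)dev  << 11)
                     | ((uint32_t)func << 8)
                     | (offset & 0xFC);
    outl(0xCF8, address);
    return inl(0xCFC);
}

/* Walk every bus/device slot; config register 0 holds the IDs. */
void pci_scan(void) {
    for (int bus = 0; bus < 256; bus++) {
        for (int dev = 0; dev < 32; dev++) {
            uint32_t id = pci_config_read(bus, dev, 0, 0);
            uint16_t vendor = id & 0xFFFF;
            if (vendor == 0xFFFF)
                continue;              /* no device in this slot */
            uint16_t device = id >> 16;
            /* match (vendor, device) against a driver table here */
            (void)device;
        }
    }
}
```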
Sorry my English is not very good.
Responding to questions 1 and 2:
During the boot process, only the hardware strictly necessary to find and start the OS is initialized.
That hardware includes: the motherboard, hard drive, RAM, graphics card, keyboard, mouse, screen (which is detected by the graphics card), network card, CD/DVD drive, and a few extra peripherals such as USB devices.
Each hardware module you connect to a computer has a controller, which is like a small BIOS storing all of the device's information: manufacturer, device type, supported protocols, etc.
The detection process is very simple: the BIOS has all the information about the motherboard hardcoded, including all its communication ports. At startup, the BIOS sends a signal to every system port asking "Who are you? What are you? How do you function?", and all attached devices answer by sending their information. In this way the computer knows how much RAM you have, whether a keyboard or a mouse is present, which storage devices are available, the screen resolution, and so on.
Obviously, this process only works for the basic modules needed to boot the system; it does not work for complex peripherals that require specific drivers (printers, scanners, webcams, ...). All of those complex peripherals are set up by software once the OS has been loaded.
Responding to question 3:
If you want to load a specific module during boot, you must:
- Write a driver for that module.
- If the peripheral is too complex to handle everything from the driver, write the whole procedure for controlling that module directly in a BIOS module, or reprogram the IPL to manage that specific module.
I hope I've helped.
Actually, when you turn on your system, the BIOS starts the IPL (Initial Program Load) from ROM. This checks that all the connected devices are working properly, and in particular checks the mandatory devices such as the keyboard, CMOS, and hard disk. If a check does not succeed, an error flag is given to the BIOS, and the BIOS shows us the error through the video device.
If the checks succeed, control is transferred back to the BIOS for the rest of the boot process.
All of the above is called POST (Power-On Self-Test).
Now the BIOS has control. It checks each secondary storage device for an OS bootloader. If it finds one (a "hit"), the bootloader is copied into RAM.
The CPU then executes the bootloader from RAM; only at this point does the OS start to run.
If no bootloader is found, the BIOS shows us a boot failure error.
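As a small illustration of that "hit" test: on a classic BIOS/MBR setup, a disk counts as bootable when the last two bytes of its first 512-byte sector contain the signature 0x55 0xAA, after which the sector is loaded to address 0x7C00 and jumped to. Below is a host-side sketch performing the same check on a disk image file; the file name "disk.img" is just a placeholder.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t sector[512];
    FILE *f = fopen("disk.img", "rb");  /* placeholder image name */
    if (!f) {
        perror("disk.img");
        return 1;
    }
    size_t n = fread(sector, 1, sizeof sector, f);  /* sector 0 */
    fclose(f);
    if (n == sizeof sector && sector[510] == 0x55 && sector[511] == 0xAA)
        puts("bootable: 0x55AA signature found; BIOS would jump to 0x7C00");
    else
        puts("boot failure: no valid boot signature in sector 0");
    return 0;
}
```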
I hope that clears up your doubts.
As far as I understand it, any program gets compiled to a series of assembly instructions for the architecture it is running on. What I fail to understand is how the operating system interacts with peripherals such as a video card. Isn't the driver itself a series of assembly instructions for the CPU?
The only thing I can think of is that it uses regions of memory that are then monitored by the peripheral, or that it uses the bus to communicate operations and receive results. Is there a simple explanation of this process?
Sorry if this question is too general, it's something that's been bothering me.
You're basically right in your guess. Depending on the CPU architecture, peripherals might respond to "memory-mapped I/O" (where they watch for reads and writes to specific memory addresses), or to other specific I/O instructions (such as the x86 IN and OUT instructions).
Device drivers are OS-specific software, and provide an interface between the OS and the hardware.
A specific physical device either has hardware that knows how to respond to whatever signals from the CPU it monitors, or it has its own CPU and software that is often called firmware. The firmware of a device is not specific to any operating system and is usually stored in persistent memory on the device even after it is powered off. However, some peripherals might have firmware that is loaded by the device driver when the OS boots.
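For a concrete picture of the two mechanisms just mentioned, here is a minimal freestanding x86 sketch. Apart from the classic COM1 base port 0x3F8, the port numbers and the MMIO address are illustrative assumptions, not a recipe for real hardware.

```c
#include <stdint.h>

/* Port-mapped I/O: thin wrappers around the x86 IN/OUT instructions. */
static inline void outb(uint16_t port, uint8_t value) {
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Memory-mapped I/O: the device watches a physical address range, so a
 * plain volatile store becomes a register write as far as it's concerned. */
static inline void mmio_write32(uintptr_t addr, uint32_t value) {
    *(volatile uint32_t *)addr = value;
}

void demo(void) {
    outb(0x3F8, 'A');                /* send a byte out COM1's data port */
    uint8_t status = inb(0x3F8 + 5); /* read COM1's line-status register */
    (void)status;
    mmio_write32(0xD0000000u, 1u);   /* illustrative MMIO register write */
}
```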
There are simple explanations and there are truthful explanations; choose one!
I'll try a simple one: among the assembly instructions, there are some that are specialized to talk to peripherals. The hardware interprets them not by, e.g., adding values in registers or writing something to RAM, but by moving data from a register or a region of RAM to a peripheral (or the other way round).
Inside the OS, the sound driver, for example, is responsible for assembling sound data along with command data in RAM; the OS then invokes the bus driver to issue these special instructions to move the command and data to the sound card. The sound card hardware will (hopefully) understand the command and interpret the data as sound it should play.
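To sketch that sound-driver flow in code: below is a toy example for a hypothetical memory-mapped sound device. Every register offset, the PLAY command value, and the device itself are invented for illustration; a real card's programming model comes from its datasheet.

```c
#include <stdint.h>

/* Invented register layout for a hypothetical sound card. */
#define SND_REG_BUF_ADDR 0x00  /* physical address of the sample buffer */
#define SND_REG_BUF_LEN  0x04  /* buffer length in bytes */
#define SND_REG_COMMAND  0x08  /* command register */
#define SND_CMD_PLAY     0x01  /* invented "start playback" command */

/* Base of the card's memory-mapped registers; a real driver would
 * obtain and map this address during device initialization. */
static volatile uint32_t *snd_regs;

/* The driver assembles samples in RAM, then hands the buffer to the
 * card by writing its location and a command into device registers. */
void snd_play(uint32_t buf_phys_addr, uint32_t len) {
    snd_regs[SND_REG_BUF_ADDR / 4] = buf_phys_addr;
    snd_regs[SND_REG_BUF_LEN  / 4] = len;
    snd_regs[SND_REG_COMMAND  / 4] = SND_CMD_PLAY;  /* card starts playing */
}
```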
I want to know the feasibility of these things:
1) Is it possible to download 200MB audio files to our application?
2) How much RAM can be accessed from an iPhone app? What is the largest amount of RAM an app can expect to use?
Anyone's help in this regard is deeply appreciated.
Thanks to all,
Monish
1) Yes, although you might anger users who are not on WiFi plus fast DSL. You will also need to handle interrupted downloads.
2) Since ARM is a 32-bit processor, a maximum of 4 GB of RAM can be addressed. In any case, iDevices have a maximum of 512 MB right now (iPad 2). iOS will kill your application if it takes around 75% or so of the available RAM, which means in reality you shouldn't use more than, say, 80 MB of RAM. And if you need to address 8 GB, then your design is totally flawed to begin with.
There are always ways to get by with a lot less (e.g., by using better algorithms and/or by caching to disk). On disk, you are only limited by the space left on the device, so if you have an iDevice with just 8 GB you're naturally out of luck, as the system itself and other apps/data reduce the available space. The same goes for a 64 GB iDevice that is packed with movies: you will need to work with whatever space is available. You can, for example, try to "reserve" the necessary space by creating a file and making it as big as you need (via a seek and a write), but be prepared for angry customers.
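A minimal sketch of that seek-and-write reservation trick in plain C (the path and size are placeholders; note that some file systems may create a sparse file rather than physically allocating the space, so this is not a guaranteed reservation):

```c
#include <stdio.h>

/* Create a file of the requested length by seeking to the last
 * byte and writing a single zero byte. Returns 0 on success. */
int reserve_file(const char *path, long size) {
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    if (fseek(f, size - 1, SEEK_SET) != 0 ||  /* jump to last byte */
        fwrite("", 1, 1, f) != 1) {           /* write one 0 byte */
        fclose(f);
        remove(path);  /* clean up on failure (e.g., disk full) */
        return -1;
    }
    return fclose(f);
}
```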
The app that I'm working on has to load data from a web server.
If I open the app while connected to WiFi, it loads the data very fast, but it doesn't load the data when connected over EDGE.
I don't know what I'm doing wrong.
Do I have to set something in the settings?
Greetings.
You shouldn't have to do anything specifically to turn on connectivity when using EDGE or 3G. It should just happen automatically. Not to state the obvious, but EDGE is pretty slow (probably 20-40kbps tops usable in normal situations), and often performs even worse than the theoretical speeds provided by the technology.
That said, using a neat little preference pane called Speed Limit, you can test your application in the simulator while simulating different bandwidth and latency constraints. You can get a better idea of the way you should expect your app to perform under EDGE with low bandwidth and high latency (or any other network connection). This will give you a better idea where your problem may lie and what to expect.
I'm analyzing the iPhone platform (for a paper). I've made a list of issues developers/architects have to consider before working with the iPhone SDK.
The question is aimed at people who want to release iPhone software: what constraints restrict them in comparison to other mobile platforms, such as Android, Windows Mobile, Symbian, etc.?
Feel free to add hurdles that I may have forgotten to list.
Thanks.
iPhone platform constraints/hurdles:
No physical keyboard
No replaceable battery
One application at a time
Sandbox File System
Restricted Deployment Cycle (Dev program...)
App Store Approval Process
The lack of a replaceable battery is of no concern to software developers whatsoever, as there are no APIs for battery manipulation or replacement. It is no more of a concern for iPhone developers than "access to electricity" is a practical concern when developing for other platforms.
Others I would add:
Requires a Mac. Fairly obvious one, not a terrible barrier to entry compared to other closed systems like game consoles, but still higher than some other phone/mobile platforms like Windows Mobile, J2ME or Brew.
Costs money to debug on real hardware. You can only run and debug in the simulator unless you buy a $99 developer program subscription, which lets you pair iPhone and iPod touch hardware with your Xcode install and run apps on it.
Objective-C as the programming language. It really shouldn't deter anyone but a lot of developers get really grumpy about learning anything new or different.
Must accommodate interruptions (i.e., the user may get a call at any time and the app must be prepared to save any state necessary and quit within a fixed time limit).
Not specific to iPhone but like any platform, you are constrained by the CPU/GPU/RAM the device has, and in the iPhone's case this is obviously quite a bit less hardware than people with a desktop background are accustomed to.
Restrictive wording in EULA regarding embedded scripting languages. It is apparently forbidden to execute any scripts via an iPhone application, which is quite a bummer as embedded scripting languages are quite common these days and very useful.
Limited CPU speed
Limited RAM
Objective-C is effectively the main dev language
Power management concerns (I'm not sure if the lack of a replaceable battery is a concern of mine). High CPU utilization can be a drain on the battery (and cause extra heat). In other words, there are CPU-intensive things I choose not to do in order not to drain the battery too fast.
Only one IDE
Inability to access other apps' data easily