Will the binary files of an ANSI C application run on the iPhone?

I have some applications that are written in ANSI C. Will I be able to run the binary files of those apps on the iPhone? If not directly, is there any other useful method to do so? I don't want to rewrite the applications.

The binaries? Unlikely. When you compile to a specific platform, you tend to be locked into that platform. You can't, for example, take binaries created on a PPC Mac and expect them to run on an Intel CPU.
Your first problem is that the binaries are in a different language (different processors have different instruction sets, such as Intel x86, PPC, SPARC and so on). The second is that other platforms may not implement lower-level functions like disk I/O or user interaction the same way.
If it's ANSI/ISO C, you should probably be able to re-compile it for the new platform. Or you may be able to use an emulator to run the binary unchanged. But running the binary directly on different hardware that it wasn't made for is not going to fly unfortunately.

It probably won't run. You need to compile the source code for the specific platform.

Related

Implementing a USB device driver on my own Linux-based OS

I'm in the process of developing my own operating system based on Linux.
This week we're aiming to implement a very simple USB device driver, and it's been quite hard to work out even the basic algorithm.
It's also hard to find sources on this aside from full commercial Linux systems.
I'd like some advice about this.
Also, I do all of this on Ubuntu, using the QEMU emulator.
I've written a simple file system and a hard disk device driver so far.
Please help me out with some simple ideas for implementing a USB device driver.
Thank you !! :)
Implementing USB is quite the task. First you must have a working PCI(e) enumerator or other form of finding the USB controller. You then must find out which of the four most common controller types it happens to be. Each controller type is completely different from the previous and must contain its own driver. You also need a standard USB interface that is independent of the controller type.
Taking on the USB is quite the task, but in my opinion a very interesting and enjoyable task. Enjoyable enough, that I even wrote a book about how to do it. It explains how to find the controller(s) via the PCI(e) bus, how to setup this bus, how to detect the type of USB controller--UHCI, OHCI, EHCI, or xHCI--and how to send and receive data packets to/from attached devices. This book was written exactly for the purpose of those of us creating our own operating systems and adding USB support to them. The fact that you are basing your OS on Linux should not matter since the book does not rely upon any existing OS to accomplish this task, other than the example programs relying on memory allocation, which is easily modified for your developing platform.
Might I say that if you do take on this task, it will be a difficult task, but it will be an enjoyable task. In my opinion, the USB is the most enjoyable part of this hobby of ours.

How drivers work out of the box on x86 but not on embedded computers like Android phones

I'm curious how drivers work out of the box on x86 motherboards, for devices like the display, USB controllers, etc.
For example:
Booting a toy custom kernel on x86 can display to the screen without any extra work in driver space, yet on an Android phone, which is an embedded system, it seems almost impossible to display to the screen with my own toy custom kernel (as there is little information available about the memory map of the device and how the display is interfaced with it).
Why is it that I/O works out of the box on x86 motherboards and doesn't on embedded computers?
x86 PC firmware has standard software interfaces (a lot like system calls), either modern UEFI or legacy BIOS int 0x10 and other interrupts.
The key point is that it's not just bare-metal x86, it's IBM PC-compatible which means software and even emulated legacy hardware like a PS/2 port, VGA, and even legacy interrupt controller.
If you didn't have all this help from the firmware (provided for the benefit of bootloaders and toy OSes), you'd have a much harder job, e.g. you'd need at least a basic USB HID driver and a USB host-controller driver just to get keyboard input.
Why is it that I/O works out of the box on x86 motherboards and doesn't on embedded computers?
That's not your real question. Embedded machines have working I/O hardware, they just don't come with portable software APIs/ABIs wrapped around drivers as part of the firmware.
I think most vendors' SDKs come with functions to access the I/O hardware (after perhaps some fiddling to get it into a usable state), i.e. enough for you to write your own driver for it.
Embedded systems don't need this in firmware because the kernel is expected to be customized for the hardware.
Wouldn't it be better to have a BIOS or UEFI for maximum portability? Does it have any drawbacks to include one?
Yes: code size in the boot ROM, and someone has to write + debug that code. This costs time and developer salary.
No point in booting up what's nearly an OS (a UEFI environment) just to load a kernel which is going to take over the HW anyway.
Also a downside in boot time: any code that runs other than loading the kernel where it wants to be is wasted CPU time that slows down the boot. Having a very lightweight interface that just lets you get your kernel loaded, and leaving all the I/O to it, makes sense for this.
Unlike x86 PCs, there's no expectation that you can use this hardware with an OS install disc / image you downloaded that isn't specifically customized for this hardware.
It's not intended to be easy for hobbyists to play with using training-wheels APIs. Real OSes on this hardware won't use such APIs so why provide them in the first place?

How exactly does a program talk to a device driver?

So I'm confused about how exactly we as programmers talk to devices on the computer. I'm not talking about the big ideas. I know that there are device drivers that sit atop the hardware so that different programs can use their features.
But in general who exactly talks to the drivers? Is the programmer writing the application responsible for calling a function on the driver? Or does the programmer call a function through the operating system which then handles the call to the driver? As you can see I'm really just confused about the nitty gritty of how the driver, OS and your application fit together.
The application doesn't call the driver directly - that would violate the entire idea of user mode and kernel separation. Instead the OS exposes the relevant ABI to the user mode programs, enabling the applications to call the exposed functionality (with respect to the predefined restrictions that should be documented).

Is it possible to create an OS that can run all applications?

Just a thought, if we have to make our application cross-platform, then is it possible to create a cross-application OS?
No.
Let's say you do go and invest - a monumental amount of - effort in building your Uber-OS (that will run Mac apps, Linux apps, Unix apps, Android apps, iPhone apps, Nokia apps, Symbian apps, SAP apps, Windows apps, etc.).
Then there's nothing stopping someone writing a new OS that you don't support.
P.S. And there are hundreds (if not thousands) of different handheld devices out there for scanning products, weights and measures, etc., many of which have their own flavour of OS.
Technically yes, as long as you limit the scope of "all" to applications that run on the major OSes.
It is theoretically possible to create an OS that could handle applications run on the 4-5 most common OSes but the amount of work involved would be monumental.
Every time a new feature was added to any of the OSes, you'd need to add it to your OS too - So as well as being almost impossible to build, you'd need a large enough dev team to stay ahead of 4-5 of the largest dev teams/groups in the world.
No, but with virtualization you could have a single computer that can run any application.
First there is the practical impossibility of successfully following the evolution of an indefinite number of operating systems. Do we take embedded OS into account? How about one-shot OS for specific applications? How about proprietary OS with no access to documentation?
Then there is also the - very difficult, if not impossible - problem of merging the various paradigms used in the wild. Ideally you would want OS services like the clipboard, or networking or ... or ... to work in a uniform way and allow applications to cooperate as if targeted to the same OS.
(Let's not even think about the various hardware-dependent applications.)
After all this, you should also consider what the application development for your own OS would be like...
I wonder if this is a good case for Gödel's incompleteness theorems :-)
PS: That said, there are quite a few projects attempting to bridge the various OS gaps:
http://en.wikipedia.org/wiki/List_of_computer_system_emulators
http://en.wikipedia.org/wiki/List_of_emulators#Operating_System_emulators
What you can do is use virtual machines, such as VMWare's software, and emulate several operating systems on the same physical machine.
What do you mean by an operating system that can run all applications?
Applications are mostly written in a higher level language and then translated into binary code that differs between machine architectures (like Intel and PowerPC) and operating systems (like Windows or Unix-based systems).
Java, for example, is cross-platform not because the language itself is (any high-level language is), but because Java virtual machines exist for different architectures and operating systems, abstracting away the heterogeneity of the underlying system.
It is definitely not theoretically impossible (nothing is, except for some mathematical problems), but can you imagine what one would have to do to make such a thing work? You can get many Linux programs running on Windows with Cygwin, and you can run Windows programs on Linux with Wine. Both of these effectively recreate a small part of one operating system (e.g. the Windows core) inside the other (e.g. Linux). This is probably not what you want.
To summarize, I can't imagine anyone really trying to do that. With all the money in the world, seriously. Better invest in writing native apps for the operating systems you want to support.

Can LLVM IR (Intermediate Representation) be used to create cross-platform (iPhone and Android) ARM executables?

I'm looking into possible means of efficiently creating an Android and iPhone targeted application from the same code base, be it in C/C++/C#/Objective-C or Java (using VMKit).
LLVM looks promising, however I'm slightly confused regarding compatibility issues surrounding the differing ARM CPU implementations, mainly from the aspect of how graphics and sound code are 'resolved' by underlying chipsets (i.e. do I have to code to specific ARM chipsets, or will a higher-level API, like OpenGL, suffice?).
I do know a little about various Cross Dev products (i.e. Airplay SDK, MoSync (GPL-GCC), Unity3d, XMLVM etc.), but what I'd really like to do is either write in Java or use a C/C++ engine, emit LLVM IR and create compatible ARM executables, if possible.
Apologies if any of the above is vague.
Thanks
Rich
The compiler is not the problem. To develop for both you need to create an abstraction layer that allows you to write a single application on that layer. Then have two implementations of the abstraction layer, one that makes Android api calls and one that makes iPhone api calls. There is nothing the compiler can do to help you.
Where LLVM IR might be interesting in its portability is for programs like:
int a,b;
a=7;
b=a-4;
Compile to IR then take the same IR and generate assembler for all the different processor types and examine the differences.
In the case of a real application that, for example, needs to write a pixel to a display, the registers, the size of the display, and a whole host of other differences exist. Those differences are not exposed between the IR and the assembler backend; they are exposed in the main C program and in the API calls defined by each platform's libraries, so you have to solve the problem in C, not in IR or assembler.
LLVM is no different from any other compiler in this respect: it cannot paper over the platform APIs you would need, so I'm afraid the answer is no.
The LLVM IR is in layman's terms a "partly compiled" code, and can be used, say, to compile the rest on the end device. For example, if you have a graphically intensive app, you might ship parts of it in IR, then compile on the device to get the most performance out of the specific hardware.
For what you want, you either need to use one of the products you mentioned, or have native UIs (using Cocoa/VMKit) but possibly share the data/logic code in the app.
For standard app store legal development for stock OS devices, neither the sound nor the graphics code in an app have anything to do with the underlying chipsets or specific ARM CPU architecture. The sound and graphics (and all other user IO) are abstracted by each OS through platform dependent APIs and library linkages. You can either code to each platform's completely different APIs, or use an abstraction layer on top, such as Unity, et. al.
LLVM might allow you to optimize from intermediate code to machine code for certain differences between ARM architectures (armv6, armv7, FP support, etc.), but only in self-contained code that does no user IO and doesn't otherwise require any higher-level interface to the OS.