Some background:
I am a Computer Engineering major currently in school, and I just completed a project in which I built a microprocessor with a super simple instruction set that ran on an FPGA. I implemented a simple file storage scheme, a VGA text-only display output, and PS/2 keyboard input. I wrote two main programs: a firmware that lived in ROM on the processor, and a kernel that provided a bunch of library-type functions and could load and execute files from the filesystem. The project was challenging and, overall, a lot of fun.
My Question:
I want to do some super-low-level programming on a modern computer, but I can't seem to find any resources or documentation to get me started. To be clear, I want to find the proper documentation that would help me write a program in C or in x86/x86-64 assembly that I could compile and format into some form of bootable data. I know this is a daunting task, and typically not something a hobbyist would take on, but I know it's possible (Terry Davis's TempleOS).
Are there any websites or books that I can read that will contain the specifics needed to make something like this?
Look out, you might just catch the bug. OS development, though it has a very small demographic, is still quite a thriving hobby. Once you start, you may never give it up.
Since your subject line says 64-bit and you use the term modern hardware, be advised that modern hardware no longer has the older-style BIOS, where the developer wrote the whole boot process, including video output, file-system input, and other standard routines. Modern hardware uses UEFI firmware, which does all of the booting for you, including reading from the file system(s). On modern hardware, OS development really starts with the OS loader, the part that loads the OS, and this is written in a high-level language such as C/C++, with very little assembly, if any; in fact, that is its point.
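To give a flavor of what such a loader looks like, here is a minimal "hello world" UEFI application in C. This is only a sketch under stated assumptions: it presumes the gnu-efi toolchain (its efi.h/efilib.h headers and the InitializeLib/Print helpers), and the build details vary by toolchain.

#include <efi.h>
#include <efilib.h>

/* Entry point the UEFI firmware calls once POST is done. The firmware
   hands us our image handle and the system table, which exposes console,
   file-system, and memory services -- no BIOS interrupts, no real mode. */
EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    InitializeLib(ImageHandle, SystemTable);

    /* Print via the firmware's text-output protocol. */
    Print(L"Hello from a toy OS loader!\n");

    /* A real loader would now read a kernel from the EFI system partition,
       grab the memory map, call ExitBootServices(), and jump to the kernel. */
    return EFI_SUCCESS;
}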
Don't get too discouraged: a lot of current computers still allow the old-style boot. The old-style boot starts in 16-bit mode, moves to 32-bit mode, and then, if desired, moves to long mode (64-bit). There are also emulators you can use, so you don't need a separate system just to test your development. I prefer Bochs myself, but I am a little biased, since I wrote some of the code for it, namely most of the (original) USB emulation.
If you wish to dip your toes into this hobby, there are numerous places to start. I personally wrote a few books on the subject. They show you how to start from the time the POST hands control to your boot code, up to the point of a minimal round-robin-style task/thread-switching OS, with all the necessary hardware and software basics to do so. There is also a forum for OS development, along with its wiki.
Again, a project like this is not for the faint of heart, but it is an enjoyable hobby that most have found to be a very good learning experience.
Do interpreted languages such as Java and Python need an operating system to work?
For example, on a bare-metal ARM microcontroller, can an interpreter be installed so that we can have both compiled code (such as C) and interpreted code (such as Python) working together, or is an OS needed to support this?
Of course you can write an interpreter that runs on bare metal; it is just that if the platform does not have an OS, any run-time support the language needs must be part of the interpreter. To the extent, in some cases, that such an interpreter might essentially be an OS: if it provides the services needed to operate a system, it could be called an operating system.
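To make that concrete: the core of any interpreter is just a fetch-decode-execute loop, and nothing in it requires an operating system. Here is a minimal sketch in C of a stack-based bytecode interpreter; the opcode set is invented for illustration, and on bare metal the print would write to a UART instead of stdio.

#include <stdio.h>

/* A made-up opcode set, purely for illustration. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* Fetch, decode, execute -- the heart of any interpreter. */
static void run(const int *code)
{
    int stack[64];
    int sp = 0;                 /* stack pointer */
    for (int pc = 0; ; ) {      /* program counter */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];         break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Bytecode for: print(2 + 3) */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);               /* prints 5 */
    return 0;
}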
It is perhaps not as simple as interpreted vs. compiled. Java, for example, runs on a virtual machine and is "compiled" to bytecode. The bytecode is interpreted (or just-in-time compiled in some cases), rather than the Java source directly. In an embedded system, it is possible that you would deploy cross-compiled bytecode on the target rather than the source. JVMs for bare metal certainly exist, however. Some support multi-threading through a third-party RTOS; others either have that support built in or do not support threading at all.
There are interpreters for cut-down subsets of JavaScript and Python that run on bare-metal microcontrollers. I am not sure about full implementations, but they are technically possible given sufficient run-time support, even if not explicitly implemented. Fully supporting one of these languages, along with all the standard and third-party libraries and frameworks a developer might expect, may require so much run-time support and so many resources that it is simpler to deploy an OS, so implementations for resource-constrained systems are often subsets or have restricted libraries.
Java needs a VM, a virtual machine. It isn't interpreted; it executes bytecode. Interpreted would mean grabbing the source at run time as it goes, like BASIC.
When Java was new and exciting around the year 2000, everyone thought it would be the next big general-purpose language, replacing C++. The syntax was so clean, it was "pure OO" and not some "filthy hybrid".
It was the major buzzword of the time. Schools stopped teaching C and C++. MCU manufacturers started to make chips with a Java VM in hardware. Microsoft made their own Java "standard". Everyone was high on the Java hype.
Then, as the Internet hype as a whole collapsed in 2002, it took the Java hype with it. In the sober hangover afterwards, people started to realize that things like bytecode, VMs, and garbage collection probably don't belong on bare-metal systems.
They went back to using compiled C for hardware-related programming. Or, in fact, they never stopped, since Java never quite made it there, save for some oddball exotic architectures.
Java remained in use only in the areas where it was suitable, namely web, desktop, and mobile development. And so it got a second golden age when the smartphone hype struck around 2010.
No. See for example picoJava, which is one of several solutions for running Java natively. You can't get closer to bare metal than running bytecode on the CPU.
No. Some 8-bit computers had interpreted languages in ROM despite not having anything reasonably resembling a modern operating system. The Apple II is one example. You could boot the system without any disks or tapes, and it would go straight to a BASIC prompt, where you could write basic (no pun intended) programs.
Note that "operating system" is a somewhat vague term when speaking about those days: these 8-bit computers did have some level of firmware, and this firmware did provide some OS-type functionality, like access to basic peripherals. In those days, what we now know as an OS was more commonly called a "DOS", a Disk Operating System. MS-DOS is one of them, as is Apple's ProDOS. These DOSes evolved into our modern-day operating systems (e.g. Windows 95 was built on top of MS-DOS, while modern Windows versions derive from a separate branch that was largely re-implemented with more modern techniques), so one could claim that their ancestors are the closest those machines had to what we now call an OS.
But what is an interpreter but a piece of software?
In a more theoretical sense, an interpreter is simply software - a program that takes input and produces output. Suppose you were to implement a custom solid-state Turing Machine. In this case, your "input" would be the program to be interpreted, and the "output" would be the program's behavior. If "software" can run without an operating system, then an interpreter can.
Is this model a little simplified? Of course. The difference is a matter of degree, not nature. Add very basic user input and output capabilities (e.g. a TTY) and you have the foundation to implement all, or nearly all, of the basic functionality of a language such as Java bytecode, Python, or BASIC. The main things you would be missing are libraries and whatnot that depend on things like screen manipulation, multiprocessing, and networking, but you could handle those in time, too.
This is regarding an issue I have been facing for some time. Though I have found a solution, I would really like to get some opinions about the approach taken.
We have an application which receives messages from a host, does some processing, and then passes the message on to an external system. This application is developed in Java and has to run on Linux/Oracle and on the HP NonStop (Tandem)/SQLMX OS/DB combination.
I have developed a test automation framework written in Perl. This script traverses directories (specified as an argument to the script) and executes the test cases found under those directories. Test cases can be organized into directories by functionality. This approach was taken to ensure that a specific piece of functionality can be checked on its own, in addition to running the entire regression suite. For verification of the test results, the script reads test-case-specific input files which have SQL queries specified in them.
On Linux/Oracle, the Perl DBD/DBI interface is used to query the Oracle database.
When this automation tool was run on the Tandem, I learned that there was no DBD/DBI interface for SQLMX. When we contacted HP, they informed us that it would be a while before they developed DBD/DBI interfaces for the SQLMX DB.
To circumvent this issue, I developed a small Java application which accepts a DB connection string, user name, password, and various other parameters. This Java app is now responsible for the test-case verification functionality.
I must say it meets our current needs, but something tells me (I do not know what) that the approach taken is not a good one, though I now have the flexibility of running this automation against any DB which has a JDBC interface.
Can you please provide feedback on the above approach and suggest a better solution?
Thanks in advance
The question is a bit too broad to comment usefully on except for one part.
If the project is in Java, write the tests in Java. Writing the tests in a different language adds all sorts of complications.
You have to maintain another programming language and its attendant libraries. They can have different caveats and bugs for the same actions, as you ran into with the lack of a database driver in a certain environment.
Having the tests done in a different language than the project is developed in drives a wedge between testing and development. Developers will not feel responsible for participating in the testing process because they don't even know the language.
With the tests written in a different language, they cannot leverage any work which has already been done. Basic code to access and work with the data and services has to be written all over again, doubling the work and doubling the bugs. If the project code changes APIs or data structures, the test code can easily fall out of sync, creating extra maintenance hassles.
Java already has well-developed testing tools to do what you want. The whole structure of running specific tests vs. the whole test suite is built into frameworks like JUnit.
To underscore the point: I wrote Test::More, and I'm recommending you not use it here.
What are the best technologies to use for behavior-driven development on the iPhone? And what are some open source example projects that demonstrate sound use of these technologies? Here are some options I've found:
Unit Testing
Test::Unit Style
OCUnit/SenTestingKit as explained in iOS Development Guide: Unit Testing Applications & other OCUnit references.
Examples: iPhoneUnitTests, Three20
CATCH
GHUnit
Google Toolbox for Mac: iPhone Unit Testing
RSpec Style
Kiwi (which also comes with mocking & expectations)
Cedar
Jasmine with UI Automation as shown in dexterous' iOS-Acceptance-Testing specs
Acceptance Testing
Selenium Style
UI Automation (works on device)
UI Automation Instruments Guide
UI Automation reference documentation
Tuneup JS: a cool library for use with UI Automation.
Capturing User Interface Actions into Automation Scripts
It's possible to use Cucumber (written in JavaScript) to drive UI Automation. This would be a great open-source project. Then, we could write Gherkin to run UI Automation testing. For now, I'll just write Gherkin as comments.
UPDATE: Zucchini Framework seems to blend Cucumber & UI Automation! :)
Old Blog Posts:
Alex Vollmer's UI Automation tutorial
O'Reilly Answers UI Automation tutorial
Adi Saxena's UI Automation tutorial
UISpec with UISpecRunner
UISpec is open source on Google Code.
UISpec has comprehensive documentation.
FoneMonkey
Cucumber Style
Frank and iCuke (based on the Cucumber meets iPhone talk)
The Frank Google Group has much more activity than the iCuke Google Group.
Frank runs on both device and simulator, while iCuke only runs in simulator.
Frank seems to have a more comprehensive set of step definitions than iCuke's step definitions. And, Frank also has a step definition compendium on their wiki.
I proposed that we merge iCuke & Frank (similar to how Merb & Rails merged) since they have the same common goal: Cucumber for iOS.
KIF (Keep It Functional) by Square
Zucchini Framework uses Cucumber syntax for writing tests and uses CoffeeScript for step definitions.
Additions
OCMock for mocking
OCHamcrest and/or Expecta for expectations
Conclusion
Well, obviously, there's no right answer to this question, but here's what I'm choosing to go with currently:
For unit testing, I used to use OCUnit/SenTestingKit in Xcode 4. It's simple & solid. But I prefer the language of BDD over TDD (Why is RSpec better than Test::Unit?) because our words create our world. So now I use Kiwi with ARC & Kiwi code completion/autocompletion. I prefer Kiwi over Cedar because it's built on top of OCUnit and comes with RSpec-style matchers & mocks/stubs. UPDATE: I'm now looking into OCMock because, currently, Kiwi doesn't support stubbing toll-free bridged objects.
For acceptance testing, I use UI Automation because it's awesome. It lets you record each test case, making writing tests automatic. Also, Apple develops it, and so it has a promising future. It also works on the device and from Instruments, which allows for other cool features, like showing memory leaks. Unfortunately, with UI Automation, I don't know how to run Objective-C code, but with Frank & iCuke you can. So, I'll just test the lower-level Objective-C stuff with unit tests, or create UIButtons only for the TEST build configuration, which when clicked, will run Objective-C code.
Which solutions do you use?
Related Questions
Is there a BDD solution that presently works well with iOS4 and Xcode4?
SenTestingKit (integrated with XCode) versus GHUnit on XCode 4 for Unit Testing?
Testing asynchronous code on iOS with OCunit
SenTestingKit in Xcode 4: Asynchronous testing?
How does unit testing on the iPhone work?
tl;dr
At Pivotal we wrote Cedar because we use and love RSpec on our Ruby projects. Cedar isn't meant to replace or compete with OCUnit; it's meant to bring the possibility of BDD-style testing to Objective-C, just as RSpec pioneered BDD-style testing in Ruby but hasn't eliminated Test::Unit. Choosing one or the other is largely a matter of style preferences.
We also designed Cedar to overcome some shortcomings in the way OCUnit works for us. Specifically, we wanted to be able to use the debugger in tests, to run tests from the command line and in CI builds, and to get useful text output of test results. These things may be more or less useful to you.
Long answer
Deciding between two testing frameworks like Cedar and OCUnit (for example) comes down to two things: preferred style, and ease of use. I'll start with the style, because that's simply a matter of opinion and preference; ease of use tends to be a set of tradeoffs.
Style considerations transcend what technology or language you use. xUnit-style unit testing has been around for far longer than BDD-style testing, but the latter has rapidly gained in popularity, largely due to Rspec.
The primary advantage of xUnit-style testing is its simplicity, and wide adoption (amongst developers who write unit tests); nearly any language you could consider writing code in has an xUnit-style framework available.
BDD-style frameworks tend to have two main differences when compared to xUnit-style: how you structure the test (or specs), and the syntax for writing your assertions. For me, the structural difference is the main differentiator. xUnit tests are one-dimensional, with one setUp method for all tests in a given test class. The classes that we test, however, aren't one-dimensional; we often need to test actions in several different, potentially conflicting, contexts. For example, consider a simple ShoppingCart class, with an addItem: method (for the purposes of this answer I'll use Objective C syntax). The behavior of this method may differ when the cart is empty compared to when the cart contains other items; it may differ if the user has entered a discount code; it may differ if the specified item can't be shipped by the selected shipping method; etc. As these possible conditions intersect with one another you end up with a geometrically increasing number of possible contexts; in xUnit-style testing this often leads to a lot of methods with names like testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodApplies. The structure of BDD-style frameworks allows you to organize these conditions individually, which I find makes it easier to make sure I cover all cases, as well as easier to find, change, or add individual conditions. As an example, using Cedar syntax, the method above would look like this:
describe(@"ShoppingCart", ^{
    describe(@"addItem:", ^{
        describe(@"when the cart is empty", ^{
            describe(@"with no discount code", ^{
                describe(@"when the shipping method applies to the item", ^{
                    it(@"should add the item to the cart", ^{
                        ...
                    });
                    it(@"should add the full price of the item to the overall price", ^{
                        ...
                    });
                });
                describe(@"when the shipping method does not apply to the item", ^{
                    ...
                });
            });
            describe(@"with a discount code", ^{
                ...
            });
        });
        describe(@"when the cart contains other items", ^{
            ...
        });
    });
});
In some cases you'll find contexts that contain the same sets of assertions, which you can DRY up using shared example contexts.
The second main difference between BDD-style frameworks and xUnit-style frameworks, assertion (or "matcher") syntax, simply makes the style of the specs somewhat nicer; some people really like it, others don't.
That leads to the question of ease of use. In this case, each framework has its pros and cons:
OCUnit has been around much longer than Cedar, and is integrated directly into Xcode. This means it's simple to make a new test target, and, most of the time, getting tests up and running "just works." On the other hand, we found that in some cases, such as running on an iOS device, getting OCUnit tests to work was nigh impossible. Setting up Cedar specs takes more work than OCUnit tests, since you have to get the library and link against it yourself (never a trivial task in Xcode). We're working on making setup easier, and any suggestions are more than welcome.
OCUnit runs tests as part of the build. This means you don't need to run an executable to make your tests run; if any tests fail, your build fails. This makes the process of running tests one step simpler, and test output goes directly into your build output window which makes it easy to see. We chose to have Cedar specs build into an executable which you run separately for a few reasons:
We wanted to be able to use the debugger. You run Cedar specs just like you would run any other executable, so you can use the debugger in the same way.
We wanted easy console logging in tests. You can use NSLog() in OCUnit tests, but the output goes into the build window where you have to unfold the build step in order to read it.
We wanted easy to read test reporting, both on the command line and in Xcode. OCUnit results appear nicely in the build window in Xcode, but building from the command line (or as part of a CI process) results in test output intermingled with lots and lots of other build output. With separate build and run phases Cedar separates the output so the test output is easy to find. The default Cedar test runner copies the standard style of printing "." for each passing spec, "F" for failing specs, etc. Cedar also has the ability to use custom reporter objects, so you can have it output results any way you like, with a little effort.
OCUnit is the official unit testing framework for Objective-C, and is supported by Apple. Apple has basically limitless resources, so if they want something done, it will get done. And, after all, this is Apple's sandbox we're playing in. The flip side of that coin, however, is that Apple receives on the order of a bajillion support requests and bug reports each day. They're remarkably good about handling them all, but they may not be able to handle issues you report immediately, or at all. Cedar is much newer and less baked than OCUnit, but if you have questions, problems, or suggestions, send a message to the Cedar mailing list (cedar-discuss@googlegroups.com) and we'll do what we can to help you out. Also, feel free to fork the code from GitHub (github.com/pivotal/cedar) and add whatever you think is missing. We make our testing frameworks open source for a reason.
Running OCUnit tests on iOS devices can be difficult. Honestly, I haven't tried this for quite some time, so it may have gotten easier, but the last time I tried I simply couldn't get OCUnit tests for any UIKit functionality to work. When we wrote Cedar we made sure that we could test UIKit-dependent code both on the simulator and on devices.
Finally, we wrote Cedar for unit testing, which means it's not really comparable with projects like UISpec. It's been quite a while since I tried using UISpec, but I understood it to be focused primarily on programmatically driving the UI on an iOS device. We specifically decided not to try to have Cedar support these types of specs, since Apple was (at the time) about to announce UIAutomation.
I'm going to have to toss Frank into the acceptance-testing mix. This is a fairly new addition, but it has worked excellently for me so far. Also, it is actually being actively worked on, unlike iCuke and the others.
For test-driven development, I like to use GHUnit; it's a breeze to set up, and it works great for debugging, too.
Great List!
I found another interesting solution for UI testing iOS applications.
Zucchini Framework
It is based on UIAutomation.
The framework lets you write screen-centric scenarios in a Cucumber-like style.
The scenarios can be executed in the Simulator and on device from a console (it is CI-friendly).
The assertions are screenshot-based. That sounds inflexible, but it gets you a nice HTML report with highlighted screen comparisons, and you can provide masks which define the regions where you want pixel-exact assertions.
Each screen has to be described in CoffeeScript, and the tool itself is written in Ruby.
It is kind of a polyglot nightmare, but the tool provides a nice abstraction over UIAutomation, and once the screens are described, it is manageable even for a QA person.
I would choose iCuke for acceptance tests and Cedar for unit tests. UIAutomation is a step in the right direction for Apple, but the tools need better support for continuous integration; automatically running UIAutomation tests with Instruments is currently not possible, for example.
GHUnit is good for unit tests; for integration tests, I've used UISpec with some success (GitHub fork here: https://github.com/drync/UISpec), but I am looking forward to trying iCuke, since it promises a lightweight setup and lets you use the Rails testing goodness, like RSpec and Cucumber.
I currently use Specta for RSpec-like setups, and its partner (as mentioned above) Expecta, which has tons of awesome matching options.
I happen to really like OCDSpec2, but I'm biased: I wrote OCDSpec and contribute to the second.
It's very fast even on iOS, in part because it's built from the ground up rather than being put on top of OCUnit. It has an RSpec/Jasmine syntax as well.
https://github.com/ericmeyer/ocdspec2
I'm taking an introductory course (3 months) on real-time systems design, but without any implementation.
I would like to build something that lets me better understand what I'll learn in theory, but since I have never built a real-time system, I can't estimate how long any project will take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some ideas? Thank you in advance.
I program in T-SQL, Delphi, and C#, but I won't have any problem learning another language.
Suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple:
1) Build a work generator that you can tune to consume a given amount of CPU time;
2) Put it into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread), with a mechanism for logging the work produced;
3) Produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority;
4) Demonstrate that tasks running in the context of real-time threads (versus timesharing) behave differently.
A sketch of the work generator appears below.
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
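The suggestion above is framed around Java RTS, but the work-generator idea itself is language-neutral. Here is a rough sketch in C with POSIX threads, just to show the shape of step 1 and the deadline bookkeeping; the task parameters are invented, and a real experiment would pin threads to one core and vary their priorities.

#include <pthread.h>
#include <stdio.h>
#include <time.h>

static long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000L + ts.tv_nsec;
}

/* The work generator: burn roughly 'ms' milliseconds of CPU time. */
static void burn_cpu(long ms)
{
    long end = now_ns() + ms * 1000000L;
    while (now_ns() < end)
        ;                       /* busy-wait: pure CPU load */
}

struct task { long work_ms; long deadline_ms; };

static void *run_task(void *arg)
{
    struct task *t = arg;
    long start = now_ns();
    burn_cpu(t->work_ms);
    long elapsed_ms = (now_ns() - start) / 1000000L;
    printf("work=%ldms deadline=%ldms slack=%ldms%s\n",
           t->work_ms, t->deadline_ms, t->deadline_ms - elapsed_ms,
           elapsed_ms > t->deadline_ms ? "  ** MISSED **" : "");
    return NULL;
}

int main(void)
{
    /* Invented parameters; the third task can never meet its deadline. */
    struct task tasks[] = { {50, 80}, {100, 120}, {200, 150} };
    pthread_t tid[3];

    /* Launch all tasks at once so they compete for the CPU. */
    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, run_task, &tasks[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
    return 0;
}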
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare metal, or use a real-time OS.
Going bare metal, you can learn:
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements (see the sketch after this list)
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
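As a taste of the context-switching bullet above, here is a toy cooperative switcher in C using the POSIX <ucontext.h> API (present on Linux, deprecated on some platforms). A real RTOS does this preemptively, from a timer interrupt, with a few lines of hand-written assembly per port; this sketch only shows the save-one-stack, resume-another mechanic.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx[2];
static char stacks[2][16384];   /* one private stack per task */

static void task(int id)
{
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", id, i);
        /* Yield: save our context, resume the scheduler's. */
        swapcontext(&task_ctx[id], &main_ctx);
    }
}

int main(void)
{
    for (int id = 0; id < 2; id++) {
        getcontext(&task_ctx[id]);
        task_ctx[id].uc_stack.ss_sp   = stacks[id];
        task_ctx[id].uc_stack.ss_size = sizeof stacks[id];
        task_ctx[id].uc_link          = &main_ctx;  /* where to go if the task returns */
        makecontext(&task_ctx[id], (void (*)(void))task, 1, id);
    }
    /* A tiny round-robin scheduler: give each task three time slices. */
    for (int slice = 0; slice < 3; slice++)
        for (int id = 0; id < 2; id++)
            swapcontext(&main_ctx, &task_ctx[id]);
    return 0;
}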
Going with the RT OS:
You'll fast-track your project, and you'll be able to learn how to fine-tune an RT OS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most real-time systems are still implemented in C or C++, it may be good to brush up your knowledge of these languages. Many real-time systems are also embedded systems, so you might want to play around with a cheap open-source board like the BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross-compiling, etc.