Porting a project written for STM32 (ARM Cortex-M7) to NXP (ARM Cortex-M7)

STM32 supply is very bad at the moment, hence I am considering moving away from STM32 and going with NXP, since their supply is much better.
I would like to ask for advice regarding migrating from STM32 to NXP:
Has anyone tried to migrate their project from STM32 to NXP? Can this be done easily if the core is the same?
What are the major difficulties that I may encounter?
Can I simply remap pins, copy-paste the SPI/I2C and other drivers, and expect them to just "work"?

I have not gone through the migration, but consider that every single peripheral (timers, Flash, SPI, I2C, etc.) between two different micro manufacturers has a completely different register interface. This means that not a single thing works until you've implemented against the new register interface. Usually this is handled by the manufacturer's HAL, but the HALs also have completely different C interfaces, so you're going to have to reimplement that layer at the very least. So it's going to be a massive change no matter what. People who anticipate moving their code from one manufacturer to another usually build a porting layer in advance that hides the HAL, and swap out the HALs behind this layer. It mostly moves the development effort to another place (upfront) and starts reducing the work if there are more than two ports to maintain.
To get started it's best to have a quick look at the NXP HAL documentation on the peripherals you're using.
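To illustrate the porting-layer idea from the answer above, here is a hedged C sketch. The `bsp_spi_transfer` wrapper and the `TARGET_*` macros are hypothetical names invented for this example; the two vendor calls are ST's `HAL_SPI_TransmitReceive` and the NXP MCUXpresso SDK's `LPSPI_MasterTransferBlocking` as I recall their signatures, so verify them against the vendor headers:

    /* spi_port.h -- hypothetical porting-layer interface that hides the vendor HAL */
    #include <stdint.h>
    #include <stddef.h>

    int bsp_spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len);

    /* Backend for ST, using the STM32 HAL */
    #ifdef TARGET_STM32F7
    #include "stm32f7xx_hal.h"
    extern SPI_HandleTypeDef hspi1;   /* set up elsewhere by your init code */

    int bsp_spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
    {
        /* ST's HAL takes a non-const tx pointer and a timeout in ms */
        return HAL_SPI_TransmitReceive(&hspi1, (uint8_t *)tx, rx,
                                       (uint16_t)len, 100) == HAL_OK ? 0 : -1;
    }
    #endif

    /* Backend for NXP i.MX RT, using the MCUXpresso SDK LPSPI driver */
    #ifdef TARGET_IMXRT
    #include "fsl_lpspi.h"

    int bsp_spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
    {
        lpspi_transfer_t xfer = {
            .txData      = (uint8_t *)tx,
            .rxData      = rx,
            .dataSize    = len,
            .configFlags = kLPSPI_MasterPcs0,
        };
        return LPSPI_MasterTransferBlocking(LPSPI1, &xfer) == kStatus_Success ? 0 : -1;
    }
    #endif

Application code then calls bsp_spi_transfer() everywhere and never touches a HAL directly, so only these backend files change per port.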

Related

How to make a simple OS with very specific features

I want to make a sort of operating system, but with some very specific features.
The only thing I need it to do is show a video, place some text over it, accept user input, and do some basic file manipulation.
Something that looks like an OS's loading screen would do.
I am, however, a complete noob at this part of programming and I have no idea where to look. Don't worry, I do not need a complete explanation of every single step, but it would be nice to know in what direction I should go searching.
Anyone got an idea where to start searching?
Thanks in advance ;)
Although you narrowed down your specification, it's still complicated enough. Playing video involves file I/O, a file system, storage device access, buffering mechanisms, memory allocation, memory management primitives, and GPU access. Accepting user input requires keyboard and mouse handling, which requires a working USB layer. Not to mention that you'd need to make the video decoding library work with your own system, unless you want to implement that from scratch too. That might require you to have an ABI or a POSIX emulation layer. You might need to port at least one or two graphics libraries like SDL.
That's why "OS loading screens" don't play videos :)
You might get away with using the BIOS alone for accessing all devices. But BIOS and VBE calls are usually slow and may not work well with your video playing scenario. The BIOS is also slowly being pushed out of the PC ecosystem in favor of UEFI.
If you don't need a custom OS, you can trim a Linux kernel down to the parts you need. Even that's not a trivial thing to do, though.

Programming the SAM uC family without Atmel Studio

I've asked this question on https://electronics.stackexchange.com/ with no success, so I'm trying my luck here.
I've designed a board that uses a SAM uC (Cortex-M0+). I don't want to use Atmel Studio; I want to learn how to use Eclipse, arm-gcc, and OpenOCD (I still can't understand what that one is for). So my question is: is it possible to do that, and if yes, does anyone have a hint on how I can proceed?
I've installed the three parts, and this is the farthest I could get:
I agree with Notlikethat: what is it that you can't figure out?
1) Yes. I have dozens or hundreds of bare-metal microcontroller examples that use the GNU toolchain; no IDEs, just the command line. No problems there whatsoever.
2) OpenOCD is an open source tool that knows how to speak to on-chip debuggers, in particular ARM ones. It supports JTAG and SWD, which is what your Cortex-M0+ will have if exposed (which it most likely is).
3) Which SAM microcontroller? There have been hundreds of different ones over a decade or so; the Cortex-M0+ narrows that down from what is actually more like thousands to more like hundreds. Narrow it down to at least one family. Is it the popular SAMD21? I have personally used one of those, and using OpenOCD and gcc is quite doable without any need for an IDE from anyone. Just add a text editor.
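For a taste of what those bare-metal, no-IDE examples look like, here is a hedged blinky sketch for a SAMD21, written against register offsets from the datasheet (PORT at 0x41004400, DIRSET at +0x08, OUTTGL at +0x1C). The LED pin (PA17) is an assumption taken from some common boards, so check your schematic; you also still need a linker script and vector table, which the kind of examples mentioned above provide:

    #include <stdint.h>

    #define REG32(addr)   (*(volatile uint32_t *)(addr))
    #define PORTA_DIRSET  REG32(0x41004400u + 0x08u)  /* set pin direction to output */
    #define PORTA_OUTTGL  REG32(0x41004400u + 0x1Cu)  /* toggle output level */

    #define LED_PIN 17u   /* PA17: assumed, board-dependent */

    int main(void)
    {
        PORTA_DIRSET = 1u << LED_PIN;
        for (;;) {
            PORTA_OUTTGL = 1u << LED_PIN;
            for (volatile uint32_t i = 0; i < 100000u; i++) { }  /* crude busy-wait delay */
        }
    }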
I prefer to use a $10 ST Discovery or Nucleo board as my SWD debugger; remove a couple of the jumpers and you can use that ST-LINK front end for other microcontrollers, ST or not.
The chip documentation, as well as Google, will show you how to hook all this up and talk to the chip.
Do you have just a raw chip or do you have it on a board, one you made or one you bought?
Your question is the equivalent of: I have a Ford, I don't know what size brake pads I need, and do I have to have the dealer install them?
Is it a truck? Is it a car? Which one? How many models/variations have they had in the last hundred or so years? Look up the sizes/parts in a manual. And the answer to the latter is yes, of course you can do it yourself, or have someone else install them; you don't have to go to the dealer.

What is the best way to start programming with Real Time Linux?

Although I have implemented many projects in C, I am completely new to operating systems. I tried real-time Linux on a Discovery board (STM32) and got the correct results for blinking an LED, but I didn't really understand the whole process, since I just followed the steps and could not find a full description of each step on the internet.
I want to implement scheduling on real-time Linux. What is the best way to start? Any sites, books, or tutorials available?
A complete description of the RTLinux process would be appreciated.
Thanks in advance.
The transition from "bare metal" to OS-based programming is something that I experienced in reverse. I started out a complete software guy, totally into the OS side of things, and over time I have moved to the opposite of that (even designing circuits in VHDL!). My advice would be to start simple. Linux is pretty complex, and everywhere you look there are many layers of things all working together to deliver the final product. If you are dead set on a real-time Linux extension, I'd be happy to suggest https://xenomai.org/ which is a real-time extension for Linux.
However, to more specifically address your question about implementing scheduling in Linux: you can, but it will be a large amount of work and can be very complicated. The OS uses the Completely Fair Scheduler ( http://en.wikipedia.org/wiki/Completely_Fair_Scheduler ), and whenever you spin up a thread, it simply gets added to the list of things to run. This can differ slightly if you implement your code in kernel space as a driver, rely on hardware interrupts, etc., but in general, this is how Linux works. "Real time" generally means the ability to assign threads one of several different priorities and to fully utilize thread preemption at any given time, which are concepts that aren't really a part of vanilla Linux. It has some notion of them, but it has limitations that can cause problems when you are looking for real-time behavior from Linux.
What may be helpful to you is an RTOS. If you are looking for a full-on real-time operating system, check out FreeRTOS http://www.freertos.org/ . It has a large community and supports a lot of different devices out of the box, with a large amount of example code. They even support your specific board with an example package, so you can give it a shot with nothing to lose! http://www.freertos.org/FreeRTOS-for-Cortex-M3-STM32-STM32F100-Discovery.html . It gives you access to many OS-ish constructs like network APIs, memory management, and threading, without the overhead and latency of a huge OS.

With an RTOS, you create tasks and assign them priorities, so you become the scheduler and are no longer at the mercy of the OS. You run the OS; the OS doesn't run you (if that makes sense). Plus, the constructs offered within an RTOS will feel much like bare-metal code and thus will be much easier to follow, understand, and fully learn. It is a simpler world in which to learn the basic building blocks of a full-blown OS such as Linux or Windows. If this option sounds good, I would suggest looking through the supported devices on the FreeRTOS website, picking one you would like to experiment with, and going for it. I would highly recommend this as a way to learn about scheduling and OS constructs in general, as it is as simple as you can get and open source.

Once you have the basics of an RTOS down, buying a book about Linux specifically wouldn't be a bad idea. Although there are many free resources on the web for learning about Linux, they are commonly contradictory and can be misleading. Piling Linux-specific knowledge on top of OS knowledge in general can feel overwhelming. Starting simpler will help keep you from getting burnt out and minimize the amount of time you spend feeling lost. Linux is definitely a learning process, but like any learning process: start simple, keep your ultimate goal in mind, make a plan, and take small, manageable steps along that plan until you look up and find yourself exactly where you want to be. Then go tackle the next mountain!
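To make the task/priority idea concrete, here is a minimal, hedged FreeRTOS sketch in C. The task bodies and the toggle_led()/read_sensor() helpers are placeholders invented for illustration; xTaskCreate, vTaskDelay, and vTaskStartScheduler are the standard FreeRTOS calls, but check your port's FreeRTOSConfig.h before relying on any of this:

    #include "FreeRTOS.h"
    #include "task.h"

    static void toggle_led(void)  { /* board-specific GPIO toggle goes here */ }
    static void read_sensor(void) { /* board-specific sensor read goes here */ }

    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            toggle_led();
            vTaskDelay(pdMS_TO_TICKS(500));   /* block for 500 ms */
        }
    }

    static void vSensorTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            read_sensor();
            vTaskDelay(pdMS_TO_TICKS(10));    /* ~100 Hz */
        }
    }

    int main(void)
    {
        /* Higher number = higher priority: the sensor task preempts the blinker */
        xTaskCreate(vBlinkTask,  "blink",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vSensorTask, "sensor", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();                /* does not return if startup succeeded */
        for (;;) { }                          /* only reached if the heap was too small */
    }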
The real-time Linux landscape is quite confusing. 99.99% of the information out there is just plain obsolete.
First, there are lots of "microkernels" that run Linux as one task (such as the defunct RTLinux). The problem is that you must write your real-time task against a different API and can't depend on anything in Linux, because Linux is frozen in the background while your task runs. So unless your task is dead simple ("stop the motors when I press this button"), this approach will cause more pain than gain.
Next, there is the realtime Linux patch set. This hasn't been doing so well, because of the next item:
Lastly, the current Linux kernel has gotten rid of the problems that caused people to need realtime extensions in the past. You can even turn Linux off on one of your processors to get full control of that CPU. See also this paper.
To answer your question: I see two different paths you could take:
1) Start with a normal 3.xx Linux kernel and explore the various APIs and realtime techniques (e.g. realtime priorities, memory pinning; a minimal sketch follows after this list). This can get you "close enough" for 99% of what people want "realtime" for. If it's good enough for high frequency trading, it's probably good enough for you.
2) If you have a hard realtime requirement and you are worried that Linux won't cut it, then (as Nick mentioned above) just go buy a processor and write your realtime code with no OS. By splitting your "realtime" and "non-realtime" code onto different CPUs, you will make the whole system simpler and much more robust.
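As a concrete taste of option 1, here is a small hedged C sketch of the two techniques named there (realtime priorities and memory pinning), using the standard POSIX calls. It needs root or CAP_SYS_NICE, and the priority value 80 is an arbitrary illustration, not a recommendation:

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof sp);
        sp.sched_priority = 80;              /* SCHED_FIFO range is 1..99 */

        /* Put this process in the FIFO realtime scheduling class */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* Lock current and future pages in RAM so page faults cannot add latency */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... realtime work loop goes here ... */
        return 0;
    }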
If you want to learn real-time operating systems, then I suggest that you get an FPGA, for example the Altera DE2, and experiment with your own operating system and µC/OS. You can read a good text about embedded RTOSes here.
You could also get a Raspberry Pi and write your own operating system for that.

Extensive comparison between Simulink and LabVIEW

I am trying to determine which of these two to buy for my work. I have used Simulink but not LabVIEW. Is there anyone who has used both and would like to provide some details? My evaluation criteria are user-friendliness, availability of libraries and template functions, real-time probing facilities, COTS hardware interfacing opportunities, quality of code generation, design for testability (i.e. ease of generating unit/acceptance tests), etc. However, if anyone would like to educate me on more criteria, please do so by all means!
For anyone who does not know about Simulink and LabVIEW: these are both domain-specific languages (DSLs) intended for graphical dataflow modelling (and also code generation). They are multi-industry tools, quite heavily used for engineering design and modelling.
IMPORTANT - I am quite interested to know whether Simulink and LabVIEW offer real-time probing. For example, I have a model that I want to simulate. If there are variables associated with certain building blocks in that model, could I view them changing as the simulation continues? I know that it is certainly not possible with Simulink, as it has a step-by-step debugger. I am not aware of anything similar in LabVIEW.
I really have not used LabVIEW and cannot obtain it even temporarily, as my work internet has download restrictions and administrative privilege issues. This is why I cannot rely on the NI website alone to draw conclusions. If there is any white paper available that addresses this issue, I would also love to know :)
UPDATE SINCE LAST POST
I have used the MATLAB code generator and would not say that it is the best. However, I now hear that Simulink Embedded Coder is the best code generator, almost one of a kind. Can anyone confirm whether or not it is good for safety-critical system design, i.e. generating code from safety-critical subsystem models? I know that The MathWorks is constantly trying to close the gap to achieve fully flexible, production-level C/C++ code generation.
I know that an ideal answer would be, "Depending on what you are trying to do, use a bit of both." And interestingly, I think I am heading in that direction. At the end of the day, it is a lot of money and needs to be spent "nicely".
Thanks in advance.
I have used LabVIEW since 1995, and Simulink since 2000. Now I am involved in control system design and simulation of robotic systems using LabVIEW Real-Time, and automotive ECUs using MATLAB/Simulink/dSPACE.
LabVIEW is focused on measurement systems, and MATLAB/Simulink on dynamic simulation, so:
If you run complex simulations, and your work is creating/debugging complex simulation models of controllers or plants, use Simulink + Real-Time Workshop + Stateflow. LabVIEW has no efficient code generators for dynamic simulation; RTW generates smaller and faster code.
If your main work is developing systems with controllers and GUIs for machines, or you want to deploy controllers in the field, use LabVIEW.
If your main work is developing flexible HIL or SIL systems with a good GUI, you can use VeriStand. VeriStand can mix Simulink and LabVIEW code.
And if you have a big budget (VERY big) and you are working on automotive control prototypes, dSPACE hardware is a very good choice for fast development of automotive ECUs, or OPAL-RT for developing electric power circuits. But only for prototyping or HIL testing of controllers.
From the point of view of COTS hardware:
The MathWorks doesn't manufacture hardware -> MATLAB/Simulink supports hardware from several vendors.
National Instruments produces/sells hardware -> LabVIEW Real-Time is focused on supporting National Instruments hardware. There is no full COTS replacement.
I have absolutely no experience with Simulink, so I'll comment only on LV, although a quick read about Simulink on Wikipedia seems to indicate that it's focused mainly on simulation and modelling, which is certainly not the case with LabVIEW.
OK, so first of all, LV is NOT a DSL. While you wouldn't want to use it for every project, it's a general-purpose programming language, and you should take that into account. I know that NI has a simulation toolkit for LV, which might help you if that's what you're after, but I have absolutely no experience with it. The images I saw of it seemed to indicate that it adds a special kind of diagram to LV for simulation.
Second, LV is not restricted to any kind of hardware. It's a general purpose language, so you can write code which won't use any hardware at all, code which will use or run on NI's hardware or code which will use any hardware (be it through DLL calls, .NET assemblies, RS232, TCP, GPIB or any other option you can think of). There is quite a large collection of LV drivers for various devices and the quality of the driver usually depends on who wrote it.
Third, you can certainly probe in real time in LV. You write your code, just as you would in C or Java, and when you run it, you have several debugging options:
Single stepping. This isn't actually all that common, partially because LV is parallel.
Execution highlighting. This runs the code in slow motion, while showing all the values in the various wires.
Probes, which show you the last value that each wire had, where wires serve the same function that variables do in text-based languages. This updates in real time and I assume is what you want.
Retain wire values, which allows you to probe a wire even after data has passed through it. This is similar to what you get in text-based IDEs with variables. In LV you don't usually have it because wire values are transient, so a value is not kept around unless you explicitly ask for it.
Of course, since you're talking about code, you could also simply write the code to display the values to the screen on a graph or a numeric indicator or to log them to a file, so there should be no need for actual probing. You could also add analysis code, etc.
Fourth, you could try downloading and running LV in a fully functional evaluation mode. If I remember correctly, NI currently gives you 7 days and then 45 days if you register on their site. If you can't do that on a work computer, you could try at home. If your problem is only with downloading, you could try contacting your local NI office and asking them to send you a DVD.
Note that I don't really know anything about modelling and simulation, so I have no idea what kind of code you would actually have to write in order to do what you want. I assume that if NI has a special module for it, then it's not something that you can completely cover in regular code (at least not if you want the original notation), but I would say that if you could write the code that does what you want in C, there's no reason you shouldn't be able to write it in LV (assuming, of course, that you know how to write code in LV).
A lot of the best answer would have to depend on your ultimate design requirements. Are you developing a product? If so, in what stage of development are you? Or are you doing research?
I recently did a comparison just as you are doing. I know LV, but was wanting to move towards a more hardware-scalable option, since NI HW is very expensive in volume. That is, my company was wanting to move towards a product. What LV and NI HW give you is flexibility. You can change code very quickly compared to C. On the other hand, LV does not run on nearly as many different HW platforms as C. So I wanted to find an inexpensive platform that would work well for real-time control and data acquisition, such that if we wanted to sell a product for, say, $30k, our controller wouldn't be costing $15k of that. We ended up with Diamond Systems Linux SBCs. Interestingly, Simulink ended up using the most expensive hardware! It did have a lot of flexibility, and could generate code, as well as model plants and controllers. But then, LV can do that as well.
As Yair wrote, LV has plenty of good debugging tools. One of the more interesting tools that is not so well known is the Suspend when Called option for a SubVI. This allows you to play with the inputs and outputs of a SubVI as much as you want while execution is paused.
MATLAB and Simulink are the de facto standard for control system design and simulation. Simulink controller models can be used for offline simulation in conjunction with plant models, all the way to realtime implementation on embedded targets. It is a general simulation framework with extensive built-in libraries as well as à la carte special-purpose libraries, and can be extended through the creation of custom blocks (S-function blocks) in C and other languages. It includes the ability to display values in graphs, numeric displays, gauges, etc. while a non-realtime simulation is taking place. Realtime target support from The MathWorks includes x86 (xPC Target) and several embedded targets (MPC555, etc.), and there is third-party support for other targets. The aforementioned dSPACE provides complete prototyping controllers, including support for their quite powerful hardware. xPC Target includes support for a plethora of COTS PC data acquisition cards. Realtime target support includes GUI elements such as graphs, numeric displays, gauges, etc.
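To give a flavour of the custom-block mechanism mentioned above, here is a minimal, hedged Level-2 C S-function sketch (a gain of 2). It follows the documented MathWorks template (simstruc.h, mdlInitializeSizes, mdlOutputs, etc.), but treat it as a sketch to check against the current S-function documentation rather than production code:

    #define S_FUNCTION_NAME  my_gain_sfun
    #define S_FUNCTION_LEVEL 2
    #include "simstruc.h"

    static void mdlInitializeSizes(SimStruct *S)
    {
        ssSetNumSFcnParams(S, 0);                      /* no dialog parameters */
        if (!ssSetNumInputPorts(S, 1)) return;
        ssSetInputPortWidth(S, 0, 1);
        ssSetInputPortDirectFeedThrough(S, 0, 1);      /* output depends on current input */
        if (!ssSetNumOutputPorts(S, 1)) return;
        ssSetOutputPortWidth(S, 0, 1);
        ssSetNumSampleTimes(S, 1);
    }

    static void mdlInitializeSampleTimes(SimStruct *S)
    {
        ssSetSampleTime(S, 0, INHERITED_SAMPLE_TIME);  /* run at the driving block's rate */
        ssSetOffsetTime(S, 0, 0.0);
    }

    static void mdlOutputs(SimStruct *S, int_T tid)
    {
        InputRealPtrsType u = ssGetInputPortRealSignalPtrs(S, 0);
        real_T *y = ssGetOutputPortRealSignal(S, 0);
        y[0] = 2.0 * (*u[0]);                          /* the "model": a gain of 2 */
    }

    static void mdlTerminate(SimStruct *S) { }

    #ifdef MATLAB_MEX_FILE
    #include "simulink.c"    /* MEX-file interface mechanism */
    #else
    #include "cg_sfun.h"     /* code generation registration */
    #endif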
As I understand it (I have never really used it in anger), LabVIEW only supports NI hardware and is more hardware-oriented. Simulink supports hardware from multiple vendors, be it for data acquisition or real-time implementation, but it may require a bit more work for the user to interface to his or her own hardware (less plug & play than LabVIEW). On the other hand, Simulink provides tools to support the whole model-based design process: modelling & simulation, control design, verification & validation, code generation, hardware-in-the-loop, etc.
Disclaimer: I used to work for MathWorks.
You may really be interested in the Control Design and Simulation Module for LabVIEW. It does a lot of simulations and in the future may be competitive with Simulink. I'm not a control engineer, but I use it sometimes for simple testing, and I'm glad that I don't have to learn Simulink from the beginning to do some work, since I'm familiar with the LabVIEW philosophy.

What is the difference between Netduino and .NET Gadgeteer?

I wanted to learn the .NET Micro Framework and found that there is (among others) Netduino, which is somewhat compatible with Arduino. Recently .NET Gadgeteer came to the public. There was a lot of enthusiasm, so it looks like an important step for the .NET Micro Framework.
Is it possible to compare them somehow? I'm not sure for which tasks Netduino is better and for which Gadgeteer. Or are they in fact completely different beasts?
I'm unable to glean this from the information available on the home pages, because it is mostly marketing material.
Netduino (and other HW boards, including GHI's FEZ products) are HW devices with a microprocessor running the .NET Micro Framework, but in a form factor that resembles Arduino, meaning you can plug other boards (or shields) on top of the mainboard to extend its functionality.
.NET Gadgeteer is something different:
.NET Gadgeteer Hardware
A .NET Gadgeteer system is composed of a mainboard containing an embedded processor and a variety of modules which connect to the mainboard through a simple plug-and-play interface. There are lots of .NET Gadgeteer modules available today: display, camera, networking, storage, input controls, and more modules are being designed all the time!
The .NET Gadgeteer mainboard's sockets are numbered, and each is labeled with one or more letters which indicate which modules can be plugged into it.
The CPU is more powerful than that of the Netduino class of devices.
Gadgeteer runtime
Gadgeteer is 100% C# managed code, so it is not tied to any firmware (C++ code). http://gadgeteer.codeplex.com/
It is an "open socket-connection standard".
You can get a module from company X, another module from company Y, and use both on a mainboard from company Z, even if you don't have the design files. All will work together nicely. Of course, someone may come up with an advanced module that requires special software, but mostly modules will simply work.
You can even make your own modules to go on any mainboard... this is the beauty of Gadgeteer.
Think of this as "Arduino shield like" but better, since there is no pin overlapping and you are not limited to a couple of shields before the board is too long to be usable. You could even take the Gadgeteer socket standard and use it on a board that is not running NETMF at all, but you would lose all the nice software Gadgeteer provides.
For details about the runtime check out the documents at Codeplex, http://gadgeteer.codeplex.com/releases/view/72208
For more information check out:
http://research.microsoft.com/en-us/projects/gadgeteer/default.aspx
http://channel9.msdn.com/coding4fun/blog/Along-came-a-spider-a-NET-Gadgeteer-FEZ-Spider
http://www.ghielectronics.com/catalog/product/297/
http://channel9.msdn.com/Blogs/Clint/NET-Gadgeteer
Netduino Go was recently released, supporting both Arduino shield and Gadgeteer module pin compatibility. It also supports plug-and-play go!bus modules.
A few clarifications on Gadgeteer and Netduino:
Gadgeteer, from a hardware perspective, is a pin-assignment technology like Arduino shields. There's a similar level of simplicity/complexity to it as with Arduino shields (i.e. overlapping pins, peripherals that go away on one socket when you plug modules into another socket, a fixed number of peripheral features, etc.). In contrast to Arduino, only a subset of Gadgeteer modules will work with a given Gadgeteer mainboard. But with Gadgeteer, you get multiple pin interfaces, so there's less pin overlapping.
Netduino Go uses plug-and-play style modules. The go!bus protocol that Netduino Go uses is virtual I/O... so when you plug in a go!bus module, it auto-enumerates and adds its features to the mainboard, similar to how USB works on your computer. No overlapping pins or module limitations.
Netduino Go also has a compatibility mode where you can plug in Gadgeteer modules to up to two sockets. Like with other Gadgeteer-compatible boards, plugging in a module disables functionality on one or more other sockets.
Netduino Go has six times the code storage (1 MB flash, 384 KB for code), four times the speed (168 MHz), and twice the RAM (100 KB+) of the Netduino Plus.
More info on Netduino Go:
http://forums.netduino.com/index.php?/topic/3867-introducing-netduino-go/
More info on Gadgeteer:
http://gadgeteer.codeplex.com/
Chris
Secret Labs LLC
Netduino is built with the open source hardware movement in mind and is compatible with existing Arduino shields, while allowing you to use the .NET Micro Framework to program it. This lets you leverage existing .NET experience instead of having to pick up another language.
.NET Gadgeteer is a completely different take on the hardware with a specific set of hardware created for it that is modular and standardized.
Think of Netduino as an Erector set and .NET Gadgeteer as Lego. You can build stuff with both of them, but one is a bit more powerful if you want to apply what you have created to a broader set of problems.
Initial startup costs to get involved are also cheaper with Netduino.
See: http://www.i-programmer.info/news/91-hardware/2819-net-gadgeteer-an-alternative-to-arduino.html
Really, the only downside to the Netduino Go is the current lack of networking, as of the end of May 2012.
Chris has already said (elsewhere) that this is only weeks away, and when it ships I suspect the Go will be the go, as it were. It is to Gadgeteer as C# is to Java: more or less the same, but done better with the benefit of hindsight. Looking around the forums, I see other platforms with either uneven hardware compatibility or mediocre driver quality.
There's also a possibility of onboard RTC. Not a certainty, but you never know your luck in the big city.
Something Chris (and the Gadgeteer guys) don't take enough credit for is the computer-as-a-network approach that Gadgeteer and Go both take. The network stack on a single-CPU system like the Netduino Plus can never perform like one that has a dedicated CPU with its own buffer, and pushing the network stack onto its own board gets it out of your application code space. I suspect that the Go, running on a Cortex-M3 with a supporting cast of Cortex-M0s smoothly integrated by the crunchy goodness of baked-in virtualisation, is going to feel like developing on a much larger machine.
Some things none of the prototyping boards do well are:
Hardware watchdog reboot for hung application code
OTAU (over the air update)
You need both of those for release-grade hardware, which I guess means rolling your own. Netduino Go and Gadgeteer expressly support the notion of rolling your own modules.
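For contrast with the .NET boards, a hardware watchdog on a bare-metal part is only a few register writes. Here is a hedged sketch for the STM32 independent watchdog (IWDG), with register addresses as given in the STM32 reference manuals; verify the timeout arithmetic against your part's actual LSI frequency. Once started, the IWDG cannot be stopped: if application code hangs and stops kicking it, the chip resets.

    #include <stdint.h>

    #define IWDG_KR   (*(volatile uint32_t *)0x40003000u)  /* key register */
    #define IWDG_PR   (*(volatile uint32_t *)0x40003004u)  /* prescaler */
    #define IWDG_RLR  (*(volatile uint32_t *)0x40003008u)  /* reload value */

    void watchdog_start(void)
    {
        IWDG_KR  = 0x5555u;  /* unlock PR and RLR for writing */
        IWDG_PR  = 4u;       /* LSI divided by 64 */
        IWDG_RLR = 0xFFFu;   /* max reload: roughly 8 s at a 32 kHz LSI */
        IWDG_KR  = 0xCCCCu;  /* start counting down */
    }

    void watchdog_kick(void)
    {
        IWDG_KR = 0xAAAAu;   /* reload the counter; call regularly from the main loop */
    }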