When simulating Verilog output using Icarus, is there a way to include FPGA hardware features such as RAM in the simulation?

I'm new to FPGAs, and have started out with an iCEBreaker board using the iCE40UP5K chip. I'm aiming to make an LED display driver, driving something similar to the HUB75 interface used on popular display modules.
I've been able to simulate waveform generation, and view it in GtkWave using the tutorial here:
https://brng.dev/blog/technical/tutorial/2019/05/11/icarus_gtkwave/
My next steps involve making use of the RAM banks inside the iCE40UP5K. Is there some way to include this RAM in my simulation?

Yes, of course - there is a library of simulated ICE40 cells included in Yosys: https://github.com/YosysHQ/yosys/blob/master/techlibs/ice40/cells_sim.v
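For example (a rough sketch, assuming yosys-config is on your PATH and your testbench top module is named testbench), you can hand cells_sim.v to Icarus along with your own sources and instantiate the RAM primitives, such as SB_RAM40_4K for block RAM or SB_SPRAM256KA for the UP5K's single-port RAM, directly in your design:

    iverilog -o tb.vvp -s testbench testbench.v top.v $(yosys-config --datdir/ice40/cells_sim.v)
    vvp tb.vvp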

Related

Porting a project that is written for STM32 (ARM Cortex-M7) to NXP (ARM Cortex-M7)

The STM32 supply is very bad at the moment, hence I am considering moving away from STM32 and going for NXP, since the supply is much better.
I would like to ask for advice regarding migrating from STM32 to NXP:
Has anyone tried to migrate their project from STM32 to NXP? Can this be done easily if the core is the same?
What are the major difficulties that I may encounter?
Can I simply remap the pins, copy-paste the SPI/I2C and other drivers, and expect them to just "work"?
I have not gone through the migration, but consider that every single peripheral (timers, Flash, SPI, I2C, etc.) has a completely different register interface between two different microcontroller manufacturers. This means that not a single thing works until you've implemented against the new register interface. Usually this is handled by the manufacturer's HAL, but those also have completely different C interfaces, so you're going to have to adapt to that, at the very least. So it's going to be a massive change no matter what. People who anticipate moving their code from one manufacturer to another usually build a porting layer in advance that hides the HAL, and swap out the HALs behind this layer. It mostly moves the development effort to another place (upfront) and starts reducing the work once there are more than two ports to maintain.
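As a rough illustration of such a porting layer (the names here are made up for the example; only HAL_SPI_Transmit is a real ST HAL call), the application only ever calls a thin vendor-neutral API, and each vendor's HAL is wrapped behind it:

    /* board_spi.h -- hypothetical vendor-neutral interface the application sees */
    #include <stdint.h>
    #include <stddef.h>

    int board_spi_write(const uint8_t *data, size_t len);

    /* board_spi_stm32.c -- STM32 backend, wraps the ST HAL */
    #ifdef USE_STM32_HAL
    #include "stm32f7xx_hal.h"

    extern SPI_HandleTypeDef hspi1;     /* configured elsewhere, e.g. by CubeMX */

    int board_spi_write(const uint8_t *data, size_t len)
    {
        /* HAL_SPI_Transmit(handle, data, size, timeout_ms) */
        return (HAL_SPI_Transmit(&hspi1, (uint8_t *)data, (uint16_t)len, 100) == HAL_OK) ? 0 : -1;
    }
    #endif

    /* board_spi_nxp.c -- NXP backend, wraps the equivalent MCUXpresso SDK call */
    #ifdef USE_NXP_SDK
    int board_spi_write(const uint8_t *data, size_t len)
    {
        /* Wrap the NXP SDK's blocking SPI transfer here; the driver API and
         * register interface are completely different from ST's. */
        (void)data; (void)len;
        return -1;                      /* stub until the NXP port is done */
    }
    #endif

Application code that calls board_spi_write() never changes between ports; only the backend file does.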
To get started it's best to have a quick look at the NXP HAL documentation on the peripherals you're using.

How do I implement MATLAB code on a hardware device to make it run?

My question is quite basic. I don't understand how I can deploy or implement my MATLAB code on a hardware component like a processor or FPGA. For example:
Suppose I create MATLAB image-processing code (object classification/detection) that needs to process real-time images from a drone's camera and identify whether an object is a human or an animal while the drone is in the air. How should I proceed to implement this MATLAB code on a processor or controller and make it run during flight?
Assume the MATLAB code is in its raw form, taking input data and producing classified output data. What should I do next? Do I need to convert the MATLAB code to HDL or an executable (.exe) to run it on the hardware platform, or is it possible to run MATLAB code (.m files) directly on a device for processing and classification? Basically, I don't understand how to practically take MATLAB code and put it into a system. Do I need to use some sort of toolbox or extension?
Could you please list the steps for this process, or share links to websites or YouTube videos that show how to do this in detail?
You could use MATLAB Coder to convert your MATLAB code to an executable that can run on your hardware.
Here are a couple of articles from the official MathWorks documentation regarding Code Generation for Image Processing to get you started:
https://www.mathworks.com/help/images/code-generation-for-image-processing.html
https://www.mathworks.com/help/images/code-generation-with-cell-detection.html

Extensive comparison between SIMULINK and LabVIEW

I am trying to determine which of these two to buy for my work. I have used SIMULINK but not LabVIEW. Is there anyone who has used both and would like to provide some details? My investigation criteria are the user friendliness, availability of libraries and template functions, real-time probing facility, COTS hardware interfacing opportunity, quality of code generation, design for testability (i.e. ease of generating unit/acceptance tests), etc. However, if anyone would like to educate me with more criteria, please do so by all means!
For anyone who does not know about SIMULINK and LabVIEW - These are both Domain-Specific Languages (DSLs) intended for graphical dataflow modelling (and also code generation). These are multi-industrial tools and quite heavily used for engineering design and modelling.
IMPORTANT - I am quite interested to know if SIMULINK and LabVIEW offer real-time probing. For example, I have a model that I want to simulate. If there are variables associated with certain building blocks in that model, could I view them changing as the simulation runs? I know that this is not possible with SIMULINK, as it only has a step-by-step debugger. I am not aware of anything similar in LabVIEW.
I really have not used LabVIEW and cannot obtain it temporarily as my work internet has got download restrictions and administrative privilege issues. This is the reason why I simply cannot use only NI website to draw conclusions. If there is any white paper available that addresses this issue, I would also love to know :)
UPDATE SINCE LAST POST
I have used the MATLAB code generator and would not say that it is the best. However, I hear now that SIMULINK Embedded Coder is the best code generator, practically in a class of its own. Can anyone confirm whether or not it is good for safety-critical system design, i.e. generating code from safety-critical subsystem models? I know that MathWorks is constantly trying to close the gap to achieve fully flexible, production-level C/C++ code generation.
I know that an ideal answer would be, "Depending on what you are trying to do, use a bit of both". And interestingly, I think I am heading in that direction. At the end of the day, it is a lot of money and it needs to be spent wisely.
Thanks in advance.
I have used LabVIEW since 1995 and Simulink since 2000. I am now involved in control system design and simulation of robotic systems using LabVIEW Real-Time, and of automotive ECUs using MATLAB/Simulink/dSPACE.
LabVIEW is focused on measurement systems, and MATLAB/Simulink on dynamic simulation, so:
If you run complex simulations, and your work is creating/debugging complex simulation models of controllers or plants, use Simulink + Real-Time Workshop + Stateflow. LabVIEW has no efficient code generators for dynamic simulation; RTW generates smaller and faster code.
If your main work is developing systems with controllers and GUIs for machines, or you want to deploy the controllers in the field, use LabVIEW.
If your main work is developing flexible HIL or SIL systems with a good GUI, you can use VeriStand. VeriStand can mix Simulink and LabVIEW code.
And if you have a big budget (VERY big) and you are working on automotive control prototypes, dSPACE hardware is a very good choice for fast development of automotive ECUs, or OPAL-RT for developing electric power circuits. But only for prototyping or HIL testing of controllers.
From the point of view of COTS hardware:
MathWorks doesn't manufacture hardware -> MATLAB/Simulink support hardware from several vendors.
National Instruments produces/sells hardware -> LabVIEW Real-Time is focused on supporting National Instruments hardware. There is no full COTS replacement.
I have absolutely no experience with Simulink, so I'll comment only on LV, although a quick read about Simulink on Wikipedia seems to indicate that it's focused mainly on simulation and modelling, which is certainly not the case with LabVIEW.
OK, so first of all, LV is NOT a DSL. While you might not want to use it for every project, it's a general-purpose programming language and you should take that into account. I know that NI has a simulation toolkit for LV, which might help you if that's what you're after, but I have absolutely no experience with it. The images I saw of it seemed to indicate that it adds a special kind of diagram to LV for simulation.
Second, LV is not restricted to any kind of hardware. It's a general purpose language, so you can write code which won't use any hardware at all, code which will use or run on NI's hardware or code which will use any hardware (be it through DLL calls, .NET assemblies, RS232, TCP, GPIB or any other option you can think of). There is quite a large collection of LV drivers for various devices and the quality of the driver usually depends on who wrote it.
Third, you can certainly probe in real time in LV. You write your code, just as you would in C or Java, and when you run it, you have several debugging options:
Single stepping. This isn't actually all that common, partially because LV is parallel.
Execution highlighting. This runs the code in slow motion, while showing all the values in the various wires.
Probes, which show you the last value that each wire had, where wires fill the same role that variables do in text-based languages. This updates in real time and I assume is what you want.
Retain wire values, which allows you to probe a wire even after data has passed through it. This is similar to what you get in text-based IDEs with variables. In LV you don't usually have it because wire values are transient, so the value is not kept around unless you explicitly ask for it.
Of course, since you're talking about code, you could also simply write the code to display the values to the screen on a graph or a numeric indicator or to log them to a file, so there should be no need for actual probing. You could also add analysis code, etc.
Fourth, you could try downloading and running LV in a fully functional evaluation mode. If I remember correctly, NI currently gives you 7 days and then 45 days if you register on their site. If you can't do that on a work computer, you could try at home. If your problem is only with downloading, you could try contacting your local NI office and asking them to send you a DVD.
Note that I don't really know anything about modelling and simulation, so I have no idea what kind of code you would actually have to write in order to do what you want. I assume that if NI has a special module for it, then it's not something that you can completely cover in regular code (at least not if you want the original notation), but I would say that if you could write the code that does what you want in C, there's no reason you shouldn't be able to write it in LV (assuming, of course, that you know how to write code in LV).
A lot of the best answer would have to depend on your ultimate design requirements. Are you developing a product? If so, in what stage of development are you? Or are you doing research?
I recently did a comparison just as you are doing. I know LV, but was wanting to move towards a more hardware-scalable option, since NI HW is very expensive in volume. That is, my company was wanting to move towards a product. What LV and NI HW give you is flexibility. You can change code very quickly compared to C. On the other hand, LV does not run on nearly as many different HW platforms as C. So I wanted to find an inexpensive platform that would work well for real-time control and data acquisition, such that if we wanted to sell a product for, say, $30k, our controller wouldn't be costing $15k of that. We ended up with Diamond Systems Linux SBC's. Interestingly, Simulink ended up using the most expensive hardware! It did have a lot of flexibility, and could generate code, as well as model plants and controllers. But then, LV can do that as well.
As Yair wrote, LV has plenty of good debugging tools. One of the more interesting tools that is not so well known is the Suspend when Called option for a SubVI. This allows you to play with the inputs and outputs of a SubVI as much as you want while execution is paused.
MATLAB and Simulink are the de facto standard for control system design and simulation. Simulink controller models can be used for offline simulation in conjunction with plant models, all the way to real-time implementation on embedded targets. It is a general simulation framework with extensive built-in libraries as well as a la carte special-purpose libraries, and can be extended through creation of custom blocks (S-function blocks) in C and other languages. It includes the ability to display values in graphs, numeric displays, gauges, etc. while a non-real-time simulation is taking place. Real-time target support from The MathWorks includes x86 (xPC Target) and several embedded targets (MPC555, etc.), and there is 3rd-party support for other targets. The aforementioned dSPACE provides complete prototyping controllers, including support for their quite powerful hardware. xPC Target includes support for a plethora of COTS PC data acquisition cards. Real-time target support includes GUI elements such as graphs, numeric displays, gauges, etc.
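To give a feel for what an S-function block involves, here is a minimal Level-2 C S-function skeleton (a sketch only: the block just doubles its scalar input, and the name my_gain_sfun is made up for the example):

    #define S_FUNCTION_NAME  my_gain_sfun
    #define S_FUNCTION_LEVEL 2
    #include "simstruc.h"

    static void mdlInitializeSizes(SimStruct *S)
    {
        ssSetNumSFcnParams(S, 0);               /* no block parameters        */
        ssSetNumContStates(S, 0);
        ssSetNumDiscStates(S, 0);
        if (!ssSetNumInputPorts(S, 1)) return;
        ssSetInputPortWidth(S, 0, 1);
        ssSetInputPortDirectFeedThrough(S, 0, 1);
        if (!ssSetNumOutputPorts(S, 1)) return;
        ssSetOutputPortWidth(S, 0, 1);
        ssSetNumSampleTimes(S, 1);
    }

    static void mdlInitializeSampleTimes(SimStruct *S)
    {
        ssSetSampleTime(S, 0, INHERITED_SAMPLE_TIME);
        ssSetOffsetTime(S, 0, 0.0);
    }

    static void mdlOutputs(SimStruct *S, int_T tid)
    {
        InputRealPtrsType u = ssGetInputPortRealSignalPtrs(S, 0);
        real_T *y = ssGetOutputPortRealSignal(S, 0);
        (void)tid;
        y[0] = 2.0 * (*u[0]);                   /* output = 2 * input         */
    }

    static void mdlTerminate(SimStruct *S) { (void)S; }

    #ifdef  MATLAB_MEX_FILE
    #include "simulink.c"                       /* MEX-file interface         */
    #else
    #include "cg_sfun.h"                        /* code generation interface  */
    #endif

Compiled with mex, this appears in a model as an S-Function block that you point at my_gain_sfun.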
As I understand it (I have never really used it in anger), LabView only supports NI hardware, and is more hardware-oriented. Simulink supports hardware from multiple vendors, be it for data acquisition, or real-time implementation, but it may require a bit more work for the user to interface to his or her own hardware (less plug & play than LabView). On the other hand, Simulink provides tools to support the whole model-based design process, from modelling & simulation, control design, verification & validation, code generation, hardware-in-the-loop, etc...
Disclaimer: I used to work for MathWorks.
You guys may really be interested in the Control Design and Simulation Module for LabVIEW. It does a lot of simulation and in the future may be competitive with Simulink. I'm not a control engineer, but I use it sometimes for simple testing, and I'm glad that I don't have to learn Simulink from scratch to do some work, since I'm familiar with the LabVIEW philosophy.

Why is there no Simulink block for cDAQ devices?

I am currently working on a project involving data acquisition and real-time processing in Simulink. We have inherited some rather swanky DAQ hardware that was bought specifically for this project - namely, a National Instruments cDAQ device (USB). Changing hardware is too expensive at this point.
I had previously seen Simulink blocks that work with the DAQ Toolbox, so naturally assumed I could use our cDAQ in Simulink. On further investigation, however, it seems the blocks only work with PCI NI devices, not the 'compact' USB ones like we have.
I have created a workaround by writing a level-2 M-S-function that uses the DAQ toolbox's session based interface, puts the incoming data in a queue and pushes it out through the outports in onOutputs. This seems to be working fine.
My question is this: is there a reason why MathWorks decided not to make their DAQ blocks work with cDAQ devices? I understand that USB tends to have some latency issues, but am I really crazy for thinking this is possible? I would even go so far as to say that it actually seems fairly simple, but surely there must be a motivation for the lack of Simulink support for cDAQ devices in the DAQ Toolbox. Am I oversimplifying the issue? And if so, how?
Thanks for your help.
MathWorks' motives are likely driven by demand, and there may not be many users asking for it.
Don't know if this helps you, but writing C libraries to do the data acquisition is very easy and likely faster. You could then just call the functions in the libraries from Simulink. This solution also has the advantage of giving you complete control of the DAQ board. The distribution disk for DAQmx has many C examples.
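As a rough sketch of what that looks like with the plain DAQmx C API (error checking omitted; the channel name cDAQ1Mod1/ai0 is a placeholder for whatever name NI MAX shows for your module):

    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[1000];
        int32 read = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "cDAQ1Mod1/ai0", "",
                                 DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_FiniteSamps, 1000);   /* 1000 samples at 1 kHz */

        DAQmxStartTask(task);
        DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 1000, &read, NULL);
        printf("Read %d samples, first value %f\n", (int)read, data[0]);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }

A function like this (minus main, returning the buffer instead) can be built into a library and called from Simulink, for example through an S-function or the Legacy Code Tool.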

iPhone fluid simulation

Does anybody know of a fluid engine for the iPhone? I need water and gas simulation.
Simulating fluids is a tremendous challenge for modern desktop computers, so I would not expect the greatest performance when trying to get this working on a mobile device. Running full Navier-Stokes calculations on the iPhone is probably going to chug pretty badly.
However, in the past I was able to perform 2-D fluid modeling simulations on limited hardware using lattice gas automata. With lattice gas automata, you approximate a fluid as a fine hexagonal grid, where particles can travel in one of six directions and obey specific collision rules. There are some limitations to this approach (addressed by the Lattice Boltzmann Method), but it can do a very good job of simulating fluids, even including compressible ones like air. Why this works well on limited hardware is that these calculations can be done using bitwise operators and simple lookup tables, without the need for any floating point calculations. You might be able to make something like this work on the iPhone's processor. For more on this technique, you can consult Appendix A of my Ph.D. dissertation, where I explain the process and have source code for a fluid modeler I wrote.
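To make the bitwise/lookup-table idea concrete, here is a minimal FHP-style sketch in C (my own illustration, not code from the dissertation): each cell is a 6-bit mask, one bit per hexagonal direction, collisions come from a 64-entry table, and streaming just moves bits to the neighbouring cells. Real FHP models randomise the two-body collision outcome and often add rest particles; this shows only the integer-only skeleton.

    #include <stdint.h>
    #include <string.h>

    #define W 32                         /* grid width  */
    #define H 32                         /* grid height */

    static uint8_t grid[H][W], next[H][W];
    static uint8_t collide[64];          /* collision lookup table */

    /* direction order: 0=E 1=NE 2=NW 3=W 4=SW 5=SE, so opposite(d) = (d+3)%6   */
    /* neighbour offsets for "odd-r" offset hex coordinates (dx depends on row) */
    static const int dx_even[6] = { 1,  0, -1, -1, -1,  0 };
    static const int dx_odd[6]  = { 1,  1,  0, -1,  0,  1 };
    static const int dy[6]      = { 0, -1, -1,  0,  1,  1 };

    static void build_collision_table(void)
    {
        for (int s = 0; s < 64; s++) {
            collide[s] = (uint8_t)s;                       /* default: no change */
            for (int d = 0; d < 3; d++) {                  /* head-on pair d,d+3 */
                if (s == ((1 << d) | (1 << (d + 3))))
                    collide[s] = (uint8_t)((1 << ((d + 1) % 6)) |
                                           (1 << ((d + 4) % 6)));
            }
            for (int d = 0; d < 2; d++) {                  /* symmetric triple   */
                int tri = (1 << d) | (1 << (d + 2)) | (1 << (d + 4));
                if (s == tri)
                    collide[s] = (uint8_t)(0x3F ^ tri);    /* rotate by 60 deg   */
            }
        }
    }

    static void step(void)
    {
        memset(next, 0, sizeof next);
        for (int y = 0; y < H; y++) {
            const int *dx = (y & 1) ? dx_odd : dx_even;
            for (int x = 0; x < W; x++) {
                uint8_t s = collide[grid[y][x]];           /* collision phase    */
                for (int d = 0; d < 6; d++) {              /* streaming phase    */
                    if (s & (1 << d)) {
                        int nx = (x + dx[d] + W) % W;      /* periodic wrap      */
                        int ny = (y + dy[d] + H) % H;
                        next[ny][nx] |= (uint8_t)(1 << d);
                    }
                }
            }
        }
        memcpy(grid, next, sizeof grid);
    }

    int main(void)
    {
        build_collision_table();
        grid[H / 2][W / 4]     = 1 << 0;   /* one particle moving east */
        grid[H / 2][3 * W / 4] = 1 << 3;   /* one particle moving west */
        for (int t = 0; t < 100; t++)
            step();
        return 0;
    }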
That said, if all you want to do is mimic the appearance of water in your application, the answers to the following questions provide some good suggestions:
"How to implement water ripples?"
"How do I make a water effect view with openGLES on the iPhone?"
I have just released an iPhone fluid simulator that uses a compressible particle in cell method. I have a video here: http://www.youtube.com/watch?v=-CCeeh8EzuA
An incompressible fluid simulator requires many iterations, so I use a compressible simulator. The good thing is if you can make a compressible simulator stable enough, it usually looks incompressible enough.
My app is called GFlow on the app store if you want to see it in action.
I have released two iPhone apps. One app solves the Navier-Stokes equations:
http://itunes.apple.com/us/app/fluid-dynamics/id382274493?mt=8
and the other one uses a compressible particle in cell method:
http://itunes.apple.com/us/app/liquid-dynamics/id417814216?mt=8&ls=1
A description of the methods used can be found here:
http://www.infi.nl/blog/view/id/71/Navier_Stokes_iPhone_vs_iPad
and here:
http://www.infi.nl/blog/view/id/98/Liquid_on_iPhone_and_iPad