How do I change Simulink xPC Target serial comm speed on the fly?

I have an xPC target application that talks to a device over RS-232. I am using the xPC serial block for this.
To talk to this device I first have to start at a default speed, say, 9600 bps; request a change of speed to, say, 57600 bps; and then change the speed on my side to match.
The problem with the xPC block is that it forces you to choose a specific speed before running, and the speed can't be changed at run time. Is there a way/trick/hack to do this?

Here is my take so far. I don't think it can be done using the existing Simulink blocks. I think I am going to have to take the xpcserial C code that comes with MATLAB, extract the part that sets the RS-232 speed, and wrap it in my own S-function.

I agree with you: I don't think it can be done, I'm afraid.
On further reflection, I've realised that in my xPC system I get a compilation warning telling me that the blocks I'm using don't support sample-time changes at runtime; this implies that runtime changes aren't impossible in general, just unsupported by these particular blocks…

Ian,
What I've done before on this kind of thing is just modify the registers behind xPC Target's back. It's ugly, but xPC Target is ugly in the first place.
Try modifying the Line Control Register and setting the divisors directly -- all you need is the serial port I/O address, and you know that.
It's worth a shot; you're going to have to do something like this anyway.
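For concreteness, here is a minimal sketch of that register poke for a 16550-style UART. It's written against Linux's <sys/io.h> port I/O (which needs ioperm() and root) purely for readability; inside an xPC Target S-function you'd use whatever port-I/O helpers the xpcserial sources use. COM1_BASE and the divisor formula assume the standard PC 1.8432 MHz UART clock -- check both against your hardware.

#include <sys/io.h>

#define COM1_BASE 0x3F8
#define LCR 3   /* Line Control Register offset */
#define DLL 0   /* Divisor Latch Low (visible when DLAB=1) */
#define DLM 1   /* Divisor Latch High (visible when DLAB=1) */

static void uart_set_baud(unsigned short base, unsigned long baud)
{
    unsigned short divisor = (unsigned short)(115200UL / baud);
    unsigned char lcr = inb(base + LCR);

    outb(lcr | 0x80, base + LCR);            /* set DLAB to expose the divisor latch */
    outb(divisor & 0xFF, base + DLL);
    outb((divisor >> 8) & 0xFF, base + DLM);
    outb(lcr & 0x7F, base + LCR);            /* clear DLAB again */
}

/* The idea: call e.g. uart_set_baud(COM1_BASE, 57600) from the S-function's
 * output routine once the device has acknowledged the speed-change request. */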

How do I add a missing peripheral register to an STM32 MCU model in Renode?

I am trying out this MCU / SoC emulator, Renode.
I loaded their existing model template under platforms/cpus/stm32l072.repl, which just includes the repl file for stm32l071 and adds one little thing.
When I then load and run a program binary built with STM32CubeIDE and ST's LL library, the code gets stuck in the initial SystemClock_Config() function, where the Flash:ACR register is probed in a loop waiting for an expected change in value, while the Renode Monitor window outputs:
[WARNING] sysbus: Read from an unimplemented register Flash:ACR (0x40022000), returning a value from SVD: 0x0
This seems to be expected; not all existing templates model everything out of the box. I also found that the stm32l071 model is missing some of the USARTs and NVIC channels. I can see how the latter might be added, but there doesn't seem to be a single default model defining that Flash:ACR register that I could use as an example.
How would one add such a missing register for this particular MCU model?
Note 1: For this test, I'm using an STM32 firmware binary which works as intended on actual hardware, e.g. a devboard for this MCU.
Note 2:
The stated advantage of Renode over QEMU (which apparently does not emulate peripherals) is that it also allows sticking together a more complex system out of mocked external devices, e.g. I2C and others (apparently C# modules; I haven't looked into that yet).
They say "use the same binary as on the real system".
That is my reason for trying this out -- it sounds like a lot of potential for implementing systems where the hardware is not yet fully available, and also for automated testing.
So the obvious workaround of commenting out large parts of the init code, to test only hardware-independent code while sidestepping such issues, would defeat the purpose here.
If you just want to provide the ACR register so that the flash check passes your init, use a tag.
You can either provide it via REPL (recommended, like here: https://github.com/renode/renode/blob/master/platforms/cpus/stm32l071.repl#L175) or via RESC.
Assume your software expects to read the value 0xDEADBEEF. In the repl you'd use:
sysbus:
    init:
        Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
In the resc or in the Monitor it would be just:
sysbus Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
If you want more complex logic, you can use a Python peripheral, as described in the docs (https://renode.readthedocs.io/en/latest/basic/using-python.html#python-peripherals-in-a-platform-description):
flash: Python.PythonPeripheral @ sysbus 0x40022000
    size: 0x1000
    initable: false
    filename: "script_with_complex_python_logic.py"
If you really need an advanced implementation, then you need to create a complete C# model.
As you correctly mentioned, we do not want you to modify your binary. But we're OK with mocking some parts we're not interested in for a particular use case, as long as the software passes with these mocks.
Disclaimer: I'm one of the Renode developers.

Can we edit the callback function HAL_UART_TxCpltCallback for our convenience?

I am a newbie to both FreeRTOS and STM32. I want to know how exactly the callback function HAL_UART_TxCpltCallback for HAL_UART_Transmit_IT works.
Can we edit that callback function for our convenience?
Thanks in advance!
You call HAL_UART_Transmit_IT to transmit your data in the "interrupt" (non-blocking) mode. This call returns immediately, likely well before your data gets fully transmitted.
The sequence of events is as follows:
HAL_UART_Transmit_IT stores a pointer to, and the length of, the data buffer you provide. It doesn't perform a copy, so the buffer you pass needs to remain valid until the callback gets called. For example, it cannot be a buffer you'll delete[]/free before the callbacks happen, or a buffer that's local to a function you're going to return from before a callback call.
It then enables the TXE interrupt for this UART, which fires every time the DR (or TDR, depending on the STM32 in use) register is empty and can accept new data.
At this point the interrupt fires immediately. In the IRQ handler (HAL_UART_IRQHandler) a new byte is put in the DR (TDR) register, which then gets transmitted -- this happens in UART_Transmit_IT.
Once this byte gets transmitted, the TXE interrupt triggers again, and this process repeats until the end of the buffer you've provided is reached.
If any error happens, HAL_UART_ErrorCallback gets called from the IRQ handler.
If no errors happened and the end of the buffer has been reached, HAL_UART_TxCpltCallback is called (from HAL_UART_IRQHandler -> UART_EndTransmit_IT).
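For illustration, here is a minimal sketch of how that sequence is typically consumed. The HAL declares HAL_UART_TxCpltCallback as a __weak empty function, so defining your own version overrides it without editing the library; the huart2 handle and the family header are assumptions from a typical CubeMX project.

#include "stm32f4xx_hal.h"   /* pick the header for your STM32 family */

extern UART_HandleTypeDef huart2;    /* assumed: handle set up by the CubeMX init code */
static volatile uint8_t tx_done = 0;

/* Overrides the __weak default; called from HAL_UART_IRQHandler once the
 * last byte of the buffer has been sent. Runs in interrupt context. */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2)
        tx_done = 1;
}

void send_message(void)
{
    /* static: the buffer must stay valid until the completion callback */
    static const uint8_t msg[] = "hello\r\n";

    tx_done = 0;
    HAL_UART_Transmit_IT(&huart2, (uint8_t *)msg, sizeof msg - 1);
    while (!tx_done) {
        /* spin, or do useful work; an RTOS task would block on a semaphore */
    }
}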
On to your second question, whether you can edit this callback "for convenience" - I'd say you can do whatever you want, but you'll have to live with the consequences of modifying code that's essentially a library:
Upgrading HAL to newer versions is going to be a nightmare. You'll have to manually re-apply all the changes you've made to that code and test them again. To some extent this can be automated with some form of version control (git/svn) or even patch files, but if the code you've modified gets changed by ST, those patches will likely no longer apply and you'll have to do it all by hand again. This may mean re-discovering how the implementation changed and redoing all your work from scratch.
Nobody is going to be able to help you, as your library code no longer matches the code that everyone else has. If you introduce new bugs by modifying library code, no one will be able to reproduce them. Even if you provided your modifications, I honestly doubt many here would bother to apply your changes and test them in practice.
If I were to express my personal opinion, it'd be this: if you think there are bugs in the HAL code, fix them locally and report them to ST. Once they're fixed in a future update, fully overwrite your HAL modifications with the updated official release. If you think the HAL code lacks functionality or flexibility for your needs, you have two options here:
Suggest your changes to ST. Keep in mind that HAL aims to serve "general purpose" needs.
Just don't use HAL for this specific peripheral. This "mixed" approach is exactly what I do personally. In some cases the functionality provided by HAL for a given peripheral is "good enough" to serve my needs (one example in my case is SPI, where I fully rely on HAL), while in other cases - such as UART - I use HAL only for initialization and handle the transmission myself. Even when you decide not to use the HAL functions, it can still provide some value - you can, for example, copy their IRQ handler into your code and call your own functions instead. That way you at least skip some parts of the development. A sketch of that mixed approach follows.
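A bare-bones sketch of "HAL for init, own IRQ for transmission" on an F4-family part. Register and handler names are family-specific assumptions, and HAL's init code is assumed to have already configured the pins, baud rate, and NVIC.

#include "stm32f4xx.h"   /* CMSIS device header; register names vary by family */

static const uint8_t *tx_buf;
static volatile uint16_t tx_pos, tx_len;

/* Kick off a transmission: remember the buffer, enable the TXE interrupt. */
void uart2_send(const uint8_t *buf, uint16_t len)
{
    tx_buf = buf;
    tx_pos = 0;
    tx_len = len;
    USART2->CR1 |= USART_CR1_TXEIE;
}

/* We own the vector here instead of routing it through HAL_UART_IRQHandler. */
void USART2_IRQHandler(void)
{
    if (USART2->SR & USART_SR_TXE) {
        if (tx_pos < tx_len)
            USART2->DR = tx_buf[tx_pos++];    /* feed the next byte */
        else
            USART2->CR1 &= ~USART_CR1_TXEIE;  /* done: stop TXE interrupts */
    }
}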

Specman beginner's questions

I am new to Specman.
I have a couple of questions:
I am trying to use the agent methodology. After writing the env, agent, bfm, etc., what is the recommended way to create clock and reset? By writing a tb.v (calling the top Verilog module), or is there a better way?
How do I link the Specman env file to the tb? Or is it maybe enough to link the ports of the different Specman files to the Verilog files with a signal map?
Most important: how do I run the environment with irun?
I was thinking of creating a file listing all the Verilog files, e.g. veri.lst,
and a Specman top that imports all the Specman files, e.g. spec_top.e:
irun -access +wrc veri.lst spec_top.e
Should that be OK?
Should I mention the top-level module in the command?
Should I put the test name in a special way in the command?
Thanks a lot for all the help!!
Cadence recommends driving clocks from inside an HDL testbench (i.e. written in Verilog in your case). This is because every time the simulator yields control to Specman to execute, it wastes processor time on the switch. You want to minimize the number of switches as much as possible.
Linking the env to the TB is done by connecting the Verilog signals of interest to the corresponding Specman ports (using hdl_path()).
W.r.t. running it, there are two things to keep in mind: e code can be executed in compiled or in interpreted mode, and compiled code is faster but can't be debugged. You have to tell irun what you want compiled and what you want interpreted:
irun -f veri.lst \
compiled_top.e \
-snload interpreted_top.e
What you typically compile are files which you don't expect to change (verification components that you buy or reuse from other projects, for example). The rest of your files you'd load interpreted to be able to easily debug.
Adding to Tudor's great answer -
First - yes, connecting the e TB to the DUT is done using hdl_path(), and by binding the ports to external. You usually have one unit designated for the interface, so configuring it would look something like this:
extend signal_map {
    // name of the instance of the verilog module you interface
    keep hdl_path() == "sub_system_a";
    keep bind(sig_clock, external);
    // name of the clock signal
    keep sig_clock.hdl_path == "clk";
};
Please take a look in the IES release at the UVM examples.
They are in specman/uvm/uvm_examples.
For example, check out specman/uvm/uvm_examples/xserial/e/xserial_collector_h.e.
And about the clock -
Connecting a clock in the e TB to the design is very simple. Something like this -
unit synch {
    sig_clock : in simple_port of bit is instance;
    keep bind(sig_clock, external);
    event clock is rise(sig_clock$) @sim;
    // can also define it on fall or change
};
Now the clock event can be used as the sampling event for TCMs and temporals. This is a simple, fast way of using the clock in the TB.
Another way to use the clock is more "acceleration ready". In this methodology, you implement a clock agent in Verilog, and it provides "clock services" to the TB. According to this methodology, the TB will not have any "wait cycles" in it; instead, it calls the clock agent task wait_cycles() and waits for an indication that the required number of clock cycles has passed.
This is a rather new methodology, oriented towards being Acceleration Ready.
It will be demonstrated in the UVM examples in the next IES release, 15.1.
/efrat

Easy clock simulation for testing a project

Consider testing the project you've just implemented. If it uses the system's clock in any way, testing it is an issue. The first solution that comes to mind is simulation: manually manipulate the system's clock to fool all the components of your software into believing the time is ticking the way you want it to. How do you implement such a solution?
My solution is:
Using a virtual environment (e.g. VMware Player), installing a Linux guest (I leave the distribution to you) and manipulating the virtual system's clock to create the illusion of time passing. The only problem is that the clock keeps ticking as your code is running. I myself am looking for a solution where time actually stops and doesn't change unless I tell it to.
Constraints:
You can't confine the list of components used in the project, as they might be anything. For instance, I used MySQL date/time functions and I want to fool them without amending MySQL's code in any way (that's too costly, since you might end up recompiling every single component of your project).
Write a small program that changes the system clock when you want, and by as much as you want. For example, each second, advance the clock an extra 59 seconds. A sketch of such a program is below.
The small program should
either keep track of what it did, so it can undo it,
or use the Network Time Protocol to get the clock back to its old value (take a reference before, remember the difference, ask again afterwards, apply the difference).
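A minimal sketch of such a clock-skewing program on Linux, using the standard gettimeofday(2)/settimeofday(2) calls (it needs root, and the 59-second jump is just the example above):

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        struct timeval tv;
        if (gettimeofday(&tv, NULL) != 0) {
            perror("gettimeofday");
            return 1;
        }
        tv.tv_sec += 59;              /* each real second, jump an extra 59 s */
        if (settimeofday(&tv, NULL) != 0) {
            perror("settimeofday");   /* typically EPERM: not running as root */
            return 1;
        }
        sleep(1);
    }
}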
From your additional explanation in the comments (maybe you could add them to your question?), my thoughts are:
You may already have solved 1 & 2, but they relate to the problem, if not the question.
1) This is a web application, so you only need to concern yourself with your server's clock. Don't trust any clock that is controlled by the client.
2) You only seem to need elapsed time as opposed to absolute time. Therefore why not keep track of the time at which the server request starts and ends, then add the elapsed server time back on to the remaining 'time-bank' (or whatever the constraint is)?
3) As far as testing goes, you don't need to concern yourself with any actual 'clock' at all. As Gilbert Le Blanc suggests, write a wrapper around your system calls that you can then use to return dummy test data. So if you had a method getTime() which returned the current system time, you could wrap it in another method or overload it with a parameter that returns an arbitrary offset.
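To make point 3 concrete, here is a minimal sketch in C of such a wrapper; app_time and the test hooks are hypothetical names:

#include <time.h>

/* All application code calls app_time() instead of time(), so tests can
 * freeze or shift the clock without touching the OS or other components. */
static time_t fake_offset = 0;   /* 0 in production */
static time_t frozen_at   = 0;   /* nonzero while the clock is "stopped" */

time_t app_time(void)
{
    if (frozen_at)
        return frozen_at;            /* time stands still for the test */
    return time(NULL) + fake_offset; /* normal time, optionally shifted */
}

/* test hooks */
void test_freeze_time(time_t t) { frozen_at = t; }
void test_shift_time(time_t d)  { fake_offset = d; frozen_at = 0; }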
Encapsulate your system calls in their own methods, and you can replace the system calls with simulation calls for testing.
Edited to show an example.
I write Java games. Here's a simple Java Font class that puts the font for the game in one place, in case I decide to change the font later.
package xxx.xxx.minesweeper.view;

import java.awt.Font;

public class MinesweeperFont {
    protected static final String FONT_NAME = "Comic Sans MS";

    public static Font getBoldFont(int pointSize) {
        return new Font(FONT_NAME, Font.BOLD, pointSize);
    }
}
Again, using Java, here's a simple method of encapsulating a System call.
public static void printConsole(String text) {
    System.out.println(text);
}
Replace every instance of System.out.println in your code with printConsole, and your system call exists in only one place.
By overriding or modifying the encapsulated methods, you can test them.
Another solution would be to debug and manipulate the values returned by time functions, setting them to anything you want.

Embedded Linux LED-flashing daemon: does it exist?

I've seen embedded boards before that have an LED that flashes like a heartbeat to show that the board is still executing code. I'd like to do something similar on an embedded Linux board I'm working on. Given that it's a fairly trivial bit of code, it seems likely to me that someone has already written a daemon for Linux that does this, but I haven't been able to find any evidence.
Note that OS X Server's heartbeatd and the High-Availability Linux heartbeat daemon are not what I'm looking for-- they both coordinate system availability over IP networks, or something like that.
Assuming what I'm looking for doesn't exist, I'm also interested in advice about how to write a daemon that toggles a pin while minimizing resource usage. At what update rate does cron become a stupid idea?
(I'd also rather not hear gushing about the LED on the sleeping MacBook Pro, if that seems relevant for some reason.)
Thanks.
The LED heartbeat is a built-in kernel function. Assuming you have a device driver for your LED, turning on the heartbeat is done thus:
$ echo "heartbeat" > /sys/class/leds/MyLed/trigger
To see the list of available triggers (MMC activity, heartbeat, etc.):
$ cat /sys/class/leds/MyLed/trigger
See drivers/leds/ledtrig-heartbeat.c and http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/LEDs
The interesting thing about the heartbeat is that the pattern is dynamic. The basic pattern is thump-thump-pause, just like a human heartbeat. But the rate of the heartbeat is controlled by the load average! Light loads beat at about 50 beats per minute. Heavier loads cause faster beating until it maxes out at about 180 bpm.
I wouldn't use cron. It's just not the right tool. A very simple solution is to just run a shell script from your inittab.
Example:
#!/bin/sh
while true
do
    logger "blink!"   # to be replaced with the board-specific LED toggle
    sleep 1
done
Save this to /bin/blink.sh, add the following line to your inittab, and have init reread the tab by running init q.
bl:2345:respawn:/bin/blink.sh
Of course you have to adjust the blink.sh script to your environment. It's highly dependent on the particular board how an LED can be toggled from user space (device driver file, sysfs entry, ...).
If you need something more efficient you might redo the loop in C, but it might not be worth the effort.
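For reference, a minimal C version of the same loop, assuming a sysfs LED at /sys/class/leds/MyLed (the path is board-specific, and the trigger should be set to "none" first so nothing else drives the LED):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int on = 0;
    for (;;) {
        /* reopening each time keeps the sketch simple; keep the FILE* open
         * and rewind/flush it if you care about the extra syscalls */
        FILE *f = fopen("/sys/class/leds/MyLed/brightness", "w");
        if (f) {
            fputc(on ? '1' : '0', f);
            fclose(f);
        }
        on = !on;
        sleep(1);   /* the process sleeps; CPU usage is negligible */
    }
}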
One thing to think about is what you want to signal with a pulsing LED. With the approach outlined above we can only show that the board is still alive (the kernel is running, the process executing blink.sh is scheduled, and blink.sh is doing what it is supposed to do). For some use cases this might be fine, but more often you actually want to signal that the application running on the embedded board is still OK (doesn't hang, hasn't crashed, ...). To implement such functionality you need to integrate the code that toggles the LED into the main loop of your application.