Which file to extend for customized messages in Veins? What is the purpose of AirFrame11p.msg?

I'm new to SUMO, Veins, OMNeT++ and simulations, with a bit of networking background. I have successfully set up the environment and run the Veins 4.6 demo application. From Google I found that, unlike RSUs, car modules are added on the fly.
In the demo example, car nodes send an AirFrame11p message, but I don't see where this message is being populated, because in the TraCIDemo11p.cc methods (onWSA, onWSM, handleSelfMsg, handlePositionUpdate) we deal with WSM message types, and the BaseWaveApplLayer::checkAndTrackPacket method ensures that the message being sent is either a BSM, WSM or WSA.
The file AirFrame11p.msg exists in veins\src\veins\modules\messages, but when searching for references to "AirFrame11p" in the project, matches are found only in AirFrame11p_m.h and AirFrame11p_m.cc. If the demo is not using these files, then for what purpose were they added, and where does the simulation get AirFrame11p from?
I'm trying to simulate a car accident scenario without an RSU, using V2V communication. I have replaced the demo map with my own and generated random routes; now I'm trying to remove the RSU from the demo application and exploring how to send customized messages (including geo location, speed, direction, time, etc.) to nearby vehicles within a specified range, e.g. 100 meters, using Wi-Fi Direct.
If I'm confusing something, please guide me. Thanks.

The short answer: The AirFrame11p message is a lower level message that encapsulates the upper layer messages. Just use the application message type that is appropriate for your application. If you want to replace the physical layer with WiFi direct instead of 11p, and you're starting from scratch, you're probably in for quite a bit of work, since the VEINS PHY implementation is very intricate. If you have an existing implementation of WiFi direct, it may be worth investigating the integration of VEINS' TraCI implementation with that code.
Encapsulation in VEINS
You are correct that the message types at the application layer are more diverse -- these message types (BSM and WSM) are used to encapsulate "application" behavior; it's just not very well visualized in the simulation execution. You can pause the simulation and look (for example) under scheduled events, where the queued packets can be examined visually.
Unlike regular networks, where such messages would be packaged in IP, MAC and PHY encapsulations, VEINS uses the following encapsulation process: BSMs are packaged in MAC frames (80211Pkt), which in turn are encapsulated by AirFrame11p signals. So basically, you should choose the correct message type for your application.
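As a concrete sketch of that last point: for your own fields (speed, heading, etc.) you would typically extend WaveShortMessage in a new .msg file, the same way Veins' own BasicSafetyMessage.msg does, rather than touching AirFrame11p.msg. Everything below is a hypothetical example against Veins 4.6; the file name, field names and the announcement lines are assumptions, so mirror BasicSafetyMessage.msg where your tree differs:

// AccidentWSM.msg -- hypothetical; place it next to the other Veins .msg files
cplusplus {{
#include "veins/modules/messages/WaveShortMessage_m.h"
}}
message WaveShortMessage;   // announce the base type (copy the exact form used in BasicSafetyMessage.msg)

message AccidentWSM extends WaveShortMessage {
    double senderSpeed;     // m/s
    double senderHeading;   // e.g. degrees
    // sender position / timestamp may already be fields of the base class -- check WaveShortMessage.msg
}

The message compiler then generates AccidentWSM_m.h/.cc, and in your application (again a sketch, assuming the Veins 4.6 BaseWaveApplLayer helpers the demo already uses) you populate and send it like any other WSM:

#include "AccidentWSM_m.h"   // generated from the .msg file above

// e.g. inside handleSelfMsg() or when the accident is detected:
AccidentWSM* wsm = new AccidentWSM();
populateWSM(wsm);                              // fill the standard WSM header fields
wsm->setSenderSpeed(mobility->getSpeed());     // custom fields from TraCIMobility
wsm->setSenderHeading(0 /* your heading source */);
sendDown(wsm);                                 // the MAC wraps it; the PHY turns it into an AirFrame11p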
Footnote regarding application behavior:
Technically speaking, these messages would be more correctly placed at the Facilities layer (see e.g. ETSI's spec), since the periodic exchange of messages provides data stored in the facilities layer, which is then used by cITS/VANET applications that run on top. If you need this, look at Artery (as Ventu suggested in the comments).

How do I add a missing peripheral register to a STM32 MCU model in Renode?

I am trying out this MCU / SoC emulator, Renode.
I loaded their existing model template under platforms/cpus/stm32l072.repl, which just includes the repl file for stm32l071 and adds one little thing.
When I then load & run a program binary built with STM32CubeIDE and ST's LL library, the code gets stuck in the initial SystemClock_Config() function, where the Flash:ACR register is polled in a loop to observe an expected change in value, while the Renode Monitor window outputs:
[WARNING] sysbus: Read from an unimplemented register Flash:ACR (0x40022000), returning a value from SVD: 0x0
This seems to be expected; not all existing templates model everything out of the box. I also found that the stm32l071 model is missing some of the USARTs and NVIC channels. I saw how the latter might probably be added, but there does not seem to be a single default model defining that Flash:ACR register that I could use as an example.
How would one add such a missing register for this particular MCU model?
Note 1: For this test, I'm using an STM32 firmware binary which works as intended on actual hardware, e.g. a devboard for this MCU.
Note 2:
The stated advantage of Renode over QEMU, which apparently does not emulate peripherals, is that it also allows sticking together a more complex system out of mocked external devices, e.g. I2C and others (apparently C# modules; I have not looked into that yet).
They say "use the same binary as on the real system".
That is my reason for trying this out - it sounds like a lot of potential for implementing systems where the hardware is not yet fully available, and also for automated testing.
So the obvious workaround, commenting out large parts of the init code to test only some hardware-independent code while sidestepping such issues, would defeat the purpose here.
If you want to just provide the ACR register for the flash to pass your init, use a tag.
You can either provide it via REPL (recommended, like here https://github.com/renode/renode/blob/master/platforms/cpus/stm32l071.repl#L175) or via RESC.
Assuming that your software would like to read the value 0xDEADBEEF, in the repl you'd use:
sysbus:
    init:
        Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
In the resc or in the Monitor it would be just:
sysbus Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
If you want more complex logic, you can use a Python peripheral, as described in the docs (https://renode.readthedocs.io/en/latest/basic/using-python.html#python-peripherals-in-a-platform-description):
flash: Python.PythonPeripheral @ sysbus 0x40022000
    size: 0x1000
    initable: false
    filename: "script_with_complex_python_logic.py"
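For reference, a minimal sketch of what such a script could contain, following the pattern shown in the linked docs (the script is executed on every access to the registered range, with the request object describing it; the isInit branch only runs if you set initable: true):

# script_with_complex_python_logic.py -- sketch only
if request.isInit:              # one-time setup (only with initable: true)
    acr = 0
elif request.isRead:            # software reads the tagged address
    request.value = acr
elif request.isWrite:           # software writes: remember the value
    acr = request.value

That already gives "read back what was written" behaviour, which is usually enough for a FLASH ACR latency poll like the one in SystemClock_Config().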
If you really need an advanced implementation, then you need to create a complete C# model.
As you correctly mentioned, we do not want you to modify your binary. But we're ok with mocking some parts we're not interested in for a particular use case if the software passes with these mocks.
Disclaimer: I'm one of the Renode developers.

Unable to get Rocket Chip waveforms for GTKWave

I want to run a program on a Rocket core and observe all the signals in the corresponding registers in GTKWave (e.g. PC, register file, ALU registers and wires, etc.).
However, the only thing I get (both in Chipyard and Rocket Chip) is some strange list of wires in GTKWave, which I cannot relate to the core/tile.
I followed instructions at https://github.com/chipsalliance/rocket-chip for installations.
I'm able to run:
make CONFIG=freechips.rocketchip.system.TinyConfig (I did DefaultConfig as well)
make verilog
etc.
Also I generate *.vcd files with
make run-debug CONFIG=freechips.rocketchip.system.TinyConfig
or for a specific hello-world file.
For each file there is a corresponding *.out file with all executed instructions, so I naively think that I can open any such generated *.vcd and see all the register states for all instructions.
However I get only strange wires
Elsewhere people demonstrate reasonable signals like this:
In the last image I observe a TestBench group. I have made test benches for ModelSim with pure Verilog written in Quartus; however, for the Rocket Chip framework, and Verilator in particular, I think I should be able to open any *.vcd file.
It looks to me like the same approach is used in this thesis (page 26), with reasonable waves on page 27.
Can somebody give me a hint as to what is wrong with my approach?
As Jerry said on https://github.com/chipsalliance/rocket-chip/issues/2955,
you can dig into the hierarchy to find the Rocket core and the Rocket Tile. The wires you're seeing belong to the TestHarness. The DUT is the design under test beneath that, and it contains the Rocket core.
The core should be under a path like:
ldut -> tile_prci_domain -> tile_reset_domain -> tile -> core

Pd-GEM - using multiple, separate particle streams

I'm working on a live music visualisation project, where I am using a particle stream to visualise each channel of audio (vocals, guitar, percussion, bass) which are each coming from a looper.
I have the visualisation aspects working - I do envelope tracking in a separate pd instance, send the envelope details via udp to my gem instance, which then uses that to vary the size and colour of multiple particle streams.
The problem I have is that I am trying to set the origin point of each stream, and they are either interacting or they are controlling the origin of a different stream. The part_velocity also seems to be having a similar issue.
Each particle system has its own gemhead (which I init as, say, [gemhead 20] so each one is unique), but changing the XYZ for its [part_source 1 point] object seems to affect a stream that's in a different gemhead chain.
I have also moved it off into an abstraction, where I name its head [gemhead $0], and I am having the same issue.
This unanswered thread from years ago shows two other people having the same problem, but no answers.
Here's a portion of my main patch which calls the abstraction:
And this is the abstraction:
Am I missing something simple here, or is there perhaps a bug in that one of the part_xxx objects is not checking which gemhead list it's in? Note that there are other gemheads in the main patch, some have an argument, some don't, but they're doing other stuff.
Oh yeah, and input is welcome on the somewhat dumb-looking way that I'm preserving state here; I've NO idea what the usual patterns are, and cannot for the life of me find any good advice on it!

For paths, the "getNumberOfTransporters" function throws an exception

I created a very simple network with some nodes and a few paths. A limited number of agents (people) were then supposed to just get from A to B and back in a loop. That worked so far.
Next, I wanted to limit the number of agents that can be on a specific path at the same time, using the "limit number of transporters" option in the general section of a path. This did not work. To find out how many transporters are on the path anyway, I tried calling (and displaying the output of) various functions like "getNumberOfTransporters()", "getTransporters()", etc. (called as "pathname.functionname()"), each resulting in an exception, which usually looked like this:
Exception during discrete event execution:
NullPointerException
java.lang.NullPointerException
at com.anylogic.engine.markup.Path.getNumberOfTransporters(Unknown Source)
at movetest.Main.executeActionOf(Main.java:141)
at com.anylogic.engine.EventTimeout.execute(Unknown Source)
at com.anylogic.engine.Engine.c(Unknown Source)
at com.anylogic.engine.Engine.gc(Unknown Source)
at com.anylogic.engine.Engine.a(Unknown Source)
at com.anylogic.engine.Engine$i.run(Unknown Source)
The function "getMaxNumberOfTransporters()" did work though, which simply outputted the number that was specified in the "limit number of transporters" option field.
So the question is: why is this exception being thrown? Am I doing something wrong, or is there a bug in AnyLogic regarding these transporter-related functions/functionality?
By the way, I'm using AnyLogic 8 Personal Learning Edition 8.3.2 on a 64-bit Windows 10 computer.
Since AnyLogic Paths provide these methods (getNumberOfTransporters, etc.) this is definitely a bug; these methods should not be throwing internal exceptions under any circumstances.
A quick test confirms that these methods throw this exception if there is no transporter fleet in your model (so exceptions being thrown is a little more forgivable). The exceptions aren't thrown if you have a fleet with a home location set, even if that location is in a different network to the path you are checking; i.e., even if it is never possible for any transporters to be on that path. (If you don't set a home location for the fleet you get a different exception relating to that.)
So it looks like you are trying to use normal moving resource agents (i.e., from the Process Modeling library) as your 'transporters' instead of the Material Handling library transporter fleet.
If you want to restrict 'transported' movement around your network, you have two options which are conceptually different:
Use Process Modeling resource pools (as you are doing) and control the movement inside the Process Modeling blocks via use of things like RestrictedAreaStart and RestrictedAreaEnd blocks (i.e., you break the movement down into the relevant segments and control flow through the blocks that control the relevant portions). See the Job Shop example model for a good (and complex) example of this. Note that, conceptually, space markup only gives you distances for use in the model (not any model behaviour). This is the norm: space markup is only there to visualise your model and provide distances. (It also controls what movements are valid since there needs to be a route through the network but it's normally a design error if a required movement is not permitted, so this isn't really model behaviour.)
Use a TransporterFleet instead. Transporters can interoperate with normal Process Modeling blocks (see screenshot below) and are designed precisely to support this style of 'control their flow via restrictions on the numbers of transporters on paths' (plus they have built-in functionality for load/unload times, behaviour after dropping off, etc.). Notice that, conceptually, with the Material Handling library the space markup defines model behaviour (rather than just giving you distances and visualisation). This is a major conceptual departure in the Material Handling library. (Similarly, the conveyor networks you define using Material Handling space markup also define model behaviour; e.g., the Station elements therein are similar to Service blocks in the Process Modeling library.)
P.S. I meant to add that, unless you use a transporter fleet, there is no direct way of getting which agents are on which paths. The closest is that networks support the getNearestPath function (see the API reference for Network in the help), one flavour of which will give you the nearest Path to an agent. (So, by looping through all resource agents and checking this for each of them, you could obliquely determine how many are 'on' each path, though you have to be careful because this only gives the nearest Path.) But this is irrelevant for what you want to achieve.
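For completeness, the loop described above might look roughly like this in AnyLogic Java. This is a sketch only: people, network and path are hypothetical names for your agent population, Network markup element and the Path of interest, and the exact getNearestPath() overload should be checked in the Network API reference:

int nearestCount = 0;
for (Person p : people) {                       // loop over your (resource) agents
    if (network.getNearestPath(p) == path) {    // nearest Path to this agent
        nearestCount++;                         // 'nearest to' is not the same as 'on'
    }
}
traceln("Agents whose nearest path is the path of interest: " + nearestCount);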

Matlab and FTDI

I am trying to send/retrieve data to/from an FPGA using Matlab. I connected the FPGA using a virtual COM port. Now, how do I send data from Matlab to the FPGA, or read data from the FPGA?
An FTDI 2232H is on the FPGA board as well. I connected external LEDs and switches to the I/O ports of the FPGA.
I am new to this field, so I want some guidelines to start communication between Matlab and the FPGA.
I tried the following code:
s1 = serial('COM9')
fopen(s1)
Is this the right way to communicate? Kindly guide. Thanks.
FPGAs are configured using a hardware description language (HDL) such as Verilog or VHDL. These languages let you specify the switch configuration within the FPGA, which in turn lets you construct your custom digital logic and processing system.
The HDL Coder toolbox in Matlab lets you design and prototype your custom logic using higher-level functions, which are then translated into HDL and can be used to directly program your chip. This tutorial describes the process in detail.
If you already have a design implemented on your FPGA and want to communicate with that implementation, you would use Matlab's serial port communication functions. The exact protocol will depend on the interface you have implemented.
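As a rough sketch of that serial-port approach (using the same pre-R2019b serial interface as your snippet; the port name, baud rate, framing and example bytes are all assumptions you must match to the UART you implement on the FPGA):

% Open the virtual COM port presented by the FTDI 2232H
s1 = serial('COM9', 'BaudRate', 9600, 'DataBits', 8, 'Parity', 'none', 'StopBits', 1);
s1.Timeout = 5;                  % seconds to wait for a reply
fopen(s1);
fwrite(s1, uint8([85 170]));     % send two example command bytes (0x55, 0xAA)
reply = fread(s1, 2);            % read two bytes back from the FPGA
fclose(s1);
delete(s1); clear s1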
Some intermediate debugging steps I find helpful:
Verify that you can send serial port data from your computer. In Windows XP, you can do this easily with HyperTerminal, hooking up a scope to the output pins of your serial cable. Set up a trigger to capture the event. For Windows 7 and newer, you'll need to download a HyperTerminal client.
Repeat this same process with Matlab. Using a scope, verify that you see the serial port signal when sent from Matlab, and that the output matches the results from step 1. Again, set up a scope trigger to capture the event.
Now connect the serial cable directly to the FPGA board. Modify your HDL to include a latch on the serial input that displays the received value on the LEDs. Verify that your board initializes to the correct LED state, and that the LED state changes when you send the serial message.
Lastly, verify that you are interpreting the message correctly on the FPGA side. This includes making sure that the bit-ordering is correct, etc. Again, the LED outputs can be very helpful for this part.
The key here is to take small, incremental steps, physically verifying that things are working each step of the way.