Get the reception time of a packet in ns-3

Good evening,
I need to know the reception time of a packet. I have only found ways to print the time from callback methods, but I cannot use it as a value in my script. Is there any solution to get this information?

If by "reception time" you mean the time in your simulation that a packet is received, then you can use Simulation::Now().GetSeconds() function to get the current simulation time that your packet is received.
In order to do so, try to connect a callback function to Rx trace for your sink node and print the simulation time there.
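A minimal sketch of that approach, assuming your script installs a PacketSink application (the container name sinkApps is a placeholder from a typical script); the reception time is stored in a variable instead of only being printed:

#include "ns3/simulator.h"
#include "ns3/packet-sink.h"

using namespace ns3;

static double g_lastRxTime = 0.0;   // reception time, usable later in the script

// Signature matches the PacketSink "Rx" trace source.
static void RxTrace (Ptr<const Packet> packet, const Address &from)
{
  g_lastRxTime = Simulator::Now ().GetSeconds ();
}

// ... after installing the sink application in your script:
Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApps.Get (0));
sink->TraceConnectWithoutContext ("Rx", MakeCallback (&RxTrace));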

Related

STM32 I2C interrupt method requires a blocking while loop?

I have a Nucleo-F446RE, and I'm trying to get I2C working with an IMU I have (LSM6DS33). I am using STM32CubeMX and checked out all the example code for my board related to I2C. Specifically I'll be talking about their 'I2C_TwoBoards_ComIT' example, but all their examples which use the interrupt method have this same quirk. Here is a snippet of their code from main.c:
/* The board sends the message and expects to receive it back */
do
{
  /*##-2- Start the transmission process #####################################*/
  /* While the I2C in reception process, user can transmit data through
     "aTxBuffer" buffer */
  if (HAL_I2C_Master_Transmit_IT(&I2cHandle, (uint16_t)I2C_ADDRESS, (uint8_t*)aTxBuffer, TXBUFFERSIZE) != HAL_OK)
  {
    /* Error_Handler() function is called in case of error. */
    Error_Handler();
  }

  /*##-3- Wait for the end of the transfer ###################################*/
  /* Before starting a new communication transfer, you need to check the current
     state of the peripheral; if it's busy you need to wait for the end of current
     transfer before starting a new one.
     For simplicity reasons, this example is just waiting till the end of the
     transfer, but application may perform other tasks while transfer operation
     is ongoing. */
  while (HAL_I2C_GetState(&I2cHandle) != HAL_I2C_STATE_READY)
  {
  }

  /* When Acknowledge failure occurs (Slave don't acknowledge its address)
     Master restarts communication */
}
while (HAL_I2C_GetError(&I2cHandle) == HAL_I2C_ERROR_AF);
Under comment ##-3- they explain that unless we wait for the I2C state to be ready again, after sending a command, the next command will overwrite the previous one, so they use a while loop which waits for the I2C state to be 'ready' before continuing.
Isn't this a very inefficient way to use an interrupt, and no different from using the standard polling method? Both block the main code, so what's the purpose of the interrupt?
In my personal example, I want to collect the accelerometer/gyroscope data at the 1.66 kHz rate the IMU is capable of. I use a 2 kHz timer to send an I2C command to read the acc/gyr data-ready register, and if the data is ready for either sensor I read their 6 bytes to get the x/y/z information. The polling method is too slow, as blocking the code at a rate of 2 kHz is inefficient, but the interrupt method doesn't seem to be any faster, since I still need to hang the system in the aforementioned while loop to check whether I2C is ready for another command. What am I missing here?
Is this (the example you provided) an efficient way of doing things? No. Can the blocking part be avoided? Yes. It's only a small example, a proof of concept, so there is some blocking in there. You should look deeper at why it is there and how you can implement what it does without blocking.
The point of that blocking part is to avoid starting an I2C communication while another one is still in progress. The problem is that even though your line of code that starts the I2C transfer has already executed, the data is still being physically shifted out on the bus, simply because your MCU is much faster than I2C. You need to wait until the I2C bus is idle and available before starting the next transmission.
How do you achieve that with interrupts and not waste cycles and processing time? Given that in your case you can easily estimate the amount of data in each transmission, there is no problem estimating how long every transmission will take at your I2C speed. Since you are already, quite sensibly, using a timer to schedule regular transmissions, you can set the timer period so that by the next timer interrupt, which starts the next transfer, the previous communication has already ended.
For example, if you set the timer to 1 Hz to start a transmission, you can obviously be sure that by the next interrupt all the communication has finished. You don't need to poll anything at all.
I don't see much point in I2C-polling the IC at 2 kHz if it produces data at 1.66 kHz. You will have uneven time periods between samples: some samples will be read almost immediately, others only after a delay, and some transactions will find no data ready at all. It would be better to poll it at something like 1.5-1.6 kHz and simply expect data to always be there, provided of course that the communication fits into a 1.5 kHz period, which requires some napkin math.
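As a rough illustration of that napkin math: assuming 400 kHz Fast-mode I2C, one byte on the wire takes about 22.5 µs (8 bits plus ACK), so a one-byte status read plus a six-byte data read amounts to roughly 0.3 ms, which fits comfortably inside a 625 µs (1.6 kHz) period. Below is a minimal sketch of a fully non-blocking flow driven by the timer interrupt and the HAL completion callbacks. The handle names (hi2c1, TIM2), the IMU address and the register values are placeholders, so check them against your own code and the LSM6DS33 datasheet.

#include "stm32f4xx_hal.h"

extern I2C_HandleTypeDef hi2c1;     /* placeholder: your I2C handle */

#define IMU_ADDR       (0x6B << 1)  /* placeholder: LSM6DS33 address, shifted for HAL */
#define STATUS_REG     0x1E         /* placeholder: data-ready status register        */
#define OUT_REG_START  0x22         /* placeholder: first output data register        */

static uint8_t status;
static uint8_t imu_data[6];

/* Timer fires at the sample rate; start a new transfer only if the bus is idle. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  if (htim->Instance == TIM2 && HAL_I2C_GetState(&hi2c1) == HAL_I2C_STATE_READY)
  {
    /* Non-blocking read of the status register; returns immediately. */
    HAL_I2C_Mem_Read_IT(&hi2c1, IMU_ADDR, STATUS_REG, I2C_MEMADD_SIZE_8BIT, &status, 1);
  }
}

/* Called from the I2C ISR when a memory read completes; no polling loop anywhere. */
void HAL_I2C_MemRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
  static uint8_t reading_data = 0;
  if (reading_data)
  {
    reading_data = 0;   /* a fresh sample is now in imu_data */
    /* hand imu_data to the main loop here, e.g. set a flag or push to a queue */
  }
  else if (status & 0x03)   /* accelerometer or gyro data ready */
  {
    reading_data = 1;
    HAL_I2C_Mem_Read_IT(hi2c, IMU_ADDR, OUT_REG_START, I2C_MEMADD_SIZE_8BIT, imu_data, sizeof imu_data);
  }
  /* else: nothing ready this cycle, simply wait for the next timer tick */
}

With this structure the CPU only executes the short interrupt handlers; all the time between transfers is free for other work.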

How can I be sure that all my data is sent and received on the CAN-bus?

I am using CAN bus on the STM32F3 with a transceiver. I send and receive data over a 1 Mb/s CAN bus line populated with 2 devices.
I analysed the line with an oscilloscope and detected no problem. But how can I make sure that every piece of data sent is actually received?
If you have observed on the oscilloscope that the messages are being transmitted, then to be sure that all your data actually gets through you should handle the bus errors. If no error is reported, everything is being transmitted.
For more information on CAN Bus Error Handling, see here
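For example, with the STM32 HAL you can catch bus errors in the error callback. A minimal sketch, assuming hcan is your CAN handle and error interrupts/notifications are enabled:

volatile uint32_t can_tx_errors = 0;

void HAL_CAN_ErrorCallback(CAN_HandleTypeDef *hcan)
{
  uint32_t err = HAL_CAN_GetError(hcan);
  if (err & HAL_CAN_ERROR_ACK)   /* no node acknowledged the frame */
  {
    can_tx_errors++;
  }
  /* inspect the other HAL_CAN_ERROR_* bits here as needed */
}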
You may also embed a counter (1, 2, 3, ...) in each message and check on the other side that every number arrives.
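A minimal sketch of that counter idea; can_send_frame() is a hypothetical wrapper standing in for whatever driver call you actually use to transmit:

#include <stdint.h>
#include <stdbool.h>

extern void can_send_frame(uint16_t id, const uint8_t *data, uint8_t len);   /* hypothetical driver call */

/* Sender: put a rolling counter in the first payload byte of every frame. */
void send_with_counter(uint8_t *payload, uint8_t len)
{
  static uint8_t seq = 0;
  payload[0] = seq++;                   /* wraps around at 255, which is fine */
  can_send_frame(0x123, payload, len);
}

/* Receiver: a gap in the counter means a frame was lost on the way. */
bool check_counter(const uint8_t *payload)
{
  static uint8_t expected = 0;
  bool ok = (payload[0] == expected);
  expected = (uint8_t)(payload[0] + 1);   /* resynchronise on what actually arrived */
  return ok;
}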

Socket data read wait time

I have an application where I am listening on multiple sockets using select. If I start processing a request that came in from socket A and in the meantime another request arrives on socket B, then I want to know how long the socket B request had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread to go back to select, monitor again, and instantly start processing the request from socket B.
Is there a C API available to get me this metric, or is this just not possible to get?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is stored together with the data. Moreover, the situation is even more complex because a stream-oriented socket may receive several data segments before the select() call returns, and then it is not clear which interval should be measured.
If the application's data processing takes longer than packet processing in the kernel, you can do a reasonable measurement in the following way:
Print the current time and some unique data ID (based on the application protocol) when select() wakes up due to data availability on socket B (a sketch of this step is given below).
Log every packet received for socket B. You can use a network traffic capture tool like Wireshark or tcpdump, or you can configure an iptables firewall rule (if it is running on Linux) with the target -j LOG.
Write a simple script/program that correlates the captured packets with the application log and subtracts the packet arrival time from the start-of-processing time.
Of course, the idea above ignores kernel processing time. If you really need an exact time, you have to introduce a new thread into your application.
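A minimal sketch of the first step (sock_a and sock_b are assumed to be the descriptors your application already monitors); the wall-clock timestamp can later be correlated with a tcpdump or Wireshark capture:

#include <stdio.h>
#include <time.h>
#include <sys/select.h>

void serve(int sock_a, int sock_b)
{
  for (;;)
  {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock_a, &rfds);
    FD_SET(sock_b, &rfds);
    int maxfd = (sock_a > sock_b ? sock_a : sock_b) + 1;

    if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
      break;

    if (FD_ISSET(sock_b, &rfds))
    {
      struct timespec ts;
      clock_gettime(CLOCK_REALTIME, &ts);   /* wall clock, comparable to capture timestamps */
      fprintf(stderr, "socket B readable at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
      /* read and process the socket B request here, logging its unique ID */
    }
    /* handle socket A similarly */
  }
}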

Continuous stream of data via socket gets progressively more delayed

I am working on an application which, through a Java program, links two different robot simulation environments. One simulation environment (let's call it A) sends the current state of the robot to the Java application, which does some calculations and then sends data about this current state, as well as some other information, on to the other simulation environment (let's call it B). Simulation B then updates the state of the robot to match Simulation A's version.
The problem is that as the program continues to run, simulation B begins to lag behind what simulation A is doing. This lag increases continuously, so that after a minute or so simulation B is several seconds behind.
I am using TCP sockets to send data between these environments and the Java program. From background reading on socket programming, I found out it is bad practice to continuously open and close sockets rapidly, so what I am doing currently is just keeping both sockets open. I have a loop running which grabs data from Sim A, does some calculations, sends the position data to Sim B, has the thread wait for 100 ms, and then repeats. To be clear, the position data sent to B is unaltered from what is received from A.
Upon researching the lag issue, someone suggested to me that for streams of data it is actually a good idea to open and close sockets, because if you keep the socket open, if one simulation takes a longer time to process things than the other, you end up with the position data stacking up in the buffer and being read sequentially, instead of reading the most recent data. Is this true? Would rewriting my code to open and close sockets every 100ms potentially get rid of the delay? Or is this not how sockets actually work?
Edit for clarification: It is more critical that the simulations stay in sync than that all position data is sent, in other words it is acceptable to not pass along all data points for the sake of staying in sync.
Besides keeping the socket open causing problems, does anyone have any ideas of what might be causing the lag issue?
Thanks in advance for any insight/suggestions/hints!
You are correct about using a single connection. Data can indeed back up, but using multiple connections doesn't change that.
The basic question here is whether the Java program can calculate as fast as the robot can send data. If it can't, it will get behind. You can do various things to the networking to speed it up, but if the computations can't keep up, those changes are futile. So you need to investigate your timings.

NSStream Response Time

My current requirement is to send some command to a set of IP addresses on some particular port and, depending on the response, detect devices (say, for example, detecting a Wi-Fi printer on the network by pinging it on a particular port with a status command).
For this I am creating NSStreams, and everything is working peacefully: reading and writing data with NSInputStream/NSOutputStream works.
The only problem is that it takes too long for the response to come back when there is an error and it doesn't find the 'intended' device.
I am assuming the input stream must be waiting for the response and times out after a certain interval if it doesn't get anything. So is there any way to control that timeout interval, so that this scanning process can be done in a few minutes rather than an hour?