I'm trying to code in OMNeT++ an app that gets the queue length from the node where it is invoked and sends it to another node.
The plan is to modify the UdpBasicApp.cc file invoked in a router and make it get the length of the queue of the DropTailQueue module.
Searching online I found that this is the right method...
cModule *mod = getModuleByPath("router3.eth[*].mac.queue");
queueing::PacketQueue *queue = check_and_cast<queueing::PacketQueue*>(mod);
int c = queue->getNumPackets();
EV << c;
...since the DropTailQueue extends the PacketQueue module.
I put a print at the end to see if there was something wrong.
When I run the simulation, using the modified UdpBasicApp module, c is always 0.
I seriously doubt that the queue is really always empty, but I don't know how to verify that.
If it's an error, why is it always 0?
My guess is that you are querying a different queue than you assume. You should not use patterns (i.e. *) in your module path, because they may match multiple eth modules, and it is unspecified which one will be returned.
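For example, a minimal sketch that names one interface explicitly (eth[0] here is just a placeholder; use the index of the interface that actually carries your traffic, or repeat the lookup for every index):

cModule *mod = getModuleByPath("router3.eth[0].mac.queue");
auto *queue = check_and_cast<queueing::PacketQueue *>(mod);
EV << "router3.eth[0] queue length: " << queue->getNumPackets() << endl;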
Hi, and thanks for the help. I have been running a program that has many functions with for loops that iterate over 10,000 times. I have been using "#pragma omp_set_num_threads();" to use all the CPUs of my CentOS machine. This seems to work fine for every function in my program except one. The function it doesn't work on looks like this:
void move_check() // checks whether each "molecule" is still inside the space under consideration
{
    for (int i = 0; i < NM; ++i) // NM = number of molecules
    {
        // molecule is a class I have created; molecule_obj holds objects of that class.
        // check() returns the status of the molecule.
        int bounds = molecule_obj.at(i).check(dom_x, dom_y, dom_z);
        if (bounds == 1)
        {
            molecule_obj.erase(molecule_obj.begin() + i);
            i -= 1;
            NM -= 1;
        }
    }
}
Can I use a pragma for this? If not, what other alternative do I have?
As the above function seems to be the one consuming the most time, I would like to use all the CPUs to execute it. How do I go about doing that?
Thanks a lot.
Yes, in principle you can use an OpenMP pragma, just add
#pragma omp parallel for
before the for loop.
You have to make sure that it is safe to use the methods of your molecule_obj in parallel. I cannot know whether they are, but I assume the following:
molecule_obj.at(i).check(dom_x,dom_y,dom_z); really works on one specific molecule and does not depend on any others. That is fine, then.
But the erase call most likely is not safe, depending on how you store the entries. If you use something like std::vector or std::list, erasing an element during the loop leads to invalid iterators, wrong trip counts, etc.
I suggest that you replace the removal of the entry in the loop by just marking it for removal. You can add a special flag to each molecule for that.
Then after the completion of the parallel loop you can go over the list once in serial and actually remove the marked entries.
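A minimal sketch of that mark-then-sweep idea, reusing the names from your snippet and assuming molecule_obj is a std::vector of a class I'll call Molecule with an added remove_me flag (both the class name and the flag are placeholders for your own types):

#include <algorithm>
#include <vector>

void move_check()
{
    // Phase 1: mark out-of-bounds molecules in parallel. Each iteration
    // only touches its own element, so nothing is erased here.
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(molecule_obj.size()); ++i) {
        if (molecule_obj[i].check(dom_x, dom_y, dom_z) == 1)
            molecule_obj[i].remove_me = true;
    }

    // Phase 2: remove the marked entries serially (erase-remove idiom).
    molecule_obj.erase(
        std::remove_if(molecule_obj.begin(), molecule_obj.end(),
                       [](const Molecule &m) { return m.remove_me; }),
        molecule_obj.end());

    NM = static_cast<int>(molecule_obj.size());
}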
Just another suggestion: make a clearer distinction between one molecule and the list of molecules. It would improve the understandability of your code.
I have a circular dependency problem with Perl modules: say package X uses Y and wants to hold a static reference to an Y instance, and package Y uses X and wants to hold a static reference to an X instance.
Simply saying our $x_instance = new X fails with Can't locate object method "new" in whichever module was not loaded first.
I figured something like
our $x_instance;
INIT { $x_instance = new X }
would make sense, so I read everything about the specially named blocks.
Well, this works in a simple test I made, but in my real application it systematically shows Too late to run INIT block. The same happens with CHECK blocks.
The only explanation I found was from Perl Monks and I'm afraid I couldn't make much sense of it.
Does someone have an explanation of how Perl goes about executing CHECK and INIT blocks that goes beyond what is in perlmod, and that would help me understand why my blocks are sometimes executed and sometimes not?
By the way, I just want to understand this—I am not specifically asking a solution to my original circular dependency problem, as I have a workaround that I am reasonably happy about:
our $x_instance;

sub get_x_instance {
    $x_instance //= new X;
    return $x_instance;
}
INIT blocks are executed immediately before the run time phase is started in the order the compiler encountered them during the compilation phase.
If you use require (or do) at run time to compile a Perl file that includes an INIT block, then the block won't be executed.
It is rare that there is a real reason to use require in preference to use.
Despite your confidence, there must be a place where you are attempting to load a module at run time that contains an INIT block. I suggest you install and use Carp::Always so that the Too late to run INIT block message is accompanied by a stack backtrace that will help you find the erroneous call.
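As an illustration, here is a minimal sketch that reproduces the warning (the file and package names are invented):

# Late.pm
package Late;
INIT { print "Late's INIT block ran\n" }
1;

# main.pl
use strict;
use warnings;
require Late;   # compiled at run time: warns "Too late to run INIT block"
                # and the block is skipped
print "main.pl continues\n";

Changing require Late; to use Late; moves the compilation of Late.pm into the compile phase, so the INIT block is registered in time and runs just before the run time phase starts.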
I am currently working on a tool written in M-Script that executes a set of checks on a given Simulink model. This tool does not compile/execute the model; I'm using find_system and get_param to retrieve all the information I need in order to run the routines of my tool.
I've reached a point where I need to determine whether a certain block has direct-feedthrough or not. I am not entirely sure how to do this. Two possible solutions come to mind:
A property might store this information and might be accessible via get_param. After investigating this, I could not find any such property.
Some block types have direct feedthrough (Sum, Logic, ...), some others do not (Unit Delay, Integrator), so I could use the block type to determine whether a block has direct feedthrough or not. Since I'm not an experienced Simulink modeller, I'm not sure whether it's possible to tell if a block has direct feedthrough by looking only at its block type. This would also require a lookup table covering all Simulink block types, which is an impossible task, since additional block types can be added to Simulink via third-party modules.
Any help or pointers to possible solutions are greatly appreciated.
After some further research...
There is an "official solution" from MATLAB: just download the linked m-file.
It shows that my idea was not that bad ;)
And for the record, my idea:
I think it's doable quite easily. I cannot show you any code yet, but I'll see what I can do. My idea is the following:
programmatically create a new model
add a Constant source block and a Terminator
add the block whose direct-feedthrough behaviour you want to determine in the middle
connect the blocks with add_line
run the simulation and log the states, which will give you the xout variable in the workspace
if there is direct feedthrough the vector is empty, otherwise not
you probably need to include some try/catch error handling for special cases
This way you can analyse a block for direct feedthrough by just migrating it to another model, without compiling your actual main model. It's not the fastest solution, but I cannot imagine that performance matters that much for you.
Here we go, this script works fine for my examples:
function feedthrough = hasfeedthrough( input )
% get block path
blockinfo = find_system('simulink','Name',input);
blockpath = blockinfo{1};
% create new system
new_system('feed');
open_system('feed');
% add test model elements
src = add_block('simulink/Sources/Constant','feed/Constant');
src_ports = get_param(src,'PortHandles');
src_out = src_ports.Outport;
dest = add_block('simulink/Sinks/To Workspace','feed/simout');
dest_ports = get_param(dest,'PortHandles');
dest_in = dest_ports.Inport;
test = add_block(blockpath,'feed/test');
test_ports = get_param(test,'PortHandles');
test_in = test_ports.Inport;
test_out = test_ports.Outport;
add_line('feed',src_out,test_in);
add_line('feed',test_out,dest_in);
% setup simulation
set_param('feed','StopTime','0.1');
set_param('feed','Solver','ode3');
set_param('feed','FixedStep','0.05');
set_param('feed','SaveState','on');
% run simulation and get states
sim('feed');
% if condition for blocks like state space
feedthrough = isempty(xout);
if ~feedthrough
    a = simout.data;
    if ~any(a == xout)
        feedthrough = ~feedthrough;
    end
end
% discard the temporary system
close_system('feed',0);
end
When you enter, for example, 'Gain', it will return 1; when you enter 'Integrator', it will return 0.
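For example, a call might look like this (the simulink library must be loaded so that find_system can locate the block by name):

load_system('simulink');
hasfeedthrough('Gain')        % returns 1
hasfeedthrough('Integrator')  % returns 0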
Execution time on my ancient machine is 1.3sec, not that bad.
Things you probably still have to do:
add another parameter, to define whether the block is continuous or discrete time and set the solver accordingly.
test some "extraordinary" blocks; maybe it's not working for everything. Also, I haven't implemented anything that could deal with logic signals, but since the Constant is 1 it should work as well.
Just try out everything, at least it's a good base for you to work on.
A famous exception is the State-Space block, which can have direct feedthrough AND states. But there are not that many standard blocks with this behaviour. If you also have to deal with third-party blocks you could get into some trouble, I have to admit that.
A possible solution for the State-Space case: if one compares xout with yout, one finds another indicator for direct feedthrough: if there is direct feedthrough, the vectors are not equal; if there is none, they are equal. This is just an example, but I can imagine that it is possible to find more general ways to test things like that.
Besides the added simout block above, one needs the condition:
% if condition for blocks like state space
feedthrough = isempty(xout);
if ~feedthrough
    a = simout.data;
    if ~any(a == xout)
        feedthrough = ~feedthrough;
    end
end
From the documentation:
Tip
To determine if a block has direct feedthrough:
Double-click the block. The block parameter dialog box opens.
Click the Help button in the block parameter dialog box. The block reference page opens.
Scroll to the Characteristics section of the block reference page, which lists whether or not that block has direct feedthrough.
I couldn't find a programmatic equivalent though...
Based on a similar approach to the one by @thewaywewalk, you could set up a temporary model that wires the block under test into an algebraic loop.
(Note that you would replace the State-Space block with any block that you want to test.)
Then set the model diagnostics to error out when an algebraic loop is found (the Algebraic loop diagnostic, i.e. the AlgebraicLoopMsg model parameter).
If an error occurs when the model is compiled with
>> modelname([],[],[],'compile');
(and you should check that it is the algebraic loop error that has occurred), then the block has direct feedthrough.
If no error occurs, then the block does not have direct feedthrough.
At this point you would need to terminate the model using
>> modelname([],[],[],'term');
If the block has multiple inports or outports, then you'll need to iterate over all combinations of them.
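For completeness, a rough sketch of how that could look in code. It assumes a single-inport/single-outport block, and the model name 'feedtest', the Sum-block wiring, and the check on the error message are my own choices rather than anything prescribed:

function feedthrough = has_feedthrough_loop(blockpath)
mdl = 'feedtest';
new_system(mdl);
% block under test plus a Sum block used to close the loop around it
add_block(blockpath, [mdl '/test']);
add_block('simulink/Math Operations/Sum', [mdl '/Sum']);
add_block('simulink/Sources/Constant', [mdl '/Constant']);
add_line(mdl, 'Constant/1', 'Sum/1');
add_line(mdl, 'Sum/1', 'test/1');
add_line(mdl, 'test/1', 'Sum/2');             % feedback edge closes the loop
set_param(mdl, 'AlgebraicLoopMsg', 'error');  % report algebraic loops as errors
feedthrough = false;
try
    feval(mdl, [], [], [], 'compile');        % compile only, no simulation
    feval(mdl, [], [], [], 'term');
catch err
    % ideally inspect err more carefully to be sure it really is the loop error
    feedthrough = ~isempty(strfind(lower(err.message), 'algebraic'));
end
close_system(mdl, 0);                         % discard the temporary model
end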
Reading the man page for close, if it's interrupted by a signal then the fd's state is unspecified. Is there a best practice for handling this case, or is it assumed to become the OS's problem afterwards?
I assume that a failure with EIO closes the fd appropriately.
If you want your program to run for a long time, a possible file descriptor leak is never only the operating system's problem. Short-lived programs which don't use many file descriptors of course have the option of terminating with the descriptors unclosed, relying on the kernel to close them when the program terminates. So, for the rest of my answer I'll assume your program is long-running.
If your program is not multi-threaded, you have a very easy situation:
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int close_some_fd(int fd, int *was_interrupted) {
    *was_interrupted = 0;
    /* this is point X to which I will draw your attention later. */
    for (;;) {
        if (0 == close(fd))
            return 0;  /* Success. */
        if (EIO != errno)
            return -1; /* Some other failure, not the unspecified-state case. */

        /* Our close attempt failed with EIO; the state of the fd is
           unspecified, so just use the fd to find out if it is still open. */
        *was_interrupted = 1;
        if (-1 == fcntl(fd, F_GETFD))
            return 0;  /* The fd is already closed, so we are done. */
        /* Still open: go around the loop and try to close it again. */
    }
}
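A call site might look like this (sockfd stands for whatever descriptor you are closing):

int was_interrupted = 0;
if (close_some_fd(sockfd, &was_interrupted) != 0)
    perror("close_some_fd");                  /* failed for a reason other than EIO */
else if (was_interrupted)
    fprintf(stderr, "fd closed after an EIO retry\n");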
On the other hand, if your code is multi-threaded, some other thread (and perhaps one you don't control, such as a name service cache) may have called dup, dup2, socket, open, accept or some other similar function that makes the kernel allocate a file descriptor.
To make a similar approach work in such an environment, you will need to be able to tell the difference between the file descriptor you started with and a file descriptor newly opened by another thread. Knowing that there is a lower-numbered fd which is still not open would be enough to discount many of those (since new descriptors take the lowest free number), but in a multi-threaded environment you don't have a simple way of establishing that this is still the case.
One option is to rely on some property common to all the file descriptors your program works with. For example, if it never uses the close-on-exec flag, you can use fcntl(fd, F_SETFD, FD_CLOEXEC) to set that flag at the point marked X in the code (open's O_CLOEXEC sets the same flag), and then you just change the existing call to fcntl like this:
int fdflags = fcntl(fd, F_GETFD);
if (-1 == fdflags)
    return 0;  /* The fd is closed now, so we are done. */
if (fdflags & FD_CLOEXEC) {
    /* Still open and still marked close-on-exec: this is still our
       descriptor, so go around the loop and retry the close. */
    continue;
} else {
    /* Open, but no longer marked close-on-exec: another thread must have
       reused the descriptor number, so our original close did succeed. */
    return 0;
}
You can adjust this approach to use something other than the close-on-exec flag (fstat, for example, recording st_dev and st_ino), but if you can't be sure what the rest of your multithreaded program is doing, this general idea is likely to be unsatisfying.
There's another approach which is also a bit problematic, but which might serve. This is that, instead of closing your file descriptor, you use sendmsg to pass the file descriptor across a Unix domain socket to a separate, single-threaded special-purpose server whose only job is to close the file descriptor. Yes, this is a bit icky. Some entertaining properties of this approach though are:
In order to avoid uncertainty over whether your fd was really passed to the server and closed successfully, you should probably read from a return channel of fd-closed-OK messages coming back from the server. This avoids needing to block signal delivery while you are executing sendmsg too. However, it means a user-space context-switch overhead for every file descriptor close (unless you batch them up to amortise the cost).
You need to avoid a situation where thread A may be reading an fd-closed-OK report corresponding to a request made from thread B. You can avoid that problem by serialising close operations (which will limit performance) or demultiplexing the responses (which is complex). Alternatively, you could use some other IPC mechanism to avoid the need to serialise or demultiplex (SYSV semaphores, for example).
For a high-volume process this will place an entertainingly high load on the kernel's file descriptor garbage collector apparatus, which is normally not highly stressed and may therefore give you some interesting symptoms.
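To make the fd-passing mechanics concrete, here is a minimal sketch of handing a descriptor to such a close-server with sendmsg and SCM_RIGHTS; error handling and the fd-closed-OK return channel are left out, and sock is a placeholder for an already-connected AF_UNIX socket to the server:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd_for_closing(int sock, int fd)
{
    char dummy = 'c';        /* at least one byte of real data must go along */
    struct iovec iov;
    struct msghdr msg;
    struct cmsghdr *cmsg;
    union {                  /* ensures correct alignment for the cmsghdr */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } control;

    iov.iov_base = &dummy;
    iov.iov_len = 1;
    memset(&control, 0, sizeof control);
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control.buf;
    msg.msg_controllen = sizeof control.buf;

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;            /* pass a file descriptor */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}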
As for what I'd do personally in the applications I work with most often, I'd figure out to what extent I could make assumptions about what kind of file descriptor I was closing. If, for example, they were normally sockets, I'd just try this out with a test program and figure out whether EIO normally leaves the file descriptor closed or not. That is, determine whether the theoretically unspecified state of the file descriptor on EIO is, in practice, predictable. This will not work well if the fd could be anything (e.g. disk file, socket, tty, ...). Of course, if you're using some open-source operating system you may just be able to read the kernel source and determine what the possible outcomes are.
Certainly I'd try the above experiment-based system before worrying about sharding the fd-close servers to scale out on fd-closing.
When reading Ada.Real_Time.Clock right after power-up, it shows a value that isn't close to zero and is sometimes even negative.
As far as I know, Ada.Real_Time.Clock is supposed to reset on power-up.
How can I reset Ada.Real_Time.Clock?
Thanks.
The Ada 2005 LRM declares that "real time is defined to be the physical time as observed in the external environment" [emphasis added--MC], and continues:
"It is not specified by the language whether the time values are synchronized with any standard time reference. For example, E can correspond to the time of system initialization or it can correspond to the epoch of some time standard." (D.8[18-19])
As it states, Ada does not require that "E", the start of the epoch serving as the "zero time" for real-time Time values, correspond to any particular starting point; it's left up to the compiler implementer.
Whatever specific numeric values you observe for the instances of Time you're seeing, whether near or far from zero, positive or negative, are dependent solely on the compiler implementer's choice of E, how it represents times values, and how it correspondingly implements the real-time capability.
Therefore you should avoid writing code that depends on specific, knowable values of Time, or that requires Time values to be intimately manipulable.
Real_Time.Time values should be considered abstract quantities.
Agreeing with Marc. While I have seen some platforms that use time since boot (particularly on Intel platforms, where I think they like to use the processor's cycle counter), that is entirely up to the compiler vendor.
If you need something like "time since startup" and your platform isn't giving you that, then the thing to do would be to grab Real_Time.Clock when you start up, and subtract that value from all further reads from Real_Time.Clock.
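A minimal sketch of that idea (the package and function names are my own invention):

with Ada.Real_Time; use Ada.Real_Time;

package Uptime is
   --  Elapsed time since the program started (more precisely, since elaboration).
   function Since_Startup return Time_Span;
end Uptime;

package body Uptime is
   Start : constant Time := Clock;  --  captured once, at elaboration

   function Since_Startup return Time_Span is
   begin
      return Clock - Start;
   end Since_Startup;
end Uptime;

Everything else in the program can then work with Since_Startup (or convert it with To_Duration) instead of relying on the absolute value of Clock.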
You can look at exactly what facilities are defined for the Real_Time package, including all the LRM sections Marc was quoting, at its LRM page here.
It was long ago, but if it helps someone...
I reset the clock by writing 0 to the time base registers of the MCU.
That's a lovely explanation, but what if someone is trying to write unit tests against code that uses the real-time clock? For instance, I know that my function foo does an internal comparison against Ada.Real_Time.Clock to check for elapsed time spans. Before executing foo with the appropriate inputs, I want to reset the clock to force foo down a specific path internally and verify that the resulting out parameter has changed.
return_value := foo;
assert (return_value = path1, "tested foo path1");
Reset_Clock;
return_value := foo;
assert (return_value = path2, "tested foo path2");