In our simulation we added two protected fields to the cMessage class:
    /* sequence number for log files */
    long seqNo = 0;
    /* timestamp at sending message */
    simtime_t sendingTime;
and we added the following public methods:
public:
    void setSeqNo(long n) {
        this->seqNo = n;
    }
    long getSeqNo() {
        return this->seqNo;
    }
    void setSentTime(simtime_t t) {
        this->sendingTime = t;
    }
    simtime_t getSentTime() {
        return this->sendingTime;
    }
Now, when the simulated server application runs, it does the following before sending each message:
pkt->setSeqNo(numPkSent);
pkt->setSentTime(simTime());
fprintf(this->analyticsCorrespondentNode, "PKT %u SENT AT TIME %f TO NODE %s \n", numPkSent, pkt->getSentTime().dbl(), d->clientAddr.get4().str().c_str());
On the other hand, when the message is received by the simulated client application, it performs:
double recvTime = simTime().dbl();
fprintf(this->analyticsMobileNode, "RECEIVED PKT num. %ld SENT AT TIME: %f RECEIVED AT TIME %f TRANSMISSION TIME ELAPSED %f \n", msg->getSeqNo(), msg->getSentTime().dbl(), recvTime, recvTime - msg->getSentTime().dbl());
The problem is that seqNo is correctly written by the client, exactly as it was set by the server before sending. However, the method
msg->getSentTime().dbl()
always returns 0 in the client log file, while it is correctly set by the server in the server log file. I don't understand why; maybe there's something strange happening in the conversion from cMessage to cPacket in the client application... does anyone know what's going on?
To add your own fields to a packet, you only need to prepare the definition in a *.msg file. For example, in a file FooPacket.msg:
packet FooPacket {
    long seqNo;
    simtime_t sendingTime;
    // other fields...
}
Then, in your *.cc source file, add:
#include "FooPacket_m.h"
The class FooPacket, which derives from cPacket, will be generated automatically during compilation, together with all setter and getter methods; you will see the generated files FooPacket_m.h and FooPacket_m.cc.
When your client receives a message, you should check whether its type is the one you expect and then cast it to FooPacket. For example:
void handleMessage(cMessage *msg) {
    if (dynamic_cast<FooPacket *>(msg)) {
        FooPacket *pkt = check_and_cast<FooPacket *>(msg);
        simtime_t t = pkt->getSendingTime();
    }
    // ...
}
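On the sending side, the generated setters are used the same way. A minimal sketch (the packet name and the "out" gate are assumptions, not from the original post):
FooPacket *pkt = new FooPacket("foo");
pkt->setSeqNo(numPkSent);        // setter generated from the .msg definition
pkt->setSendingTime(simTime());  // setter generated from the .msg definition
send(pkt, "out");                // assumes an output gate named "out"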
It could be the conversion from cMessage to cPacket. Have you tried this?
cPacket *pk = check_and_cast<cPacket *>(msg);
pk->getSentTime().dbl();
Also, you can check whether there is a problem with the simtime_t/double conversion somewhere; try a plain double for the sentTime parameter.
I'd like to generate logging messages from within a C function embedded in a DML method. Take the example code below where the fib() function is called from the write() method of the regs bank. The log methods available to C all require a pointer to the current device.
Is there a way to get the device that calls the embedded function? Do I need to pass the device pointer into fib()?
dml 1.2;
device simple_embedded;
parameter documentation = "Embedding C code example for"
    + " Model Builder User's Guide";
parameter desc = "example of C code";
extern int fib(int x);
bank regs {
    register r0 size 4 @ 0x0000 {
        parameter allocate = false;
        parameter configuration = "none";
        method write(val) {
            log "info": "Fibonacci(%d) = %d.", val, fib(val);
        }
        method read() -> (value) {
            // Must be implemented to compile
        }
    }
}
header %{
    int fib(int x);
%}
footer %{
    int fib(int x) {
        SIM_LOG_INFO(1, mydev, 0, "Generating Fibonacci for %d", x);
        if (x < 2) return 1;
        else return fib(x-1) + fib(x-2);
    }
%}
I want to log from an embedded C function.
I solved this by passing the Simics conf_object_t pointer along to C, just as implied in the question.
So you would use:
int fib(conf_object_t *mydev, int x) {
    SIM_LOG_INFO(1, mydev, 0, "Generating Fibonacci for %d", x);
    if (x < 2) return 1;
    else return fib(mydev, x-1) + fib(mydev, x-2);
}
And
method write(val) {
    log "info": "Fibonacci(%d) = %d.", val, fib(dev.obj, val);
}
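Note that the extern declaration and the prototype inside the header %{ ... %} block from the question would presumably also need to be updated to the new signature, i.e. extern int fib(conf_object_t *mydev, int x);.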
Jakob's answer is the right one if your purpose is to offload some computations to C code (which makes sense in many situations, like when functionality is implemented by a lib).
However, if you just want a way to pass a callback to an API that asks for a function pointer, then it is easier to keep the implementation within DML and use a method reference, like:
method init() {
    SIM_add_notifier(obj, trigger_fib_notifier_type, obj, &trigger_fib,
                     &dev.regs.r0.val);
}
method trigger_fib(conf_object_t *_, lang_void *aux) {
    local uint64 value = *cast(aux, uint64 *);
    local int result = fib(value);
    log info: "result: %d", result;
}
method fib(int x) -> (int) {
    log info: "Generating Fibonacci for %d", x;
    if (x < 2) return 1;
    else return fib(x-1) + fib(x-2);
}
I'm developing a UEFI app using the TPM2. getCapabilities works, but everything else goes through this submitCommand() function, and everything I try there returns EFI_ABORTED as the status.
I tried several commands, like read_PCR and get_random_number, but it appears to happen for all of them (TPM2 spec, part 3). I chose the random-number command because it is a simple command, without authorization or encryption, that should always return data when executed correctly.
struct TPM2_ {
    EFI_HANDLE image;
    EFI_BOOT_SERVICES *BS;
    EFI_TCG2_PROTOCOL *prot;
    UINT32 activePCRbanks;
};
struct TPM2_Rand_Read_Command {
    TPMI_ST_COMMAND_TAG tag;
    UINT32 commandSize;
    TPM_CC commandCode;
    UINT16 bytesRequested;
};
struct TPM2_Rand_Read_Response {
    TPM_ST tag;
    UINT32 responseSize;
    TPM_RC responseCode;
    TPM2B_DIGEST randomBytes;
};
UINTN tpm_get_random(TPM2 *tpm) {
    struct TPM2_Rand_Read_Command cmd;
    struct TPM2_Rand_Read_Response resp;
    cmd.tag = __builtin_bswap16(TPM_ST_NO_SESSIONS); /* x86 is little-endian, TPM2 is big-endian; use bswap to convert */
    cmd.commandCode = __builtin_bswap32(TPM_CC_GetRandom);
    cmd.commandSize = __builtin_bswap32(sizeof(struct TPM2_Rand_Read_Command));
    cmd.bytesRequested = __builtin_bswap16(4);
    EFI_STATUS stat = tpm->prot->SubmitCommand(tpm->prot, sizeof(struct TPM2_Rand_Read_Command), (UINT8 *)&cmd, sizeof(struct TPM2_Rand_Read_Response), (UINT8 *)&resp); /* returns 0x15 (decimal 21) */
    Print(L"statreadrand: %x \t %d \r\n", stat, *((UINT32 *)resp.randomBytes.buffer));
    CHECK_STATUS(stat, L"SubmitReadCommand");
    return 0;
}
TPM2 *tpm_create(EFI_BOOT_SERVICES *BS, EFI_HANDLE image) {
    TPM2 *tpm = calloc(1, sizeof(TPM2));
    EFI_GUID prot_guid = (EFI_GUID)EFI_TCG2_PROTOCOL_GUID;
    tpm->BS = BS;
    tpm->image = image;
    EFI_STATUS stat = tpm->BS->LocateProtocol(&prot_guid, NULL, (void **)&tpm->prot);
    CHECK_STATUS(stat, L"LocateTPMProtocol");
    return tpm;
}
I expect the SubmitCommand function to return EFI_SUCCESS (0) and fill the response struct with 4 random bytes, but the function returns EFI_ABORTED (21).
Does anyone know how to solve this?
EDIT: I tried different toolchains (GNU-EFI, plain GCC, EDK2); all give the same behaviour.
That particular PC had this exact problem; probably the TPM was locked. When using a different PC with a TPM2, the problem didn't occur and I just got a random number back.
Hello dear participants of Stack Overflow,
I'm new to kernel-space development and still at the beginning of the road.
I developed a basic char device driver that can open, read, close, etc., but I couldn't find a proper source or how-to tutorial for a sample of the poll/select mechanism.
I've written sample code for the poll function below:
static unsigned int dev_poll(struct file *file, poll_table *wait)
{
    poll_wait(file, &dev_wait, wait);
    if (size_of_message > 0) {
        printk(KERN_INFO "size_of_message > 0 returning POLLIN | POLLRDNORM\n");
        return POLLIN | POLLRDNORM;
    } else {
        printk(KERN_INFO "dev_poll return 0\n");
        return 0;
    }
}
It works fine, but I couldn't understand a few things.
When I call select from the user-space program as
struct timeval time = {5, 0};
select(fd + 1, &readfs, NULL, NULL, &time);
the dev_poll function in the driver is called once and returns zero or POLLIN depending on the buffer size, and then it is never called again. In user space, after 5 seconds, the program continues if dev_poll returned 0.
What I couldn't understand is this: how will the driver code decide and let the user-space program know that there is something readable in the buffer within these 5 seconds, if dev_poll is called once and returns immediately?
Is there any way in a kernel module to get at the timeval parameter that comes from user space?
Thanks in advance.
Regards,
The poll_wait() call actually places a wait object into the waitqueue specified as its second parameter. When the wait object is fired (via the waitqueue's wake_up or a similar function), the poll function is evaluated again.
The kernel driver needn't bother about timeouts: when the time is up, the wait object is removed from the waitqueue automatically.
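Concretely, the "fire" side is the part the question's driver is missing: whoever produces data must wake the queue. A minimal sketch (dev_wait and size_of_message stand in for the driver's own waitqueue and buffer state):
#include <linux/wait.h>  /* DECLARE_WAIT_QUEUE_HEAD, wake_up */
#include <linux/poll.h>  /* poll_wait, POLLIN, POLLRDNORM */

static DECLARE_WAIT_QUEUE_HEAD(dev_wait);  /* the waitqueue dev_poll() registers on */

/* in the producer path (e.g. the driver's write handler), after buffering data: */
size_of_message = len;   /* new data is now readable */
wake_up(&dev_wait);      /* fires the wait object; dev_poll() is evaluated again */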
Hello dear people who are curious about poll, like me. I came up with a solution.
In another topic on Stack Overflow someone said that the poll function is called multiple times whenever the kernel needs the latest status. So basically I implemented that:
when poll is called, call poll_wait() with the wait queue head;
when the device has buffered data (this is usually in the driver's write function), call the wake_up macro with the wait queue head as its parameter.
After this step the driver's poll function is called again, and here you can return whatever you want to return; in this case POLLIN | POLLRDNORM.
Here is my sample code for write and poll in the driver.
static unsigned int dev_poll(struct file *file, poll_table *wait)
{
    static int dev_poll_called_count = 0;
    dev_poll_called_count++;
    poll_wait(file, &dev_wait, wait);
    read_wait_queue_length++;
    printk(KERN_INFO "Inside dev_poll called time is : %d read_wait_queue_length %d\n", dev_poll_called_count, read_wait_queue_length);
    printk(KERN_INFO "After poll_wait wake_up called\n");
    if (size_of_message > 0) {
        printk(KERN_INFO "size_of_message > 0 returning POLLIN | POLLRDNORM\n");
        return POLLIN | POLLRDNORM;
    } else {
        printk(KERN_INFO "dev_poll return 0\n");
        return 0;
    }
}
static ssize_t dev_write(struct file *filep, const char *buffer, size_t len, loff_t *offset)
{
    int ret;
    printk(KERN_INFO "Inside write\n");
    ret = copy_from_user(message, buffer, len);
    size_of_message = len;
    printk(KERN_INFO "EBBChar: Received %zu characters from the user\n", size_of_message);
    if (ret)
        return -EFAULT;
    message[len] = '\0';
    printk(KERN_INFO "received string: %s", message);
    if (read_wait_queue_length)
    {
        wake_up(&dev_wait);
        read_wait_queue_length = 0;
    }
    return len;
}
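For completeness, a minimal user-space test for this driver might look as follows (the device node name /dev/ebbchar is an assumption based on the EBBChar printk above):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    int fd = open("/dev/ebbchar", O_RDONLY);  /* assumed device node name */
    fd_set readfds;
    struct timeval time = {5, 0};             /* 5-second timeout */

    if (fd < 0)
        return 1;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    int ret = select(fd + 1, &readfds, NULL, NULL, &time);
    if (ret > 0 && FD_ISSET(fd, &readfds))
        printf("device is readable\n");
    else if (ret == 0)
        printf("timed out after 5 seconds\n");
    close(fd);
    return 0;
}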
I'm trying to get the IP address of each of my clients that connect to my server. I save this into the fields of a struct which I send to a thread. I'm noticing that sometimes I get the right IP and sometimes the wrong one. The first peer to connect usually has an incorrect IP...
The problem is that inet_ntoa() returns a pointer to static memory that is overwritten each time you call inet_ntoa(). You need to make a copy of the data before calling inet_ntoa() again:
struct peerInfo {
    char ip[16];
    int socket;
};

while ((newsockfd = accept(sockfd, (struct sockaddr *)&clt_addr, &addrlen)) > 0)
{
    struct peerInfo *p = (struct peerInfo *)malloc(sizeof(struct peerInfo));
    strncpy(p->ip, inet_ntoa(clt_addr.sin_addr), 16);
    p->socket = newsockfd;
    printf("A peer connection was accepted from %s:%hu\n", p->ip, ntohs(clt_addr.sin_port));
    if (pthread_create(&thread_id, NULL, peer_handler, (void *)p) < 0)
    {
        syserr("could not create thread\n");
        free(p);
        return 1;
    }
    printf("Thread created for the peer.\n");
    pthread_detach(thread_id);
}
if (newsockfd < 0)
{
    syserr("Accept failed.\n");
}
From http://linux.die.net/man/3/inet_ntoa:
The inet_ntoa() function converts the Internet host address in, given
in network byte order, to a string in IPv4 dotted-decimal notation.
The string is returned in a statically allocated buffer, which
subsequent calls will overwrite.
Emphasis added.
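As an aside, where it is available the reentrant inet_ntop() sidesteps the shared static buffer entirely, because the caller supplies the destination buffer. A minimal sketch, reusing the clt_addr filled in by accept() above:
#include <stdio.h>
#include <arpa/inet.h>

char ip[INET_ADDRSTRLEN];  /* 16 bytes, enough for "255.255.255.255" */
if (inet_ntop(AF_INET, &clt_addr.sin_addr, ip, sizeof(ip)) != NULL)
    printf("A peer connection was accepted from %s\n", ip);
This is thread-safe because each caller owns its destination buffer.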
Hi guys, how can I collect the packet length for each packet in a pcap file? Thanks a lot.
I suggest a high-tech method, which very few people know: reading the documentation.
man pcap tells us there are actually two different lengths available:
caplen    a bpf_u_int32 giving the number of bytes of the packet that are available from the capture
len       a bpf_u_int32 giving the length of the packet, in bytes (which might be more than the number of bytes available from the capture, if the length of the packet is larger than the maximum number of bytes to capture)
An example in C:
/* Grab a packet */
packet = pcap_next(handle, &header);
if (packet == NULL) { /* End of file */
    break;
}
printf("Got a packet with length of [%d]\n", header.len);
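To collect the length of every packet in a file, the same idea extends to a loop. A minimal self-contained sketch (the file name packets.pcap is an assumption):
#include <pcap.h>
#include <stdio.h>

/* callback invoked by pcap_loop() for every packet in the file */
static void print_len(u_char *user, const struct pcap_pkthdr *header,
                      const u_char *bytes)
{
    (void)user;
    (void)bytes;
    printf("Got a packet with length of [%d]\n", header->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_offline("packets.pcap", errbuf); /* assumed file name */
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    pcap_loop(handle, 0, print_len, NULL); /* count 0: process packets until EOF */
    pcap_close(handle);
    return 0;
}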
Another one in Python with the pcapy library:
import pcapy
reader = pcapy.open_offline("packets.pcap")
while True:
    try:
        (header, payload) = reader.next()
        print "Got a packet of length %d" % header.getlen()
    except pcapy.PcapError:
        break
The two examples below work fine:
- C, using WinPcap (compiler: CL, Microsoft VC)
- Python, using Scapy
I wrote this function (in C) to get the packet size, and it works fine. Don't forget to include pcap.h and define HAVE_REMOTE in the compiler's preprocessor settings.
u_int getpkt_size(char *pcapfile) {
    pcap_t *indesc;
    char errbuf[PCAP_ERRBUF_SIZE];
    char source[PCAP_BUF_SIZE];
    u_int res;
    struct pcap_pkthdr *pktheader;
    const u_char *pktdata;
    u_int pktsize = 0;

    /* Create the source string according to the new WinPcap syntax */
    if (pcap_createsrcstr(source,        // variable that will keep the source string
                          PCAP_SRC_FILE, // we want to open a file
                          NULL,          // remote host
                          NULL,          // port on the remote host
                          pcapfile,      // name of the file we want to open
                          errbuf         // error buffer
                          ) != 0)
    {
        fprintf(stderr, "\nError creating a source string\n");
        return 0;
    }

    /* Open the capture file */
    if ((indesc = pcap_open(source, 65536, PCAP_OPENFLAG_PROMISCUOUS, 1000, NULL, errbuf)) == NULL)
    {
        fprintf(stderr, "\nUnable to open the file %s.\n", source);
        return 0;
    }

    /* Get the first packet */
    res = pcap_next_ex(indesc, &pktheader, &pktdata);
    if (res != 1) {
        printf("\nError Reading PCAP File");
        return 0;
    }

    /* Get the packet size */
    pktsize = pktheader->len;

    /* Close the input file */
    pcap_close(indesc);
    return pktsize;
}
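Usage is then along these lines ("data.pcap" is just an example file name):
u_int size = getpkt_size("data.pcap");
printf("First packet length: %u bytes\n", size);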
Another working example in Python, using the wonderful Scapy:
from scapy.all import *
pkts=rdpcap("data.pcap",1) # reading only 1 packet from the file
OnePkt=pkts[0]
print len(OnePkt) # prints the length of the packet