I'm using the SharpPcap + PacketDotNet libraries to process some .pcap files and came across a bug in the way the timestamps are calculated.
Take this Timeval property, which is something along these lines:
class PosixTimeval
{
    DateTime Date;
    ulong Seconds;
    ulong MicroSeconds;
}
The problem is as follows: suppose you have a trace open in Wireshark in which one of the packets has a timestamp of "0.002". Once you open the file in one of your programs, the packet is retrieved with its Timeval set up so that Seconds = 0 and MicroSeconds = 002 = 2. This is done under the hood, so as far as I can tell there is no way to avoid it.
My question is whether this problem is common to other libraries that manipulate the pcap file format (maybe all of them?), which I think are built around the same collection of C/C++ functions, or whether it is specific to the ones I'm using.
I'm the author of sharppcap/packet.net.
What is the bug that you are seeing with the timestamp values? The conversion you mentioned seems correct. 0.002 seconds is 2 milliseconds.
The timestamp values should be the full unix timeval when the packet was captured. Certainly a timeval of 0.002 doesn't make sense as an absolute time, only a relative one.
I'll add a unit test to sharppcap to validate the packet timeval if there isn't one already.
I am using an STM32L476RG board and the HAL SPI functions:
HAL_SPI_Transmit(&hspi2, &ReadAddr, 1, HAL_MAX_DELAY);
HAL_SPI_Receive(&hspi2, pBuffer, 4, HAL_MAX_DELAY);
I need to receive data from the accelerometer's buffer at maximum speed, and I have a problem with the delay in these functions. As you can see on the oscilloscope screenshots, there are several microseconds during which nothing happens. I have no idea how to minimize the transmission gap.
I tried using the HAL_SPI_Receive_DMA function, and the delay was even bigger. Do you have any idea how to solve this problem using HAL functions, or any pointers on how I could write my own SPI functions without these delays?
TL;DR Don't use HAL, write your transfer functions using the Reference Manual.
HAL is hopelessly overcomplicated for time-critical tasks (among others). Just look at the HAL_SPI_Transmit() function: it's over 60 lines of code before it actually touches the Data Register. HAL first marks the port access structure as busy even when there is no multitasking OS in sight, validates the function parameters, stores them in the hspi structure for no apparent reason, then goes on to figure out which mode the SPI is in, and so on. It's not necessary to check timeouts in SPI master mode either, because the master controls all bus timings; if it can't get a byte out in a finite amount of time, then the port initialization is wrong, period.
Without HAL, it's a lot simpler. First, figure out what should go into the control registers, set CR1 and CR2 accordingly.
void SPIx_Init() {
    /* full duplex master, 8-bit transfer, default phase and polarity */
    SPIx->CR1 = SPI_CR1_MSTR | SPI_CR1_SPE | SPI_CR1_SSM | SPI_CR1_SSI;
    /* set the receive FIFO threshold to one byte so RXNE fires after each
       received byte; the default 16-bit threshold would complicate things
       when there is an odd number of bytes to transfer */
    SPIx->CR2 = SPI_CR2_FRXTH;
}
This initialization assumes that Slave Select (NSS or CS#) is handled by a separate GPIO pin. If you want CS# managed by the SPI peripheral, then look up Slave select (NSS) pin management in the Reference Manual.
Note that a full-duplex SPI connection cannot just transmit or receive; it always does both simultaneously. If the slave expects one command byte and answers with four bytes of data, that's a 5-byte transfer: the slave will ignore the last 4 bytes, and the master should ignore the first one.
A very simple transfer function would be
void SPIx_Transfer(uint8_t *outp, uint8_t *inp, int count) {
    while (count--) {
        /* wait until the transmit buffer is empty, then write one byte */
        while (!(SPIx->SR & SPI_SR_TXE))
            ;
        *(volatile uint8_t *)&SPIx->DR = *outp++;
        /* wait until a byte arrives in the receive FIFO, then read it */
        while (!(SPIx->SR & SPI_SR_RXNE))
            ;
        *inp++ = *(volatile uint8_t *)&SPIx->DR;
    }
}
It can be further optimized when needed by making use of the SPI FIFO, interleaving writes and reads so that the transmitter is always kept busy.
If speed is critical, don't use generalized functions, or make sure they can be inlined when you do. Use a compiler with link-time optimization enabled, and optimize for speed (quite obviously).
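For illustration, here is how the 1 command byte + 4 data byte transaction described above could be driven with SPIx_Transfer(). This is only a sketch: CS_GPIO_Port, CS_Pin and the 0x80 read flag are placeholder assumptions for your wiring and your accelerometer's register protocol, so check your schematic and datasheet.
#include <stdint.h>
#include <string.h>

void Accel_ReadBuffer(uint8_t regAddr, uint8_t *data)
{
    uint8_t tx[5] = { (uint8_t)(regAddr | 0x80), 0, 0, 0, 0 }; /* dummy bytes clock the reply out */
    uint8_t rx[5];

    CS_GPIO_Port->BRR = CS_Pin;    /* assert CS# (drive low) */
    SPIx_Transfer(tx, rx, 5);      /* full duplex: 5 bytes out, 5 bytes in */
    CS_GPIO_Port->BSRR = CS_Pin;   /* release CS# (drive high) */

    memcpy(data, &rx[1], 4);       /* first received byte is junk, as noted above */
}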
You can use HAL_SPI_TransmitReceive(&hspi2, ReadAddr, pBuffer, 1 + 4, HAL_MAX_DELAY); instead of a separate HAL_SPI_Transmit and HAL_SPI_Receive. This avoids the dead time between the transmit and the receive calls.
You can also try changing the compilation settings to optimize for speed.
You can also check the accelerometer's datasheet; maybe you can read the whole buffer in a single frame, something like this:
HAL_SPI_TransmitReceive(&hspi2, ReadAddr, pBuffer, 1 + (4 * numOfSamples), HAL_MAX_DELAY);
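Note that HAL_SPI_TransmitReceive() clocks Size bytes out of the TX buffer and into the RX buffer, so both buffers must cover the whole frame, including the address byte. A rough sketch of the buffer setup (NUM_SAMPLES and the 0x80 read flag are assumptions; check the datasheet):
#define NUM_SAMPLES 32  /* assumption: how many 4-byte samples to read per frame */

uint8_t tx[1 + 4 * NUM_SAMPLES] = { ReadAddr | 0x80 };  /* remaining bytes stay 0 as dummies */
uint8_t rx[1 + 4 * NUM_SAMPLES];

HAL_SPI_TransmitReceive(&hspi2, tx, rx, sizeof(rx), HAL_MAX_DELAY);
/* rx[0] was clocked in while the address went out; the samples start at rx[1] */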
What worked for me:
Read the SPI registers directly.
Optimize your function for speed.
For an example function, see the solution by "JElli.1" in the ST Community:
ST Community answer
TL;DR
I'm trying to talk to a Minecraft server with a client written in Scala, using Akka I/O over TCP, and would like to know if Minecraft uses an official, standardised protocol in its network communication?
Minecraft's own documentation covers the contents of each packet, but fails to explain how the packets themselves are encoded, or even how they should be formed.
A little back story
As part of a personal project that I'm working on, written in Scala, I need to create an interface capable of mocking a Minecraft client, and performing actions against a Minecraft server. After weeks of research, I came across a lot of Java libraries that were almost what I was looking for, but none that quite suited my exact needs; long story short, I did the classic, "Oh well, why not write it myself and enjoy the learning curve"...
The issue
The Minecraft protocol documentation is thorough in some respects but lacking in others; many assumptions are made throughout, and a lot of key information is missing or even incorrect, a detailed network specification being the most notable gap in my case.
One attempt to talk to the Minecraft server had me playing around with Google's protocol buffers, using ScalaPB to compile them to usable case classes, but the data types were a pain to resolve between Google's own documentation and Minecraft's.
message Handshake {
    <type?> protocolVersion = 1;
    <type?> host = 2;
    <type?> port = 3;
    <type?> nextState = 4;
}
The host is a string, so that's an easy win, but both protocolVersion and nextState are variable-length integers (varints), which were not encoded as I expected when I compared them with valid packets generated by another client with identical contents (I've been using a third-party library to compare the hexadecimal output of encoded packets).
My hacky solution
In a last-ditch attempt to achieve my goals, I've simply written methods like the one below (this is also a first iteration, so be kind!) to generate the desired encoding for each of the types declared in Minecraft's documentation that are not supported natively in Scala. Although this works, it smells like I'm missing something potentially obvious that others might know about.
import scala.collection.mutable.ArrayBuffer

def toVarint(x: Int): Array[Byte] = {
  var number = x
  val output = ArrayBuffer[Byte]()
  // Emit 7 data bits at a time, setting the high "continuation" bit
  // while more non-zero bits remain.
  while ((number & ~0x7F) != 0) {
    output += ((number & 0x7F) | 0x80).toByte
    number >>>= 7
  }
  output += number.toByte
  output.toArray
}
This is old, but I'll answer anyway! The wiki you referenced has (at least now) a section on the packet format (both before & after compression), as well as explanations including example code for how to handle their varint & varlong types!
To summarise, each packet is length-prefixed with a varint (in both compression modes), so you just need to read a varint from the stream, allocate that much space, and read that many bytes from the stream into the buffer.
Each byte of a Minecraft varint has an "another byte to follow?" flag bit, followed by 7 bits of actual data. Those 7 bits are just the bits of a standard int, so on receive you essentially strip the flag bits and stitch the remaining 7-bit groups back together into a standard int type.
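To make that concrete, here is a rough decoder for that format, written in C for neutrality; the read_varint name, its buffer-based interface and the error convention are my own choices, not something from the wiki.
#include <stdint.h>
#include <stddef.h>

/* Sketch: decode one varint from buf. Returns the number of bytes consumed,
   or -1 if the input runs out or the varint is longer than 5 bytes. */
int read_varint(const uint8_t *buf, size_t len, int32_t *out)
{
    uint32_t result = 0;
    int shift = 0;

    for (size_t i = 0; i < len && shift < 35; i++) {
        result |= (uint32_t)(buf[i] & 0x7F) << shift;  /* 7 data bits */
        if ((buf[i] & 0x80) == 0) {                    /* continuation bit clear: done */
            *out = (int32_t)result;
            return (int)(i + 1);
        }
        shift += 7;
    }
    return -1;
}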
Currently in my degree we're starting to work with sockets.
I have a couple of questions regarding polling sockets for input using the select() function.
int select(int nfds,
           fd_set *readfds,
           fd_set *writefds,
           fd_set *exceptfds,
           struct timeval *timeout);
We give select "nfds" param, which would normally would
be the maximum sockets number we would like to monitor. How can i watch only one specific socket instead of the range of 0 to nfds_val sockets ?
What are the file descriptor objects that we use? What is their purpose, and why can't we just point select to the relevant socket structure?
I've read around the forum regarding the blocking and non-blocking modes of select, but I couldn't understand the meaning or use of each, nor how to implement them; I would be glad if someone could explain.
Last but not least (only for the time being :D) - when binding a sockaddr_in to a socket number, why does one need to cast it to sockaddr * and not leave it as sockaddr_in *?
I mean, except for the fact that the bind method expects this kind of pointer type ;)
Would appreciate some of the experts' answers here :)
Thank you guys and have a great week!
We give select "nfds" param, which would normally would be the maximum sockets number we would like to monitor. How can i watch only one specific socket instead of the range of 0 to nfds_val sockets ?
Edit (sorry, the previous text here was wrong): just pass your socket descriptor + 1 as nfds. I'm pretty sure this doesn't mean the OS will check every descriptor in the [0, 1, ..., descriptor] range.
What are the file descriptor objects that we use? What is their purpose, and why can't we just point select to the relevant socket structure?
File descriptors are usually integer values given to the user by the OS. The OS uses descriptors to control physical and logical resources; one file descriptor means the OS has given you something file-like to control. Since Berkeley sockets have read and write operations defined, they are file-like, and socket objects are essentially plain file descriptors.
Answering "why can't we just point select to the relevant socket structure?" - we actually can. What exactly to pass to select depends on the OS and language. In C you place your socket descriptor (most probably a plain int value) into an fd_set, and the fd_set is then passed to select.
Edit.
A tiny example for Linux:
fd_set set;
FD_ZERO(&set);
FD_SET(socket_fd, &set);
// check if socket_fd is ready for reading
result = select(socket_fd + 1, &set, NULL, NULL, NULL);
if (result == -1) report_error(errno);
Docs.
Windows has similar code.
I've read around the forum regarding the blocking and non-blocking modes of select, but I couldn't understand the meaning or use of each, nor how to implement them; I would be glad if someone could explain.
A blocking operation makes your thread wait until it is done; that's how 99% of the functions you use behave. If there are sockets ready for some IO, a blocking select will return immediately. If there are no such sockets, the thread will wait for them. A non-blocking select (one given a zero timeout), in the latter case, won't wait and will simply return 0 to indicate that nothing is ready.
As an example, try to implement a single-threaded server capable of working with multiple clients, with long operations like file transfers happening simultaneously. You definitely don't want to use blocking socket operations in this case.
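To illustrate the difference, a non-blocking check can be made by handing select a zero timeout; the poll_socket_once helper below is just a name I made up for this sketch, and socket_fd is assumed to be an already-open descriptor.
#include <sys/select.h>
#include <stdio.h>

/* Sketch: returns 1 if socket_fd has data to read, 0 if not, -1 on error.
   A zero timeout makes select() return immediately instead of blocking. */
int poll_socket_once(int socket_fd)
{
    fd_set read_set;
    struct timeval tv = { 0, 0 };   /* zero timeout: do not wait */

    FD_ZERO(&read_set);
    FD_SET(socket_fd, &read_set);

    int result = select(socket_fd + 1, &read_set, NULL, NULL, &tv);
    if (result == -1) {
        perror("select");
        return -1;
    }
    return result > 0 && FD_ISSET(socket_fd, &read_set);
}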
Last but not least (only for the time being :D) - when binding a sockaddr_in to a socket number, why does one need to cast it to sockaddr * and not leave it as sockaddr_in *? I mean, except for the fact that the bind method expects this kind of pointer type ;)
Probably due to historical reasons, but I'm not sure. And there seems to be a fine answer on SO already.
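For completeness, this is what the cast looks like in context; bind_ipv4 is a hypothetical helper name. The point is only that bind() takes a generic struct sockaddr * so one prototype can serve every address family, and the cast just reinterprets the pointer without converting any data.
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Sketch: bind an IPv4 socket to a port on all interfaces. */
int bind_ipv4(int sockfd, unsigned short port)
{
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    /* bind() only knows about the generic struct sockaddr, hence the cast */
    return bind(sockfd, (struct sockaddr *)&addr, sizeof(addr));
}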
I'm currently studying operating systems and I came across a question that I can't quite figure out.
I have this piece of pseudo-code:
for (;;) {
    mask = testmask;                /* select() overwrites the set, so start from a fresh copy */
    select(MAXSOCKS, &mask, 0, 0, 0);
    if (FD_ISSET(strmfd, &mask)) {  /* the stream (TCP) socket is readable: a client is waiting */
        clilen = sizeof(cliaddr);
        newfd = accept(strmfd, (struct sockaddr *)&cliaddr, &clilen);
        echo(newfd);
        close(newfd);
    }
    if (FD_ISSET(dgrmfd, &mask))    /* the datagram (UDP) socket has data ready */
        echo(dgrmfd);
}
Note: consider MAXSOCKS to be defined as whatever (it doesn't matter here), strmfd to be a stream (TCP) socket, dgrmfd a datagram (UDP) socket, and clilen the size of the client address;
echo(newfd); is just a function that echoes whatever is in the socket.
So my question is:
what are testmask and mask for, and how are they used here?
I know that select blocks the process until some socket is ready for reading/writing or has an exception pending.
The simplest way to get an answer to your question is just to read the select(2) manual page, which says:
int select(int nfds,
fd_set *readfds, fd_set *writefds, fd_set *exceptfds,
struct timeval *timeout);
or a quite similar declaration. So "testmask" is an fd_set. select() gets these masks on input and refills them on output, unless an error is signaled.
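Concretely, the loop you posted works because testmask is built once, before the loop, and copied into mask on every iteration, since select() overwrites the set it is given. A sketch of the missing setup, assuming strmfd and dgrmfd from your snippet:
fd_set testmask, mask;

FD_ZERO(&testmask);
FD_SET(strmfd, &testmask);   /* stream (TCP) socket */
FD_SET(dgrmfd, &testmask);   /* datagram (UDP) socket */

for (;;) {
    mask = testmask;         /* plain struct copy; FD_COPY() where available */
    if (select(MAXSOCKS, &mask, NULL, NULL, NULL) == -1)
        break;               /* error handling omitted */
    /* on return, mask contains only the descriptors that are now ready */
}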
In all Unix systems I've seen, fd_set is a bit array of up to FD_SETSIZE bits, and it could be hard to extend this to a larger value, so a descriptor greater than or equal to FD_SETSIZE can't be select()ed without target-specific hacking. That's why poll() is preferred for a large descriptor set. (Platform-specific mechanisms such as kqueue (*BSD, macOS), epoll (Linux) and /dev/poll (Solaris) are better still if only a relatively small subset of all descriptors is ready on each cycle; for Linux, poll() becomes more efficient than an epoll-based scheme when roughly 30% or more of the descriptors are ready per cycle, as measured on systems about three years ago.)
On Windows, fd_set is an array of socket numbers (integers) with unlimited values but a limited total count (AFAIR, 64).
Using only the standard macros to deal with fd_set gives code that is portable between Unix and Windows, but that code can be inefficient on both, in different ways.
Actually the proper way to copy the select mask is to use the FD_COPY macro.
I have just posted an article about the difference between select, poll and epoll here: http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/ - I suggest you take a look at it, as it covers many of the things you asked about, and there's a chance it covers something you missed.
The prototype is:
int select(int nfds,
           fd_set *readfds,
           fd_set *writefds,
           fd_set *exceptfds,
           struct timeval *timeout);
I've been struggling to understand this function for quite some time. My question is: if it checks all the file descriptors from 0 to nfds-1 and modifies readfds, writefds and exceptfds on return, why do I need to use FD_SET to add file descriptors to the set at the beginning? It will check all the file descriptors anyway, or will it not?
It won't check from 0 to nfds-1. The first argument just provides an upper bound on how large, numerically, the file descriptors used are. This is because the set itself might be represented as a bitvector, without a way to know how many bits are actually used. Specifying this as a separate argument helps select() avoid checking file descriptors that are not in use.
Also, a descriptor that is not in e.g. the read set when you call select() is not being checked at all, so it cannot appear in the set when the call returns, either.
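Putting both points together, a minimal sketch (wait_readable is a made-up helper name, and fd_a / fd_b are assumed to be two already-open sockets):
#include <sys/select.h>

/* Sketch: watch exactly two descriptors; anything not FD_SET() is never reported. */
int wait_readable(int fd_a, int fd_b)
{
    fd_set read_fds;
    int nfds = (fd_a > fd_b ? fd_a : fd_b) + 1;   /* highest descriptor + 1 */

    FD_ZERO(&read_fds);
    FD_SET(fd_a, &read_fds);
    FD_SET(fd_b, &read_fds);

    if (select(nfds, &read_fds, NULL, NULL, NULL) <= 0)
        return -1;                                /* error (or nothing ready) */

    /* on return, read_fds holds only the readable subset of {fd_a, fd_b} */
    return FD_ISSET(fd_a, &read_fds) ? fd_a : fd_b;
}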
I once had the same doubt as you. You can look at the following question and its answers:
Query on Select System Call