Why does tcpdump -dd's packet-accepting instruction choose exactly 0x4000 as the size?

Why does tcpdump -dd always use 0x4000 as the size of the packet to return in the accepting case? I know it's big enough to return the entire packet, but why exactly that value and not, for example, 65536?

When in doubt, just search for the value in the source code, in our case in libpcap (by the way, the value is actually 0x40000, not 0x4000):
/*
 * Maximum snapshot length.
 *
 * Somewhat arbitrary, but chosen to be:
 *
 *    1) big enough for maximum-size Linux loopback packets (65549)
 *       and some USB packets captured with USBPcap:
 *
 *           http://desowin.org/usbpcap/
 *
 *       (> 131072, < 262144)
 *
 *    and
 *
 *    2) small enough not to cause attempts to allocate huge amounts of
 *       memory; some applications might use the snapshot length in a
 *       savefile header to control the size of the buffer they allocate,
 *       so a size of, say, 2^31-1 might not work well.
 *
 * We don't enforce this in pcap_set_snaplen(), but we use it internally.
 */
#define MAXIMUM_SNAPLEN 262144


Calculate the size of FAT table

Given a disk of size 4096 MB formatted with FAT, where each block is 64 KB in size, calculate the size of the FAT table.
My solution:
Number of blocks = disk size / block size = (4096 * 2^20) / (64 * 2^10) = 2^16 blocks.
Assume FAT16: since we have 2^16 blocks, we have 2^16 entries, and each entry needs to store 16 bits.
=> Size of FAT table = 2^16 * 16 = 2^20 bits = 128KB.
I'm preparing for the final exam, and the funny thing is that my teacher told me to self-study virtual memory, so I'm not sure whether my solution and explanation are correct. Please point out where I'm going wrong. Thanks for reading.
I've already had the answer to this question and found out I was correct. Thank you and peace out!

Find physical address from logical address given page table

Following is a page table:
[page table image: 8 pages mapped to 8 frames; page 0 maps to frame 3]
Assume that a page is 16000 bytes in size. How do I calculate the physical address for, say, the logical address 1000?
Here is what I have worked out so far:
Logical memory = 8 pages
Logical memory size = 8 x 16000 bytes
Physical memory = 8 frames
physical memory size = 8 x 16000 bytes
Now, given a logical address of 1000, it maps to the first page, which is in frame 3.
Considering that frames 0, 1 and 2 occupy 16000 x 3 bytes before it,
address 1000 will be at location 16000 x 3 + 1000,
so the physical address will be 49000.
Is this a correct approach?
Yes. To clarify:
Given a logical address; split it into pieces like:
offset_in_page = logical_address % page_size;
page_table_index = logical_address / page_size;
Then get the physical address of the page from the page table:
physical_address_of_page = page_table[page_table_index].frame * page_size;
Then add the offset within the page to get the final physical address:
physical_address = physical_address_of_page + offset_in_page;
Notes:
A CPU (or MMU) would do various checks using other information in the page table entry (e.g. check whether the page is present, check whether you're writing to a "read-only" page, etc.). When doing the conversion manually you'd have to do these checks too (e.g. when converting a logical address into a physical address, the correct answer can be "there is no physical address, because the page isn't present").
Modulo, division and multiplication are expensive. In real hardware the page size will always be a power of 2, so that the modulo and division can be replaced with masks and shifts. The page size will never be 16000 bytes, but it may be 16384 bytes (0x4000, or "2 to the power of 14"), so that the CPU can do offset_in_page = logical_address & 0x3FFF; and page_table_index = logical_address >> 14;. For similar reasons, page table entries are typically constructed by using OR to merge the physical address of a page with other flags (present/not present, writable/read-only, ...), AND is used to extract the physical address from a page table entry (like physical_address_of_page = page_table[page_table_index] & 0xFFFFC000;), and there won't be any "frame number" involved in any calculations.
For real systems (and realistic theoretical examples) it's much easier to use hexadecimal for addresses, to make the masks and shifts easy to do in your head (e.g. 0x1234567 & 0x003FFFF = 0x0034567 is easy). For this reason (and similar reasons, like determining the location in caches, and physical address routing and decoding on buses), logical and physical addresses should never be written in decimal.
For real systems, there are almost always multiple levels of page tables. In this case the approach is mostly the same: you split the logical address into more pieces (e.g. maybe offset_in_page, page_table_index and page_directory_index) and do more table lookups (e.g. maybe page_table = page_directory[page_directory_index].physical_address; then physical_address_of_page = page_table[page_table_index].physical_address;).

About 8086 Microprocessor Memory

It is said that the 8086 microprocessor has 1 MB of memory, a 20-bit address bus and a 16-bit data bus. My doubt is this: 1 MB of memory means 2^20 * 2^3 bits (1 byte = 8 bits), i.e. 2^23 bits in total. But since the 8086 has 16-bit registers, the memory should be 2^20 (from the address lines) * 2^4 (16-bit size) = 2^24 bits, which is not what I calculated above.
So there is a flaw in my assessment; what is it?
Each of the 2^20 addresses refers to an 8-bit Byte.
Some of the 8086's machine instructions operate on Bytes (8-bits) (using registers AH, AL, BH, BL, ...) and other machine instructions operate on Words (16-bits) (using registers AX, BX, ...).
When using a Word instruction, two adjacent bytes in memory (addresses a and a+1) are treated as one Word datum. I do not recall whether the 8086 enforces even-address alignment for Word-datum memory references. But 2^20 Bytes contain only 2^19 Words (aligned to even addresses).
Bits are conserved:
(2^20 * 2^3) = (2^19 * 2^4) = 2^23

Using Datalogger- and EepromService together

I am trying to store some data on the device that I do not want to be overwritten when the datalogger is full, and I have run into some minor issues. I was looking for the eeprom_logbook_app but could not find it in firmware version 1.6.2 of the device-lib.
I have defined how much space I want for my persistent data, and in App.cpp I have used the LOGBOOK_MEMORY_AREA(offset, size) macro, where I used the size of what I want to store as the offset and set the size to be
(2097152 + 1048576) - (size of the data I want to store)
as this was what was returned when I asked the sensor for the EEPROM size. (The EEPROM is divided between two ICs, one with 1 MB capacity and one with 2 MB capacity?)
Then I remembered that there was some talk about ExtflashChunkStorage::StorageHeader being stored as the first 256 bytes in this answer.
So my question is: where will the data be offset from, and what is the maximum value I can pass as size, so that I can subtract the correct amount to fit my data? I presume I at least need to take another 256 bytes off the size to get my correct storage size.
As stated in my comment, I got this working. The only thing you need to do is use the LOGBOOK_MEMORY_AREA(offset, size) macro.
Let's say you want to set aside 256 bytes for your own config; you could then do this:
#define RESERVED_CONFIG 256
#define TOTAL_MEMORY_SIZE (2097152 + 1048576)
static const uint32_t offset = RESERVED_CONFIG;
static const uint32_t size = TOTAL_MEMORY_SIZE - offset;
LOGBOOK_MEMORY_AREA(offset, size);
This will set aside 256 bytes on the start of the EEPROM memory and offset the logbook to accommodate this. As a result, the logbook header will also be moved to the start of the logbook's memory area.

Main memory bandwidth measurement

I want to measure main memory bandwidth. While looking for a methodology, I found that:
(1) many people use the bcopy function to copy bytes from a source to a destination and then measure the time, which they report as the bandwidth;
(2) another way is to allocate an array and walk through it (with some stride), which basically gives the time to read the entire array.
I tried (1) for a data size of 1 GB and the bandwidth I got is 700 MB/sec (I used rdtsc to count the number of cycles elapsed during the copy). But I suspect that this is not correct, because my RAM configuration is as follows:
Speed: 1333 MHz
Bus width: 32bit
As per Wikipedia, the theoretical bandwidth is calculated as follows:
clock speed * bus width * # bits per clock cycle per line (2 for DDR3 RAM)
1333 MHz * 32 * 2 ~= 8 GB/sec
So mine is completely different from the estimated bandwidth. Any idea what I am doing wrong?
=========
The other question is: bcopy involves both a read and a write, so does that mean I should divide the calculated bandwidth by two to get the read-only or write-only bandwidth? I would also like to confirm whether the bandwidth is just the inverse of latency. Please suggest any other ways of measuring the bandwidth.
I can't comment on the effectiveness of bcopy, but the most straightforward approach is the second method you stated (with a stride of 1). Additionally, you are confusing bits with bytes in your memory bandwidth equation: 32 bits = 4 bytes, and modern computers use 64-bit-wide memory buses. So your effective transfer rate (assuming DDR3 tech) is
1333 MHz * 64 bit / (8 bits/byte) = 10666 MB/s (also classified as PC3-10666)
The 1333 MHz already has the 2 transfers/clock factored in.
Check out the wiki page for more info: http://en.wikipedia.org/wiki/DDR3_SDRAM
Regarding your results, try again with the array access: malloc 1 GB and traverse the entire thing. You can sum each element of the array and print the result so your compiler doesn't think it's dead code.
Something like this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    size_t size = 1024UL * 1024 * 1024;  /* 1 GB */
    char *array = malloc(size);          /* contents don't matter here */
    long sum = 0;                        /* long: an int would overflow */

    clock_t start = clock();             /* start timer */
    for (size_t i = 0; i < size; i++)
        sum += array[i];
    double time = (double)(clock() - start) / CLOCKS_PER_SEC;  /* end timer */

    printf("time taken: %f\tsum is %ld\n", time, sum);
    free(array);
    return 0;
}