How to calculate the number of bytes in a sector of the hard disk

I want to know how to calculate the number of bytes in a sector of the hard disk.

For Linux you can use the following command:
# cat /sys/block/sda/queue/hw_sector_size
512
Here sda is your hard disk's device name (the disk at /dev/sda).
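On newer kernels the same directory also exposes logical_block_size and physical_block_size (hw_sector_size reports the logical size). If you need the value programmatically, here is a minimal sketch, assuming a readable /dev/sda; BLKSSZGET returns the logical sector size and BLKPBSZGET the physical one:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void) {
    /* Opening a block device usually requires root. */
    int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;            /* BLKSSZGET takes an int* */
    unsigned int physical = 0;  /* BLKPBSZGET takes an unsigned int* */
    if (ioctl(fd, BLKSSZGET, &logical) == 0)
        printf("logical sector size: %d bytes\n", logical);
    if (ioctl(fd, BLKPBSZGET, &physical) == 0)
        printf("physical sector size: %u bytes\n", physical);

    close(fd);
    return 0;
}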
For Windows you can use IOCTL: IOCTL_DISK_GET_DRIVE_GEOMETRY
https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-ioctl_disk_get_drive_geometry
Sample code for Windows:
DISK_GEOMETRY diskGeometry;
DWORD bytesReturned;
DWORD bytesPerSector = 0;
BOOL ret;

ret = DeviceIoControl(
    hDevice,                        // handle to the physical device
    IOCTL_DISK_GET_DRIVE_GEOMETRY,
    NULL,                           // no input buffer
    0,
    &diskGeometry,                  // output buffer
    sizeof(DISK_GEOMETRY),
    &bytesReturned,
    NULL);                          // not overlapped
if (ret != TRUE) {
    // Log error and exit
}
bytesPerSector = diskGeometry.BytesPerSector;
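The sample assumes hDevice is already open. A minimal sketch of obtaining it (PhysicalDrive0 is the first physical disk; a zero access mask is sufficient for query IOCTLs like this one):

HANDLE hDevice = CreateFileW(
    L"\\\\.\\PhysicalDrive0",           // first physical drive
    0,                                  // query-only access
    FILE_SHARE_READ | FILE_SHARE_WRITE, // don't lock out other processes
    NULL,                               // default security
    OPEN_EXISTING,                      // the device must already exist
    0,
    NULL);
if (hDevice == INVALID_HANDLE_VALUE) {
    // Log error and exit
}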

Related

"Program too large" threshold greater than actual instruction count

I've written a couple of production BPF agents, but my approach is very iterative: I keep tweaking until I please the verifier and can move on. I've reached my limit again.
Here's a program that works if I have one fewer && condition -- and breaks otherwise. The confusing part is that the warning implies that 103 insns is greater than a limit of "at most 4096 insns". There's obviously something I'm misunderstanding about how this is all strung together.
My ultimate goal is to do logging based on a process' environment -- so alternative approaches are welcome. :)
Error:
$ sudo python foo.py
bpf: Argument list too long. Program too large (103 insns), at most 4096 insns
Failed to load BPF program b'tracepoint__sched__sched_process_exec': Argument list too long
BPF Source:
#include <linux/mm_types.h>
#include <linux/sched.h>
#include <linux/version.h>
int tracepoint__sched__sched_process_exec(
    struct tracepoint__sched__sched_process_exec* args
) {
    struct task_struct* task = (typeof(task))bpf_get_current_task();
    const struct mm_struct* mm = task->mm;
    unsigned long env_start = mm->env_start;
    unsigned long env_end = mm->env_end;

    // Read up to 512 environment variables -- only way I could find to
    // "limit" the loop to satisfy the verifier.
    char var[12];
    for (int n = 0; n < 512; n++) {
        int result = bpf_probe_read_str(&var, sizeof var, (void*)env_start);
        if (result <= 0) {
            break;
        }
        env_start += result;
        if (
            var[0] == 'H' &&
            var[1] == 'I' &&
            var[2] == 'S' &&
            var[3] == 'T' &&
            var[4] == 'S' &&
            var[5] == 'I' &&
            var[6] == 'Z' &&
            var[7] == 'E'
        ) {
            bpf_trace_printk("Got it: %s\n", var);
            break;
        }
    }
    return 0;
}
Basic loader program for reproducing:
#!/usr/bin/env python3
import sys

from bcc import BPF

if __name__ == '__main__':
    source = open("./foo.c").read()
    try:
        BPF(text=source.encode("utf-8")).trace_print()
    except Exception as e:
        error = str(e)
        sys.exit(error)
bpf: Argument list too long. Program too large (103 insns), at most 4096 insns
Looking at the error message, my guess would be that your program has 103 instructions and it's rejected because it's too complex. That is, the verifier gave up before analyzing all instructions on all paths.
On Linux 5.15 with a privileged user, the verifier gives up after reading 1 million instructions (the complexity limit). Since it has to analyze all paths through the program, a program with a small number of instructions can have a very high complexity. That's particularly the case when you have loops and many conditions, as is your case.
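If the chained && comparisons are what drives up the path count, one possible rework (untested here; it assumes clang inlines __builtin_memcmp for a small constant size, which it normally does at -O2) is to compare the whole prefix in one operation instead of branching per character:

// Compare the 8-byte prefix in one go; fewer conditional jumps per
// loop iteration means fewer paths for the verifier to walk.
if (__builtin_memcmp(var, "HISTSIZE", 8) == 0) {
    bpf_trace_printk("Got it: %s\n", var);
    break;
}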
Why is the error message confusing? This error message is coming from libbpf.c:
if (ret < 0 && errno == E2BIG) {
    fprintf(stderr,
            "bpf: %s. Program %s too large (%u insns), at most %d insns\n\n",
            strerror(errno), attr->name, insns_cnt, BPF_MAXINSNS);
    return -1;
}
Since the bpf(2) syscall returns E2BIG both when the program is too large and when its complexity is too high, libbpf prints the same error message for both cases, always with at most 4096 instructions. I'm confident upstream would accept a patch to improve that error message.

getting illegal instructions when vectorized code writes to PCI

I am writing a program that writes to a device's range of HW registers. I am using mmap to map the HW addresses to virtual addresses (user space). I tested the result of the mmap and it is OK. I implemented a copy of a buffer into the device:
void bufferCopy(void *dest, void *src, const size_t size) {
    uint8_t *pdest = static_cast<uint8_t *>(dest);
    uint8_t *psrc = static_cast<uint8_t *>(src);
    size_t iters = 0, tailBytes = 0;

    /* iterate 64 bits at a time */
    iters = (size / sizeof(uint64_t));
    for (size_t index = 0; index < iters; ++index) {
        *(reinterpret_cast<uint64_t *>(pdest)) =
            *(reinterpret_cast<uint64_t *>(psrc));
        pdest += sizeof(uint64_t);
        psrc += sizeof(uint64_t);
    }
    // ... (handling of the remaining tailBytes elided) ...
but when running it on QEMU I get an illegal instruction exception. When I debugged it, I found it crashes on the following instruction (below is the asm of the main loop):
movdqu (%rsi,%rax,1),%xmm0
movups %xmm0,(%rdi,%rax,1) <----- this instruction crashes ...
add $0x10,%rax
cmp %rax,%r9
jne 0x7ffff7eca1e0 <_ZN12_GLOBAL__N_110bufferCopyEPvS0_m+64>
Any ideas why? My guess is that you can write to PCI only 32/64 bits at a time.
The compiler doesn't know my limitations, so it optimizes my code and creates a vectorized loop (each iteration loads 128 bits and stores 128 bits). Does that make sense? Can I write to PCI with vectorized instructions?
Also, whether it is a missing feature in QEMU, a bug, or just a matter of best practice, how can I prevent the compiler from generating those vector instructions?
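One common way to stop the compiler from widening or vectorizing MMIO accesses (a sketch; whether 64-bit stores to this BAR are legal depends on the device and on QEMU's emulation of it) is to make every access through a volatile pointer of a fixed width:

#include <stdint.h>
#include <stddef.h>

/* Copy to device memory one 32-bit register at a time; volatile
   forces the compiler to emit exactly one store per word and
   prevents it from merging them into SSE/AVX stores. */
static void mmioCopy32(volatile uint32_t *dest, const uint32_t *src,
                       size_t words) {
    for (size_t i = 0; i < words; ++i)
        dest[i] = src[i];
}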

PCI configuration space registers - write values

I am developing a network driver (RTL8139) for a selfmade operating system and have problems in writing values to the PCI configuration space registers.
I want to change the value of the interrupt line (offset 0x3C) to get another IRQ number, and to enable bus mastering (set bit 2 of the command register, offset 0x04).
When I read back the values I see that my values were written correctly. But instead of using the new IRQ number (in my case 6), the card keeps using the old value (11).
Bus mastering for DMA is not working either: the packets I want to send have the correct size (set via I/O port) but no content (all values are just 0, although the transmit buffer is in physical memory and holds non-zero values; I double-checked the physical address). I need bus mastering so the network controller can access the physical memory holding my receive/transmit buffers (the RTL8139 requires it).
Do I have to do something else to confirm my changes to the PCI device?
As the emulator I use QEMU.
For reading/writing I wrote the following functions:
uint16_t pci_read_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset)
{
    uint64_t address;
    uint64_t lbus = (uint64_t)bus;
    uint64_t lslot = (uint64_t)slot;
    uint64_t lfunc = (uint64_t)func;
    uint16_t tmp = 0;

    address = (uint64_t)((lbus << 16) | (lslot << 11) |
                         (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
    outportl(0xCF8, address);
    tmp = (uint16_t)((inportl(0xCFC) >> ((offset & 2) * 8)) & 0xffff);
    return tmp;
}

uint16_t pci_write_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset, uint16_t data)
{
    uint64_t address;
    uint64_t lbus = (uint64_t)bus;
    uint64_t lslot = (uint64_t)slot;
    uint64_t lfunc = (uint64_t)func;
    uint32_t tmp = 0;

    address = (uint64_t)((lbus << 16) | (lslot << 11) |
                         (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
    outportl(0xCF8, address);
    tmp = inportl(0xCFC);
    tmp &= ~(0xFFFF << ((offset & 0x2) * 8)); // clear the word at the offset
    tmp |= data << ((offset & 0x2) * 8);      // write the data at the offset
    outportl(0xCF8, address);                 // set the address again just to be sure
    outportl(0xCFC, tmp);                     // write the data
    return pci_read_word(bus, slot, func, offset); // read back the data
}
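For reference, setting the bus-master bit with these functions is a read-modify-write of the command register (a sketch; bus/slot/func are the device's coordinates):

/* Enable bus mastering: set bit 2 of the command register (offset 0x04). */
uint16_t cmd = pci_read_word(bus, slot, func, 0x04);
pci_write_word(bus, slot, func, 0x04, cmd | (1 << 2));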
I hope somebody can help me.
The "interrupt line" field (at offset 0x03C in PCI configuration space) literally does nothing.
The Full Story
PCI cards could use up to 4 "PCI IRQs" at the PCI slot, and use them in order (so if you have ten PCI cards that each have one IRQ, they'll all use the first PCI IRQ at their slot).
There's a tricky "barber pole" arrangement connecting "PCI IRQ at the slot" to "PCI IRQ at the host controller" that is designed to reduce IRQ sharing. If you have ten PCI cards that each have one IRQ, they'll all use the first PCI IRQ at the slot, but the first IRQ at each PCI slot will be connected to a different PCI IRQ at the host controller. All of this is hard-wired and cannot be changed by software.
To complicate things further, a special "PCI IRQ router" was added to connect PCI IRQs (from the PCI host controller) to the legacy PIC chips. In theory the configuration of the "PCI IRQ router" can be changed by software (if you can find the documentation describing a table that gives the location, capabilities and restrictions of the "PCI IRQ router").
Without the firmware's help it'd be impossible for an OS to figure out which PIC chip input a PCI device actually uses. For that reason, the firmware figures it out during boot and then stores "PIC chip input number" somewhere for the OS to find. That is what the "interrupt line" register is - it's just an 8-bit register that can store anything you like (that the BIOS/firmware uses to store "PIC chip input number").
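So instead of writing offset 0x3C, read back what the firmware stored there and use that IRQ number. A sketch using the pci_read_word() from the question (the interrupt line is the low byte of the word at offset 0x3C):

/* The firmware stored the PIC input number here during boot. */
uint8_t irq_line = (uint8_t)(pci_read_word(bus, slot, func, 0x3C) & 0xFF);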

Cortex-M3 heap/stack organization using Keil

Trying to run blinky sample for Atmel sam3s and inspecting the stack pointer...
SP has the value 0x20000238 at the start of the main function, which is equal to RAM base + RW + ZI for this sample.
The base RAM address for this chip is : 0x20000000
Total ram size is: 0x10000
I expected the SP to be initialized to 0x20010000, growing downward.
Can anyone explain if I am wrong or not?
As Pait said (they should have posted an answer so I could accept it, I think),
I was wrong to think the stack is placed at the end of RAM by default. But I think it is wise to place it there.
This is how I do it for my SAM3S micro, by changing the scatter file
LR_IROM1 0x00400000 0x00080000 {      ; load region size_region
    ER_IROM1 0x00400000 0x00080000 {  ; load address = execution address
        *.o (RESET, +First)
        *(InRoot$$Sections)
        .ANY (+RO)
    }
    RW_IRAM1 0x20000000 0x00010000 {  ; RW data
        .ANY (+RW +ZI)
    }
    RW_STACK 0x2000C000 UNINIT 0x4000 { ; STACK data
        *.o (STACK)
    }
}
RAM base = 0x20000000
Total RAM = 64 KB
Intended stack size = 16 KB
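That is, the stack region starts at RAM base + total RAM - stack size: 0x20000000 + 0x10000 - 0x4000 = 0x2000C000, which is exactly where RW_STACK is placed in the scatter file above.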

iPhone Unable to receive data using UDP recvfrom

I am writing an application which continuously sends and receives data. My initial send/receive runs successfully, but when I expect data of size 512 bytes in recvfrom, its return value is -1, which is "Resource temporarily unavailable", with errno set to EAGAIN. If I use a blocking call, i.e. without a timeout, the application just hangs in recvfrom. Is there any maximum limit on recvfrom on iPhone? Below is the function which receives data from the server. I am unable to figure out what can be going wrong.
{
    struct timeval tv;
    tv.tv_sec = 3;
    tv.tv_usec = 100000;
    setsockopt(mSock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv);

    NSLog(@"Receiving.. sock:%d", mSock);
    recvBuff = (unsigned char *)malloc(1024);
    if (recvBuff == NULL)
        NSLog(@"Cannot allocate memory to recvBuff");

    fromlen = sizeof(struct sockaddr_in);
    n = recvfrom(mSock, recvBuff, 1024, 0, (struct sockaddr *)&from, &fromlen);
    if (n == -1) {
        [self error:@"Recv From"];
        return;
    }
    else
    {
        NSLog(@"Recv Addr: %s Recv Port: %d", inet_ntoa(from.sin_addr), ntohs(from.sin_port));
        strIPAddr = [[NSString alloc] initWithFormat:@"%s", inet_ntoa(from.sin_addr)];
        portNumber = ntohs(from.sin_port);
        lIPAddr = [KDefine StrIpToLong:strIPAddr];
        write(1, recvBuff, n);
        bcopy(recvBuff, data, n);
        actualRecvBytes = n;
        free(recvBuff);
    }
}
Read the manpage:
If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking (see fcntl(2)) in which case the value -1 is returned and the external variable errno set to EAGAIN.
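In other words, with the SO_RCVTIMEO set in the code above, a -1/EAGAIN result just means the timeout expired before a datagram arrived. A minimal sketch of handling that (retry with an application-chosen limit; mSock, recvBuff, from and fromlen are the names from the question's code):

#include <errno.h>
#include <sys/socket.h>

/* EAGAIN/EWOULDBLOCK here just means "nothing arrived within
   SO_RCVTIMEO", not a permanent error, so retry a few times. */
ssize_t n = -1;
for (int attempt = 0; attempt < 5 && n == -1; ++attempt) {
    n = recvfrom(mSock, recvBuff, 1024, 0,
                 (struct sockaddr *)&from, &fromlen);
    if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK)
        break; /* a real error, not a timeout */
}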
I was writing a UDP application and think I came across a similar issue. Peter Hosey is correct in stating that the given result of recvfrom means that there is no data to be read; but you were wondering, how can there be no data?
If you are sending several UDP datagrams at a time from some host to your iphone, some of those datagrams may be discarded because the receive buffer size (on the iphone) is not large enough to accommodate that much data at once.
The robust way to fix the problem is to implement a feature that allows your application to request a retransmission of missing datagrams. A not as robust solution (that doesn't solve all the issues that the robust solution does) is to simply increase the receive buffer size using setsockopt(2).
The buffer size adjustment can be done as follows:
int rcvbuf_size = 128 * 1024; // That's 128 KB of buffer space.
if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
&rcvbuf_size, sizeof(rcvbuf_size)) == -1) {
// put your error handling here...
}
You may have to play around with buffer size to find what's optimal for your application.
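To see what you are starting from, you can query the current receive-buffer size first (a sketch; note the kernel may round or cap the value you set):

int cur = 0;
socklen_t len = sizeof cur;
if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &cur, &len) == 0) {
    // 'cur' is the current receive-buffer size in bytes
}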
For me it was a casting issue. Essentially I was assigning the returned value to an int instead of ssize_t (the actual return type of recvfrom):
int rtn = recvfrom(sockfd, ...     // wrong
instead of:
ssize_t rtn = recvfrom(sockfd, ... // correct