When you use the !address command to find the module that owns a memory address, it shows both an Allocation Base and a Base Address.
So the Allocation Base is where the DLL image gets loaded (the same as the output of the lm command), but what is the Base Address then?
AllocationBase refers to the start address of the allocated block in memory.
This block can hold segments of different types.
When you check a specific address, the allocation base tells you where the block it belongs to starts, while the base address points to the start address of the segment that contains it.
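For reference, the same distinction is visible programmatically in the MEMORY_BASIC_INFORMATION structure returned by VirtualQuery(); this small standalone sketch (my own illustration, not taken from the tutorial) prints both fields for an address inside kernel32.dll:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Any address inside a loaded module will do; use a kernel32 export. */
        FARPROC fn = GetProcAddress(GetModuleHandleA("kernel32.dll"), "CreateFileA");

        MEMORY_BASIC_INFORMATION mbi;
        if (VirtualQuery((LPCVOID)fn, &mbi, sizeof(mbi)) == 0)
            return 1;

        /* AllocationBase: start of the whole reserved region (the module's load
           address, matching lm and the Allocation Base column of !address).
           BaseAddress: start of the individual region (segment) containing fn. */
        printf("AllocationBase = %p\n", mbi.AllocationBase);
        printf("BaseAddress    = %p\n", mbi.BaseAddress);
        return 0;
    }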
Check this link, a great tutorial from MSDN:
Memory User Mode Tutorial
When we pin a map, we are able to share information from userspace to eBPF, but it is system wide.
But what if I want to share a different value with each instance attached at tc ingress/egress (an array map with a size of 1)?
Is there any way to pass an argument?
Map (unpinned, unique per instance) - update from userspace
Any other way to communicate from userspace to the kernel (while attaching or after)?
Really appreciate your help.
Pinning a map doesn't make it system wide. Every map is always accessible system wide; pinning just adds a reference to the file system to make it easier to find and to make sure the map isn't removed even when no program is using it.
Is there any way to pass an argument?
No, once a program is loaded, only the kernel can pass arguments (contexts) to a program; userspace can only use maps to communicate with eBPF programs.
Map (unpinned, unique per instance) - update from userspace
Any userspace program with the right permissions can update any map, as long as it can obtain a file descriptor to the map. A map FD can be obtained:
By creating a new map
By opening a map pin
By using its unique ID (directly or by looping over all loaded maps)
By IPC from another process that already has the FD
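As a rough illustration (not from the original answer), a userspace process could obtain such an FD with libbpf via a pin path or via the map ID, and then update the single slot of a size-1 array map; the pin path and ID below are placeholders:

    #include <bpf/bpf.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Option 1: open a pinned map (the path is just an example). */
        int fd = bpf_obj_get("/sys/fs/bpf/tc/globals/my_config");

        /* Option 2: use the map's unique ID (placeholder; see `bpftool map list`). */
        if (fd < 0)
            fd = bpf_map_get_fd_by_id(42);

        if (fd < 0) {
            perror("could not get a map fd");
            return 1;
        }

        /* Update the single entry of an array map of size 1. */
        uint32_t key = 0;
        uint32_t value = 1234;
        if (bpf_map_update_elem(fd, &key, &value, BPF_ANY)) {
            perror("bpf_map_update_elem");
            return 1;
        }
        return 0;
    }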
Any other way to communicate from userspace to the kernel (while attaching or after)?
Maps are it. You can rewrite the program before loading it into the kernel by setting constants at specific locations, but not at attach time. One interesting option is that newer kernels support global data, which allows you to change the value of variables defined at global scope from userspace. In this case the global data is packed into an array map with a single key/value.
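To make the per-instance idea concrete, here is a rough sketch of the BPF side (my own illustration with made-up names, not code from the answer): a global variable that libbpf places in a single-entry array map, which userspace can then update per loaded copy of the program:

    /* Sketch: a global variable that libbpf packs into a single-entry
       array map (.data/.bss), so userspace can set a per-instance value
       for each separately loaded copy of this program. */
    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    volatile __u32 instance_value = 0;   /* set from userspace per instance */

    SEC("tc")
    int classify(struct __sk_buff *skb)
    {
        /* Example use: drop packets shorter than this instance's threshold. */
        if (skb->len < instance_value)
            return TC_ACT_SHOT;
        return TC_ACT_OK;
    }

    char LICENSE[] SEC("license") = "GPL";

Each loaded copy of the program gets its own global-data map, so updating instance_value through that copy's map FD only affects the tc hook it is attached to.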
Is it possible to share an eBPF map between two network interfaces?
I want to write an XDP program and hook it on two devices namely eth0 and eth1. The implementation requires that they both use the same map.
Is it possible to load the same program, hook it at both eth0 and eth1, and use the same map?
Thank you all!
Yes, this is entirely possible. An eBPF map is not attached to an interface; it is created in the kernel and then referenced by one or several of:
A file descriptor, from the application that created the map,
An eBPF program that uses the map,
A reference in another map of a dedicated type (array-of-maps, hash-of-maps),
A pinned path in the eBPF virtual file system.
eBPF maps can be shared between consecutive program runs, between kernel and user space, or - as for your use case - between different eBPF programs, no matter what interfaces they are attached to. Note that a given program could even be detached and later re-attached to another interface, anyway.
Reusing a map across several programs is done by pointing to the same map when you load them: what happens when you load one is that you get a handle to the map (its id or its pinned path, for example), get a file descriptor to the map from that handle, and place this file descriptor in your eBPF bytecode before loading it. Then the kernel translates the file descriptor into the relevant memory address.
What happens most of the time in practice is that this “relocation” step (placing the file descriptor in the bytecode) is handled for you by the framework you are using to load your programs, such as libbpf or bcc tools. For example, libbpf has a bpf_map__reuse_fd(struct bpf_map *map, int fd) function to explicitly reuse a given file descriptor for a specific map used by a program, after parsing an object file to extract the bytecode but before loading it.
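A rough libbpf sketch of that flow (my own illustration with placeholder object, map, and program names, using the newer bpf_xdp_attach() API): load the object once, take its map FD, reuse that FD in a second copy of the object before loading, and attach one program per interface:

    #include <bpf/libbpf.h>
    #include <bpf/bpf.h>
    #include <net/if.h>

    int main(void)
    {
        /* "xdp_prog.o", "shared_map" and "xdp_main" are placeholder names. */
        struct bpf_object *obj1 = bpf_object__open_file("xdp_prog.o", NULL);
        if (!obj1 || bpf_object__load(obj1))
            return 1;
        int map_fd = bpf_map__fd(bpf_object__find_map_by_name(obj1, "shared_map"));
        if (map_fd < 0)
            return 1;

        /* Second copy of the same object: reuse the map FD *before* loading,
           so both programs end up pointing at the same kernel map. */
        struct bpf_object *obj2 = bpf_object__open_file("xdp_prog.o", NULL);
        if (!obj2)
            return 1;
        struct bpf_map *map2 = bpf_object__find_map_by_name(obj2, "shared_map");
        if (bpf_map__reuse_fd(map2, map_fd) || bpf_object__load(obj2))
            return 1;

        /* Attach one program to each interface. */
        int prog1 = bpf_program__fd(bpf_object__find_program_by_name(obj1, "xdp_main"));
        int prog2 = bpf_program__fd(bpf_object__find_program_by_name(obj2, "xdp_main"));
        if (bpf_xdp_attach(if_nametoindex("eth0"), prog1, 0, NULL) ||
            bpf_xdp_attach(if_nametoindex("eth1"), prog2, 0, NULL))
            return 1;
        return 0;
    }

Note that attaching a single loaded program to both interfaces would work just as well here, since one program trivially shares its own maps across all of its attachments; the two-object version only matters when you really do load the program twice.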
I am attempting to use libMPSSE to perform I2C communications. The example code listed in the attached document connects to a 24LC024H EEPROM device.
The address for the device used in the example, as defined in its documentation, is 1010XXX_ where the X's are configurable. In the example's associated diagram you can see the values are configured to be 1. It also states that the R/W bit (_) should not be included, meaning the address passed to the library should be 10101110. The address actually used in the example code is 0x57, which is 01010111.
I do not see how we got from A to B here. I cannot figure out how to format the address of the device I am trying to communicate with, nor can I find any documentation spelling it out. The only documentation on the address parameter says:
Address of the I2C slave. This is a 7bit value and it
should not contain the data direction bit, i.e. the
decimal value passed should be always less than 128
This is confusing since the data direction bit is usually the LSB.
I was updating my question to clarify what the address should be, and a coincidence in the editor caused the answer to smack me in the face.
By "should not be included" they do not mean that the bit should be zero, but rather that it should be completely nonexistent. To them this means shifting the address bits down by one to remove it as the LSB. It also implies that the MSB should always be zero, even though that is not explicitly defined anywhere.
The options are:
Indexed addressing
Base register addressing
PC relative addressing
Indexed and Base register addressing both work by adding the content of their respective register (index / base register) to the address given in the instruction's address field.
[Though the subtle difference is that the index register holds an "index" into the array, while the base register holds the "base" address of the array.]
To make the code relocatable, only the content of the Base / Index register needs to be changed, but that too can only be accomplished by executing some additional code.
PC relative mode just references the other instructions relative to the current PC contents.
So, is option 3 the best answer?
Thanks!!
There are multiple parts:
- Data: this is about loading values that are in a binary
- Code: this is about jumps and calls
Data doesn't matter much. Code is more interesting.
In the absolute case you call a fixed address. You write the code so that address is right most of the time; if it is not, the loader patches it right there in the code.
In the relative case, you call a stub, which then calls the real target. So there is one extra hop.
In the end, option 3 is more flexible but might have a small runtime overhead.
Another reason why using PC-relative addressing all the time can have extra costs is that the PC-relative offset might not reach the whole address space.
Absolute addressing can be a small optimization, but it also costs more at startup when things go wrong and the loader has to patch the code.
Introduction:
We have an application in which Linux running on an ARM accepts data from an external processor, which DMAs the data into the ARM's memory space. The ARM then needs to access that data from user-mode code.
The range of addresses must be physically contiguous as the DMA engine in the external processor does not support scatter/gather. This memory range is initially allocated from the ARM kernel via a __get_free_pages(GFP_KERNEL | __GFP_DMA,order) call as this assures us that the memory allocated will be physically contiguous. Then a virt_to_phys() call on the returned pointer gives us the physical address that is then provided to the external processor at the beginning of the process.
This physical address is also known to the Linux user mode code, which uses it (in user mode) to call the mmap() API to get a user mode pointer to this memory area. Our Linux kernel driver then sees a corresponding call to the mmap routine in the driver's file_operations structure. The driver then retains the vm_area_struct "vma" pointer that is passed to it in that call for use later.
When the user mode code receives a signal that new data has been DMA'd to this memory address, it needs to access it from user mode via the user mode pointer obtained from the call to mmap() mentioned above. Before the user mode code does this, of course, the cache corresponding to this memory range must be flushed. To accomplish this flush, the user mode code calls the driver (via an ioctl), and in kernel mode a call to flush_cache_range() is made:
flush_cache_range(vma, start, end);
The arguments passed to the call above are the "vma" which the driver had captured when its mmap routine was called and "start" and "end" are the user mode addresses passed into the driver from the user mode code in a structure provided to the ioctl() call.
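For context, a heavily condensed sketch of the mmap and ioctl paths being described (my own reconstruction under the assumptions above, with a hypothetical flush_req structure, not the actual driver code):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/uaccess.h>
    #include <asm/cacheflush.h>

    /* Hypothetical layout of the structure passed through the ioctl. */
    struct flush_req {
        unsigned long start;   /* user-mode start address */
        unsigned long end;     /* user-mode end address   */
    };

    static struct vm_area_struct *saved_vma;

    static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long size = vma->vm_end - vma->vm_start;

        saved_vma = vma;   /* retained for the later flush, as described above */

        /* Map the physically contiguous DMA buffer into the caller's space;
           the physical page frame number arrives via the mmap offset. */
        return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                               size, vma->vm_page_prot);
    }

    static long mydrv_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        struct flush_req req;

        if (copy_from_user(&req, (void __user *)arg, sizeof(req)))
            return -EFAULT;

        /* Flush the cache over the user-mode mapping before reading DMA'd data. */
        flush_cache_range(saved_vma, req.start, req.end);
        return 0;
    }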
The Problem:
What we see is that the buffer does not seem to be getting flushed, as we are seeing what appears to be stale data when accesses are made from user mode. As a test, rather than getting the user mode address from an mmap() call to our driver, we instead call the mmap() API on /dev/mem. In this case we get uncached access to the buffer (no flushing needed), and then everything works perfectly.
Our kernel version is 3.8.3 and it is running on an ARM9. Is there a logical error in the approach we are attempting?
Thanks!
I have a few questions, after which I might be able to answer:
1) How do you use a "PHYSICAL" address in your mmap() call? mmap() should have nothing to do with physical addresses.
2) What exactly do you do to get user virtual addresses in your driver?
3) How do you map these user virtual addresses to physical addresses, and where do you do it?
4) Since you preallocate using get_free_pages(), do you map it into kernel space using ioremap_cache()?