How to get ioctl command value of the given driver? - linux-device-driver

How to get the ioctl command value (integer value) of a given driver, which is not part of the kernel source tree.
Example
#define ioctl_cmd _IOW('a', 1, struct example*)
I need an integer value of the command ioctl_cmd without actually modifying the driver.

The _IOW(type,nr,size) macro is defined for userspace code by #include <linux/ioctl.h>. The actual source of the macro is in "/usr/include/asm-generic/ioctl.h".
One way to get the integer value of the ioctl command value is to print it to the terminal in a C program:
#include <stdio.h>
#include <linux/ioctl.h>
#include "your_driver_ioctls.h" // defines `ioctl_cmd`
int main(void)
{
printf("ioctl_cmd = %u (0x%x)\n", ioctl_cmd, ioctl_cmd);
}
Alternatively, you can look at the definition of _IOW in the source to see how the ioctl command code is composed:
Bits 31 to 30 indicate the direction of transfer of the memory pointed to by the optional third argument of the ioctl() call:
_IOC_NONE = 0 (no direction)
_IOC_WRITE = 1 (userland is writing to kernel)
_IOC_READ = 2 (userland is reading from kernel)
_IOC_WRITE | _IOC_READ = 3 (userland is writing to and reading from kernel)
The _IOW(type,nr,size) macro sets the direction to _IOC_WRITE.
Bits 29 to 16 indicate the 14-bit size of the memory pointed to by the optional third argument of the ioctl() call. The _IOW(type,nr,size) macro sets this to the size of the type specified in the third parameter of the macro call (sizeof(size)).
Bits 15 to 8 indicate the 8-bit "type number" of the ioctl command code. Historically, a single ASCII character value was used for the type number, but any unsigned number up to 255 can actually be used. All the ioctl command codes defined for a device generally use the same type number. The _IOW(type,nr,size) macro sets this to the first parameter of the macro call (type).
Bits 7 to 0 indicate the 8-bit "function number" of the ioctl command code. The _IOW(type,nr,size) macro sets this to the second parameter of the macro call (nr).
Note that the above way of defining ioctl command codes is mostly just a convention. In particular, earlier subsystems such as TTY use a simpler scheme consisting of just a "type number" and a "function number".
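As a sketch of the layout described above, the same command value can also be composed by hand with the generic _IOC() helper from <linux/ioctl.h> (the struct example definition here is just a placeholder for the driver's real struct):
#include <stdio.h>
#include <linux/ioctl.h>   /* _IOW(), _IOC(), _IOC_WRITE */

struct example { int a; };  /* placeholder; use the driver's actual struct */

int main(void)
{
    unsigned int by_macro = _IOW('a', 1, struct example);
    /* direction | size | type | number, packed as described above */
    unsigned int by_hand  = _IOC(_IOC_WRITE, 'a', 1, sizeof(struct example));

    printf("by macro: 0x%x, by hand: 0x%x\n", by_macro, by_hand);
    return 0;
}
Both lines print the same value, which makes the bit layout easy to verify.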
Your #define ioctl_cmd _IOW('a', 1, struct example*) is unusual because it says that the optional third argument of the ioctl() call points to a struct example* and the size of that would be 4 or 8 (depending on the size of pointers in userspace). More conventionally, it would be defined as _IOW('a', 1, struct example).

sprintf expects argument of type char * - but type IS char

That's the code:
void bleAdvData(char *advData, uint8_t size){
char command[18+size];
uint8_t commandUint[18+size];
sprintf(command, "AT+BLEADVDATA=\"%s\"\r\n", *advData);
Warning in sprintf line:
Argument %s expects argument of type "char *", but argument 3 has type int
Why?
And here is what I have to do:
I want to transfer a string (advData) with the length size into the function, to get a command string like:
AT+BLEADVDATA="advData"\r\n
Your variable advData is defined as char * in the argument list. This is a pointer to an address where character data is stored. However, in your sprintf() you use *advData, i.e. the character that advData points to, not the pointer itself.
Take the * off in the sprintf(), and all should be fine.
To clarify: char *advData on the first line makes advData a char *.
But then you added an asterisk to advData so you have * (char *advData).
So you want this:
sprintf(command, "AT+BLEADVDATA=\"%s\"\r\n", advData);
That extra asterisk "dereferences" advData, so you're now trying to pass in the first character of the string.
The compiler then complains, since that's not a valid string; if you ran this it would either crash or, on the ESP32, give you gibberish.
I recommend using the Warnings as Errors option on the ESP32. It's very rare that a warning isn't meaningful, and the ESP32 doesn't crash as readily as a program running on a modern PC OS.
That leads to really hard-to-find bugs where things randomly work or crash with no clear pattern.
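For completeness, a minimal sketch of the corrected function, keeping the names and signature from the question. snprintf is used instead of sprintf as a safety tweak, and the buffer is sized a little more generously here so the fixed text, the quotes, the trailing "\r\n" and the terminator are all sure to fit:
#include <stdio.h>
#include <stdint.h>

void bleAdvData(char *advData, uint8_t size)
{
    char command[20 + size];  /* fixed text + quotes + "\r\n" + NUL + payload */
    snprintf(command, sizeof command,
             "AT+BLEADVDATA=\"%s\"\r\n", advData);  /* pass the pointer, not *advData */
    /* ... send `command` to the module here ... */
}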

why printf('c') cause Segmentation fault?

This is my test code.
#include <stdio.h>
int main() {
printf('c');
return 0;
}
OS: Ubuntu 16.04
Compiler version: gcc 5.3
Running the code above causes a Segmentation fault at "movdqu (%rdi),%xmm0".
I have googled it, but I want to know why it causes a segmentation fault.
Because you are trying to print a char, not a string. The first argument of the printf() function is a format string.
Strings are quoted in "", chars in ''.
I found the error when using GDB to debug the program.
SHORT:
This is the prototype of the printf function in C:
int printf ( const char * format, ... );
You should pass a C string (like "this is my message") instead of a char.
DETAILED:
This is the prototype of the printf function in C:
int printf ( const char * format, ... );
This means that the first argument should be a pointer to a null-terminated array of char. printf reads the value of the first argument, which is the address of a C string in memory, then goes to that address and reads byte by byte until it reaches a null character. This code causes a segmentation fault under two conditions:
The address pointed to by the first argument of printf is outside the memory mapped to your program.
printf can't find any null character, starting from the specified address, before reaching the end of your program's memory boundary.
Please be careful about using non-pointer variables in place of pointers. This can cause your program to crash for no apparent reason.
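For reference, a minimal corrected version of the test program; either form avoids passing a char where a format string is expected:
#include <stdio.h>

int main(void)
{
    printf("c\n");         /* a string literal is a valid format string */
    printf("%c\n", 'c');   /* or print a single char through a conversion specifier */
    return 0;
}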

How to truncate a 2's complement output

I have data written into a short data type. The data written is in 2's complement form.
Now when I try to print the data using %04x, data with MSB=0 prints fine: for example, if data=740, the print I get is 0740.
But when the MSB=1, I am unable to get a proper print. For example, if data=842, the print I get is fffff842.
I want the data truncated to 4 bytes, so the expected output is f842.
Either declare your data as a type which is 16 bits long, or make sure the printing function uses the right format for a 16-bit value. Or use your current type, but do a bitwise AND with 0xffff. What you can do really depends on the language you're using.
But whichever way you go, check your assumptions again. There seem to be a few issues in your question:
2s-complement applies to signed numbers only. There are no negative numbers in your question.
Assuming you mean C's short - it doesn't have to be 16 bits long.
"I get is fffff842 I want the data truncated to 4 bytes" - fffff842 is 4 bytes long. f842 is 2 bytes long.
A 2-byte-long value of 842 does not have the MSB set.
I'm assuming C (or possibly C++) as the language here.
Because of the default argument promotions involved when calling a variable-argument function (such as printf), your use of a short results in an integer promotion, for which the standard states: "If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int".
A short is converted to an int by means of sign-extension, and 0xf842 sign-extended to 32 bits is 0xfffff842.
You can use a bitwise AND to mask off the most significant word:
printf("%04x", data & 0xffff);
You could also add the h length specifier to state that you only want to print an (unsigned) short worth of bits from an int:
printf("%04hx", data);

Why is this macro replaced as 20 instead of 10?

1. #define NUM 10
2. #define FOO NUM
3. #undef NUM
4. #define NUM 20
5.
6. FOO
When I only run the preprocessor, the output file contains 20.
However, from what I understand, the preprocessor simply does text replacement. So this is what I think is happening (which is obviously wrong, but I don't know why):
NUM is defined as 10.
Therefore, in line 2, NUM is replaced as 10. So now we have "#define FOO 10".
NUM is undefined.
NUM is redefined and now is 20.
FOO is replaced according to line 2, which was before line 4's redefinition, and is 10.
So I think the output should be 10 instead of 20. Can anyone explain where it went wrong?
The text replacement is done where the macro is used, not where you wrote the #define. At the point you use FOO, it replaces FOO with NUM and NUM is currently defined to be 20.
In the interests of collecting all the relevant specifications from the standards, I extracted this information from a comment thread, and added C++ section numbers, based on draft N4527 (the normative text is identical in the two standards). The standard(s) are absolutely clear on the subject.
#define preprocessor directives do not undergo macro replacement.
(C11 §6.10¶7; C++ §16[cpp] ¶6): The preprocessing tokens within a preprocessing directive are not subject to macro expansion unless otherwise stated.
After a macro is replaced with its replacement text, the new text is rescanned. Preprocessor tokens in the replacement are expanded as macros if there is an active macro definition for the token at that point in the program.
(C11 §6.10.3¶9; C++ §16.3[cpp.replace] ¶9) A preprocessing directive of the form
# define identifier replacement-list new-line
defines an object-like macro that causes each subsequent instance of the macro name to be replaced by the replacement list of preprocessing tokens that constitute the remainder of the directive. The replacement list is then rescanned for more macro names as specified below.
A macro definition is active from the line following the #define until an #undef for the macro name, or the end of the file.
(C11 §6.10.3.5¶1; C++ §16.3.5[cpp.scope] ¶1) A macro definition lasts (independent of block structure) until a corresponding #undef directive is encountered or (if none is encountered) until the end of the preprocessing translation unit. Macro definitions have no significance after translation phase 4.
If we look at the program:
#define NUM 10
#define FOO NUM
#undef NUM
#define NUM 20
FOO
we see that the macro definition of NUM in line 1 lasts exactly to line 3. There is no replaceable text in those lines, so the definition is never used; consequently, the program is effectively the same as:
#define FOO NUM
#define NUM 20
FOO
In this program, at the third line, there is an active definition for FOO, with replacement list NUM, and for NUM, with replacement list 20. The FOO is replaced with its replacement list, making it NUM, and then that is once again scanned for macros, resulting in NUM being replaced with its replacement list 20. That replacement is again rescanned, but there are no defined macros, so the end result is that the token 20 is left for processing in translation phase 5.
In:
FOO
the preprocessor will replace it with NUM, then it will replace NUM with what it is currently defined as, which is 20.
Those initial four lines are equivalent to:
#define FOO NUM
#define NUM 20
The C11 standard says (and other versions of C, and C++, say similarly):
A preprocessing directive of the form # define identifier replacement-list new-line defines an object-like macro that causes each subsequent instance of the macro name to be replaced by the replacement list of preprocessing tokens that constitute the remainder of the directive. The replacement list is then rescanned for more macro names as specified below.
However it also says in another part (thanks to rici for pointing this out).
The preprocessing tokens within a preprocessing directive are not subject to macro expansion unless otherwise stated.
So a subsequent instance of the macro name which is found inside another #define directive is actually not replaced.
Your line #define FOO NUM defines that when the token FOO is later found (outside of another #define directive!), it will be replaced by the token NUM .
After a token is replaced, rescanning occurs, and if NUM is itself a macro, then NUM is replaced at that point. (And if whatever NUM expands to contains macros, then that gets expanded, and so on.)
So your sequence of steps is actually:
NUM defined as 10
FOO defined as NUM
NUM undefined and re-defined as 20
FOO expands to NUM
(rescan) NUM expands to 20
This behaviour can be seen in another common preprocessor trick, to turn the defined value of a macro into a string:
#define STR(X) #X
#define STR_MACRO(X) STR(X)
#define NUM 10
puts( STR_MACRO(NUM) ); // output: 10
If we had written puts( STR(NUM) ) then the output would be NUM.
The output of 10 is possible because, as before, the second #define here does not actually expand out STR. So the sequence of steps in this code is:
STR(X) defined as #X
STR_MACRO(X) defined as STR(X)
NUM defined as 10
STR_MACRO and NUM are both expanded; the result is puts( STR(10) );
(Rescan result of last expansion) STR(10) is expanded to "10"
(Rescan result of last expansion) No further expansion possible.
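Putting both examples from this answer into one small compilable sketch makes the expansion order visible (the printed values follow directly from the rules quoted above):
#include <stdio.h>

#define STR(X) #X
#define STR_MACRO(X) STR(X)

#define NUM 10
#define FOO NUM
#undef NUM
#define NUM 20

int main(void)
{
    printf("%d\n", FOO);       /* 20: FOO -> NUM -> 20 at the point of use */
    puts(STR_MACRO(NUM));      /* "20": NUM expands before being stringified */
    puts(STR(NUM));            /* "NUM": #X suppresses expansion of the argument */
    return 0;
}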

Variadic macros with 0 arguments in C99

I have some debugging code that looks like the following:
#define STRINGIFY(x) #x
#define TOSTRING(x) STRINGIFY(x)
#define AT __FILE__ ":" TOSTRING(__LINE__)
void __my_error(const char*loc, const char *fmt, ...);
#define my_error(fmt, ...) __my_error(AT, fmt, ##__VA_ARGS__)
The last macro is used so I can insert the location where the error occurred into the debug output.
my_error("Uh oh!");
I would like my code to be C99, but I find that when this compiles, I get the following error:
error: ISO C99 requires rest arguments to be used
I know I can solve this by changing the call to
my_error("Uh oh!", NULL);
But is there any way to make this look less ugly? Thanks!
I see two solutions to this problem. (Three if you count 'stick with gcc').
Extra special case macro
Add a new macro for when you want to print a fixed string.
#define my_errorf(str) my_error(str, NULL)
Pro: Minimum amount of extra code.
Con: It's easy to use the wrong macro (but at least you notice this at compile time).
Put fmt inside the '...'
Variadic macros can have __VA_ARGS__ as their only parameter (unlike variadic functions, which require at least one named parameter). So you can put the fmt argument inside __VA_ARGS__ and change your function.
void __my_error(const char *loc, ...);
#define my_error(...) __my_error(AT, __VA_ARGS__)
Pro: One syntax/macro for all error messages.
Con: Requires rewriting of your __my_error function, which might not be possible.
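A minimal sketch of this second approach, keeping the names from the question; the stderr formatting inside __my_error is just an illustration of how the function can pull the format string back out of the variable arguments:
#include <stdarg.h>
#include <stdio.h>

#define STRINGIFY(x) #x
#define TOSTRING(x) STRINGIFY(x)
#define AT __FILE__ ":" TOSTRING(__LINE__)

void __my_error(const char *loc, ...);
#define my_error(...) __my_error(AT, __VA_ARGS__)

void __my_error(const char *loc, ...)
{
    va_list ap;
    va_start(ap, loc);
    const char *fmt = va_arg(ap, const char *); /* first variadic argument is the format */
    fprintf(stderr, "%s: ", loc);
    vfprintf(stderr, fmt, ap);
    fputc('\n', stderr);
    va_end(ap);
}

int main(void)
{
    my_error("Uh oh!");           /* valid C99: no trailing-comma problem */
    my_error("value = %d", 42);
    return 0;
}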