I have trouble understanding what happens when calling &*pointer
int j=8;
int* p = &j;
When I print in my compiler I get the following
j = 8 , &j = 00EBFEAC p = 00EBFEAC , *p = 8 , &p = 00EBFEA0
&*p= 00EBFEAC
cout << &*p gives &*p = 00EBFEAC which is p itself
& and * have the same operator precedence. I thought &*p would translate to &(*p) --> &(8) and expected a compiler error.
How does the compiler deduce this result?
You are stumbling over something interesting: variables, strictly speaking, are not values, but refer to values. 8 is an integer value. After int i=8, i refers to an integer value. The difference is that i could later refer to a different value.
In order to obtain the value, i must be dereferenced, i.e. the value stored in the memory location which i stands for must be obtained. This dereferencing is performed implicitly in C whenever a value of the type which the variable references is requested: i=8; printf("%d", i) results in the same output as printf("%d", 8).
That is funny because variables are essentially aliases for addresses, while numeric literals are aliases for immediate values. In C these very different things are syntactically treated identically. A variable can stand in for a literal in an expression and will be automatically dereferenced.
The resulting machine code makes that very clear. Consider the two functions below. Both have the same return type, int. But f has a variable in the return statement which must be dereferenced so that its value can be returned (in this case, it is returned in a register):
int i = 1;
int g(){ return 1; } // literal
int f(){ return i; } // variable
If we ignore the housekeeping code, each function translates into a single machine instruction. The corresponding assembly (from icc) for g is:
movl $1, %eax #5.17
That's pretty straightforward: put 1 in the register eax.
By contrast, f translates to
movl i(%rip), %eax #4.17
This puts the value at the address in register rip plus offset i in the register eax. It's refreshing to see how a variable name is just an address (offset) alias to the compiler.
The necessary dereferencing should now be obvious. It would be more logical to write return *i in order to return 1, and write return i only for functions which return references — or pointers.
In your example it is indeed somewhat inconsistent that
int j=8;
int* p = &j;
printf("%d\n", *p);
prints 8 (i.e., p is effectively dereferenced twice), while &(*p) yields the address of the object pointed to by p (which is the address value stored in p) and is not interpreted as &(8). The reason is that in the context of the address-of operator, a variable (or, in this case, the lvalue obtained by dereferencing p) is not implicitly dereferenced the way it is in other contexts.
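A quick check makes this concrete (a minimal sketch of my own, not from the question): &*p compares equal to p itself, and *&j is just j:
#include <assert.h>
#include <stdio.h>

int main(void) {
    int j = 8;
    int *p = &j;
    assert(&*p == p);   /* &(*p) is simply the address stored in p */
    assert(*&j == j);   /* *(&j) is simply the value of j          */
    printf("%p %p\n", (void *)p, (void *)&*p);   /* prints the same address twice */
    return 0;
}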
When the attempt was made to create a logical, orthogonal language — Algol 68 —, int i = 8 indeed declared an alias for 8. In order to declare a variable, the long form would have been ref int m = loc int := 3. Consequently, what we call a pointer or reference would have had the type ref ref int, because two dereferences are actually needed to obtain an integer value.
j is an int with value 8 and is stored in memory at address 00EBFEAC.
&j gives the memory address of variable j (00EBFEAC).
int* p = &j: Here you define a variable p of type int*, i.e. it holds an address in memory where an int can be found. You assign it &j, the address of an int, which makes sense.
*p gives you the value associated with the address stored in p.
The address stored in p points to an int, so *p gives you the value of that int, namely 8.
&p is the address at which the variable p itself is stored (00EBFEA0).
&*p gives you the address of the object that the address stored in p points to, which is indeed the value of p again: &(*p) -> &(j) -> 00EBFEAC
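Putting the whole walk-through into one small program (a sketch of my own; the actual addresses will differ from run to run):
#include <stdio.h>

int main(void) {
    int j = 8;
    int *p = &j;
    printf("j = %d , &j = %p\n", j, (void *)&j);
    printf("p = %p , *p = %d , &p = %p\n", (void *)p, *p, (void *)&p);
    printf("&*p = %p\n", (void *)&*p);   /* same address as p and &j */
    return 0;
}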
Think about &j itself (or even &(j)). According to your logic, shouldn't j evaluate to 8 and result in &8, as well? Dereferencing a pointer or evaluating a variable results in an lvalue, which is a value that you can assign to or take the address of.
The L in "lvalue" refers to the left in "left-hand side of an assignment", such as j = 10 or *p = 12. There are also rvalues, such as j + 10 or 8, which obviously cannot be assigned to.
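For illustration, here is a minimal sketch of my own showing which of these expressions a C compiler accepts:
void demo(void) {
    int j = 8;
    int *p = &j;

    j = 10;          /* ok: j is an lvalue, it designates an object   */
    *p = 12;         /* ok: *p is an lvalue as well                   */
    int *q = &j;     /* ok: an lvalue has an address you can take     */
    (void)q;
    /* 8 = j;              error: 8 is an rvalue, it cannot be assigned to */
    /* int *r = &(j + 10); error: j + 10 is an rvalue, it has no address   */
}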
That's just a basic explanation. In C++ there's a lot more to it, with various classes of values (but that thread might be too advanced for your current needs).
Related
I'm verifying a C program that uses arrays to store heterogeneous data - in particular, the program uses arrays to implement cons cells, where the first element of the array is an integer value, and the second element is a pointer to the next cons cell.
For example, the free operation for this list would be:
void listfree(void * x) {
    if ((x == 0)) {
        return;
    } else {
        void * n = *((void **)x + 1);
        listfree(n);
        free(x);
        return;
    }
}
Note: Not shown here, but other code sections will read the values of the array and treat it as an integer.
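For context, a cons cell in this representation might be built by a helper along these lines (an illustrative sketch of my own with a hypothetical cons function, not code from the program being verified):
#include <stdlib.h>

/* Hypothetical constructor: cell[0] holds the integer payload,
   cell[1] holds the pointer to the next cell (or NULL). */
void *cons(int value, void *next) {
    void **cell = malloc(2 * sizeof(void *));
    *((int *)cell) = value;   /* first slot is read as an int elsewhere */
    cell[1] = next;           /* second slot is the next pointer        */
    return cell;
}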
While I understand that the natural way to express this would be as some kind of struct, the program itself is written using an array, and I can't change this.
How should I specify the structure of the memory in VST?
I've defined an lseg predicate as follows:
Fixpoint lseg (x: val) (s: (list val)) (self_card: lseg_card) : mpred :=
  match self_card with
  | lseg_card_0 => !!(x = nullval) && !!(s = []) && emp
  | lseg_card_1 _alpha_513 =>
      EX v : Z,
      EX s1 : (list val),
      EX nxt : val,
      !!(~ (x = nullval)) &&
      !!(s = ([(Vint (Int.repr v))] ++ s1)) &&
      (data_at Tsh (tarray tint 2) [(Vint (Int.repr v)); nxt] x) *
      (lseg nxt s1 _alpha_513)
  end.
However, I run into trouble when trying to evaluate void *n = *(void **)x; presumably because the specification states that the memory contains an array of ints, not pointers.
The issue is probably as follows, and can almost be solved as follows.
The C semantics permit casting an integer (of the right size) to a pointer, and vice versa, as long as you don't actually do any pointer operations to an integer value, or vice versa. Very likely your C program obeys those rules. But the type system of Verifiable C tries to enforce that local variables (and array elements, etc.) of integer type will never contain pointer values, and vice versa (except the special integer value 0, which is NULL).
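For example, this kind of round trip is what those C semantics allow (a minimal sketch of my own, using uintptr_t as the "integer of the right size"):
#include <stdint.h>

void demo(void) {
    int x = 42;
    void *p = &x;
    uintptr_t n = (uintptr_t)p;   /* pointer stored in an integer of the right size */
    void *q = (void *)n;          /* cast back: q compares equal to p               */
    (void)q;
    /* What the stricter Verifiable C typing rules out is keeping a pointer in an
       int-typed variable or array element (or an integer in a pointer-typed one),
       not this kind of round trip. */
}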
However, Verifiable C does support a (proved-foundationally-sound) workaround to this stricter enforcement:
typedef void * int_or_ptr
#ifdef COMPCERT
__attribute((aligned(_Alignof(void*))))
#endif
;
That is: the int_or_ptr type is void*, but with the attribute "align this as void*". So it's semantically identical to void*, but the redundant attribute is a hint to the VST type system to be less restrictive about C type enforcement.
So, when I say "can almost be solved", I'm asking: Can you modify the C program to use an array of "void* aligned as void*" ?
If so, then you can proceed. Your VST verification should use int_or_ptr_type, which is a definition of type Ctypes.type provided by VST-Floyd, when referring to the C-language type of these array elements, or of local variables that these elements are loaded into.
Unfortunately, int_or_ptr_type is not documented in the reference manual (VC.pdf), which is an omission that should be corrected. You can look at progs/int_or_ptr.c and progs/verif_int_or_ptr.v, but these do much more than you want or need: they axiomatize operators that distinguish odd integers from aligned pointers, which is not defined by C11 (but is consistent with C11; otherwise the OCaml garbage collector could never work). That is, those axiomatized external functions are consistent with CompCert, gcc, and clang; but you won't need any of them, because the only operations you're doing on int_or_ptr are the perfectly legal "comparison with NULL" and "cast to integer" or "cast to struct foo *".
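To make that concrete, here is a hedged sketch of my own (not verified code) of what the list-freeing function from the question might look like after switching the element type to int_or_ptr:
#include <stdlib.h>

typedef void *int_or_ptr
#ifdef COMPCERT
    __attribute((aligned(_Alignof(void*))))
#endif
;

/* A cons cell becomes a 2-element array of int_or_ptr:
   cell[0] carries the integer payload, cell[1] the next pointer (or NULL). */
void listfree(int_or_ptr *x) {
    if (x == 0) {
        return;
    } else {
        int_or_ptr *n = (int_or_ptr *)x[1];
        listfree(n);
        free(x);
        return;
    }
}
In the lseg predicate, the data_at would then presumably describe a (tarray int_or_ptr_type 2) rather than a (tarray tint 2).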
In C & Objective C, we used to dereference a pointer and get the value as follows:
p->a = 1
or int x = p->a
But I can't find an equivalent in Swift. I have a return value of type UnsafePointer<AudioStreamBasicDescription>? whose member values I need to read.
You use the pointee property on your UnsafePointer to access the memory it points to. So your C example would read as let x = p.pointee.a.
Documentation for PyNumber_Float (here) doesn't specify what happens if you pass in a PyObject* that already points to a float.
e.g.
PyObject* l = PyLong_FromLong( 101 );
PyObject* outA = PyNumber_Float(l);
outA will point to a newly created float PyObject
(or if there already exists one with that value, I think it will point to that and just increment the reference counter)
However,
PyObject* f = PyFloat_FromDouble( 1.1 );
PyObject* outB = PyNumber_Float(f);
What happens here?
Does it simply return the same pointer?
Does it first increment the reference count and then return the same pointer?
Or does it return a pointer to a new PyObject?
Is the behaviour guaranteed to be identical for the equivalent C-API calls for generating other primitives, such as Long, String, List, Dict, etc?
Finally, should the documentation clarify this situation? Would it be reasonable to file a doc-bug?
Thanks to haypo on the dev IRC channel, the following test shows that it returns the same object, with the reference counter incremented:
>>> x=1.1
>>> y=float(x)
>>> y is x, sys.getrefcount(x)-1, sys.getrefcount(y)-1
(True, 2, 2)
>>> y+=1
>>> y is x, sys.getrefcount(x)-1, sys.getrefcount(y)-1
(False, 1, 1)
Note: explanation of why refcount is one-too-high here
Note: x is y compares the memory address, "x is y" is the same as "id(x) == id(y)"
Of course it is possible that some assignment-operator optimisation is bypassing the application of float()
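To rule that out, the same check can be done directly at the C level (a minimal sketch of my own; it observes CPython's behavior rather than a documented guarantee):
#include <Python.h>
#include <stdio.h>

int main(void) {
    Py_Initialize();
    PyObject *f = PyFloat_FromDouble(1.1);
    Py_ssize_t before = Py_REFCNT(f);
    PyObject *out = PyNumber_Float(f);
    printf("same object: %d\n", out == f);                    /* expected: 1  */
    printf("refcount: %zd -> %zd\n", before, Py_REFCNT(f));   /* expected: +1 */
    Py_DECREF(out);
    Py_DECREF(f);
    Py_Finalize();
    return 0;
}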
I'm trying to write a DAC macro that gets as input the name of a list of bits and its size, and the name of an integer variable. Every element of the list should be constrained to be equal to the corresponding bit of the variable (both have the same length), i.e. (for a list named list_of_bits, a variable named foo, and length 4) the macro's output should be:
keep list_of_bits[0] == foo[0:0];
keep list_of_bits[1] == foo[1:1];
keep list_of_bits[2] == foo[2:2];
keep list_of_bits[3] == foo[3:3];
My macro's code is:
define <keep_all_bits'exp> "keep_all_bits <list_size'exp> <num'name> <list_name'name>" as computed {
    for i from 0 to (<list_size'exp> - 1) do {
        result = appendf("%s keep %s[%d] == %s[%d:%d];", result, <list_name'name>, index, <num'name>, index, index);
    };
};
The error I get:
*** Error: The type of '<list_size'exp>' is 'string', while expecting a
numeric type
...
for i from 0 to (<list_size'exp> - 1) do {
Why does it interpret <list_size'exp> as a string?
Thank you for your help
All macro arguments in DAC macros are considered strings (except repetitions, which are considered lists of strings).
The point is that a macro treats its input purely syntactically, and it has no semantic information about the arguments. For example, in case of an expression (<exp>) the macro is unable to actually evaluate the expression and compute its value at compilation time, or even to figure out its type. This information is figured out at later compilation phases.
In your case, I would assume that the size is always a constant. So, first of all, you can use <num> instead of <exp> for that macro argument, and use as_a() to convert it to the actual number. The difference between <exp> and <num> is that <num> allows only constant numbers and not any expressions; but it's still treated as a string inside the macro.
Another important point: your macro itself should be a <struct_member> macro rather than an <exp> macro, because this construct itself is a struct member (namely, a constraint) and not an expression.
And one more thing: to ensure that the list size will be exactly as needed, add another constraint for the list size.
So, the improved macro can look like this:
define <keep_all_bits'struct_member> "keep_all_bits <list_size'num> <num'name> <list_name'name>" as computed {
    result = appendf("keep %s.size() == %s;", <list_name'name>, <list_size'num>);
    for i from 0 to (<list_size'num>.as_a(int) - 1) do {
        result = appendf("%s keep %s[%d] == %s[%d:%d];", result, <list_name'name>, i, <num'name>, i, i);
    };
};
Why not write it without a macro?
keep for each in list_of_bits {
    it == foo[index:index];
};
This should do the same, but it looks more readable and is easier to debug; also, the generation engine might take advantage of the more concise constraint.
How do I detect whether a variable is float, double, int, etc.?
Thanks.
Objective-C is not like PHP or other interpreted languages where the 'type' of a variable can change according to how you use it. All variables are set to a fixed type when they are declared and this cannot be changed. Since the type is defined at compile time, there is no need to query the type of a variable at run-time.
For example:
float var1; // var1 is a float and can't be any other type
int var2; // var2 is an int and can't be any other type
NSString* var3; // var3 is a pointer to a NSString object and can't be any other type
The type is specified before the variable name, also in functions:
- (void)initWithValue:(float)param1 andName:(NSString*)param2
{
    // param1 is a float
    // param2 is a pointer to a NSString object
}
So as you can see, the type is fixed when the variable is declared (also you will notice that all variables must be declared, i.e. you cannot just suddenly start using a new variable name unless you've declared it first).
In a compiled C-based language (outside of debug mode with symbols), you can't actually "detect" any variable unless you know its type, or maybe guess a type and get lucky.
So normally, you know and declare the type before any variable reference.
Without type information, the best you can do might be to dereference a pointer to random unknown bits/bytes in memory, and hopefully not crash on an illegal memory reference.
Added comment:
If you know the type is a legal Objective C object, then you might be able to query the runtime for additional information about the class, etc. But not for ints, doubles, etc.
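For example (a sketch of my own, assuming an Objective-C object and the Objective-C runtime headers), the runtime's C API can report an object's class at run time; this works only for objects, not for plain ints or doubles:
#include <objc/runtime.h>
#include <stdio.h>

/* Works only for Objective-C objects (id), not for plain C ints or doubles. */
void printClassName(id obj) {
    Class cls = object_getClass(obj);
    printf("class: %s\n", class_getName(cls));
}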
Use sizeof. For a double it will be 8; for a float it is 4.
double x = 3.1415;
float y = 3.1415f;
printf("size of x is %zu\n", sizeof(x));   /* sizeof yields a size_t, so use %zu */
printf("size of y is %zu\n", sizeof(y));