How does PyNumber_Float handle an argument that is already a float?

The documentation for PyNumber_Float doesn't specify what happens if you pass in a PyObject* that already points to a float.
e.g.
PyObject* l = PyLong_FromLong( 101 );
PyObject* outA = PyNumber_Float(l);
outA will point to a newly created float PyObject
(or if there already exists one with that value, I think it will point to that and just increment the reference counter)
However,
PyObject* f = PyFloat_FromDouble( 1.1 );
PyObject* outB = PyNumber_Float(f);
What happens here?
Does it simply return the same pointer?
Does it first increment the reference count and then return the same pointer?
Or does it return a pointer to a new PyObject?
Is the behaviour guaranteed to be identical for the equivalent C-API calls for generating other primitives, such as Long, String, List, Dict, etc?
Finally, should the documentation clarify this situation? Would it be reasonable to file a doc-bug?

Thanks to haypo on the dev IRC channel, the following test shows that it returns the same object, with the reference counter incremented:
>>> import sys
>>> x=1.1
>>> y=float(x)
>>> y is x, sys.getrefcount(x)-1, sys.getrefcount(y)-1
(True, 2, 2)
>>> y+=1
>>> y is x, sys.getrefcount(x)-1, sys.getrefcount(y)-1
(False, 1, 1)
Note: explanation of why refcount is one-too-high here
Note: "x is y" compares object identity (memory address); it is equivalent to "id(x) == id(y)".
Of course it is possible that some assignment-operator optimisation is bypassing the application of float()
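For completeness, here is a minimal embedded-CPython sketch of the same check at the C level (an illustration only, assuming it is built and linked against the CPython headers and library); if the answer above is right, the pointers should compare equal and the refcount of f should read 2:
#include <Python.h>
#include <stdio.h>

int main(void)
{
    Py_Initialize();

    PyObject *f = PyFloat_FromDouble(1.1);   /* refcount of f is now 1 */
    PyObject *outB = PyNumber_Float(f);      /* for an exact float, expected to return f itself */

    printf("same pointer: %s\n", (f == outB) ? "yes" : "no");
    printf("refcount of f: %zd\n", (Py_ssize_t)Py_REFCNT(f));  /* expected: 2 */

    Py_DECREF(outB);
    Py_DECREF(f);
    Py_Finalize();
    return 0;
}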

How does dereference work in C++?

I have trouble understanding what happens when calling &*pointer
int j=8;
int* p = &j;
When I print these values I get the following:
j = 8 , &j = 00EBFEAC p = 00EBFEAC , *p = 8 , &p = 00EBFEA0
&*p= 00EBFEAC
cout << &*p gives &*p = 00EBFEAC, which is p itself.
& and * have the same operator precedence. I thought &*p would translate to &(*p) -> &(8) and expected a compiler error.
How does compiler deduce this result?
You are stumbling over something interesting: variables, strictly speaking, are not values, but refer to values. 8 is an integer value. After int i=8, i refers to an integer value. The difference is that it could refer to a different value.
In order to obtain the value, i must be dereferenced, i.e. the value stored in the memory location which i stands for must be obtained. This dereferencing is performed implicitly in C whenever a value of the type which the variable references is requested: i=8; printf("%d", i) results in the same output as printf("%d", 8). That is funny because variables are essentially aliases for addresses, while numeric literals are aliases for immediate values. In C these very different things are syntactically treated identically. A variable can stand in for a literal in an expression and will be automatically dereferenced. The resulting machine code makes that very clear. Consider the two functions below. Both have the same return type, int. But f has a variable in the return statement which must be dereferenced so that its value can be returned (in this case, it is returned in a register):
int i = 1;
int g(){ return 1; } // literal
int f(){ return i; } // variable
If we ignore the housekeeping code, the functions each translate into a single machine instruction. The corresponding assembler (from icc) for g is:
movl $1, %eax #5.17
That's pretty straightforward: put 1 in the register eax.
By contrast, f translates to
movl i(%rip), %eax #4.17
This puts the value at the address in register rip plus offset i in the register eax. It's refreshing to see how a variable name is just an address (offset) alias to the compiler.
The necessary dereferencing should now be obvious. It would be more logical to write return *i in order to return 1, and write return i only for functions which return references — or pointers.
In your example it is indeed somewhat illogical that
int j=8;
int* p = &j;
printf("%d\n", *p);
prints 8 (i.e, p is actually dereferenced twice); but that &(*p) yields the address of the object pointed to by p (which is the address value stored in p), and is not interpreted as &(8). The reason is that in the context of the address operator a variable (or, in this case, the L-value obtained by dereferencing p) is not implicitly dereferenced the way it is in other contexts.
When the attempt was made to create a logical, orthogonal language — Algol68 —, int i=8 indeed declared an alias for 8. In order to declare a variable the long form would have been ref int m = loc int := 3. Consequently what we call a pointer or reference would have had the type ref ref int because actually two dereferences are needed to obtain an integer value.
j is an int with value 8 and is stored in memory at address 00EBFEAC.
&j gives the memory address of variable j (00EBFEAC).
int* p = &j: here you define a variable p of type int *, i.e. a value holding an address in memory where an int can be found. You assign it &j, the address of an int, which makes sense.
*p gives you the value associated with the address stored in p.
The address stored in p points to an int, so *p gives you the value of that int, namely 8.
&p is the address where the variable p itself is stored.
&*p gives you the address of the object that the address stored in p points to, which is just the value of p again. &(*p) -> &(j) -> 00EBFEAC
Think about &j itself (or even &(j)). According to your logic, shouldn't j evaluate to 8 and result in &8, as well? Dereferencing a pointer or evaluating a variable results in an lvalue, which is a value that you can assign to or take the address of.
The L in "lvalue" refers to the left in "left hand side of the assignment", such as j = 10 or *p = 12. There are also rvalues, such as j + 10, or 8, which obviously cannot be assigned to.
That's just a basic explanation. In C++ there's a lot more to it, with various classes of values (but that topic might be too advanced for your current needs).
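To make the lvalue/rvalue point concrete, here is a small sketch (the variable names are just the ones from the question); the commented-out lines are the ones that fail to compile:
#include <iostream>

int main()
{
    int j = 8;
    int* p = &j;

    int* q = &*p;          // ok: *p is an lvalue (the object j), so its address can be taken
    // int* r = &8;        // error: 8 is an rvalue and has no address
    // int* s = &(j + 1);  // error: j + 1 is also an rvalue

    std::cout << (q == p) << '\n';   // prints 1: &*p is simply p again
    return 0;
}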

How to match and delete an element from a queue?

According to the IEEE 1800-2012 spec,
Queue::delete( [input int index] )
deletes an element of a queue in SystemVerilog. Furthermore, a queue can perform the same operations as an unpacked array, giving it access to:
Array::find_first_index( )
which returns the index of the first element matching a certain criterion, e.g.
find_first_index( x ) with ( x == 3)
Now I'd like to delete a unique item, guaranteed to exist, from the queue. Combining the two gives me:
queue.delete(queue.find_first_index( x ) with ( x == obj_to_del ));
The compiler does not appreciate that, though, saying that the argument passed must be either an integer or integer-compatible. I could probably pull the two apart:
int index = queue.find_first_index( x ) with ( x == obj_to_del );
queue.delete( index );
or force an integer by typecasting find_first_index:
queue.delete(int'(queue.find_first_index( x ) with ( x == obj_to_del ))) //Just finished compiling, does not work.
The former does not look very elegant to me, and the latter seems somewhat forced, which made me curious whether there is a more proper way to accomplish this. Is find_first_index possibly returning an array of size one with the index at location 0?
EDIT: I foolishly did not provide a self-contained example. A stripped-down version of what I'm doing looks like:
class parent_task;
endclass

class child_taskA extends parent_task;
endclass

class child_taskB extends parent_task;
endclass

class task_collector;
  child_taskA A_queue[$];
  child_taskB B_queue[$];

  function void delete_from_queue(parent_task task_to_del);
    case (task_to_del.type)
      A: A_queue.delete(A_queue.find_first_index( x ) with ( x == task_to_del));
      B: B_queue.delete(B_queue.find_first_index( x ) with ( x == task_to_del));
      default: $display("This shouldn't happen.");
    endcase
  endfunction
endclass
The error message, word for word is:
Error-[SV-IQDA] Invalid Queue delete argument
"this.A_queue.find_first_index( iterator ) with ((iterator == task))"
Queue method delete can take optional integer argument. So, argument passed
to it must be either integer or integer assignment compatible.
There are checks in place to make sure that the task in question exists before the call to delete_from_queue.
The int cast didn't work for me either, but the following worked:
int index_to_del[$];
index_to_del = que.find_first_index(x) with ( x == task_to_del );
que.delete(index_to_del[0]);
queue.delete(int'(queue.find_first_index( x ) with ( x == obj_to_del )));
works for me. It would really help if you could provide complete self-contained examples like the one below:
module top;
  int queue[$] = {1,2,3,4,5};
  let object_to_del = 3;

  initial begin
    queue.delete(int'(queue.find_first_index( x ) with ( x == object_to_del )));
    $display("%p", queue);
  end
endmodule
But what if there was no match? Would you not need to test the result from find_first_index() anyway before deleting?
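If a missing match is possible, one way to handle it (a sketch along the lines of the accepted workaround, reusing the same que and task_to_del names) is to check the size of the returned index queue before deleting:
int index_to_del[$];

index_to_del = que.find_first_index(x) with (x == task_to_del);

// find_first_index returns a queue of matching indices (empty if nothing matched),
// so only delete when a match was actually found
if (index_to_del.size() > 0)
    que.delete(index_to_del[0]);
else
    $display("no matching element found, nothing deleted");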

Using Swift to reference an external C function call that uses pointers

Being new to Xcode, I am trying to make use of an external C call that uses pointers, and I'm having difficulty finding a way to reference them in Swift. The original call in C is defined as:
int32 FAR PASCAL swe_calc(double tjd, int ipl, int32 iflag, double *xx, char *serr)
where xx is a pointer to an array of 6 Doubles and serr is a pointer to any error message(s)
Swift sees it as:
Int32 swe_calc(tjd: Double, ipl: Int32, iflag: Int32, xx: UnsafeMutablePointer<Double>, serr: UnsafeMutablePointer<Int8>)
(from: https://github.com/dwlnetnl/SwissEphemeris/tree/master/SwissEphemeris)
The closest thing I’ve tried that even comes close is:
var serr : UnsafeMutablePointer<Int8> = nil; // returns any error messages from the call
var x = [Double](count: 6, repeatedValue: 0); // array of 6 doubles returned from the call
var xx : UnsafeMutablePointer<Double> ;
xx = &x
swe_calc(2436647.0003794227, 4, 258, xx, serr)
println("xx = \(x[0]), \(x[1]), \(x[2]), \(x[3]), \(x[4]), \(x[5]), errors (if any)=\(serr)")
The line xx=&x gives the error
Cannot assign a value of type input[(Double)] to a value of type ‘UnsafeMutablePointer’
I need a way to get/reference/use the values returned from xx into an array of 6 doubles, and serr should definitely not be an Int8, but a string instead. (I can get the other Java and C# versions to work, but Swift has me stumped.)
How can I make the swe_calc call to give it what it needs, and get out what I need?
You were close. Both UnsafeMutablePointer parameters need an
array of the appropriate type passed as "inout argument" with &:
var x = [Double](count: 6, repeatedValue: 0);
var serr = [Int8](count: 1024, repeatedValue: 0)
let result = swe_calc(2436647.0003794227, 4, 258, &x, &serr)
Of course the arrays must be allocated large enough as expected by
the C function.
If that function puts a NUL-terminated error string into the buffer
pointed to by serr then you can convert it to a Swift string with:
let errorString = String.fromCString(&serr)
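Putting it together (a sketch in the same Swift 1.x-era syntax as above; the numeric arguments are taken from the question, and treating a negative return value as an error is an assumption about swe_calc):
var x = [Double](count: 6, repeatedValue: 0)
var serr = [Int8](count: 1024, repeatedValue: 0)

let result = swe_calc(2436647.0003794227, 4, 258, &x, &serr)

if result < 0 {
    // assumption: on failure the buffer holds a NUL-terminated C error string
    let errorString = String.fromCString(&serr) ?? "unknown error"
    println("swe_calc failed: \(errorString)")
} else {
    println("xx = \(x)")
}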

Mutable reference to immutable data

I often hear the term "mutable reference to immutable data". In my case this was for Scala.
If you have a mutable reference, wouldn't this imply that the immutable data is mutable? I am having a hard time understanding the theory and practical aspects of it. An example would be great.
It means you can mutate the reference (change what it refers to) but not mutate the data (change what's behind the reference). The difference matters as soon as there are multiple references to the data, which happens all the time in a language like Scala (assignment, parameter passing, adding to collections, etc.). For example:
var x = List(1);
var y = x;
x = List(2);
// y.head == 1
// x.head == 2
Note that this distinction applies even to Java:
String x = "foo";
String y = x;
x = "bar";
// y.equals("foo")
// x.equals("bar")
Note that in both examples, we mutated the references x and y, but we didn't (and in fact couldn't) mutate the objects they refer to.
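For contrast, here is a small sketch of the other combinations (using scala.collection.mutable.ListBuffer as an example of mutable data):
import scala.collection.mutable.ListBuffer

// immutable reference (val) to mutable data: the data can change, the binding cannot
val buf = ListBuffer(1)
buf += 2                 // fine: mutates the buffer in place
// buf = ListBuffer(3)   // error: reassignment to val

// mutable reference (var) to immutable data: the binding can change, the data cannot
var xs = List(1)
xs = 2 :: xs             // fine: xs now refers to a new list; List(1) itself is untouched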

scala assignment of value vs. reference types

I thought I had a firm grasp of Scala's treatment of reference types (i.e., those derived from AnyRef), but now I am not so sure.
If I create a simple class like this
class C(var x: Int = 0) {}
and define a few instances
var a = new C
var b = new C(1)
var c = new C(2)
and then I assign
a = b
I do not get a (shallow) copy; rather, the original reference held by a is lost forever, and a and b are essentially "aliases" for the same object. (This can be seen by looking at the addresses of these items.) This is fine and sensible. It is also clear that these are references (as opposed to values), since I can do
c = null
and this does not generate an error.
Now, suppose I do this
import scala.math.BigInt
var x = BigInt("12345678987654321")
var y = BigInt("98765432123456789")
var z = x + y
This creates three BigInts, with x, y and z as, I suppose, references to them. In fact, I can do
z = null
and again get no error. However,
y = x
x += 1
does not cause y to change, i.e., it appears that in this case assignment did not simply create another "name" for the object referred to by x, but made a copy of it.
Why does this happen? I cannot find any mechanism (e.g., akin to the "copy constructor" of C++) that would be silently invoked by (what appears to be) straightforward reference assignment.
Any explanation would be greatly appreciated, as two days of web search has proved fruitless.
x += 1 is expanded into x = x + 1, so it's not just an assignment.
If you look at the source of BigInt you'll see that + creates a new instance:
def + (that: BigInt): BigInt = new BigInt(this.bigInteger.add(that.bigInteger))
In fact it uses Java's BigInteger underneath, whose add operation leaves both arguments untouched.
So what basically happens at the end of the day is that the reference x is reassigned to the new object produced by the immutable addition.
y = x
x += 1
BigInt is immutable, so + 1 creates a new BigInt; that's why y does not change. y still points to the previous object, while x points to the new BigInt object.
I suppose it's related to the immutability of BigInt and similar classes: you always get a new immutable object.
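You can check this directly with eq, which compares references rather than values, e.g. in the REPL:
import scala.math.BigInt

var x = BigInt("12345678987654321")
val before = x        // keep a second reference to the original object

x += 1                // desugars to x = x + 1: x is rebound to a brand-new BigInt

println(x eq before)  // false: x now refers to a different object
println(before)       // 12345678987654321, the original value is untouched
println(x)            // 12345678987654322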