I'm new to iPhone development. I'm trying out some simple drawing routines and I'm having trouble using defined values in simple math.
I have a line like this:
int offset = (((myValue - min_value) * 6) - middle);
and this works fine - but I don't like using the hard-coded 6 in there (because I'll use it in lots of places).
So I thought I'd define a constant using #define:
#define WIDTH_OFFSET 6;
then I could use:
int offset = (((myValue - min_value) * WIDTH_OFFSET) - middle);
however - this gets a compiler error : "Expected Expression."
I can get round this by breaking up the calculation onto several lines:
int offset = myValue - min_value;
offset = offset * WIDTH_OFFSET;
offset = offset - middle;
The compiler thinks this is fine.
I'm guessing there's some implicit cast or some other language feature at work here - can anyone explain to me what is happening?
Remove the semicolon ; after #define:
#define WIDTH_OFFSET 6
#define substitutes its replacement text literally, so after preprocessing your expression becomes
(((myValue - min_value) * 6;) - middle);
As you can see, there is a semicolon in the middle of the expression, which is a syntax error.
On the other hand, your other expression
int offset = myValue - min_value;
offset = offset * WIDTH_OFFSET;
does not exhibit this problem, because having two semicolons in a row, as in
offset = offset * 6;;
is syntactically valid.
When you #define something, using it is exactly the same as if you had typed the replacement text yourself at that spot. So where you use WIDTH_OFFSET you get 6; in its place - which of course is not your intention. So just remove the semicolon.
As dasblinkenlight said, remove the semicolon. The explanation for this is that #defines are a literal substitution into your code. Thus, with the semicolon, your broken code read:
int offset = (((myValue - min_value) * 6;) - middle);
The working code read:
offset = offset * 6;;
which is syntactically fine, as the ;; is effectively an empty statement.
Basically, macros are convenient substitutions that are expanded inline by the preprocessor. You can think of them as doing copy/paste for matching entries; in your case every occurrence of WIDTH_OFFSET is replaced with 6;. So, just like the others said, remove the semicolon and you are all set.
Also, when defining macros for simple math expressions, remember to wrap them in parentheses ( and ); otherwise you can end up with operator-precedence bugs (like an unintended multiplication happening before an addition), as in the sketch below.
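Here is a minimal sketch of that pitfall; SCALE_BAD, SCALE_GOOD and the values are made-up names just for illustration:

#include <stdio.h>

#define SCALE_BAD(x)  x * 6        /* unparenthesized: precedence bug */
#define SCALE_GOOD(x) ((x) * 6)    /* parenthesized: expands safely   */

int main(void)
{
    int myValue = 10, min_value = 2;

    /* Expands to: myValue - min_value * 6  ->  10 - 12 = -2 */
    printf("%d\n", SCALE_BAD(myValue - min_value));

    /* Expands to: ((myValue - min_value) * 6)  ->  8 * 6 = 48 */
    printf("%d\n", SCALE_GOOD(myValue - min_value));

    return 0;
}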
Consider the following JavaScript function, which performs a computation over several lines to clearly indicate the programmer's intent:
function computation(first, second) {
  const a = first * first;
  const b = second - 4;
  const c = a + b;
  return c;
}
computation(12, 3)
//143
computation(-3, 2.6)
//7.6
I have tried using do notation to solve this with PureScript, but I seem to be just short of understanding some key concept. The do notation examples in the documentation only cover do notation when the value being bound is an array (https://book.purescript.org/chapter4.html#do-notation), but in my example I would like the values to be simple values of the Int or Number type.
While it is possible to perform this computation in one line, it makes the code harder to debug and does not scale to many operations.
How would the computation method be written correctly in PureScript so that...
If computation involved 1000 intermediate steps, instead of 3, the code would not suffer from excessive indenting but would be as readable as possible
Each step of the computation is on its own line, so that, for example, the code could be reviewed line by line by a supervisor, etc., for quality
You don't need the do notation. The do notation is intended for computations happening in a monad, whereas your computation is naked.
To define some intermediate values before returning result, use the let .. in construct:
computation first second =
  let a = first * first
      b = second - 4
      c = a + b
  in c
But if you really want to use do, you can do that as well: it also supports naked computations just to give you some choice. The difference is that within a do you can have multiple lets on the same level (and they work the same as one let with multiple definitions) and you don't need an in:
computation first second = do
  let a = first * first -- first let
      b = second - 4
  let c = a + b -- second let
  c -- no in
I have the following Cython function.
 01: 
+02: cdef int count_char_in_x(unicode x, Py_UCS4 c):
 03:     cdef:
+04:         int count = 0
 05:         Py_UCS4 x_k
 06: 
+07:     for x_k in x: ## Yellow
+08:         if x_k == c:
+09:             count += 1
 10: 
+11:     return count
Line 07 is not properly optimized.
The annotated HTML code is expanded as:
+07: for x_k in x: ## Yellow
if (unlikely(__pyx_v_x == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable");
__PYX_ERR(0, 8, __pyx_L1_error)
}
__Pyx_INCREF(__pyx_v_x);
__pyx_t_1 = __pyx_v_x;
__pyx_t_6 = __Pyx_init_unicode_iteration(__pyx_t_1, (&__pyx_t_3), (&__pyx_t_4), (&__pyx_t_5)); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(0, 8, __pyx_L1_error)
for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_3; __pyx_t_7++) {
__pyx_t_2 = __pyx_t_7;
__pyx_v_x_k = __Pyx_PyUnicode_READ(__pyx_t_5, __pyx_t_4, __pyx_t_2);
Any tips on how this could be improved?
I think it is possible to write a cdef/cpdef function that at runtime completely avoids Python None type checks. Any idea on how this could be done?
The generated C code looks pretty good to me. The loop overall is an int-iterated for loop (i.e. it's not relying on calling the Python methods __iter__ and __next__).
__Pyx_PyUnicode_READ is translated pretty directly to PyUnicode_READ (depending slightly on the Python version you're using). PyUnicode_READ is a C macro which is as close to a direct array access as you can get.
This is probably as good as it's getting. You might get a small improvement by using bytes rather than unicode (provided you're dealing with ASCII characters). You might just consider whether it's really worth reimplementing unicode.count.
If it were a regular def function you could declare x as unicode not None to remove the None check before the loop. That might make a small difference. However, as @ead points out, that isn't supported for cdef functions. It's likely the cost of a def function call will be slightly larger than the cost of a None check, but you should time it if you care.
I need to convert a long number used as a fixed-point value into a double representation.
The fixed-point math is used in the synthesis process and the real data type only for validation and simulation.
Chaining multiple conversions through multiple datatypes to adjust the format is either not enough or completely wrong.
In my case, with a fixed-point mantissa of 44 bits, I have a 3-bit integer part plus a sign bit; in Q notation, something like "sfix_44_48".
As an example, I am doing this to convert a fixed-point integer number into a real value (expecting the number 0.5):
logic signed [47:0] fp_number = 48'h0800_0000_0000; // it should be 0.5f
real r_val;
real rr_val;
real rrr_val;
real tmp;

initial
begin
    r_val = $itor(fp_number)/(2**44);   // doesn't solve the problem.
    rr_val = real'{fp_number}/(2**44);  // doesn't solve the problem.
    $cast(tmp, fp_number >>> 44);       // doesn't solve the problem
    rrr_val = tmp;
end
$itor(...) is limited to a 32-bit integer argument.
As a result of the above I get zero or NaN in the ModelSim simulation.
No luck with any of these conversions.
The SV LRM doesn't seem to describe a clear way to do this conversion.
What is the SV workaround that allows simulations to easily handle data wider than 32 bits, please?
C.
You want to use
rr_val = real'(fp_number)/(2.0**44);
Do not use any of the $TtoT functions from Verilog. They have fixed datatype inputs and outputs.
2**44 gets computed as a 32-bit 2's-complement value and overflows, giving you 0. You can use 2.0 or real'(2) instead.
Thanks to @dave_59. I post this piece of code, which shows the mess with the conversion.
logic signed [47:0] fp_number  = 48'h0800_0000_0000; // it should be 0.5f
logic signed [31:0] fp_number2 = 32'h0800_0000;      // it should be 0.5f
real r_val;
real rr_val;
real rrr_val;
real rrrr_val;

initial
begin
    $display("48bit fp convertion sfix_44_48");
    r_val    = real'{fp_number}/(2**44);    // doesn't solve the problem (curly braces are valid syntax but a wrong conversion + wrong conversion of the denominator).
    rr_val   = real'{fp_number}/(2.0**44);  // doesn't solve the problem (curly braces are valid syntax but a wrong conversion + denominator conversion OK).
    rrr_val  = real'(fp_number)/(2**44);    // doesn't solve the problem (numerator OK + the power operation is not properly converted to a real result).
    rrrr_val = real'(fp_number)/(2.0**44);  // solves the problem with long integer fixed-point conversion (the braces are not curly anymore).
    $display("r_val[%08f]", r_val, ", rr_val[%08f]", rr_val, ", rrr_val[%08f]", rrr_val, ", rrrr_val[%08f]", rrrr_val); // it should be 0.5 for the fourth value

    $display("32bit fp convertion sfix_28_32");
    r_val    = real'{fp_number2}/(2**28);   // result totally different from the previous 48-bit operation; doesn't solve the problem (curly braces are valid syntax but a wrong conversion + wrong conversion of the denominator).
    rr_val   = real'{fp_number2}/(2.0**28); // doesn't solve the problem (curly braces are valid syntax but a wrong conversion + denominator conversion OK).
    rrr_val  = real'(fp_number2)/(2**28);   // with a 32-bit range it apparently solves the problem (numerator OK + the power operation is OK in this range).
    rrrr_val = real'(fp_number2)/(2.0**28); // solves the problem with long integer fixed-point conversion (the braces are not curly anymore).
    $display("r_val[%08f]", r_val, ", rr_val[%08f]", rr_val, ", rrr_val[%08f]", rrr_val, ", rrrr_val[%08f]", rrrr_val); // it should be 0.5 for the fourth value
end
The only valid conversion at 48 bits is the fourth case.
For 32 bits, both the third and the fourth case are valid.
First: the classic pow(...) operation must be written as (2.0**BIT), which produces a real division when the fixed-point scaling is applied; (2**BIT) produces an integer division.
In this case the operation above is handled as float/double (C style), i.e. real (SystemVerilog).
Second: the real'() cast operation MUST be used with NO curly braces.
I didn't have a linting tool to properly check the syntax, and I would have expected a warning, since the operation with the curly braces is accepted as valid.
Third: the subtle point is that the results are only OK if the INTEGER denominator is limited to 32-bit operations.
The results are shown below:
SIM START.
# 48bit fp convertion sfix_44_48
# ** Error (suppressible): (vsim-8604) ./blocks/sim/test_tb.sv(141): NaN (not a number) resulted from a division operation.
# ** Error (suppressible): (vsim-8630) Infinity results from division operation.
# Time: 0 ps Iteration: 0 Process: /test_tb/#INITIAL#138 File: ./blocks/sim/test_tb.sv Line: 143
# r_val[-1.#IND00], rr_val[0.000000], rrr_val[1.#INF00], rrrr_val[0.500000]
# 32bit fp convertion sfix_28_32
SIM END.
# r_val[0.000000], rr_val[0.000000], rrr_val[0.500000], rrrr_val[0.500000]
The solution is to avoid any unprotected cast, such as a cast whose operand is wrapped only in curly braces:
r_val = real'{ byteU,byteH,byteL} / (2.0**44) ; // WRONG
rr_val = real'({byteU,byteH,byteL}) / (2.0**44) ; // CORRECT
When the scaling factor is applied, the operation (generally /) must be done with operands of the same type (real/real).
The unsafe way is (real/long), which leads to a nightmare.
I am beginning my journey of learning Rust. I came across this line in Rust by Example:
However, unlike macros in C and other languages, Rust macros are expanded into abstract syntax trees, rather than string preprocessing, so you don't get unexpected precedence bugs.
Why is an abstract syntax tree better than string preprocessing?
If you have this in C:
#define X(A,B) A+B
int r = X(1,2) * 3;
The value of r will be 7, because the preprocessor expands it to 1+2 * 3, which is 1+(2*3).
In Rust, you would have:
macro_rules! X { ($a:expr,$b:expr) => { $a+$b } }
let r = X!(1,2) * 3;
This will evaluate to 9, because the compiler will interpret the expansion as (1+2)*3. This is because the compiler knows that the result of the macro is supposed to be a complete, self-contained expression.
That said, the C macro could also be defined like so:
#define X(A,B) ((A)+(B))
This would avoid any non-obvious evaluation problems, including the arguments themselves being reinterpreted due to context. However, when you're using a macro, you can never be sure whether or not the macro has correctly accounted for every possible way it could be used, so it's hard to tell what any given macro expansion will do.
By using AST nodes instead of text, Rust ensures this ambiguity can't happen.
A classic example using the C preprocessor is
#define MUL(a, b) a * b
// ...
int res = MUL(x + y, 5);
The use of the macro will expand to
int res = x + y * 5;
which is very far from the expected
int res = (x + y) * 5;
This happens because the C preprocessor really just does simple text-based substitution; it's not an integral part of the language itself. Preprocessing and parsing are two separate steps.
If the preprocessor instead parsed the macro like the rest of the compiler, which happens for languages where macros are part of the actual language syntax, this is no longer a problem as things like precedence (as mentioned) and associativity are taken into account.
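For completeness, the usual C-side mitigation is to parenthesize the macro body and every argument. A small sketch (MUL_SAFE is just an illustrative name):

#include <stdio.h>

/* Fully parenthesized variant of the MUL macro above */
#define MUL_SAFE(a, b) ((a) * (b))

int main(void)
{
    int x = 1, y = 2;

    /* Expands to ((x + y) * (5))  ->  15, matching the intent */
    printf("%d\n", MUL_SAFE(x + y, 5));
    return 0;
}

Even then, arguments with side effects can still be evaluated twice, which is another class of surprise that AST-based macros avoid.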
I'm adding strings to a window with the waddwstr() function, one line after another, in consecutive rows. I don't want ncurses to automatically wrap lines for me – I'm overwriting them with consecutive calls to waddwstr(), and sometimes the tail of the previous line is left displayed. Can ncurses just stop when the right edge of the window is reached?
The non-wrapping functions have "ch" in their name, e.g., wadd_wchstr.
The same is true of the non-wide interfaces waddstr versus waddchstr.
However, the wrapping/non-wrapping functions differ by more than that. They use different parameter types. The wrapping functions rely upon the video attributes set via wattr_set, etc., while the non-wrapping functions combine the video-attributes with the character data:
waddstr and waddchstr use char* and chtype* parameters, respectively
waddwstr and wadd_wchstr use wchar_t* and cchar_t* parameters.
Converting between the two forms can be a nuisance, because X/Open, etc., did not define functions for doing the conversion.
The manual page for bkgd describes how these video attributes are combined with the background character to obtain the actual display.
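For illustration, here is a minimal, untested sketch of that conversion: a hypothetical helper that builds a cchar_t string with setcchar (ignoring combining characters) and writes it with the non-wrapping wadd_wchstr:

#include <stdlib.h>
#include <wchar.h>
#include <curses.h>   /* the wide-character (ncursesw) build */

/* Hypothetical helper: write a wide string without wrapping, converting
   each wchar_t into a cchar_t carrying the given attributes/color pair. */
static int wadd_wstr_nowrap(WINDOW *win, const wchar_t *wstr,
                            attr_t attrs, short pair)
{
    size_t len = wcslen(wstr);
    cchar_t *cells = calloc(len + 1, sizeof(cchar_t));  /* zeroed terminator */
    if (cells == NULL)
        return ERR;

    for (size_t i = 0; i < len; ++i) {
        wchar_t one[2] = { wstr[i], L'\0' };
        if (setcchar(&cells[i], one, attrs, pair, NULL) == ERR) {
            free(cells);
            return ERR;
        }
    }

    /* wadd_wchstr stops at the right margin instead of wrapping */
    int rv = wadd_wchstr(win, cells);
    free(cells);
    return rv;
}

Note that, unlike waddwstr, wadd_wchstr does not advance the cursor, so position each row with wmove before the call.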
The accepted answer (by Mr. Dickey) is correct. However, the "ch" functions do not work with ordinary C strings (array of bytes). Another solution is to create a wrapper for waddstr which checks the current cursor position and window size and prints only as much as would fit.
For example:
#include <curses.h>
#include <stdlib.h>   /* free */
#include <string.h>   /* strndup (POSIX) */

int waddstr_trunc(WINDOW *win, const char *str)
{
    int cur_x, max_x, dummy [[maybe_unused]];

    getyx(win, dummy, cur_x);    /* current cursor column */
    getmaxyx(win, dummy, max_x); /* window width */

    int w = max_x - cur_x;       /* columns left on this row */
    if (w <= 0) return 0;

    char *str2 = strndup(str, w);    /* keep only what fits */
    if (str2 == NULL) return 1;

    int rv = waddstr(win, str2);
    free(str2);
    return rv;
}