What does error conflicting types for '' mean? - iphone

I got an error that said "error: conflicting types for '____'". What does that mean?

Quickfix:
Make sure that your functions are declared once and only once before they are called. For example, change:
main(){ myfun(3.4); }
double myfun(double x){ return x; }
To:
double myfun(double x){ return x; }
main(){ myfun(3.4); }
Or add a separate function declaration:
double myfun(double x);
main(){ myfun(3.4); }
double myfun(double x){ return x; }
Possible causes for the error
Function was called before being declared
Function defined overrides a function declared in an included header
Function was defined twice in the same file
Declaration and definition don't match
Declaration conflict in the included headers
What's really going on
error: conflicting types for ‘foo’ means that a function was defined more than once with different type signatures.
A file that includes two functions with the same name but different return types would throw this error, for example:
int foo(){return 1;}
double foo(){return 1.0;}
Indeed, when compiled with GCC we get the following errors:
foo.c:5:8: error: conflicting types for ‘foo’
double foo(){return 1.0;}
^
foo.c:4:5: note: previous definition of ‘foo’ was here
int foo(){return 1;}
^
Now, suppose instead we had a file with two function definitions sharing the same name and the same return type:
double foo(){return 1;}
double foo(){return 1.0;}
we would get a 'redefinition' error instead:
foo.c:5:8: error: redefinition of ‘foo’
double foo(){return 1.0;}
^
foo.c:4:8: note: previous definition of ‘foo’ was here
double foo(){return 1;}
^
Implicit function declaration
So why does the following code throw error: conflicting types for ‘foo’?
main(){ foo(); }
double foo(){ return 1.0; }
The reason is implicit function declaration.
When the compiler first encounters foo() in the main function, it will assume a type signature for the function foo of int foo(). By default, implicit functions are assumed to return integers, and the input argument types are derived from what you're passing into the function (in this case, nothing).
Obviously, the compiler is wrong to make this assumption, but the specs for the C (and thus Objective-C) language are old, cranky, and not very clever. Maybe implicitly declaring functions saved some development time by reducing compiler complexity back in the day, but now we're stuck with a terrible feature that should have never made it into the language. In fact, implicit declarations were made illegal in C99.
That said, once you know what's going on, it should be easy to dig out the root cause of your problem.

It's probably because your function "_" already exists in your library. It happened to me with this function while I was using stdio.h:
int getline(char s[], int lim)
{
    int c, i;
    for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n') {
        s[i] = c;
        ++i;
    }
    s[i] = '\0';
    return i;
}
When I changed "getline" to "getlinexxx", gcc compiled it:
int getlinexxx(char s[], int lim)
{
    int c, i;
    for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n') {
        s[i] = c;
        ++i;
    }
    s[i] = '\0';
    return i;
}
And the problem was gone.

What datatype is '___'?
My guess is that you're trying to initialize a variable of a type that can't accept the initial value. Like saying int i = "hello";

If you're trying to assign it from a call that returns an NSMutableDictionary, that's probably your trouble. Posting the line of code would definitely help diagnose warnings and errors in it.

Related

Flutter : PieChart Error: The argument type 'RxInt' can't be assigned to the parameter type 'double?' [duplicate]

Very simple issue. I have the useless class:
class Useless {
  double field;
  Useless(this.field);
}
I then commit the mortal sin and call new Useless(0);
In checked mode (which is how I run my tests) that blows up, because 'int' is not a subtype of type 'double'.
Now, it works if I use new Useless(0.0) , but honestly I spend a lot of time correcting my tests putting .0s everywhere and I feel pretty dumb doing that.
As a temporary measure I rewrote the constructor as:
class Useless {
  double field;
  Useless(num input) {
    field = input.toDouble();
  }
}
But that's ugly and I am afraid slow if called often. Is there a better way to do this?
Simply toDouble()
Example:
int intVar = 5;
double doubleVar = intVar.toDouble();
Thanks to @jamesdlin, who actually gave this answer in a comment to my previous answer...
In Dart 2.1, integer literals may be directly used where double is expected. (See https://github.com/dart-lang/sdk/issues/34355.)
Note that this is syntactic sugar and applies only to literals. int variables still won't be automatically promoted to double, so code like:
double reciprocal(double d) => 1 / d;
int x = 42;
reciprocal(x);
would fail, and you'd need to do:
reciprocal(x.toDouble());
You can also use:
int x = 15;
double y = x + .0;
Use the toDouble() method.
For example:
int a = 10;
print(a.toDouble());
// or store the value in a variable and then use it
double convertedValue = a.toDouble();
From this attempt:
class Useless {
  double field;
  Useless(num input) {
    field = input.toDouble();
  }
}
You can use the parse method of the double class, which takes a string:
class Useless {
  double field;
  Useless(num input) {
    field = double.parse(input.toString()); // modified line
  }
}
A more compact way of writing the above class using constructor's initialisers is:
class Useless {
  double _field;
  Useless(double field) : _field = double.parse(field.toString());
}
Since all divisions in Dart produce a double, the easiest way I found to achieve this was simply to divide the integer value by 1:
i.e.
int x = 15;
double y = x / 1;
There's no better way to do this than the options you included :(
I get bitten by this lots too, for some reason I don't get any warnings in the editor and it just fails at runtime; mighty annoying :(
I'm using a combination:
static double checkDouble(dynamic value) {
  if (value is String) {
    return double.parse(value);
  } else if (value is int) {
    return 0.0 + value;
  } else {
    return value;
  }
}
This is how you can convert an int to a double:
int a = 2;
double b = a * 1.0;

Freeglut doesn't initialize when using it from Swift

I've tried to use the Freeglut library in a Swift 4 Project. When the
void glutInit(int *argcp, char **argv);
function is imported into Swift, its declaration is
func glutInit(_ pargc: UnsafeMutablePointer<Int32>!, _ argv: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>!)
Since I don't need the real arguments from the command line I want to make up the two arguments. I tried to define **argv in the Bridging-Header.h file
#include <OpenGL/gl.h>
#include <GL/glut.h>
char ** argv[1] = {"t"};
and use them in main.swift
func main() {
    var argcp: Int32 = 1
    glutInit(&argcp, argv!) // EXC_BAD_ACCESS
    glutInitDisplayMode(UInt32(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH))
    glutCreateWindow("my project")
    glutDisplayFunc(display)
    initOpenGL()
    glutMainLoop()
}
but with that I get Thread 1: EXC_BAD_ACCESS (code=1, address=0x74) at the line with glutInit().
How can I initialize glut properly? How can I get an UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>! so that it works?
The reason the correct C code char * argv[1] = {"t"}; does not work is that Swift imports a fixed-size C array as a tuple, not as a pointer to its first element.
But your char ** argv[1] = {"t"}; is completely wrong: each element of argv needs to be a char **, yet you are assigning a char * ("t"). Xcode should have shown you a warning on the first build:
warning: incompatible pointer types initializing 'char **' with an expression of type 'char [2]'
You should treat incompatible-pointer-types warnings as errors unless you completely understand what you are doing.
In general, you should not put definitions that create actual code or data, like char * argv[1] = {"t"};, in a header file.
Instead, you can do this in Swift code.
As you know, when you want to pass a pointer to a single element of type T, you declare a var of type T and pass &varName to the function you call, as with argcp in your code.
Likewise, when you want to pass a pointer to multiple elements of type T, you declare a var of type [T] (Array<T>) and pass &arrName to the function you call
(ignoring the immutable case for simplicity).
The parameter argv matches this case, where T == UnsafeMutablePointer<Int8>?.
So declare a var of type [UnsafeMutablePointer<Int8>?]:
func main() {
    var argc: Int32 = 1
    var argv: [UnsafeMutablePointer<Int8>?] = [
        strdup("t")
    ]
    defer { argv.forEach { free($0) } }
    glutInit(&argc, &argv)
    //...
}
But I wonder if you really want to pass something to glutInit().
You can try something like this:
func main() {
    var argc: Int32 = 0 //<- 0
    glutInit(&argc, nil)
    //...
}
I'm not sure whether freeglut accepts this, but you can find articles on the web saying that it works in some implementations of GLUT.

In systemverilog is there a way to condition on a type?

So I am using a parameterized type in a common module.
Is there a way to say:
if( type == TYPE1 ) assign the struct one way
else if( type == TYPE2 ) assign another way
I was picturing this in a generate block.
Yes, you can use the type operator to do a generate-if/case, or a procedural if/case, like:
real r;
if (type(r) == type(real)) ...
But unfortunately, the code in all branches must still compile successfully, regardless of the condition. You will not be able to reference a struct member that does not exist:
typedef struct {int a;} s1_t;
typedef struct {int a; int b;} s2_t;
s1_t s;
initial
  #1 // procedural-if
  if (type(s) == type(s1_t))
    $display("%m s.a = %0d", s.a);
  else if (type(s) == type(s2_t))
    $display("%m s.b = %0d", s.b); // this will not compile
The type() operator is described in IEEE 1800-2012 § 6.23. Example usage from the LRM:
bit [12:0] A_bus, B_bus;
parameter type bus_t = type(A_bus);
generate
  case (type(bus_t))
    type(bit[12:0]): addfixed_int #(bus_t) (A_bus, B_bus);
    type(real): add_float #(type(A_bus)) (A_bus, B_bus);
  endcase
endgenerate
There is also $typename(), described in IEEE 1800-2012 § 20.6.1. $typename() returns a string representation of the type. Example usage from the LRM:
// source code                 // $typename would return
typedef bit node;              // "bit"
node [2:0] X;                  // "bit [2:0]"
int signed Y;                  // "int"
package A;
  enum {A,B,C=99} X;           // "enum{A=32'sd0,B=32'sd1,C=32'sd99}A::e$1"
  typedef bit [9:1'b1] word;   // "A::bit[9:1]"
endpackage : A
import A::*;
module top;
  typedef struct {node A, B;} AB_t;
  AB_t AB[10];                 // "struct{bit A;bit B;}top.AB_t$[0:9]"
  ...
endmodule

rust calling failure::fail_bounds_check with no-landing-pads flag enabled

I have been trying to write a basic kernel in Rust, and linking fails with the following error:
roost.rs:(.text.kmain+0x12a): undefined reference to 'failure::fail_bounds_check::hee3207bbe41f708990v::v0.11.0'
I compile the rust source files with the following flags:
-O --target i686-unknown-linux-gnu -Z no-landing-pads --crate-type lib --emit=obj
If I understand the Rust compiler correctly, the -Z no-landing-pads option should stop the compiler from generating the failure functions. From testing, I can tell that the failure function is only generated when the kmain function calls my function io::write_char(c: char).
This is the definition of io::write_char(c: char):
pub fn write_char(c: char) {
    unsafe {
        vga::place_char_at(c, vga::cursor_x, vga::cursor_y);
        vga::cursor_y =
            if vga::cursor_x >= vga::VGA_WIDTH {
                vga::cursor_y + 1
            } else {
                vga::cursor_y
            };
        vga::cursor_x =
            if vga::cursor_x >= vga::VGA_WIDTH {
                0
            } else {
                vga::cursor_x + 1
            };
        vga::set_cursor_location(vga::cursor_x, vga::cursor_y);
    }
}
How can I stop rust from trying to call the nonexistant function failure::fail_bounds_check?
Edit: further testing indicates that the vga::place_char_at function is the cause. Here is the code:
pub fn place_char_at(c: char, x: u8, y: u8) {
    let tmpx =
        if x >= VGA_WIDTH {
            VGA_WIDTH - 1
        } else {
            x
        };
    let tmpy =
        if y >= VGA_HEIGHT {
            VGA_HEIGHT - 1
        } else {
            y
        };
    unsafe {
        (*SCREEN)[(tmpy as uint) * 80 + (tmpx as uint)].char = c as u8;
    }
}
From what I can tell, the issue is that Rust wants to bounds-check the array access I'm doing. Is there a way to assure the compiler that the checks have already been done, or to turn off the feature for that function?
Edit 2: I solved it after some work. After digging around in the docs, I found that Rust has a function for vector access that bypasses bounds checking. To use it, I changed the place_char_at function to this:
pub fn place_char_at(c: char, x: u8, y: u8) {
    let tmpx =
        if x >= VGA_WIDTH {
            VGA_WIDTH - 1
        } else {
            x
        };
    let tmpy =
        if y >= VGA_HEIGHT {
            VGA_HEIGHT - 1
        } else {
            y
        };
    unsafe {
        (*SCREEN).unsafe_mut_ref((tmpy as uint) * 80 + (tmpx as uint)).char = c as u8;
    }
}
Make sure you're linking to libcore. Also, libcore has one dependency: a definition of failure. Make sure you mark a function #[lang="begin_unwind"] somewhere in your exception code. The requirement is that begin_unwind must not return. See here for my example.
is there a way to ... turn off the feature for that function?
Nope. In the words of bstrie, if there were a compiler flag to eliminate array bounds checks, then bstrie would fork the language and make the flag delete your hard drive. In other words, safety is paramount.
You haven't described the type of SCREEN, but if it implements the MutableVector trait, what you probably want is unsafe_set (http://doc.rust-lang.org/core/slice/trait.MutableVector.html#tymethod.unsafe_set):
unsafe fn unsafe_set(self, index: uint, val: T)
This performs no bounds checks, and it is undefined behaviour if index is larger than the length of self. However, it does run the destructor at index. It is equivalent to self[index] = val.
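(MutableVector and unsafe_set belong to pre-1.0 Rust; in modern Rust the equivalent unchecked access is slice::get_unchecked_mut. A minimal sketch in current Rust, shown only to illustrate the idea:)

```rust
fn main() {
    let mut buf = [0u8; 4];
    let idx = 2usize;
    // Caller must guarantee idx < buf.len(): no bounds check is emitted,
    // and an out-of-range index would be undefined behaviour.
    unsafe {
        *buf.get_unchecked_mut(idx) = 7;
    }
    println!("{:?}", buf); // prints "[0, 0, 7, 0]"
}
```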

Compilation error on blocks with return type

I have the following block code:
typedef BOOL (^FieldValidationBlock)(NSString *);

FieldValidationBlock aBlock = ^(NSString *input) {
    return ([input length] == 10);
};
which gives me a compilation error stating that the return type is int but should be BOOL.
When I add a cast it works just fine:
typedef BOOL (^FieldValidationBlock)(NSString *);

FieldValidationBlock aBlock = ^(NSString *input) {
    return (BOOL)([input length] == 10);
};
Why does this happen?
Because BOOL is an Objective-C type, and the logical comparison operators are standard C. In standard C, the result type of a comparison operator is int. This is important to know sometimes: when you negate a value that you assume to be boolean but that is in fact an int, the result is not necessarily what you expect.
In your example, casting to a BOOL is fine.