How to get an 18-bit code address from a symbol defined in the linker command file

In Code Composer, you can define new symbols in the linker command file simply:
_Addr_start = 0x5C00;
_AppLength = 0x4C000;
before the memory map and section assignment. This is done in the bootloader example from TI.
You can then refer to these addresses (as integers) in your C code like this:
extern uint32_t _Addr_start; // note that uint32_t is fake.
extern uint32_t _AppLength; // there is no uint32_t object allocated
printf("start = %X len= %X\r\n", (uint32_t)&_Addr_start, (uint32_t)&_AppLength);
The problem is that if you use the 'small' data memory model, the latter symbol (at 0x45C00) produces a linker warning, because the address gets relocated into a 16-bit field:
"C:/lakata/hardware-platform/CommonSW/otap.c", line 78: warning #17003-D:
relocation from function "OtapGetExternal_CRC_Calc" to symbol "_AppLength"
overflowed; the 18-bit relocated address 0x3f7fc is too large to encode in
the 16-bit field (type = 'R_MSP_REL16' (161), file = "./otap.obj", offset =
0x00000002, section = ".text:OtapGetExternal_CRC_Calc")
I tried using explicit far pointers, but Code Composer doesn't understand the far keyword. I also tried to make the dummy symbol a function pointer, to trick the compiler into thinking that dereferencing it would... The pointer points to code space, and the code-space model is "large" while the data-space model is "small".

I figured it out before I finished entering the question!
Instead of declaring the symbol as
extern uint32_t _AppLength; // pretend it is a dummy data
declare it as
void _AppLength(void); // pretend it is a dummy function
Then the pointer conversion works properly, because &_AppLength is now assumed to be far. (When it is declared as an integer, &_AppLength is assumed to be near and the linker complains.)
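Putting it together, a minimal sketch of the working version, assuming the same symbol names as above (the print_app_info wrapper is purely illustrative; unsigned long is used so the %lX format is correct on MSP430, where it is the same type as uint32_t):
#include <stdio.h>

/* Symbols defined in the linker command file, declared as functions so the
   compiler treats their addresses as far code-space addresses instead of
   16-bit data pointers. */
void _Addr_start(void);
void _AppLength(void);

void print_app_info(void)
{
    unsigned long start = (unsigned long)&_Addr_start;
    unsigned long len   = (unsigned long)&_AppLength;
    printf("start = %lX len = %lX\r\n", start, len);
}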

Related

Microchip dsPIC33 C30 function pointer size?

The C30 user manual states that near and far pointers are 16 bits wide.
How, then, can they address the full code memory space, which is 24 bits wide?
I am confused because I have an assembler function (called from C) that returns the program counter (from the stack) where a trap error occurred. I am pretty sure it sets w1 and w0 before returning.
In C, the return value is defined as a function pointer:
void (*errLoc)(void);
and the call is:
errLoc = getErrLoc();
When I now look at errLoc, it is a 16-bit value, and I just do not think that is right. Or is it? Can function pointers (or any pointers) not access the full code address space?
All this has to do with a trap address error I have been trying to figure out for the past 48 hours.
I see you are trying to use the dsPIC33Fxxxx/PIC24Hxxxx fault interrupt trap example code.
The problem is that the pointer size for the dsPIC33 (with the MPLAB X C30 compiler) is 16 bits, but the program counter is 24 bits wide. Fortunately, the getErrLoc() assembly function does return the full-width value.
However, the function signature provided in the example C source, void (*getErrLoc(void))(void), is incorrect, because it treats the return value as a 16-bit function pointer. Change the return type to something large enough to hold the 24-bit program counter: if you declare getErrLoc() as returning unsigned long, the 24-bit program counter fits comfortably in the 32-bit value it returns.
unsigned long getErrLoc(void); // Get Address Error Loc
unsigned long errLoc __attribute__((persistent));
(FYI: __attribute__((persistent)) is used so that the trap location survives a reset and can be read out after the next reboot.)
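As a hedged sketch of how this might be wired into the address error trap (the header, handler name and register bit follow the usual Microchip trap-example conventions; adjust them to your device and project):
#include <p33Fxxxx.h>                  /* device header; <xc.h> under XC16 */

extern unsigned long getErrLoc(void);  /* assembly routine from the example */
unsigned long errLoc __attribute__((persistent));

void __attribute__((interrupt, no_auto_psv)) _AddressError(void)
{
    errLoc = getErrLoc();              /* 24-bit PC stored in a 32-bit variable */
    INTCON1bits.ADDRERR = 0;           /* clear the address error trap flag */
    while (1)                          /* halt here, or reset, per your recovery strategy */
        ;
}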

Passing a struct as a parameter in System Verilog

The following code works fine under Modelsim when the unused localparam is removed. It produces the error below if it is left in. If it is possible to use a struct to pass parameters to a module, what am I doing wrong? Many thanks.
typedef bit [7:0] myarr[2];
typedef struct { int a; myarr bytes; } mystruct;
module printer #(mystruct ms)();
  // works fine if this is removed
  localparam myarr extracted = ms.bytes;
  initial
    $display("Got %d and %p", ms.a, ms.bytes);
endmodule
parameter mystruct ms = '{ a:123, bytes:'{5, 6}};
module top;
  printer #(.ms(ms)) DUT ();
endmodule
Here is the error. Compilation using vlog -sv -sv12compat produces no errors or warnings.
$ vsim -c -do "run -all; quit" top
Model Technology ModelSim - Intel FPGA Edition vlog 10.5c Compiler 2017.01 Jan 23 2017
(.......)
# ** Error: (vsim-8348) An override for an untyped parameter ('#dummyparam#0') must be integral or real.
I think the problem here is that you are assigning a whole unpacked array in one statement, which is not allowed. Try changing the myarr typedef to a packed array instead.
My workaround was to use a packed array. I didn't need to pack the whole struct.
I would happily upvote/accept someone else's answer if one appears. In particular, it would be helpful to confirm whether this is really a bug in Modelsim, or just an instance of a correct compilation error that could be made more helpful by including the location and parameter name.

C: casting and dereferencing a pointer under strict aliasing

In http://blog.regehr.org/archives/1307, the author claims that the following snippet has undefined behavior:
unsigned long bogus_conversion(double d) {
    unsigned long *lp = (unsigned long *)&d;
    return *lp;
}
The argument is based on http://port70.net/~nsz/c/c11/n1570.html#6.5p7, which specifies the circumstances under which an object may be accessed. However, footnote 88 for that paragraph says the list is only for the purpose of checking aliasing, so I think this snippet is fine, assuming sizeof(long) == sizeof(double).
My question is whether the above snippet is allowed.
The snippet is erroneous, but not because of aliasing. First, there is a simple rule that says dereferencing a pointer whose type differs from the effective type of the object it points to is wrong. Here the effective type is double, so there is an error.
This safeguard is in the standard because the bit representation of a double need not be a valid representation for an unsigned long, although such a mismatch would be quite exotic nowadays.
Second, from a more practical point of view, double and unsigned long may have different alignment requirements, and accessing the object that way may produce a bus error or simply incur a run-time penalty.
Generally, casting pointers like that is almost always wrong, has no defined behavior, is bad style, and on top of that is mostly useless anyhow. Focusing on aliasing when arguing about these problems is a bad habit that probably originates in incomprehensible and scary gcc warnings.
If you really want to know the bit representation of some type, there are exceptions to the "effective type" rule. Two portable solutions are well defined by the C standard:
Use unsigned char* and inspect the bytes.
Use a union that comprises both types, store the value through one member and read it through the other. By doing that you tell the compiler that you want an object that can be seen as either of the two types. But here you should not use unsigned long as the target type but uint64_t, since you have to be sure that the size is exactly what you think it is and that there are no trap representations.
To illustrate that, here is the same function as in the question but with defined behavior.
unsigned long valid_conversion(double d) {
    union {
        unsigned long ul;
        double d;
    } ub = { .d = d, };
    return ub.ul;
}
My compiler (gcc on Debian, nothing fancy) compiles this to exactly the same assembler as the code in the question; the difference is that you now know the code is portable.
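For the first option, the usual way to inspect the bytes without writing a loop by hand is memcpy, which the standard defines in terms of copying characters; here is a sketch, using uint64_t as suggested above and assuming double is 64 bits wide:
#include <stdint.h>   /* uint64_t */
#include <string.h>   /* memcpy */

uint64_t bytewise_conversion(double d) {
    uint64_t u;
    _Static_assert(sizeof u == sizeof(double), "double must be 64 bits wide");
    /* The copy goes through the object representation (characters), so the
       effective type rule is not violated. */
    memcpy(&u, &d, sizeof u);
    return u;
}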

Problems with using Delphi data types with a C DLL

I am trying to use a .dll which has been written in C (although it wraps a MATLAB .dll).
The function I am trying to use is defined in C as:
__declspec(dllexport) int ss_scaling_subtraction(double* time, double** signals, double* amplitudes, int nSamples, int nChannels, double* intensities);
The .dll requires, amongst others, a two-dimensional array. When I tried to use:
array of array of double
in the declaration, the compiler gave an error, so I defined my own data type:
T2DArray = array of array of double;
I import the .dll function in a unit like so:
function ss_scaling_subtraction(const time: array of double; const signals: T2DArray; const amplitudes: array of double; const nSamples: Integer; const nChannels: Integer; var intensities: array of double): Integer; cdecl; external 'StirScanDLL.dll';
However, when I call this function, I get an access violation from the .dll.
Creating a new data type
T1DArray = array of double;
and changing
array of double
to
T1DArray
in the declaration seems to make things run, but the result is still not correct.
I have read on here that it can be dangerous to pass Delphi data types to DLLs coded in a different language, so I thought this might be causing the issue.
But how do I NOT use a Delphi data type when I HAVE to use one to properly declare the function in the first place?!
Extra info: I have already loaded the MATLAB runtime compiler libraries and opened the entry point to StirScanDLL.dll.
The basic problem here is one of binary interop mismatch. Simply put, a pointer to an array is not the same thing at the binary level as a Delphi open array parameter. Whilst both semantically represent an array, the binary representations differ.
The C function is declared as follows:
__declspec(dllexport) int ss_scaling_subtraction(
    double* time,
    double** signals,
    double* amplitudes,
    int nSamples,
    int nChannels,
    double* intensities
);
Declare your function like so in Delphi:
function ss_scaling_subtraction(
  time: PDouble;
  signals: PPDouble;
  amplitudes: PDouble;
  nSamples: Integer;
  nChannels: Integer;
  intensities: PDouble
): Integer; cdecl; external 'StirScanDLL.dll';
If you find that PPDouble is not declared, define it thus:
type
  PPDouble = ^PDouble;
That is, pointer to pointer to double.
Now what remains is to call the functions. Declare your arrays in Delphi as dynamic arrays. Like this:
var
  time, amplitudes, intensities: TArray<Double>;
  signals: TArray<TArray<Double>>;
If you have an older pre-generics Delphi then declare some types:
type
  TDoubleArray = array of Double;
  T2DDoubleArray = array of TDoubleArray;
Then declare the variables with the appropriate types.
Next you need to allocate the arrays, and populate any that have data passing from caller to callee.
SetLength(time, nSamples); // I'm guessing here as to the length
SetLength(signals, nSamples, nChannels); // again, guessing
Finally it is time to call the function. Now it turns out that the good designers of Delphi arranged for dynamic arrays to be stored as pointers to the first element. That means that they are a simple cast away from being used as parameters.
retval := ss_scaling_subtraction(
  PDouble(time),
  PPDouble(signals),
  PDouble(amplitudes),
  nSamples,
  nChannels,
  PDouble(intensities)
);
Note that the casting of the dynamic arrays seen here relies on an implementation detail. So, some people might argue that it would be better to use, for instance, @time[0] and so on for the one-dimensional arrays, and to create an array of PDouble for signals by copying over the addresses of the first elements of the inner arrays. Personally, I am comfortable relying on this implementation detail. It certainly makes the coding a lot simpler.
One final piece of advice. Interop can be tricky. It's easy to get wrong. When you get it wrong, the code compiles, but then dies horribly at runtime. With cryptic error messages. Leading to much head scratching.
So, start with the simplest possible interface. A function that receives scalar parameters. Say, receives an integer, and returns an integer. Prove that you can do that. Then move on to floating point scalars. Then one dimensional arrays. Finally two dimensional arrays. Each step along the way, build up the complexity. When you hit a problem you'll know that it is down to the most recently added parameter.
You've not taken that approach. You've gone straight for the kill and implemented everything in your first attempt. And when it fails, you've no idea where to look. Break a problem into small pieces, and build the more complex problem out of those smaller pieces.
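For example, the very first step on the C side could be nothing more than a trivial exported function taking and returning scalars (test_add is illustrative and not part of StirScanDLL.dll):
/* Build this into a small test DLL and call it from Delphi with cdecl
   before moving on to pointers and arrays. */
__declspec(dllexport) int test_add(int a, int b)
{
    return a + b;
}
Once that round-trips correctly, you know the calling convention and linkage are right, and any later failure is down to how the array parameters are marshalled.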

How does the auto-free()ing work when I use functions like mktemp()?

Greetings,
I'm using mktemp() (iPhone SDK), and this function returns a char * to the new file name, in which all the "X"s are replaced by random letters.
What confuses me is the fact that the returned string is automatically free()d. How (and when) does that happen? I doubt it has something to do with the Cocoa event loop. Is it automatically freed by the kernel?
Thanks in advance!
mktemp() just modifies the buffer you pass in and returns that same pointer, so there is no extra buffer to be freed.
That is at least how the OS X man page describes it (I couldn't find documentation for the iPhone), and the POSIX man page agrees (although the example in the POSIX man page looks wrong, as it passes in a pointer to a string literal, possibly an old remnant; the function is also marked as legacy, so use mkstemp() instead. The OS X man page specifically mentions passing a string literal as an error).
So, this is what will happen:
#include <assert.h>
#include <unistd.h>   /* mktemp on OS X; declared in <stdlib.h> on some systems */

char template[] = "/tmp/fooXXXXXX";
char *ptr;
if ((ptr = mktemp(template)) != NULL) {
    assert(ptr == template); // will be true:
                             // mktemp returns the same pointer you passed in
}
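And since the man pages steer you towards mkstemp() anyway, here is a minimal sketch of that variant (make_scratch_file is just an illustrative wrapper); it fills in the same kind of buffer and additionally returns an open file descriptor:
#include <stdlib.h>   /* mkstemp */
#include <unistd.h>   /* close, unlink */

int make_scratch_file(void)
{
    char template[] = "/tmp/fooXXXXXX";
    int fd = mkstemp(template);   /* template now holds the generated name */
    if (fd != -1) {
        /* ... use the file ... */
        close(fd);
        unlink(template);         /* remove it when finished */
    }
    return fd;
}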
If it's like the cygwin function of the same name, then it's returning a pointer to an internal static character buffer that will be overwritten by the next call to mktemp(). On cygwin, the mktemp man page specifically mentions _mktemp_r() and similar functions that are guaranteed reentrant and use a caller-provided buffer.