Translating C to Objective-C - iPhone

I found a C function which I would like to use in my app. Unfortunately, my C knowledge is not great. The first section of code shows the original C code and the second my "translation" to Objective-C. I have two questions I would appreciate help with, please:
Is my translation of the variables from their C counterparts to their Objective-C counterparts valid? (I have had no compiler warnings.)
Is it acceptable to use free() at the end, or should this be done in another way in Objective-C?
C code:
unsigned int i, j, diagonal, cost, s1len, s2len;
unsigned int *arr;
char *str1, *str2;
general code...
s1len = strlen(str1);
s2len = strlen(str2);
arr = (unsigned int *) malloc(sizeof(unsigned int) * j);
general code...
free(arr);
Objective-C code:
NSUInteger i, j, diagonal, cost, s1len, s2len;
NSUInteger *arr;
const char *str1 = [source cStringUsingEncoding:NSISOLatin1StringEncoding];
const char *str2 = [target cStringUsingEncoding:NSISOLatin1StringEncoding];
general code...
s1len = strlen(str1);
s2len = strlen(str2);
arr = (NSUInteger *) malloc(sizeof(NSUInteger) * j);
general code...
free(arr);

Objective-C is a strict superset of C, so you can use any C code without modification. I suggest either using the C code as is (as far as possible) or re-implementing the algorithm with objects in Objective-C.
NSString to char*
What you need to provide is a way to convert Objective-C objects into C types, like NSString* to char*.
The conversion is correct, but you might want to use -UTF8String to keep all characters intact; Latin-1 might lose some information. The disadvantage of UTF-8 is that your C code might not be able to work with it correctly.
You'd better get the lengths using one of NSString's methods instead of strlen, because strlen has linear running time while NSString's methods can be constant time.
// UTF-8 (the byte count can differ from the character count)
NSUInteger len = [source lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
// Latin-1 (one byte per character, so the character count equals the byte count)
NSUInteger len = [source length];
int to NSInteger
There's no reason to convert ints to NSIntegers. Apple added this type for 64-bit compatibility: NSInteger is typedef'd so that it is 32 bits wide on 32-bit platforms and 64 bits wide on 64-bit platforms.
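Roughly, the typedef looks like the following (a simplified sketch of what Foundation's NSObjCRuntime.h declares, not a verbatim copy):
// NSInteger/NSUInteger simply follow the pointer width of the platform.
#if __LP64__
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif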
I'd try to change as little of the C code as possible. (That makes it easier to update it when the original gets updated.) So just leave the ints as they are.
Memory management
C's memory management is more basic than Objective-C's, but since you already use malloc and free, that just stays the same. Retain/release and the garbage collector are only useful for objects anyway.
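Putting the pieces together, a minimal sketch of the boundary conversion might look like this (the function name levenshtein_distance and the helper distanceBetween are illustrative assumptions, not taken from the question):
#import <Foundation/Foundation.h>

// The C function is kept exactly as found; only the call site is Objective-C.
// levenshtein_distance is a hypothetical name standing in for the original C function.
unsigned int levenshtein_distance(const char *str1, const char *str2);

static unsigned int distanceBetween(NSString *source, NSString *target) {
    // Convert at the boundary only; the C code keeps working with plain C types.
    const char *str1 = [source UTF8String];
    const char *str2 = [target UTF8String];
    return levenshtein_distance(str1, str2);
}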

You can mix Objective-C with pure C when developing for the iPhone. In general, with Objective-C you want to be working with higher-level objects, and as such you shouldn't need to invoke malloc and the like (although of course you can).
I would suggest that you either re-implement the functionality the C code provides from scratch in Objective-C (that is, think about what you require the code to do and then just write the Objective-C code, rather than trying to change the C code line by line), or simply include the pure C code in your project and call the functions you need from Objective-C.

What's stopping you from just using the C function in your code as is? You can use any C function in Objective-C and it won't cause a problem. Many of the functions in Cocoa are C functions (for example, NSSearchPathForDirectoriesInDomains()).

Your C version is perfectly valid Objective-C code. You don't need to translate it.

Related

C cast and dereference a pointer: strict aliasing

In http://blog.regehr.org/archives/1307, the author claims that the following snippet has undefined behavior:
unsigned long bogus_conversion(double d) {
    unsigned long *lp = (unsigned long *)&d;
    return *lp;
}
The argument is based on http://port70.net/~nsz/c/c11/n1570.html#6.5p7, which specifies the allowed access circumstances. However, footnote 88 for this bullet point says the list is only for checking aliasing purposes, so I think this snippet is fine, assuming sizeof(long) == sizeof(double).
My question is whether the above snippet is allowed.
The snippet is erroneous, but not because of aliasing. First, there is a simple rule that says dereferencing a pointer to an object with a type different from its effective type is wrong. Here the effective type is double, so there is an error.
This safeguard is there in the standard because the bit representation of a double need not be a valid representation for unsigned long, although that would be quite exotic nowadays.
Second, from a more practical point of view, double and unsigned long may have different alignment properties, and accessing one as the other may produce a bus error or simply incur a run-time penalty.
Generally, casting pointers like that is almost always wrong, has no defined behavior, is bad style, and is mostly useless anyhow. Focusing on aliasing in the argumentation about these problems is a bad habit that probably originates in incomprehensible and scary gcc warnings.
If you really want to know the bit representation of some type, there are some exceptions to the "effective type" rule. There are two portable solutions that are well defined by the C standard:
Use unsigned char* and inspect the bytes.
Use a union that comprises both types, store the value in there and read it with the other type. By that you are telling the compiler that you want an object that can be seen as both types. But here you should not use unsigned long as a target type but uint64_t, since you have to be sure that the size is exactly what you think it is, and that there are no trap representations.
To illustrate that, here is the same function as in the question but with defined behavior.
unsigned long valid_conversion(double d) {
    union {
        unsigned long ul;
        double d;
    } ub = { .d = d, };
    return ub.ul;
}
My compiler (gcc on a Debian, nothing fancy) compiles this to exactly the same assembler as the code in the question. The difference is that you know this code is portable.
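For completeness, here is a sketch of the first, byte-wise alternative; memcpy is specified as copying unsigned char, so it falls under the same exception (the assumption that double is 64 bits wide is checked by the static assertion, and the function name is illustrative):
#include <stdint.h>
#include <string.h>

uint64_t bytewise_conversion(double d) {
    uint64_t u;
    // Assumes a 64-bit double; the build fails otherwise.
    _Static_assert(sizeof(uint64_t) == sizeof(double), "double is assumed to be 64 bits");
    // memcpy copies the object representation byte by byte (as unsigned char),
    // so no effective-type rule is violated.
    memcpy(&u, &d, sizeof u);
    return u;
}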

Getting the imageDenoising CUDA example to work using a MATLAB CUDAKernel

TL;DR
I'm looking for a way to extract a part of an existing CUDA Toolkit example and turn it into a CUDAKernel executable in MATLAB.
The Story
In an attempt to obtain a short-runtime implementation of the non-local means (NLM) 2D filter, I stumbled upon the imageDenoising example provided with the CUDA Toolkit which implements two variants of this filter, called NLM & NLM2 (or "quick NLM").
Having no previous experience with CUDA coding, I initially attempted to follow MATLAB's documentation on the subject, which resulted in several strange errors, including ptx compilation failures, multiple entry points, and a wrong number of inputs in the C prototype. At this point I realized that this wasn't going to be a "just works" case and that some tinkering was required.
So I decided to eliminate the multiple entry point issue by simply deleting parts of the imageDenoising.cu file and consolidating the relevant .cuh (either ..._nlm_kernel.cuh or ..._nlm2_kernel.cuh) into the .cu, so as to obtain a single entry point at any given time.
To my surprise this actually managed to compile and I was finally able to create a CUDAKernel without an error (using the command k = parallel.gpu.CUDAKernel('imageDenoising.ptx', 'uint8_T *, int, int, float, float');).
This, however, was not enough, because I had mistakenly concluded that the 1st argument is the unprocessed image in the form of an RGB matrix (i.e. X*Y*3 uint8), and so the result I was getting back was exactly the input but with 0 in the first 4 elements.
After searching a bit more I realized that there are additional, and critical, aspects I'm entirely unaware of (like the need to initialize __device__ variables) to such a conversion process, at which stage I decided to ask for help.
The Problem
I'm currently wondering how to efficiently continue from here. I'd love to hear whether this kind of approach can generally bear fruit (and whether a complete example of the process is available somewhere), which other pitfalls I should look out for, and what alternative courses of action I can take (considering my very limited knowledge of CUDA and the fact that I won't hire anybody else to do this for me). Keeping in mind that this is SO and I must have a specific programming problem, here goes:
How do I modify imageDenoising.cu such that the MATLAB CUDAKernel constructed from it will also accept the unprocessed image as an input?
Note: in my application, the input matrix is a 2d, grayscale, double matrix.
Related: How CudaMalloc work?
P.S.
A working piece of code would obviously be welcomed, but I'd really rather "learn to fish".
I ended up taking an alternative approach to CUDAKernel, using .MEX, by doing the following:
Setting up the external libraries OpenCV v2.4.10 (not v3!) and mexopencv.
Writing a small wrapper function for OpenCV's fastNlMeansDenoising using the guidelines of mexopencv for unimplemented functions, as seen below (excluding the documentation):
#include "mexopencv.hpp"
using namespace cv;
void mexFunction(int nlhs, mxArray *plhs[],
int nrhs, const mxArray *prhs[])
{
// Check arguments
if (nlhs != 1 || nrhs<1 || ((nrhs % 2) != 1) )
mexErrMsgIdAndTxt("fastNLM:invalidArgs", "Wrong number of arguments");
// Argument vector
vector<MxArray> rhs(prhs, prhs + nrhs);
// Option processing
// Defaults:
double h = 3;
int templateWindowSize = 7;
int searchWindowSize = 21;
// Parsing input name-value pairs:
for (int i = 1; i<nrhs; i += 2) {
string key = rhs[i].toString();
if (key == "h")
h = rhs[i + 1].toDouble();
else if (key == "templateWindowSize")
templateWindowSize = rhs[i + 1].toInt();
else if (key == "searchWindowSize")
searchWindowSize = rhs[i + 1].toInt();
else
mexErrMsgIdAndTxt("mexopencv:error", "Unrecognized option");
}
// Process
Mat src(rhs[0].toMat()), dst;
fastNlMeansDenoising(src, dst, h, templateWindowSize, searchWindowSize);
// Convert cv::Mat back to mxArray*
plhs[0] = MxArray(dst);
}
Compiling it... and voilà: a working CUDA-accelerated NLM filter.
The answer to my question itself can be found by comparing opencv\sources\modules\photo\src\cuda\nlm.cu (this is the opencv2 path) with imageDenoising_nlm2_kernel.cuh.
This solution worked well for me because it was more important for me to get an NLM filter running, rather than using CUDAKernel.
The main lesson I learned from this (and I'd like to pass on to others) is:
Running CUDA code in MATLAB can also be done in ways other than CUDAKernel, such as using .mex wrappers as shown above.

Char string encoding differences between native C++ and C++/CLI?

I have a strange problem for which I believe there is a solution but I cannot find it. Your help would be appreciated.
On the one hand, I have a native C++ class named Native which has a static wchar_t array containing accented characters. This array is const and defined at build time.
/// Header file
class Native
{
public:
    static const wchar_t* Array() { return mArray; }
private:
    static const wchar_t *mArray;
};
//--------------------------------------------------------------
/// .cpp file
const wchar_t* Native::mArray = {L"This is a description éàçï"};
On the other hand, I have a C++/CLI class that uses the array like this:
/// C++/CLI use
System::String^ S1 = gcnew System::String( Native::Array() );
System::String^ S2 = gcnew System::String( L"This is a description éàçï" );
The problem is that while S2 gives "This is a description éàçï" as expected, S1 gives "This is a description éà çï". I do not understand why passing a pointer to the static array does not give the same result as passing the same literal directly.
I guess this is an encoding problem but I would have expected the same results for both S1 and S2. Do you know how to solve the problem? The way I must use it in my program is like S1 i.e. by accessing the build time static array with a static method that returns a const wchar_t*.
Thanks for your help!
EDIT 1
What is the best way to define literals at build time in C++ using Intel C++ 13.0 to make them directly usable in C++/CLI System::String constructor? This could be the ultimate question for my problem.
I don't have enough reputation to add a comment to ask this question, so I apologize for posting this as an answer if that seems inappropriate.
Could the problem be that your compiler defines wchar_t to be 8 bits? I'm basing that possibility on this answer:
Should I use wchar_t when using UTF-8?
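A quick way to check what a given compiler actually uses is to print the width directly (a trivial sketch, valid as C or C++):
#include <stdio.h>
#include <wchar.h>

int main(void) {
    // On Windows compilers wchar_t is normally 2 bytes (UTF-16 code units),
    // on most Unix-like systems 4 bytes (UTF-32).
    printf("sizeof(wchar_t) = %zu\n", sizeof(wchar_t));
    return 0;
}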
To answer your question (in the comments) about building a UTF-16 array at build time, I believe you can force it to be UTF-16 by using u"..." for your literal instead of L"..." (see http://en.cppreference.com/w/cpp/language/string_literal)
Edit 1:
For what it's worth, I tried your code (after fixing a couple compile errors) using Microsoft Visual Studio 10 and didn't have the same problem (both strings printed as expected).
I don't know if it will help you, but another possible way to statically initialize this wchar_t array is to use std::wstring to wrap your literal and then set your array to the c-string pointer returned by wstring::c_str(), shown as follows:
std::wstring ws(L"This is a description éàçï");
const wchar_t* Native::mArray = ws.c_str();
This edit was inspired by Dynamic wchar_t array (C++ beginner)

Code formatting when using pointers

Is there any reason why the asterisk is next to the object type in this code? I'm a little confused by the way I see this used. Sometimes it looks like this:
NSString* stringBefore;
and sometimes like this:
NSString *stringBefore;
Is there a difference? Or a right or wrong way to do this?
Thanks
I use the * near the variable name and not the type, since if you declare something like:
int *i, j;
i will be a pointer to int, and j will be an int.
If you used the other syntax:
int* i, j;
you may think that both i and j are pointers when they are not.
That said, I neither use nor recommend declaring a pointer and a non-pointer variable on the same line, as in this sample.
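For example, a small sketch of the unambiguous alternative:
// Declaring each variable on its own line makes the pointer-ness obvious
// regardless of where the asterisk is written.
int *i;   // pointer to int
int j;    // plain int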
It makes no difference.
It is just an indicator of how well versed the author is in writing and reading Objective-C. The traditional style is to write it as:
NSString *stringBefore;
There is no difference.

Reading byte stream returned from JavaEE server

We have a JavaEE server and servlets providing data to mobile clients (first JavaME, soon also iPhone). The servlet writes out data using the following code:
DataOutputStream dos = new DataOutputStream(out);
dos.writeInt(someInt);
dos.writeUTF(someString);
... and so on
This data is returned to the client as bytes in the HTTP response body, to reduce the number of bytes transferred.
In the iPhone app, the response payload is loaded into an NSData object. Now, after spending hours and hours trying to figure out how to read the data out in the Objective-C application, I'm almost ready to give up, as I haven't found any good way to read the data into NSInteger and NSString (corresponding to the protocol above).
Would anyone have any pointers on how to read data from a binary protocol written by a Java app? Any help is greatly appreciated!
Thanks!
You'll have to do the demarshalling yourself; fortunately, it's fairly straightforward. Java's DataOutputStream class writes integers in big-endian (network) format. So, to demarshall the integer, we grab 4 bytes and unpack them into a 4-byte integer.
For UTF-8 strings, DataOutputStream first writes a 2-byte value indicating the number of bytes that follow. We read that in, and then read the subsequent bytes. Then, to decode the string, we can use the NSString method initWithBytes:length:encoding: as so:
NSData *data = ...; // this comes from the HTTP request
NSUInteger length = [data length];
const uint8_t *bytes = (const uint8_t *)[data bytes];
if (length < 4)
    ; // oops, handle error
// demarshall the big-endian integer from its 4 bytes; assembling the value
// arithmetically like this already yields host byte order, so no ntohl() is needed
uint32_t myInt = ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
                 ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];
// advance to next datum
bytes += 4;
length -= 4;
// demarshall the string length (a big-endian 2-byte value)
if (length < 2)
    ; // oops, handle error
uint16_t myStringLen = (uint16_t)((bytes[0] << 8) | bytes[1]);
bytes += 2;
length -= 2;
// make sure we actually have as much data as we say we have
if (myStringLen > length)
    myStringLen = (uint16_t)length;
// demarshall the string
NSString *myString = [[NSString alloc] initWithBytes:bytes
                                              length:myStringLen
                                            encoding:NSUTF8StringEncoding];
bytes += myStringLen;
length -= myStringLen;
You can (and probably should) write functions to demarshall, so that you don't have to repeat this code for every field you want to demarshall. Also, be extra careful about buffer overflows. You're handling data sent over the network, which you should always distrust. Always verify your data, and always check your buffer lengths.
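A sketch of what such helpers might look like (the names and error handling are illustrative, not part of the original answer):
#import <Foundation/Foundation.h>

// Reads a big-endian 4-byte integer and advances the cursor.
// Returns NO if there are not enough bytes left.
static BOOL readUInt32(const uint8_t **bytes, NSUInteger *length, uint32_t *out) {
    if (*length < 4) return NO;
    const uint8_t *p = *bytes;
    *out = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    *bytes += 4;
    *length -= 4;
    return YES;
}

// Reads a 2-byte big-endian length prefix followed by that many UTF-8 bytes.
// Returns nil if the buffer is too short.
static NSString *readUTFString(const uint8_t **bytes, NSUInteger *length) {
    if (*length < 2) return nil;
    const uint8_t *p = *bytes;
    uint16_t strLen = (uint16_t)((p[0] << 8) | p[1]);
    if (*length < (NSUInteger)(2 + strLen)) return nil;
    NSString *s = [[NSString alloc] initWithBytes:p + 2
                                           length:strLen
                                         encoding:NSUTF8StringEncoding];
    *bytes += 2u + strLen;
    *length -= 2u + strLen;
    return s;
}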
The main thing is to understand the binary data format itself. It doesn't matter what wrote it, so long as you know what the bytes mean.
As such, the docs for DataOutputStream are your best bet. They specify everything (hopefully) about what the binary data will look like.
Next, I would try to come up with a class on the iPhone which reads the same format into an appropriate data structure. I don't know Objective-C at all, but I'm sure it can't be too hard to read 4 bytes, know that the first byte is the most significant, and do the appropriate bit-twiddling to get the right kind of integer. (Basically read a byte, shift it left 8 bits, read the next byte and add it in, shift the whole lot left 8 bits, and so on.) There may well be more efficient ways of doing it, but get something that works first. When you've got unit tests around it all, you can move on to optimising it.
Don't forget that Objective-C is just C in a pretty dress, and C excels at this kind of bit-grovelling. To a large extent, you should be able to define a C struct that matches your data and cast the pointer to your data into a pointer to that struct. Exactly which types to use, and whether you need to byte-swap anything, will depend on how Java constructs this stream; that's what you'll need to spend time with Java's documentation for.
Fundamentally, though, this is a design smell. You're having this problem because you made assumptions about your client platform that are no longer valid. If it's an option, I'd recommend you offer a second, more portable interface to the same functions (just adding "WithXML" wrappers or something should suffice). This will save you time if you ever end up porting to another platform that doesn't use Java.
Carl, if you really can't change what the server provides, have a look at this class. It should be the pointer you are looking for. That said, the idea of using native Java serialization as a transport format does not sound like a good one. My first choice would have been JSON. If that's still too big, I would probably use something like Thrift or Protocol Buffers; they also provide binary serialization, but in a cross-language manner. (There is also the oldie ASN.1, but that's painful.)