How to convert CString to const char *

I'm having problems converting a CString to a const char *. I tried the methods from other forums and the MSDN way, and it doesn't work:
CString value1("text1");
const char * value2 = LPCTSTR(value1);
Any idea ?

Get a pointer to the internal character buffer of the CString:
const char * value2 = (const char *) value1.GetBuffer( value1.GetLength() );
Release the buffer when done:
value1.ReleaseBuffer();
ReleaseBuffer() gives ownership of the buffer back to the CString.
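Note that both the LPCTSTR cast above and the GetBuffer() cast only work in a multi-byte (non-Unicode) build; in a Unicode build CString stores wchar_t, so a real conversion is needed. A minimal sketch using the ATL conversion helper CT2A (my addition, not from the original answers):

#include <atlconv.h>

CString value1(_T("text1"));
CT2A ansi(value1);         // converts the TCHAR buffer to char
const char *value2 = ansi; // valid only while 'ansi' is in scope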

Check if an app is pre-installed or protected on OS X

Checking whether the app's author is 'Apple' isn't a good approach, because some other apps from Apple, like Xcode or Numbers, are not system apps.
I also thought about 'Date Added', but it doesn't seem like a good choice either.
isDeletableFile fails as well.
This information seems to be encoded in a folder's extended file-system attributes.
Running xattr on Mail, Maps, Stocks, and Messages shows they are marked with com.apple.rootless:
xattr /Applications/Mail.app/
com.apple.rootless
xattr /Applications/Maps.app/
com.apple.rootless
xattr /Applications/Stocks.app/
com.apple.rootless
xattr /Applications/Messages.app/
com.apple.rootless
while other Apple apps don't have this attribute:
xattr /Applications/Xcode.app/
(no output)
xattr /Applications/Numbers.app/
(no output)
The extended attributes API, declared in <sys/xattr.h>, has functions for getting, setting, listing, and removing attributes:
ssize_t getxattr(const char *path, const char *name, void *value, size_t size, u_int32_t position, int options);
int setxattr(const char *path, const char *name, void *value, size_t size, u_int32_t position, int options);
ssize_t listxattr(const char *path, char *namebuf, size_t size, int options);
int removexattr(const char *path, const char *name, int options);
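For example, a minimal sketch (my addition, not from the original answer) that tests for the com.apple.rootless attribute by asking only for its size:

#include <stdio.h>
#include <sys/xattr.h>

int is_rootless_protected(const char *path) {
    // Passing NULL/0 queries the attribute's size without copying it;
    // a non-negative result means the attribute exists on this path.
    ssize_t size = getxattr(path, "com.apple.rootless", NULL, 0, 0, XATTR_NOFOLLOW);
    return size >= 0;
}

int main(void) {
    const char *app = "/Applications/Mail.app";
    printf("%s: %s\n", app, is_rootless_protected(app) ? "protected" : "not protected");
    return 0;
}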

Inaccessible memory when parsing two bytes objects from a tuple

I created a small C wrapper function which takes two bytes objects:
static PyObject * wrapperfunction(PyObject * self, PyObject * args) {
    const unsigned char * data;
    Py_ssize_t datalen;
    const unsigned char * otherdata;
    Py_ssize_t otherlen;
    if (!PyArg_ParseTuple(args, "y#y#", &data, &datalen, &otherdata, &otherlen))
        return NULL;
    some_function(data, datalen, otherdata, otherlen);
    Py_RETURN_NONE;
}
But I noticed that on 64-bit Linux the function would fail in certain cases (I could not really narrow them down to a specific case) and segfault inside some_function because data was an unreadable address.
Usually this address was 0x7fff00000001.
I could not see why this was happening, but I changed the code to use Py_buffer instead, which works perfectly:
static PyObject * wrapperfunction(PyObject * self, PyObject * args) {
    Py_buffer data;
    Py_buffer otherdata;
    if (!PyArg_ParseTuple(args, "y*y*", &data, &otherdata))
        return NULL;
    some_function((unsigned char *)data.buf, data.len, (unsigned char *)otherdata.buf, otherdata.len);
    PyBuffer_Release(&data);
    PyBuffer_Release(&otherdata);
    Py_RETURN_NONE;
}
As far as I can tell, the Python documentation only says that y* is the preferred format, not that y# would fail this way.
Is there any reason why the method using y# fails?
I'm using Python 3.5.3 on Debian stretch amd64.
On a Windows machine (python 3.6.4 / x64), the same code never produced a segfault.
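No answer is recorded here, but one plausible culprit (my assumption, not from the thread) is the PY_SSIZE_T_CLEAN macro: before Python 3.10, the length arguments of # formats were parsed as int unless this macro was defined before including Python.h, so writing through a Py_ssize_t * leaves its upper four bytes uninitialized on LP64 platforms, which would fit garbage values like 0x7fff00000001. A minimal sketch:

/* Assumption: without this macro, "y#" stores an int-sized length,
 * corrupting Py_ssize_t out-arguments on 64-bit Linux. */
#define PY_SSIZE_T_CLEAN  /* must come before the include */
#include <Python.h>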

Getting crash on memset after updating to Lion and Xcode

I have just updated to Lion, and now my app, which was working fine in the older version, is crashing. It crashes in the memset function with no logs.
unsigned char *theValue;
add(theValue, someotherValues);
I pass theValue to the function:
add(unsigned char *inValue, some other parameters) {
    memset(inValue, 0, sizeof(inValue)); // here it is crashing
}
Is there really no code between the declaration of theValue and the call to add()? If so, then that's your problem. You are passing a random value as the first parameter to memset().
For this code to make sense, you have to allocate a block of memory for theValue and pass its size to add(), like so:
unsigned char *theValue = new unsigned char[BUFSIZE]; // Or malloc
add(theValue, BUFSIZE, ...);

void add(unsigned char *inValue, size_t bufsize, ...) {
    memset(inValue, 0, bufsize);
    ...
}
Do you allocate memory for inValue?
1)
add(unsigned char *inValue, some other parameters) {
    inValue = (unsigned char*)malloc(BUFSIZE); // note: sizeof(inValue) would only give the size of a pointer
    memset(inValue, 0, BUFSIZE);
}
2)
theValue = (unsigned char*)malloc(BUFSIZE);
add(theValue, ...)
unsigned char *theValue;
This points to a random bit of memory (or 0). Until you call malloc, you don't own what it points at, so you can't really memset it.

iPhone app crash when converting formatted NSString to UTF8 char *

I want to convert an NSString into a const char * in order to access a SQLite DB.
This works:
NSString *queryStatementNS = @"select title from article limit 10";
const char *queryStatement = [queryStatementNS UTF8String];
This causes a crash in the simulator (without any stacktrace):
NSString *queryStatementNS = [NSString stringWithFormat:@"select title from article limit %d", 10];
const char *queryStatement = [queryStatementNS UTF8String];
Can anybody tell me what the stringWithFormat method changes in the string to make the conversion to UTF8 (or to ASCII using cStringUsingEncoding:NSASCIIStringEncoding) crash? The same crash also happens when passing no args at all to stringWithFormat. Could it be related to memory management somehow?
From the documentation:
The returned C string is automatically freed just as a returned object would be released; you should copy the C string if it needs to store it outside of the autorelease context in which the C string is created.
Your problem is that queryStatement is being freed when queryStatementNS gets deallocated, and as queryStatementNS is autoreleased you don't know exactly when this is going to occur. You can either retain queryStatementNS by calling
[queryStatementNS retain]
at some point in that function (remember to release it when you want to relinquish ownership), you can explicitly create a non-autoreleased string to deal with yourself by saying
NSString* query = [[NSString alloc] initWithFormat:@"a string! %d", 10, nil]
(as an aside, note the nil; if you don't have it there, Xcode will give you a missing sentinel warning)
or you can copy the output of [queryStatementNS UTF8String] to your const char *queryStatement as you would in plain C, with strcpy or the like.
The reason the first example you give continues to work is that you're setting a pointer to a string literal, @"select title from article limit 10". The Objective-C compiler ensures that there's only ever one instance of this string in memory, no matter how many times you reference it in your code. Thus, it doesn't obey the standard memory management conventions of Objective-C, and your pointer remains valid outside of the autoreleased context.
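For completeness, here is a small sketch of the copying route (my addition; strdup is the usual shortcut for the strcpy approach mentioned above):

#include <string.h>

NSString *queryStatementNS = [NSString stringWithFormat:@"select title from article limit %d", 10];
char *queryStatement = strdup([queryStatementNS UTF8String]); // private copy, survives the autorelease pool
// ... run the sqlite query ...
free(queryStatement);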

What's the CFString Equiv of NSString's UTF8String?

I'm stuck on stoopid today as I can't convert a simple piece of ObjC code to its C++ equivalent. I have this:
const UInt8 *myBuffer = [(NSString*)aRequest UTF8String];
And I'm trying to replace it with this:
const UInt8 *myBuffer = (const UInt8 *)CFStringGetCStringPtr(aRequest, kCFStringEncodingUTF8);
This is all in a tight unit test that writes an example HTTP request over a socket with CFNetwork APIs. I have working ObjC code that I'm trying to port to C++. I'm gradually replacing NS API calls with their toll-free bridged equivalents. Everything has been one-for-one so far until this last line, which is the last piece that needs to be completed.
This is one of those things where Cocoa does all the messy stuff behind the scenes, and you never really appreciate just how complicated things can be until you have to roll up your sleeves and do it yourself.
The simple answer for why it's not 'simple' is because NSString (and CFString) deal with all the complicated details of dealing with multiple character sets, Unicode, etc, etc, while presenting a simple, uniform API for manipulating strings. It's object oriented at its best- the details of 'how' (NS|CF)String deals with strings that have different string encodings (UTF8, MacRoman, UTF16, ISO 2022 Japanese, etc) is a private implementation detail. It all 'just works'.
It helps to understand how [@"..." UTF8String] works. This is a private implementation detail, so this isn't gospel, but it's based on observed behavior. When you send a string a UTF8String message, the string does something approximating the following (not actually tested, so consider it pseudo-code; there are actually simpler ways to do the exact same thing, so this is overly verbose):
- (const char *)UTF8String
{
    NSUInteger utf8Length = [self lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
    NSMutableData *utf8Data = [NSMutableData dataWithLength:utf8Length + 1UL];
    char *utf8Bytes = [utf8Data mutableBytes];
    [self getBytes:utf8Bytes
         maxLength:utf8Length
        usedLength:NULL
          encoding:NSUTF8StringEncoding
           options:0UL
             range:NSMakeRange(0UL, [self length])
    remainingRange:NULL];
    return(utf8Bytes);
}
You don't have to worry about the memory management issues of dealing with the buffer that -UTF8String returns because the NSMutableData is autoreleased.
A string object is free to keep the contents of the string in whatever form it wants, so there's no guarantee that its internal representation is the one that would be most convenient for your needs (in this case, UTF8). If you're using just plain C, you're going to have to deal with managing some memory to hold any string conversions that might be required. What was once a simple -UTF8String method call is now much, much more complicated.
Most of NSString is actually implemented in/with CoreFoundation / CFString, so there's obviously a path from a CFStringRef -> -UTF8String. It's just not as neat and simple as NSString's -UTF8String. Most of the complication is with memory management. Here's how I've tackled it in the past:
void someFunction(void) {
    CFStringRef cfString; // Assumes 'cfString' points to a (NS|CF)String.
    const char *useUTF8StringPtr = NULL;
    UInt8 *freeUTF8StringPtr = NULL;
    CFIndex stringLength = CFStringGetLength(cfString), usedBytes = 0L;

    if((useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingUTF8)) == NULL) {
        if((freeUTF8StringPtr = malloc(stringLength + 1L)) != NULL) {
            CFStringGetBytes(cfString, CFRangeMake(0L, stringLength), kCFStringEncodingUTF8, '?', false, freeUTF8StringPtr, stringLength, &usedBytes);
            freeUTF8StringPtr[usedBytes] = 0;
            useUTF8StringPtr = (const char *)freeUTF8StringPtr;
        }
    }

    long utf8Length = (long)((freeUTF8StringPtr != NULL) ? usedBytes : stringLength);

    if(useUTF8StringPtr != NULL) {
        // useUTF8StringPtr points to a NULL terminated UTF8 encoded string.
        // utf8Length contains the length of the UTF8 string.
        // ... do something with useUTF8StringPtr ...
    }

    if(freeUTF8StringPtr != NULL) { free(freeUTF8StringPtr); freeUTF8StringPtr = NULL; }
}
NOTE: I haven't tested this code, but it is modified from working code. So, aside from obvious errors, I believe it should work.
The above tries to get the pointer to the buffer that CFString uses to store the contents of the string. If CFString happens to have the string contents encoded in UTF8 (or a suitably compatible encoding, such as ASCII), then it's likely CFStringGetCStringPtr() will return non-NULL. This is obviously the best, and fastest, case. If it can't get that pointer for some reason, say if CFString has its contents encoded in UTF16, then it allocates a buffer with malloc() that is large enough to contain the entire string when it is transcoded to UTF8. Then, at the end of the function, it checks to see if memory was allocated and free()'s it if necessary.
And now for a few tips and tricks... CFString 'tends to' (and this is a private implementation detail, so it can and does change between releases) keep 'simple' strings encoded as MacRoman, which is an 8-bit wide encoding. MacRoman, like UTF8, is a superset of ASCII, such that all characters < 128 are equivalent to their ASCII counterparts (or, in other words, any character < 128 is ASCII). In MacRoman, characters >= 128 are 'special' characters. They all have Unicode equivalents, and tend to be things like extra currency symbols and 'extended western' characters. See Wikipedia - MacRoman for more info. But just because a CFString says it's MacRoman (CFString encoding value of kCFStringEncodingMacRoman, NSString encoding value of NSMacOSRomanStringEncoding) doesn't mean that it has characters >= 128 in it. If a kCFStringEncodingMacRoman encoded string returned by CFStringGetCStringPtr() is composed entirely of characters < 128, then it is exactly equivalent to its ASCII (kCFStringEncodingASCII) encoded representation, which is also exactly equivalent to the string's UTF8 (kCFStringEncodingUTF8) encoded representation.
Depending on your requirements, you may be able to 'get by' using kCFStringEncodingMacRoman instead of kCFStringEncodingUTF8 when calling CFStringGetCStringPtr(). Things 'may' (probably will) be faster if you require strict UTF8 encoding for your strings: use kCFStringEncodingMacRoman, then check that the string returned by CFStringGetCStringPtr(string, kCFStringEncodingMacRoman) only contains characters that are < 128. If there are characters >= 128 in the string, then go the slow route by malloc()ing a buffer to hold the converted results. Example:
CFIndex stringLength = CFStringGetLength(cfString), usedBytes = 0L;
useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingMacRoman);
for(CFIndex idx = 0L; (useUTF8StringPtr != NULL) && (useUTF8StringPtr[idx] != 0); idx++) {
    // Characters >= 128 are MacRoman-specific, so the pointer is not valid UTF8.
    if(((unsigned char)useUTF8StringPtr[idx]) >= 128) { useUTF8StringPtr = NULL; }
}
if((useUTF8StringPtr == NULL) && ((freeUTF8StringPtr = malloc(stringLength + 1L)) != NULL)) {
    CFStringGetBytes(cfString, CFRangeMake(0L, stringLength), kCFStringEncodingUTF8, '?', false, freeUTF8StringPtr, stringLength, &usedBytes);
    freeUTF8StringPtr[usedBytes] = 0;
    useUTF8StringPtr = (const char *)freeUTF8StringPtr;
}
Like I said, you don't really appreciate just how much work Cocoa does for you automatically until you have to do it all yourself. :)
In the sample code above, the following appears:
CFIndex stringLength = CFStringGetLength(cfString)
stringLength is then being used to malloc() a temporary buffer of that many bytes, plus 1.
But the header file for CFStringGetLength() expressly says it returns the number of 16-bit Unicode characters, not bytes. So if some of those Unicode characters are outside the ASCII range, the malloc() buffer won't be long enough to hold the UTF-8 conversion of the string.
Perhaps I'm missing something, but to be absolutely safe, the number of bytes needed to hold N arbitrary Unicode characters is at most 4*N when they're all converted to UTF-8.
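As an aside (my addition, not part of the original comment), CoreFoundation provides a helper that computes exactly this worst-case size, so the bound need not be hand-rolled:

// Upper bound on the bytes needed to hold cfString in UTF-8;
// add one byte for the trailing '\0'.
CFIndex maxBytes = CFStringGetMaximumSizeForEncoding(CFStringGetLength(cfString), kCFStringEncodingUTF8) + 1;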
From the documentation:
Whether or not this function returns a valid pointer or NULL depends on many factors, all of which depend on how the string was created and its properties. In addition, the function result might change between different releases and on different platforms. So do not count on receiving a non-NULL result from this function under any circumstances.
You should use CFStringGetCString if CFStringGetCStringPtr returns NULL.
Here's some working code. I started with @johne's answer, replaced CFStringGetBytes with CFStringGetCString for simplicity, and made the correction suggested by @Doug.
const char *useUTF8StringPtr = NULL;
char *freeUTF8StringPtr = NULL;

if ((useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingUTF8)) == NULL)
{
    CFIndex stringLength = CFStringGetLength(cfString);
    CFIndex maxBytes = 4 * stringLength + 1;
    freeUTF8StringPtr = malloc(maxBytes);
    CFStringGetCString(cfString, freeUTF8StringPtr, maxBytes, kCFStringEncodingUTF8);
    useUTF8StringPtr = freeUTF8StringPtr;
}

// ... do something with useUTF8StringPtr...

if (freeUTF8StringPtr != NULL)
    free(freeUTF8StringPtr);
If it's destined for a socket, perhaps CFStringGetBytes() would be your best choice?
Also note that the documentation for CFStringGetCStringPtr() says:
This function either returns the requested pointer immediately, with no memory allocations and no copying, in constant time, or returns NULL. If the latter is the result, call an alternative function such as the CFStringGetCString function to extract the characters.
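For the socket case suggested above, a rough sketch (my addition; assumes the cfString from the earlier snippets and a connected socket descriptor fd):

#include <unistd.h>

UInt8 buf[1024];
CFIndex usedBytes = 0;
CFIndex converted = CFStringGetBytes(cfString, CFRangeMake(0, CFStringGetLength(cfString)),
                                     kCFStringEncodingUTF8, '?', false, buf, sizeof(buf), &usedBytes);
if (converted > 0) {
    write(fd, buf, (size_t)usedBytes); // transcodes straight into the buffer; no C string needed
}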
Here's a way to printf a CFStringRef; note that it prints with an explicit length, so it does not rely on getting a '\0'-terminated string from the CFStringRef:
// from: http://lists.apple.com/archives/carbon-development/2001/Aug/msg01367.html
// by Ali Ozer
// gcc -Wall -O3 -x objective-c -fobjc-exceptions -framework Foundation test.c
#import <stdio.h>
#import <Foundation/Foundation.h>
/*
This function will print the provided arguments (printf style varargs) out to the console.
Note that the CFString formatting function accepts "%@" as a way to display CF types.
For types other than CFString and CFNumber, the result of %@ is mostly for debugging
and can differ between releases and different platforms. Cocoa apps (or any app which
links with the Foundation framework) can use NSLog() to get this functionality.
*/
void show(CFStringRef formatString, ...) {
    CFStringRef resultString;
    CFDataRef data;
    va_list argList;

    va_start(argList, formatString);
    resultString = CFStringCreateWithFormatAndArguments(NULL, NULL, formatString, argList);
    va_end(argList);

    data = CFStringCreateExternalRepresentation(NULL, resultString, CFStringGetSystemEncoding(), '?');

    if (data != NULL) {
        printf("%.*s\n", (int)CFDataGetLength(data), CFDataGetBytePtr(data));
        CFRelease(data);
    }
    CFRelease(resultString);
}

int main(void)
{
    // To use:
    int age = 25;
    CFStringRef name = CFSTR("myname");
    show(CFSTR("Name is %@, age is %d"), name, age);
    return 0;
}