Issue With Code: Format string is not a string literal [duplicate] - iphone

Possible Duplicate:
SnowLeopard Xcode warning: “format not a string literal and no format arguments”
I am getting the following issue for this line of code.
"Format string is not a string literal (potentially insecure)"
NSLog([NSString stringWithFormat:@"%@", entered]);
Any suggestions?

The compiler wants us to use an NSString constant for the format string (the first argument to NSLog) because a non-literal format string opens the door to a fairly well-known security exploit. So, for example, you could change the code you posted as follows to keep the compiler happy:
NSLog(#"%#", [NSString stringWithFormat:#"%#", entered]);
EDIT
And of course, the above could (and should) simply be written as follows:
NSLog(#"%#", entered);
Nature of Security Exploits
Uncontrolled format string[1] is a type of software vulnerability,
discovered around 1999, that can be used in security exploits.
Previously thought harmless, format string exploits can be used to
crash a program or to execute harmful code. The problem stems from the
use of unchecked user input as the format string parameter in certain
C functions that perform formatting, such as printf(). A malicious
user may use the %s and %x format tokens, among others, to print data
from the stack or possibly other locations in memory. One may also
write arbitrary data to arbitrary locations using the %n format token,
which commands printf() and similar functions to write the number of
bytes formatted to an address stored on the stack.
A typical exploit
uses a combination of these techniques to force a program to overwrite
the address of a library function or the return address on the stack
with a pointer to some malicious shellcode. The padding parameters to
format specifiers are used to control the number of bytes output and
the %x token is used to pop bytes from the stack until the beginning
of the format string itself is reached. The start of the format string
is crafted to contain the address that the %n format token can then
overwrite with the address of the malicious code to execute.
Source: Wikipedia, Uncontrolled format string
[1]: "CWE-134: Uncontrolled Format String", Common Weakness Enumeration, MITRE. http://cwe.mitre.org/data/definitions/134.html
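To make the risk concrete, here is a minimal sketch (the hostile input value is hypothetical) of how a non-literal format string goes wrong, next to the safe call:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Imagine this value came from a text field or a network response.
        NSString *entered = @"progress: 100%% %@ %n";  // hypothetical hostile input

        // Unsafe: 'entered' itself becomes the format string, so its %@ and %n
        // tokens make NSLog read from (and potentially write to) memory it was
        // never given. This typically crashes, and can be exploited.
        //NSLog(entered);

        // Safe: the format is a literal, so 'entered' is printed as plain data.
        NSLog(@"%@", entered);
    }
    return 0;
}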

Here is the solution.
Warning: "format not a string literal and no format arguments"
Try with
NSLog(#"%#",entered);
because NSLog can also do formatting for you...

Try:
NSLog(#"%#",[NSString stringWithFormat:#"%#",entered]);
Hope this helps you. :)

Related

In Perl, are user-provided format specifiers always safe?

I would like to provide a user with access to writing their own format specifier. I know that in some languages like C that there is a format specifier attack.
Are there any attacks against format specifiers in Perl for calling functions like sprintf and allowing the user to provide the format specifier?
In this example, can you contrive anything in $unsafe_data that would be unsafe in Perl?
return sprintf($unsafe_data, $internal_value);
There are potentially unwanted effects:
%n can be used to modify the arguments.
Long strings can be generated, which can cause performance issues or even result in brutal termination.
Warnings can be generated: noise at best, or an exception if warnings are made fatal.
The internal representation of scalars can be changed (e.g., by formatting "abc" using %d); probably harmless, but it could have subtle effects.

Difference between NSLog and Printf statement for ObjectiveC

I want to know about the difference between the NSLog and the Printf statement in Objective-C (for application purpose...!)
Why do all developers use NSLog instead of printf?
Both look similar, but what is the difference in internal working?
At which point can they be differentiated ?
printf() is a C standard library function, accepting a C string constant (const char *) as its format argument. printf() writes to stdout.
NSLog() is a Foundation function, accepting a constant NSString as its format, and it has an extended format specifier set (for example, printf() doesn't print objects specified by %@, but NSLog() does).
NSLog() also prints the process name and date before it prints the actual format, and it writes to stderr.
Basically, we can say that NSLog() is an extended, printf()-style function for Objective-C (more precisely, for Cocoa and Cocoa Touch) and its specific purposes.
NSLog is like a printf, but it does a bit more:
A timestamp is added to the output.
The output is sent to the Xcode console, or whatever stderr is defined as.
It accepts all the printf specifiers, but it also accepts the %@ specifier for objects, which displays the string provided by the object's description method. (description is part of NSObject, so all objects can override it to return a string that describes the object.)
The output is also sent to the Apple System Log (ASL), which is Apple's version of syslogd. This data can be read by other applications using a C API, or by an OS X user using the "Console" application.
From a developer point of view, the biggest difference is that NSLog supports Objective-C object types via the %@ format. NSLog also writes to stderr, while printf writes to stdout.
I see two main differences between NSLog and printf:
NSLog supports NSString objects through the %@ extension;
furthermore, NSLog automatically adds time and process data (e.g., 2012-01-25 17:52:10.479 process[906:707])
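A small side-by-side sketch of the two calls, using nothing beyond the standard APIs discussed above:

#import <Foundation/Foundation.h>
#include <stdio.h>

int main(void) {
    @autoreleasepool {
        NSDate *now = [NSDate date];

        // printf: C format string, no %@ support, writes to stdout with no
        // timestamp; objects must be converted to C strings by hand.
        printf("printf: %s\n", [[now description] UTF8String]);

        // NSLog: accepts %@ (which calls -description on the object),
        // prefixes date/process info, and writes to stderr and the ASL.
        NSLog(@"NSLog: %@", now);
    }
    return 0;
}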

NSString to NSData encoding considerations

I understand why when going from NSData to NSString you need to specify encoding.
However, I find it frustrating that the reverse (NSString to NSData) also needs an encoding specified.
In this related question the answers suggested using
NSUTF8StringEncoding or defaultCStringEncoding, with the latter not being fully explained.
So I just wanted to ask if the following is correct when converting NSString to NSData:
In cases where you want to be 100% sure the binary representation of the NSString object is UTF-8, use NSUTF8StringEncoding (or whatever encoding is needed).
In cases where the encoding of the NSString object is known or expected to already be of a certain type, and no conversion is required, it's safe (and perhaps internally faster) to use defaultCStringEncoding. (From what I have read, Objective-C uses UTF-16 internally; I'm not sure if LE or BE, but I'd assume LE because the platform is LE.)
TIA
The encoding needs to be specified for converting NSString to NSData for the same reason it needs to be specified going from NSData to NSString.
An NSData object is a wrapper for a string of absolutely raw bytes. If the NSString doesn't specify some encoding, it doesn't know what to write, because at the level of ones and zeroes, a UTF-16 encoding looks different from a UTF-8 encoding of the same letter, and of course, if you write UTF-16 as big-endian and read it as little-endian you will get gibberish.
In other words, don't think of it as converting or escaping a string; it's generating a byte buffer, and the encoding tells it which ones and zeroes to write when the next character is "a" and which ones to write when it means "妈".
As for your question...here's my two cents.
1) If you are converting an NSString to an NSData so that your same program can convert it back later, and no other software will need to deal with that NSData until after you've read it back into an NSString, then none of this matters. All that matters is that your string-to-data encoding and your data-to-string encoding match.
2) If you are dealing only with ASCII characters, you can probably get away with a lot, just because many kinds of encoding use the same representation for characters under 128. But this breaks easily, even with little things like smart quotes.
3) Despite the name, defaultCStringEncoding is not something you should use as a default. It's designed for special circumstances where you need to deal with system strings and don't otherwise know how the system deals with its internal strings. It refers to the way strings are handled in the default C implementation, NOT in the NSString internals, so there's not necessarily a performance benefit.
4) If you write a string with one encoding and try to read it back with a different encoding, your code will fail; in many cases, you will just end up with an empty string.
Bottom line is: who will be trying to interpret your NSData objects? If it's your own app, pick an encoding that makes sense for you (I use UTF8 for everything) and use it for both conversions. Otherwise, figure out what your ecosystem needs to read or write and make that your standard.
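As a minimal sketch of that round trip, using UTF-8 in both directions per the advice above:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSString *original = @"naïve 妈";

        // String -> bytes: the encoding determines exactly which bytes
        // are produced for each character.
        NSData *data = [original dataUsingEncoding:NSUTF8StringEncoding];

        // Bytes -> string: must use the same encoding; a mismatched
        // encoding yields nil or garbage instead of the original text.
        NSString *roundTrip = [[NSString alloc] initWithData:data
                                                    encoding:NSUTF8StringEncoding];

        NSLog(@"round trip ok? %d", [roundTrip isEqualToString:original]);
    }
    return 0;
}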

Stig JSON library parse error: How do you accommodate new lines in JSON?

I have some XML coming back from a web service. I in turn use XSLT to turn that XML into JSON (I am turning someone else's XML service into a JSON-based service). My service, which now outputs JSON, is consumed by my iPhone app using the de facto iPhone JSON framework, SBJSON.
The problem is that the [string JSONValue] method chokes, and I can see that it's due to line breaks. Lo and behold, even the FAQ tells me the problem, but I don't know how to fix it.
The parser fails to parse string X
Are you sure it's legal JSON? This framework is really strict, so it won't accept stuff that (apparently) several validators accept. In particular, literal TAB, NEWLINE or CARRIAGE RETURN characters (and all other control characters) are disallowed in string tokens, but can be very difficult to spot. (These characters are allowed between tokens, of course.)
If you get something like the below (the number may vary) then one of your strings has disallowed Unicode control characters in it.
NSLocalizedDescription = "Unescaped control character '0x9'";
I have tried using a line such as: NSString *myString = [myString stringByReplacingOccurrencesOfString:@"\n" withString:@"\\n"];
But that doesn't work. My XML service is not coming back as CDATA. The XML does have a line break in it as far as I can tell (how would I confirm this?). I just want to faithfully transmit the line break into JSON.
I have actually spent an entire day on this, so it's time to ask. I have no pride anymore.
Thanks a lot
Escaping the newline character should work, so the following line should ideally do it. Just check whether your input also contains '\r' characters.
NSString *myString = [myString stringByReplacingOccurrencesOfString:@"\n" withString:@"\\n"];
You can check which control characters are present in the string using any editor that can display non-printable characters as well, e.g., Notepad++.
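For completeness, a sketch along those lines that escapes the common control characters before parsing. Here rawJSON is a hypothetical stand-in for your service response, and this assumes the stray newlines live inside string values; the cleaner fix is to escape them in the XSLT that emits the JSON.

// Escape bare control characters inside the (hypothetical) rawJSON string
// so SBJSON's strict parser will accept it. Order matters: handle \r\n first.
NSString *escaped = rawJSON;
escaped = [escaped stringByReplacingOccurrencesOfString:@"\r\n" withString:@"\\n"];
escaped = [escaped stringByReplacingOccurrencesOfString:@"\n" withString:@"\\n"];
escaped = [escaped stringByReplacingOccurrencesOfString:@"\r" withString:@"\\n"];
escaped = [escaped stringByReplacingOccurrencesOfString:@"\t" withString:@"\\t"];
id parsed = [escaped JSONValue];  // SBJSON's category method on NSString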
It sounds like your XSLT is not working, in that it is not producing legal JSON. This is unsurprising, as producing correctly formatted JSON strings is not entirely trivial. I'm wondering if it would be simpler to just use the standard XML library to parse the XML into data structures that your app can consume.
I don't have a solution for you, but I usually use CJSONSerializer and CJSONDeserializer from the TouchJSON project and it is pretty reliable, I have never had a problem with line breaks before. Just a thought.
http://code.google.com/p/touchcode/source/browse/TouchJSON/Source/JSON/CJSONDeserializer.m?r=6294fcb084a8f174e243a68ccfb7e2c519def219
http://code.google.com/p/touchcode/source/browse/TouchJSON/Source/JSON/CJSONSerializer.m?r=3f52118ae2ff60cc34e31dd36d92610c9dd6c306

How should I handle digits from different sets of Unicode digits in the same string?

I am writing a function that transliterates Unicode digits into ASCII digits, and I am a bit stumped on what to do if the string contains digits from different sets of Unicode digits. So, for example, suppose I have the string "\x{2463}\x{24F6}" ("④⓶"). Should my function
return 42?
croak that the string contains mixed sets?
carp that the string contains mixed sets and return 42?
give the user an additional argument to specify one of the three above behaviours?
do something else?
Your current function appears to do #1.
I suggest that you should also write another function to do #4, but only when the requirement appears, and not before.
I'm sure Joel wrote about "premature implementation" in a blog article sometime recently, but I can't find it.
I'm not sure I see a problem.
You support numeric conversion from a range of scripts, which is to say, you are aware of the Unicode codepoints for their numeric characters.
If you find an unknown codepoint in your input data, it is an error.
It is up to you what you do in the event of an error; you may insert a space or underscore, or you may abort conversion. What you would do will depend on the environment in which your function executes; it is not something we can tell you.
My initial thought was #4; strictly based on the fact that I like options. However, I changed my mind, when I viewed your function.
The purpose of the function seems to be, simply, to get the resulting digits 0..9. Users may find it useful to send in mixed sets (a feature :). I'll use it.
If you ever have to handle input in bases greater than 10, you may end up having to treat many variants on the first 6 letters of the Latin alphabet ('ABCDEF') as digits in all their forms.