I am trying to convert a UTF-8 string to Shift_JIS using the iconv API. It already compiles successfully, but the output isn't what I expected.
My code:
void convertUtf8ToSjis(char* utf8, char* sjis){
    iconv_t icd;
    char *p_src, *p_dst;
    size_t n_src, n_dst;

    icd = iconv_open("Shift_JIS", "UTF-8");
    p_src = utf8;
    p_dst = sjis;
    n_src = strlen(utf8);
    n_dst = 32; // my sjis string size
    iconv(icd, &p_src, &n_src, &p_dst, &n_dst);
    iconv_close(icd);
}
I get only random-looking bytes. Any ideas?
Edit:
My input is
char utf8[] = "\xe4\xba\x9c"; //亜
And output should be:
0x88 0x9F
But is in fact:
0x30 0x00 0x00 0x31 0x00 ...
I was unable to duplicate the problem. The only thing I can suggest is to be careful about your allocations.
Code:
#include <iconv.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
void convertUtf8ToSjis(char* utf8, char* sjis){
...
}
int main(int argc, char *argv[])
{
    char utf8[] = "\xe4\xba\x9c";
    char *sjis;

    sjis = calloc(1, 32); /* zeroed, so the print loop below always finds a terminator */
    convertUtf8ToSjis(utf8, sjis);

    int i;
    for (i = 0; sjis[i]; i++)
    {
        printf("%02x\n", (unsigned char)sjis[i]);
    }
    free(sjis);
}
Output:
$ gcc t.c
$ ./a.out
88
9f
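If it helps, here is a more defensive variant of the conversion with error checking and explicit NUL-termination. This is a minimal sketch, assuming the caller passes the output buffer size:

#include <iconv.h>
#include <stdio.h>
#include <string.h>

/* Returns 0 on success, -1 on failure; writes a NUL-terminated
   Shift_JIS string into sjis (sjis_size bytes, assumed >= 1). */
int convertUtf8ToSjisChecked(char *utf8, char *sjis, size_t sjis_size)
{
    iconv_t icd = iconv_open("Shift_JIS", "UTF-8");
    if (icd == (iconv_t)-1) {
        perror("iconv_open");
        return -1;
    }

    char *p_src = utf8, *p_dst = sjis;
    size_t n_src = strlen(utf8), n_dst = sjis_size - 1;

    if (iconv(icd, &p_src, &n_src, &p_dst, &n_dst) == (size_t)-1) {
        perror("iconv"); /* e.g. EILSEQ for an invalid input sequence */
        iconv_close(icd);
        return -1;
    }
    *p_dst = '\0'; /* iconv does not terminate the output itself */
    iconv_close(icd);
    return 0;
}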
Related
On one computer, running a client written in C, I get the error send: Bad address when I try to send chars to another computer running a server written in Python. But the address is NOT bad.
If instead of the chars I send a string literal, "A string written like this", it reaches the server just fine and prints with no problems. So I don't think there is really a problem with an address.
I have also tried converting the int to a string; then I get the compile error cannot convert string to char. I have tried variations, and the client written as it is below is the only version that compiles.
The client (in C)
#include <sys/socket.h>
#include <sys/types.h>
#include <netdb.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <vector>
#include <cstring>
#include <stdio.h>
#include <stdlib.h>
#define ADDR "192.168.0.112"
#define PORT "12003"
void sendall(int socket, char *bytes, int length)
{
    int n = 0, total = 0;
    while (total < length) {
        n = send(socket, bytes + total, total-length, 0);
        if (n == -1) {
            perror("send");
            exit(1);
        }
        total += n;
    }
}
void thesock(char *ADDRf, char *PORTf, char *RAZZstr)
{
    struct addrinfo hints = {0}, *addr = NULL;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    int status = getaddrinfo(ADDRf, PORTf, &hints, &addr);
    if (status != 0) {
        std::cerr << "Error message";
        exit(1);
    }
    int sock = -1;
    struct addrinfo *p = NULL;
    for (p = addr; p != NULL; p = p->ai_next) {
        sock = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (sock == -1) {
            continue;
        }
        if (connect(sock, p->ai_addr, p->ai_addrlen) != -1) {
            break;
        }
        close(sock);
    }
    if (p == NULL) {
        fprintf(stderr, "connect(), socket()\n");
        exit(1);
    }
    sendall(sock, RAZZstr, 12);
    close(sock);
}
int main()
{
    int someInt = 321;
    char strss[12];
    sprintf(strss, "%d", someInt);
    thesock(ADDR, PORT, strss);
    return 0;
}
This last part of the code above is where the chars, or string, are passed in. If you replace strss in the call to thesock with a string literal written "just like this", it sends to the Python server on the other computer without problems, though when compiling I do get the warning ISO C++ forbids converting a string constant to ‘char*’.
The server (In Python)
import os
import sys
import socket

s = socket.socket()
host = '192.168.0.112'
port = 12003
s.bind((host, port))
s.listen(11)
while True:
    c, addr = s.accept()
    content = c.recv(29).decode('utf-8')
    print(content)
This server decodes utf-8. I don't know if I have the option for a different 'decode' here. I don't think Python has 'chars'.
TL;DR: this is unrelated to "address" in the sense of an IP address; it is about an invalid access to local memory.
int n = 0, total = 0;
while (total < length) {
n = send(socket, bytes + total, total-length, 0);
total - length is a negative number, i.e. 0-12 = -12 in your case. The third argument of send is of type size_t, i.e. an unsigned integer. The negative number (-12) thus gets cast into an unsigned integer, resulting in a huge unsigned integer.
This causes send to access memory far outside the allocated memory for bytes, hence EFAULT "Bad address".
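For reference, a fixed sendall, as a minimal sketch; the only substantive change is passing length - total (the number of bytes still to send) as the third argument:

void sendall(int socket, char *bytes, int length)
{
    int n = 0, total = 0;
    while (total < length) {
        /* length - total is the remaining byte count: always >= 0 */
        n = send(socket, bytes + total, length - total, 0);
        if (n == -1) {
            perror("send");
            exit(1);
        }
        total += n;
    }
}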
I just started using the CImg library and I would like to test it with raw data, but I am having real trouble figuring out why I cannot obtain correct results.
My RGB24 raw file format is interleaved (RGBRGBRGB...), while CImg stores data as explained here.
The (latest) test code I wrote is below; it tries all 4! = 24 possible argument combinations for the permute_axes method.
#ifndef _USE_MATH_DEFINES
#define _USE_MATH_DEFINES
#endif
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifdef cimg_display
#undef cimg_display
#define cimg_display 0
#endif
#include "CImg.h"
#include <iostream>
#include <iomanip>
#include <sstream>
#include <cstdint>
#include <cstdlib>
#include <thread>
#include <vector>
#include <array>
#include <cmath>
#include <unistd.h>
#include <sys/file.h>
#include <fcntl.h>
using namespace std;
using namespace cimg_library;
#define MODE_700 (mode_t) S_IRWXU // 00700
#define MODE_400 (mode_t) S_IRUSR // 00400
#define MODE_200 (mode_t) S_IWUSR // 00200
#define MODE_100 (mode_t) S_IXUSR // 00100
#define MODE_070 (mode_t) S_IRWXG // 00070
#define MODE_040 (mode_t) S_IRGRP // 00040
#define MODE_020 (mode_t) S_IWGRP // 00020
#define MODE_010 (mode_t) S_IXGRP // 00010
#define MODE_007 (mode_t) S_IRWXO // 00007
#define MODE_004 (mode_t) S_IROTH // 00004
#define MODE_002 (mode_t) S_IWOTH // 00002
#define MODE_001 (mode_t) S_IXOTH // 00001
#define MODE_777 ((mode_t)(MODE_700 | MODE_070 | MODE_007)) // 00777
int main(int argc, const char *argv[])
{
    //uint8_t buffer[960 * 1280 * 3];
    CImg <uint8_t> image(960, 1280, 1, 3);
    // CImg <float> gaussk(7,7,3);
    uint8_t buffer[960 * 1280 * 3] = {0};
    float var = 1.5*1.5;
    // int32_t xtmp;
    // int32_t ytmp;
    int32_t fd = -1;
    stringstream sstr;

    cout << "image size is: " << unsigned(image.size()) << endl;
    printf("opening %s\n", argv[1]);
    fd = open(argv[1], O_RDONLY);
    if(fd == -1)
    {
        perror("open: ");
        return -1;
    }
    cout << "Reading..." << endl;
    // read(fd, image.data(), image.size());
    read(fd, buffer, sizeof(buffer));
    close(fd);
    fd = -1;
    // all 4! = 24 axis orders, indexed 0..23
    static const char *perms[24] = {
        "xyzc", "xycz", "xzyc", "xzcy", "xcyz", "xczy",
        "yxzc", "yxcz", "yzxc", "yzcx", "ycxz", "yczx",
        "zxyc", "zxcy", "zyxc", "zycx", "zcxy", "zcyx",
        "cxyz", "cxzy", "cyxz", "cyzx", "czxy", "czyx"
    };
    for(uint8_t i = 0; i < 24; i++)
    {
        sstr << "blurred_" << unsigned(i) << ".raw";
        image.assign(buffer, 960, 1280, 1, 3);
        image.permute_axes(perms[i]);
        image.blur(2.5);
        // image.save("blurred.bmp");
        cout << "Writing " << sstr.str() << endl;
        fd = open(sstr.str().c_str(), O_RDWR | O_CREAT, MODE_777);
        write(fd, image.data(), image.size());
        close(fd);
        sstr.str("");
    }
    return 0;
}
But the output blurred frame is wrong, most of the time in grayscale.
According to the header file CImg.h this conversion should be possible, and that is also my understanding after reading this blog post.
Before I start writing my own function to work around this, what is the correct way to deal with a raw file and to make such a conversion with CImg?
So you have interleaved data and want it non-interleaved so that CImg functions can work with it.
So why not do exactly what the blog post you referenced advises under IMPORTANT:
CImg result(dataPointer, spectrum, width, height, depth, false);
result.permute_axes("yzcx");
For your code that would be:
image.assign(buffer, 3, 960, 1280, 1); // note the 3 being first
image.permute_axes("yzcx");
image.blur(2.5);
// image.save("blurred.bmp");
//now convert back to interleaved
image.permute_axes("cxyz");
fd = open(sstr.str().c_str(), O_RDWR | O_CREAT, MODE_777);
write(fd, image.data(), image.size());
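If you want to see what that permutation does at the byte level, here is a plain-C sketch (independent of CImg; the function names are made up for this example) converting an interleaved RGB24 buffer to the planar layout CImg uses, and back:

#include <stdint.h>
#include <stddef.h>

/* Interleaved RGBRGB... -> planar RRR...GGG...BBB... (CImg's layout) */
void deinterleave_rgb(const uint8_t *src, uint8_t *dst,
                      size_t width, size_t height)
{
    size_t npix = width * height;
    for (size_t i = 0; i < npix; i++) {
        dst[i]            = src[3 * i];     /* R plane */
        dst[npix + i]     = src[3 * i + 1]; /* G plane */
        dst[2 * npix + i] = src[3 * i + 2]; /* B plane */
    }
}

/* Planar -> interleaved: the inverse, for writing the result back out */
void interleave_rgb(const uint8_t *src, uint8_t *dst,
                    size_t width, size_t height)
{
    size_t npix = width * height;
    for (size_t i = 0; i < npix; i++) {
        dst[3 * i]     = src[i];
        dst[3 * i + 1] = src[npix + i];
        dst[3 * i + 2] = src[2 * npix + i];
    }
}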
If I have a character string, how can I convert the values to hexadecimal in Objective-C? Likewise, how can I convert from a hexadecimal string to a character string?
As an exercise and in case it helps, I wrote a program to demonstrate how I might do this in pure C, which is 100% legal in Objective-C. I used the string-formatting functions in stdio.h to do the actual conversions.
Note that this can (should?) be tweaked for your setting. It will create a string twice as long as the passed-in string when going char->hex (converting 'Z' to '5a' for instance), and a string half as long going the other way.
I wrote this code in such a way that you can simply copy/paste and then compile/run to play around with it.
My favorite way to include C in XCode is to make a .h file with the function declarations separate from the .c file with implementation. See the comments:
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
// Place these prototypes in a .h to #import from wherever you need 'em
// Do not import the .c file anywhere.
// Note: You must free() these char *s
//
// allocates space for strlen(arg) * 2 and fills
// that space with chars corresponding to the hex
// representations of the arg string
char *makeHexStringFromCharString(const char*);
//
// allocates space for about 1/2 strlen(arg)
// and fills it with the char representation
char *makeCharStringFromHexString(const char*);
// this is just sample code
int main() {
    char source[256];
    printf("Enter a Char string to convert to Hex:");
    scanf("%s", source);

    char *output = makeHexStringFromCharString(source);
    printf("converted '%s' TO: %s\n\n", source, output);
    free(output);

    printf("Enter a Hex string to convert to Char:");
    scanf("%s", source);

    output = makeCharStringFromHexString(source);
    printf("converted '%s' TO: %s\n\n", source, output);
    free(output);
}
// Place these in a .c file (named same as .h above)
// and include it in your target's build settings
// (should happen by default if you create the file in Xcode)
char *makeHexStringFromCharString(const char *input) {
    char *output = malloc(sizeof(char) * strlen(input) * 2 + 1);
    int i, limit;
    for (i = 0, limit = strlen(input); i < limit; i++) {
        /* %02x with a cast to unsigned char: always exactly two digits,
           and no sign extension for chars >= 0x80 */
        sprintf(output + (i*2), "%02x", (unsigned char)input[i]);
    }
    output[strlen(input)*2] = '\0';
    return output;
}
char *makeCharStringFromHexString(const char *input) {
    char *output = malloc(sizeof(char) * (strlen(input) / 2) + 1);
    char sourceSnippet[3] = {[2]='\0'};
    unsigned int value;
    int i, limit;
    for (i = 0, limit = strlen(input); i + 1 < limit; i += 2) {
        sourceSnippet[0] = input[i];
        sourceSnippet[1] = input[i+1];
        /* scan into a temporary int, then store a single byte;
           scanning straight into output would write 4 bytes there */
        sscanf(sourceSnippet, "%x", &value);
        output[i/2] = (char)value;
    }
    output[strlen(input)/2] = '\0';
    return output;
}
I am working on IPv6 and need to craft an IPv6 packet from scratch and put it into a buffer. Unfortunately I do not have much experience with C. From a tutorial I have successfully done the same thing with IPv4 by defining
struct ipheader {
    unsigned char      iph_ihl:5,  /* Little-endian */
                       iph_ver:4;
    unsigned char      iph_tos;
    unsigned short int iph_len;
    unsigned short int iph_ident;
    unsigned char      iph_flags;
    unsigned short int iph_offset;
    unsigned char      iph_ttl;
    unsigned char      iph_protocol;
    unsigned short int iph_chksum;
    unsigned int       iph_sourceip;
    unsigned int       iph_destip;
};

/* Structure of a TCP header */
struct tcpheader {
    unsigned short int tcph_srcport;
    unsigned short int tcph_destport;
    unsigned int       tcph_seqnum;
    unsigned int       tcph_acknum;
    unsigned char      tcph_reserved:4, tcph_offset:4;
    // unsigned char   tcph_flags;
    unsigned int
        tcp_res1:4,   /* little-endian */
        tcph_hlen:4,  /* length of tcp header in 32-bit words */
        tcph_fin:1,   /* Finish flag "fin" */
        tcph_syn:1,   /* Synchronize sequence numbers to start a connection */
        tcph_rst:1,   /* Reset flag */
        tcph_psh:1,   /* Push, sends data to the application */
        tcph_ack:1,   /* acknowledge */
        tcph_urg:1,   /* urgent pointer */
        tcph_res2:2;
    unsigned short int tcph_win;
    unsigned short int tcph_chksum;
    unsigned short int tcph_urgptr;
};
and fill the packet content in like this:
// IP structure
ip->iph_ihl = 5;
ip->iph_ver = 4;
ip->iph_tos = 16;
ip->iph_len = sizeof (struct ipheader) + sizeof (struct tcpheader);
ip->iph_ident = htons(54321);
ip->iph_offset = 0;
ip->iph_ttl = 64;
ip->iph_protocol = 6; // TCP
ip->iph_chksum = 0; // Done by kernel

// Source IP, modify as needed, spoofed, we accept through command line argument
ip->iph_sourceip = inet_addr("192.168.1.128");
// Destination IP, modify as needed, but here we accept through command line argument
ip->iph_destip = inet_addr("192.168.1.1");

// The TCP structure. The source port, spoofed, we accept through the command line
tcp->tcph_srcport = htons(atoi("1024"));
// The destination port, we accept through command line
tcp->tcph_destport = htons(atoi("4201"));
tcp->tcph_seqnum = htonl(1);
tcp->tcph_acknum = 0;
tcp->tcph_offset = 5;
tcp->tcph_syn = 1;
tcp->tcph_ack = 0;
tcp->tcph_win = htons(32767);
tcp->tcph_chksum = 0; // Done by kernel
tcp->tcph_urgptr = 0;

// IP checksum calculation
ip->iph_chksum = csum((unsigned short *) buffer, (sizeof (struct ipheader) + sizeof (struct tcpheader)));
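(csum here is the tutorial's checksum helper, not shown above. A typical implementation of the Internet checksum it refers to looks like the sketch below; this is an assumption, since the tutorial's version may take a word count rather than a byte count.)

/* Standard 16-bit one's-complement Internet checksum over nbytes bytes. */
unsigned short csum(unsigned short *buf, int nbytes)
{
    unsigned long sum = 0;
    while (nbytes > 1) {
        sum += *buf++;
        nbytes -= 2;
    }
    if (nbytes == 1)                     /* odd trailing byte */
        sum += *(unsigned char *)buf;
    sum = (sum >> 16) + (sum & 0xffff);  /* fold carries back in */
    sum += (sum >> 16);
    return (unsigned short)(~sum);
}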
However for IPv6 I have not found a similar way. What I have found so far is this struct from the IETF:
struct ip6_hdr {
    union {
        struct ip6_hdrctl {
            uint32_t ip6_un1_flow; /* 4 bits version, 8 bits TC,
                                      20 bits flow-ID */
            uint16_t ip6_un1_plen; /* payload length */
            uint8_t  ip6_un1_nxt;  /* next header */
            uint8_t  ip6_un1_hlim; /* hop limit */
        } ip6_un1;
        uint8_t ip6_un2_vfc;       /* 4 bits version, top 4 bits tclass */
    } ip6_ctlun;
    struct in6_addr ip6_src;       /* source address */
    struct in6_addr ip6_dst;       /* destination address */
};
But I do not know how to fill in the information. For example, how do I send a TCP SYN from 2001:220:806:22:aacc:ff:fe00:1 port 1024 to 2001:220:806:21::4 port 1025?
Could anybody help me, or point me to some references?
Thank you very much.
Edit: this is what I have done so far; however, there are mismatches between the code and the real packet captured by Wireshark (as discussed in the comments below). I'm not sure it is possible to post long code in the comment section, so I have edited my question instead.
Can anyone help?
#define PCKT_LEN 2000

int main(void) {
    unsigned char buffer[PCKT_LEN];
    int s;
    struct sockaddr_in6 din;
    struct ipv6_header *ip = (struct ipv6_header *) buffer;
    struct tcpheader *tcp = (struct tcpheader *) (buffer + sizeof (struct ipv6_header));

    memset(buffer, 0, PCKT_LEN);

    din.sin6_family = AF_INET6;
    din.sin6_port = htons(0);
    inet_pton(AF_INET6, "::1", &(din.sin6_addr)); // For routing

    ip->version = 6;
    ip->traffic_class = 0;
    ip->flow_label = 0;
    ip->length = 40;
    ip->next_header = 6;
    ip->hop_limit = 64;
    inet_pton(AF_INET6, "::1", &(ip->dst)); // IPv6
    inet_pton(AF_INET6, "::1", &(ip->src)); // IPv6

    tcp->tcph_srcport = htons(atoi("11111"));
    tcp->tcph_destport = htons(atoi("13"));
    tcp->tcph_seqnum = htons(0);
    tcp->tcph_acknum = 0;
    tcp->tcph_offset = 5;
    tcp->tcph_syn = 1;
    tcp->tcph_ack = 0;
    tcp->tcph_win = htons(32752);
    tcp->tcph_chksum = 0; // Done by kernel
    tcp->tcph_urgptr = 0;

    s = socket(PF_INET6, SOCK_RAW, IPPROTO_RAW);
    if (s < 0) {
        perror("socket()");
        return 1;
    }

    unsigned short int packet_len = sizeof (struct ipv6_header) + sizeof (struct tcpheader);
    if (sendto(s, buffer, packet_len, 0, (struct sockaddr*) &din, sizeof (din)) == -1) {
        perror("sendto()");
        close(s);
        return 1;
    }
    close(s);
    return 0;
}
Maybe this article can help you get started?
Edit:
Using the Wikipedia article linked above I made this structure (without knowing what some of the fields mean):
struct ipv6_header
{
    unsigned int
        version : 4,
        traffic_class : 8,
        flow_label : 20;
    uint16_t length;
    uint8_t  next_header;
    uint8_t  hop_limit;
    struct in6_addr src;
    struct in6_addr dst;
};
It's no different from how the header struct was made for IPv4 in your example: just create a struct containing the fields, in the right order and of the right sizes, and fill it with the right values.
Just do the same for the TCP headers.
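One caveat: the layout of bit-fields is implementation-defined, so on a typical little-endian machine the version/traffic_class/flow_label bit-field above will not match the wire format. A safer route is the system's struct ip6_hdr, filling the first 32-bit word as a whole. A minimal sketch, assuming Linux/glibc and <netinet/ip6.h> (the addresses are the ones from the question):

#include <netinet/ip6.h>  /* struct ip6_hdr and its ip6_flow/... accessor macros */
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <arpa/inet.h>    /* htonl, htons, inet_pton */

/* Fill the fixed IPv6 header for a packet carrying just a TCP header. */
void fill_ip6_header(struct ip6_hdr *ip6)
{
    /* version 6 in the top 4 bits; traffic class and flow label 0 */
    ip6->ip6_flow = htonl(6u << 28);
    ip6->ip6_plen = htons(20);       /* payload length: 20-byte TCP header */
    ip6->ip6_nxt  = IPPROTO_TCP;     /* next header */
    ip6->ip6_hlim = 64;              /* hop limit */
    inet_pton(AF_INET6, "2001:220:806:22:aacc:ff:fe00:1", &ip6->ip6_src);
    inet_pton(AF_INET6, "2001:220:806:21::4", &ip6->ip6_dst);
}

Note also that the kernel will not fill in the TCP checksum for a packet built this way; it has to be computed yourself, over an IPv6 pseudo-header (RFC 2460).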
Unfortunately the IPv6 RFCs don't provide the same raw-socket interface that you get with IPv4. From what I've seen, to craft complete IPv6 packets you have to go a level deeper and use an AF_PACKET socket to send an Ethernet frame containing your IPv6 packet.
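A minimal sketch of that approach, assuming Linux (the interface name and destination MAC below are placeholders for this example):

#include <linux/if_packet.h> /* struct sockaddr_ll, AF_PACKET */
#include <linux/if_ether.h>  /* ETH_P_IPV6, ETH_ALEN */
#include <sys/socket.h>
#include <arpa/inet.h>       /* htons */
#include <net/if.h>          /* if_nametoindex */
#include <string.h>
#include <stdio.h>

/* Send a prebuilt IPv6 packet (fixed header + payload) on "eth0".
   With SOCK_DGRAM the kernel prepends the Ethernet header for us. */
int send_ipv6_frame(const unsigned char *packet, size_t packet_len)
{
    int s = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IPV6));
    if (s < 0) { perror("socket"); return -1; }

    struct sockaddr_ll dst = {0};
    dst.sll_family   = AF_PACKET;
    dst.sll_protocol = htons(ETH_P_IPV6);
    dst.sll_ifindex  = if_nametoindex("eth0");  /* placeholder interface */
    dst.sll_halen    = ETH_ALEN;
    memcpy(dst.sll_addr, "\x00\x11\x22\x33\x44\x55", ETH_ALEN); /* placeholder MAC */

    if (sendto(s, packet, packet_len, 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("sendto");
        return -1;
    }
    return 0;
}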
This is the code I am using for encryption, but it generates the error
"CCKeyDerivationPBKDF is unavailable" in the AESKeyForPassword method, even though it is declared before the implementation. How can I resolve this?
#ifndef _CC_PBKDF_H_
#define _CC_PBKDF_H_
#include <sys/types.h>
#include <sys/param.h>
#include <string.h>
#include <limits.h>
#include <stdlib.h>
#include <Availability.h>
#include <CommonCrypto/CommonDigest.h>
#include <CommonCrypto/CommonHMAC.h>
#ifdef __cplusplus
extern "C" {
#endif
enum {
kCCPBKDF2 = 2,
};
typedef uint32_t CCPBKDFAlgorithm;
enum {
kCCPRFHmacAlgSHA1 = 1,
kCCPRFHmacAlgSHA224 = 2,
kCCPRFHmacAlgSHA256 = 3,
kCCPRFHmacAlgSHA384 = 4,
kCCPRFHmacAlgSHA512 = 5,
};
typedef uint32_t CCPseudoRandomAlgorithm;
/*
@function   CCKeyDerivationPBKDF
@abstract   Derive a key from a text password/passphrase

@param algorithm     Currently only PBKDF2 is available via kCCPBKDF2
@param password      The text password used as input to the derivation
                     function. The actual octets present in this string
                     will be used with no additional processing. It's
                     extremely important that the same encoding and
                     normalization be used each time this routine is
                     called if the same key is expected to be derived.
@param passwordLen   The length of the text password in bytes.
@param salt          The salt byte values used as input to the derivation
                     function.
@param saltLen       The length of the salt in bytes.
@param prf           The Pseudo Random Algorithm to use for the derivation
                     iterations.
@param rounds        The number of rounds of the Pseudo Random Algorithm
                     to use.
@param derivedKey    The resulting derived key produced by the function.
                     The space for this must be provided by the caller.
@param derivedKeyLen The expected length of the derived key in bytes.

@discussion The following values are used to designate the PRF:
            * kCCPRFHmacAlgSHA1
            * kCCPRFHmacAlgSHA224
            * kCCPRFHmacAlgSHA256
            * kCCPRFHmacAlgSHA384
            * kCCPRFHmacAlgSHA512

@result kCCParamError can result from bad values for the password, salt,
        and unwrapped key pointers as well as a bad value for the prf function.
*/
int CCKeyDerivationPBKDF( CCPBKDFAlgorithm algorithm, const char *password, size_t passwordLen,
                          const uint8_t *salt, size_t saltLen,
                          CCPseudoRandomAlgorithm prf, uint rounds,
                          uint8_t *derivedKey, size_t derivedKeyLen)
                          __OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);
/*
* All lengths are in bytes - not bits.
*/
/*
@function   CCCalibratePBKDF
@abstract   Determine the number of PRF rounds to use for a specific delay on
            the current platform.

@param algorithm     Currently only PBKDF2 is available via kCCPBKDF2
@param passwordLen   The length of the text password in bytes.
@param saltLen       The length of the salt in bytes.
@param prf           The Pseudo Random Algorithm to use for the derivation
                     iterations.
@param derivedKeyLen The expected length of the derived key in bytes.
@param msec          The targeted duration we want to achieve for a key
                     derivation with these parameters.

@result the number of iterations to use for the desired processing time.
*/
uint CCCalibratePBKDF(CCPBKDFAlgorithm algorithm, size_t passwordLen, size_t saltLen,
                      CCPseudoRandomAlgorithm prf, size_t derivedKeyLen, uint32_t msec)
                      __OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);
#ifdef __cplusplus
}
#endif
#endif /* _CC_PBKDF_H_ */
#import "AESEncryption.h"
#import <CommonCrypto/CommonCryptor.h>
//#import <CommonCrypto/CommonKeyDerivation.h>
//#import <CommonKeyDerivation.h>
@implementation AESEncryption

NSString * const kRNCryptManagerErrorDomain = @"net.robnapier.RNCryptManager";

const CCAlgorithm kAlgorithm = kCCAlgorithmAES128;
const NSUInteger kAlgorithmKeySize = kCCKeySizeAES128;
const NSUInteger kAlgorithmBlockSize = kCCBlockSizeAES128;
const NSUInteger kAlgorithmIVSize = kCCBlockSizeAES128;
const NSUInteger kPBKDFSaltSize = 8;
const NSUInteger kPBKDFRounds = 1000;//0; // ~80ms on an iPhone 4
// ===================
+ (NSData *)encryptedDataForData:(NSData *)data
                        password:(NSString *)password
                              iv:(NSData **)iv
                            salt:(NSData **)salt
                           error:(NSError **)error {
    NSAssert(iv, @"IV must not be NULL");
    NSAssert(salt, @"salt must not be NULL");

    *iv = [self randomDataOfLength:kAlgorithmIVSize];
    *salt = [self randomDataOfLength:kPBKDFSaltSize];
    NSData *key = [self AESKeyForPassword:password salt:*salt];

    size_t outLength;
    NSMutableData *cipherData = [NSMutableData dataWithLength:data.length + kAlgorithmBlockSize];

    CCCryptorStatus result = CCCrypt(kCCEncrypt,              // operation
                                     kAlgorithm,              // algorithm
                                     kCCOptionPKCS7Padding,   // options
                                     key.bytes,               // key
                                     key.length,              // key length
                                     (*iv).bytes,             // iv
                                     data.bytes,              // dataIn
                                     data.length,             // dataInLength
                                     cipherData.mutableBytes, // dataOut
                                     cipherData.length,       // dataOutAvailable
                                     &outLength);             // dataOutMoved
    if (result == kCCSuccess) {
        cipherData.length = outLength;
    }
    else {
        if (error) {
            *error = [NSError errorWithDomain:kRNCryptManagerErrorDomain
                                         code:result
                                     userInfo:nil];
        }
        return nil;
    }
    return cipherData;
}
// ===================
+ (NSData *)randomDataOfLength:(size_t)length {
    NSMutableData *data = [NSMutableData dataWithLength:length];
    int result = SecRandomCopyBytes(kSecRandomDefault, length, data.mutableBytes);
    NSLog(@"%d", result);
    NSAssert1(result == 0, @"Unable to generate random bytes: %d", errno);
    //NSAssert(@"Unable to generate random bytes: %d", errno);
    return data;
}
// ===================
// Replace this with 10,000 hash calls if you don't have CCKeyDerivationPBKDF
+ (NSData *)AESKeyForPassword:(NSString *)password
                         salt:(NSData *)salt {
    NSMutableData *derivedKey = [NSMutableData dataWithLength:kAlgorithmKeySize];

    int result = CCKeyDerivationPBKDF(kCCPBKDF2,               // algorithm
                                      password.UTF8String,     // password
                                      password.length,         // passwordLength
                                      salt.bytes,              // salt
                                      salt.length,             // saltLen
                                      kCCPRFHmacAlgSHA1,       // PRF
                                      kPBKDFRounds,            // rounds
                                      derivedKey.mutableBytes, // derivedKey
                                      derivedKey.length);      // derivedKeyLen
    NSLog(@"%d", result);
    // Do not log password here
    NSAssert1(result == kCCSuccess, @"Unable to create AES key for password: %d", result);
    //NSAssert(@"Unable to create AES key for password: %d", result);
    return derivedKey;
}

@end
The code placed above the implementation is from CommonCrypto/CommonKeyDerivation.h, which was not found by Xcode, so I pasted its contents directly at the top.
Try commenting out these lines:
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_NA);
I think they limit the functions to a particular operating system, and that is exactly what you don't need.
But I cannot guarantee that further issues won't appear; I'm trying to achieve the same thing.
You have merely declared two prototypes for CCKeyDerivationPBKDF and CCCalibratePBKDF. Either put the full definitions of the functions in their place, or declare them as extern and provide them in a separate module or library.
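For reference, on SDKs where CommonCrypto ships CommonKeyDerivation.h, no local prototypes are needed at all; the function can be called directly. A minimal sketch (password, salt, and round count are placeholder values):

#include <CommonCrypto/CommonKeyDerivation.h>
#include <CommonCrypto/CommonCryptor.h> /* kCCSuccess */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *password = "secret";                  /* placeholder */
    const uint8_t salt[8] = {1, 2, 3, 4, 5, 6, 7, 8}; /* placeholder */
    uint8_t key[16];                                  /* AES-128 key size */

    int result = CCKeyDerivationPBKDF(kCCPBKDF2,
                                      password, strlen(password),
                                      salt, sizeof(salt),
                                      kCCPRFHmacAlgSHA1,
                                      10000,          /* rounds */
                                      key, sizeof(key));
    printf("CCKeyDerivationPBKDF returned %d\n", result);
    return result == kCCSuccess ? 0 : 1;
}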