I am having trouble writing data to a CFStream.
// Get the CFSocketRef, then obtain the native socket handle from it.
CFSocketNativeHandle sock = CFSocketGetNative([appDelegate getSocketRef]);
Does the code above return the same handle as the socket I created? Will whatever I write to the stream be written to that socket?
Then I wrote:
CFStreamCreatePairWithSocket(kCFAllocatorDefault, sock,
                             &readStream, &writeStream);
if (!readStream || !writeStream) {
    // close([appDelegate TCPClient]);
    // close(sock);
    fprintf(stderr, "CFStreamCreatePairWithSocket() failed\n");
    return;
}
The above works fine and does not print the failure message. The open call below also reports no error, so the else branch executes:
if (!CFWriteStreamOpen(writeStream)) {
    CFStreamError myErr = CFWriteStreamGetError(writeStream);
    // An error has occurred.
    if (myErr.domain == kCFStreamErrorDomainPOSIX) {
        // Interpret myErr.error as a UNIX errno.
        NSLog(@"kCFStreamErrorDomainPOSIX");
    } else if (myErr.domain == kCFStreamErrorDomainMacOSStatus) {
        // Interpret myErr.error as a MacOS error code.
        OSStatus macError = (OSStatus)myErr.error;
        // Check other error domains.
        NSLog(@"kCFStreamErrorDomainMacOSStatus");
    }
} else {
    /* Send the connect call to the stream */
    // while (send_len < (originalLength + 1))
    // if (CFWriteStreamCanAcceptBytes(writeStream))
    {
        //UInt8 buf[] = "Hello, world"; //(unsigned char *)"connectStream"
        //CFIndex bufLen = (CFIndex)strlen(buf);
        bytes = CFWriteStreamWrite(writeStream,
                                   (unsigned char *)connectStream,
                                   originalLength);
        NSLog(@"%@", [[NSString alloc] initWithData:connectStream encoding:NSASCIIStringEncoding]);
        if (bytes < 0) {
            fprintf(stderr, "CFWriteStreamWrite() failed\n");
            // close(sock);
            return;
        }
        send_len += bytes;
        // close(sock);
        CFReadStreamClose(readStream);
        CFWriteStreamClose(writeStream);
        return;
    }
}
CFWriteStreamCanAcceptBytes always returns false, so I commented it out and wrote the bytes directly; the write call then blocks, never returns, and no bytes are written to the stream.
Can anyone please guide me in this regard?
Is there any other way of doing this?
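For reference, the event-driven alternative avoids blocking writes entirely: schedule the write stream on a run loop and write only when the kCFStreamEventCanAcceptBytes event fires. A minimal sketch, assuming the writeStream created above (the callback and function names are hypothetical):
#include <CoreFoundation/CoreFoundation.h>

/* Hypothetical callback: invoked on the run loop when the stream can accept bytes. */
static void MyWriteCallback(CFWriteStreamRef stream, CFStreamEventType type, void *info) {
    if (type == kCFStreamEventCanAcceptBytes) {
        const UInt8 msg[] = "hello";
        CFWriteStreamWrite(stream, msg, sizeof(msg) - 1);
    } else if (type == kCFStreamEventErrorOccurred) {
        CFWriteStreamClose(stream);
    }
}

static void ScheduleWrites(CFWriteStreamRef writeStream) {
    CFStreamClientContext ctx = {0, NULL, NULL, NULL, NULL};
    CFWriteStreamSetClient(writeStream,
                           kCFStreamEventCanAcceptBytes | kCFStreamEventErrorOccurred,
                           MyWriteCallback, &ctx);
    CFWriteStreamScheduleWithRunLoop(writeStream, CFRunLoopGetCurrent(),
                                     kCFRunLoopCommonModes);
    CFWriteStreamOpen(writeStream);  /* open after scheduling so events arrive on the run loop */
}
With this approach the run loop calls you back when the socket is writable, so there is no need to poll CFWriteStreamCanAcceptBytes.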
Regards,
Aamir
Related
I'm currently in the process of writing my own TCP network library in C# and I want to make sure I'm doing things right.
Currently, I have every packet that gets sent through a socket prefixed with the length. The receiving end will read 4 bytes, convert that to an int, and read that many bytes afterward.
I'm not sure if that's the best approach. Would it be better to just read a fixed amount of bytes and process from there?
In the end, I would just extract 4 bytes from the fixed length buffer and read ahead however much I need.
Here is the code that I currently have to demonstrate my thought process.
internal void BeginReceive()
{
ReceiveImpl();
}
protected virtual void ReceiveImpl()
{
// Should we rather receive a bigger buffer (e.g. 8KB) immediately and then grab the length from that?
// It would require an internal buffer to store whatever we've already read ahead.
var recvState = new ReceiveState(new byte[4]);
Socket.BeginReceive(recvState.Buffer, 0, 4, SocketFlags.None, OnReceiveLength, recvState);
}
private void OnReceiveLength(IAsyncResult ar)
{
var recvState = (ar.AsyncState as ReceiveState)!;
var bytesRead = Socket.EndReceive(ar, out var errorCode);
if (errorCode != SocketError.Success)
{
// we ain't good fam!
return;
}
if (bytesRead > 0) // should we rather check if we read the 4 bytes we wanted?
{
var length = LengthFromBuffer(recvState.Buffer);
recvState.Buffer = new byte[length];
#if DEBUG
Console.WriteLine($"Receiving a packet with length of {length}.");
#endif
// what if the packet is absolutely massive? should we limit the buffer size?
Socket.BeginReceive(recvState.Buffer, 0, length, SocketFlags.None, OnReceive, recvState);
}
}
private void OnReceive(IAsyncResult ar)
{
var recvState = (ar.AsyncState as ReceiveState)!;
var bytesRead = Socket.EndReceive(ar, out var errorCode);
if (errorCode != SocketError.Success)
{
// we ain't good fam!
return;
}
if (bytesRead > 0)
{
recvState.BytesReceived += bytesRead;
if (recvState.BytesReceived < recvState.Buffer.Length)
{
Socket.BeginReceive(recvState.Buffer, recvState.BytesReceived,
recvState.Buffer.Length - recvState.BytesReceived, SocketFlags.None, OnReceive, recvState);
return;
}
OnDataReceived(recvState.Buffer); // this will call BeginReceive again.
}
}
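For reference, the read-exactly-N-bytes discipline behind this length-prefix framing can be sketched in plain C; the helper names are hypothetical and a 4-byte big-endian prefix is assumed:
#include <arpa/inet.h>   /* ntohl */
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes, looping over short reads; returns 0 on success. */
static int recv_exact(int fd, void *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n == 0) return -1;                          /* peer closed mid-frame */
        if (n < 0) { if (errno == EINTR) continue; return -1; }
        got += (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed frame into payload; returns payload length or -1. */
static int recv_frame(int fd, uint8_t *payload, size_t max) {
    uint32_t netlen;
    if (recv_exact(fd, &netlen, sizeof(netlen)) != 0) return -1;
    uint32_t len = ntohl(netlen);
    if (len > max) return -1;                           /* cap oversized frames */
    return recv_exact(fd, payload, len) == 0 ? (int)len : -1;
}
The cap also answers the "absolutely massive packet" worry in the code comment: reject (or stream in chunks) any frame larger than a sane limit instead of allocating whatever size the peer claims.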
I am trying to use Grand Central Dispatch in conjunction with BSD sockets to send an ICMP ping. I add DISPATCH_SOURCE_TYPE_WRITE and DISPATCH_SOURCE_TYPE_READ as dispatch sources to read and write asynchronously.
This is the method where I create the BSD socket and install the dispatch sources:
- (void)start
{
int err;
const struct sockaddr * addrPtr;
assert(self.hostAddress != nil);
// Open the socket.
addrPtr = (const struct sockaddr *) [self.hostAddress bytes];
fd = -1;
err = 0;
switch (addrPtr->sa_family) {
case AF_INET: {
fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
if (fd < 0) {
err = errno;
}
} break;
case AF_INET6:
assert(NO);
// fall through
default: {
err = EPROTONOSUPPORT;
} break;
}
if (err != 0) {
[self didFailWithError:[NSError errorWithDomain:NSPOSIXErrorDomain code:err userInfo:nil]];
} else {
dispatch_source_t writeSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
dispatch_source_set_event_handler(writeSource, ^{
abort(); // testing
// call call method here to send a ping
});
dispatch_resume(writeSource);
//NSLog(@"testout");
dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
dispatch_source_set_event_handler(readSource, ^{
unsigned long bytesAvail = dispatch_source_get_data(readSource);
NSLog(@"bytes available: %lu", bytesAvail);
});
dispatch_resume(readSource);
}
}
See the //NSLog(@"testout"); line? The funny thing is that the write block is only called when that NSLog is NOT commented out, which is very odd. I didn't test the read callback; the sending needs to work first.
So what is going on here?
There are a number of things missing here. I'm not sure exactly which one causes the weird behavior, but when I add all of the missing pieces, it works as expected and my write event handler is called reliably and repeatedly. In general, there are several things you need to do when setting up a socket like this before passing it off to GCD:
Create the socket
Bind it to a local address (missing in your code)
Set it to non-blocking (missing in your code)
Here is a little example I was able to put together in which the write handler gets called repeatedly, as expected:
int DoStuff()
{
int fd = -1;
// Create
if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
perror("cannot create socket");
return 0;
}
// Bind
struct sockaddr_in *localAddressPtr = (struct sockaddr_in *)malloc(sizeof(struct sockaddr_in));
memset((char *)localAddressPtr, 0, sizeof(*localAddressPtr));
localAddressPtr->sin_family = AF_INET;
localAddressPtr->sin_addr.s_addr = htonl(INADDR_ANY);
localAddressPtr->sin_port = htons(0);
if (bind(fd, (struct sockaddr *)localAddressPtr, sizeof(*localAddressPtr)) < 0) {
perror("bind failed");
return 0;
}
// Set non-blocking
int flags;
if (-1 == (flags = fcntl(fd, F_GETFL, 0)))
flags = 0;
if (-1 == fcntl(fd, F_SETFL, flags | O_NONBLOCK))
{
perror("Couldnt set non-blocking");
return 0;
}
// Do a DNS lookup...
struct hostent *hp;
struct sockaddr_in *remoteAddressPtr = malloc(sizeof(struct sockaddr_in));
// Fill in the server's address and data
memset((char*)remoteAddressPtr, 0, sizeof(*remoteAddressPtr));
remoteAddressPtr->sin_family = AF_INET;
remoteAddressPtr->sin_port = htons(12345);
// Look up the address of the server by name
const char* host = "www.google.com";
hp = gethostbyname(host);
if (!hp) {
fprintf(stderr, "could not obtain address of %s\n", host);
return 0;
}
// Copy the host's address into the remote address structure
memcpy((void *)&remoteAddressPtr->sin_addr, hp->h_addr_list[0], hp->h_length);
dispatch_source_t writeSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
dispatch_source_set_event_handler(writeSource, ^{
// Send message
const char* my_message = "the only thing we have to fear is fear itself.";
unsigned long len = strlen(my_message);
if (sendto(fd, my_message, len, 0, (struct sockaddr *)remoteAddressPtr, sizeof(*remoteAddressPtr)) != len) {
perror("sendto failed");
dispatch_source_cancel(writeSource);
}
});
dispatch_source_set_cancel_handler(writeSource, ^{
close(fd);
free(localAddressPtr);
free(remoteAddressPtr);
});
dispatch_resume(writeSource);
return 1;
}
NB: There's no way to dispose of the writeSource in my example without there being an error in a send operation. It's a trivial example...
My general theory on why NSLog triggers the handler to fire in your case is that it keeps execution at or below that stack frame long enough for the background thread to come around and call the handler; without that NSLog, your function returns, and something has a chance to die before the handler can get called. In fact, if you're using ARC, it's probably the writeSource itself that is getting deallocated, since I don't see you making a strong reference to it anywhere outside the scope of this function. (My example captures a strong reference to it in the block, thus keeping it alive.) You could test this in your code by stashing a strong reference to writeSource.
I found the error:
In newer SDKs, dispatch sources are subject to automatic reference counting even though they are not Objective-C objects.
So when the start method returns, ARC disposes of the dispatch sources and their handlers never get called.
The NSLog call delays the end of the start method just long enough for the dispatch source to fire before it gets disposed.
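A minimal sketch of the fix, assuming storage that outlives the method (in Objective-C, a strong ivar or property serves the same purpose as the file-scope variable here):
#include <dispatch/dispatch.h>

// Keep the source in storage that outlives the function that created it,
// so ARC cannot dispose of it when the function returns.
static dispatch_source_t gWriteSource;

static void InstallWriteSource(int fd) {
    gWriteSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0,
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
    dispatch_source_set_event_handler(gWriteSource, ^{
        // Socket is writable; send the ping here.
    });
    dispatch_resume(gWriteSource);
}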
I'm playing sounds for my game with OpenAL, and sometimes a small glitch is played while looping. Even without looping I sometimes get a small pop, though not always.
I think it has something to do with the buffer being slightly too long, so there is some undefined data at the end. I just can't figure out how to change this. I'm loading a .caf file with this function:
void* MyGetOpenALAudioData(CFURLRef inFileURL, ALsizei *outDataSize, ALenum *outDataFormat, ALsizei *outSampleRate, ALdouble *duration) {
OSStatus err = noErr;
SInt64 theFileLengthInFrames = 0;
AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);
ExtAudioFileRef extRef = NULL;
void* theData = NULL;
AudioStreamBasicDescription theOutputFormat;
// Open a file with ExtAudioFileOpen()
err = ExtAudioFileOpenURL(inFileURL, &extRef);
if(err) { printf("MyGetOpenALAudioData: ExtAudioFileOpenURL FAILED, Error = %ld\n", err); goto Exit; }
// Get the audio data format
err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileDataFormat, &thePropertySize, &theFileFormat);
if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_FileDataFormat) FAILED, Error = %ld\n", err); goto Exit; }
if (theFileFormat.mChannelsPerFrame > 2) { printf("MyGetOpenALAudioData - Unsupported Format, channel count is greater than stereo\n"); goto Exit;}
// Set the client format to 16 bit signed integer (native-endian) data
// Maintain the channel count and sample rate of the original source format
theOutputFormat.mSampleRate = theFileFormat.mSampleRate;
theOutputFormat.mChannelsPerFrame = theFileFormat.mChannelsPerFrame;
theOutputFormat.mFormatID = kAudioFormatLinearPCM;
theOutputFormat.mBytesPerPacket = 2 * theOutputFormat.mChannelsPerFrame;
theOutputFormat.mFramesPerPacket = 1;
theOutputFormat.mBytesPerFrame = 2 * theOutputFormat.mChannelsPerFrame;
theOutputFormat.mBitsPerChannel = 16;
theOutputFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
// Set the desired client (output) data format
err = ExtAudioFileSetProperty(extRef, kExtAudioFileProperty_ClientDataFormat, sizeof(theOutputFormat), &theOutputFormat);
if(err) { printf("MyGetOpenALAudioData: ExtAudioFileSetProperty(kExtAudioFileProperty_ClientDataFormat) FAILED, Error = %ld\n", err); goto Exit; }
// Get the total frame count
thePropertySize = sizeof(theFileLengthInFrames);
err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileLengthFrames, &thePropertySize, &theFileLengthInFrames);
if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_FileLengthFrames) FAILED, Error = %ld\n", err); goto Exit; }
// Read all the data into memory
UInt32 dataSize = theFileLengthInFrames * theOutputFormat.mBytesPerFrame;
theData = malloc(dataSize);
if (theData)
{
AudioBufferList theDataBuffer;
theDataBuffer.mNumberBuffers = 1;
theDataBuffer.mBuffers[0].mDataByteSize = dataSize;
theDataBuffer.mBuffers[0].mNumberChannels = theOutputFormat.mChannelsPerFrame;
theDataBuffer.mBuffers[0].mData = theData;
// Read the data into an AudioBufferList
err = ExtAudioFileRead(extRef, (UInt32*)&theFileLengthInFrames, &theDataBuffer);
if(err == noErr)
{
// success
*outDataSize = (ALsizei)dataSize;
*outDataFormat = (theOutputFormat.mChannelsPerFrame > 1) ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16;
*outSampleRate = (ALsizei)theOutputFormat.mSampleRate;
}
else
{
// failure
free (theData);
theData = NULL; // make sure to return NULL
printf("MyGetOpenALAudioData: ExtAudioFileRead FAILED, Error = %ld\n", err); goto Exit;
}
}
// Alex(Colombiamug): get the file duration...
// first, get the audioID for the file...
AudioFileID audioID;
UInt32 audioIDSize = sizeof(audioID);
err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_AudioFile, &audioIDSize, &audioID);
if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_AudioFile) FAILED, Error = %ld\n", err); goto Exit; }
//now the duration...
double soundDuration;
UInt32 durationSize = sizeof(soundDuration);
err = AudioFileGetProperty(audioID, kAudioFilePropertyEstimatedDuration, &durationSize, &soundDuration);
if(err) { printf("MyGetOpenALAudioData: AudioFileGetProperty(kAudioFilePropertyEstimatedDuration) FAILED, Error = %ld\n", err); goto Exit; }
*duration = soundDuration;
//printf("Audio duration:%f secs.\n", soundDuration);
Exit:
// Dispose the ExtAudioFileRef, it is no longer needed
if (extRef) ExtAudioFileDispose(extRef);
return theData;
}
It is part of this sound engine: SoundEngine
I have tried putting my .caf file directly into the sample code and I get the same small glitch. (This .caf file worked fine with the old Apple SoundEngine.cpp, but I had other issues with that, so I decided to change.)
Answering my own question ;)
By pure luck, I must admit, I tried removing the kAudioFormatFlagIsPacked flag from this line:
theOutputFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
and that fixed it.
If anybody can tell me why, it would be nice to know; and if there are problems with removing that flag, I would also like to hear about them.
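For what it's worth, one hypothesis consistent with the "buffer a little too long" theory: ExtAudioFileRead reports back how many frames it actually decoded, and that can be fewer than kExtAudioFileProperty_FileLengthFrames promised. A hedged sketch of sizing the returned data by the frames actually read (note the in/out count should be a genuine UInt32 variable, not the SInt64 pointer cast used above):
// Hypothetical adjustment: use a real UInt32 in/out frame count and
// report only the bytes that were actually decoded.
UInt32 framesToRead = (UInt32)theFileLengthInFrames;
err = ExtAudioFileRead(extRef, &framesToRead, &theDataBuffer);
if (err == noErr) {
    *outDataSize = (ALsizei)(framesToRead * theOutputFormat.mBytesPerFrame);
}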
When I start an audio output process with AudioQueueStart(out.queue, nil), the output callback only fires 3 times (which is the number of allocated buffers).
Here is my Output callback code:
static void AQOutputCallback(void* aqr,
AudioQueueRef outQ,
AudioQueueBufferRef outQB)
{
AQCallbackStruct *aqc = (AQCallbackStruct *) aqr;
NSLog(@"Out");
// Check if AudioQueue is stopped
if (!aqc->run) {
NSLog(@"Stopped");
return;
}
// Processing data
// Check enqueue error
int err = AudioQueueEnqueueBuffer(outQ, outQB, 0, NULL);
if (err != noErr) NSLog(@"OutputCallback AudioQueueEnqueueBuffer() %d", err);
NSLog(#"Enqueued");
}
I think it's due to the lack of buffers, but my output is:
Out
Enqueued
Out
Enqueued
Out
Enqueued
So the first buffer is enqueued before the AudioQueue starts filling the third one; it should not run out of buffers.
What happens here ?
Edit: Setup code
#define AUDIO_BUFFERS 3
typedef struct AQCallbackStruct {
AudioStreamBasicDescription mDataFormat;
AudioQueueRef queue;
AudioQueueBufferRef mBuffers[AUDIO_BUFFERS];
unsigned long frameSize;
BOOL *run;
} AQCallbackStruct;
// In some method
AQCallbackStruct out;
out.mDataFormat.mFormatID = kAudioFormatLinearPCM;
out.mDataFormat.mSampleRate = 44100.0;
out.mDataFormat.mChannelsPerFrame = 2;
out.mDataFormat.mBitsPerChannel = 16;
out.mDataFormat.mBytesPerPacket =
out.mDataFormat.mBytesPerFrame =
out.mDataFormat.mChannelsPerFrame * sizeof(short int);
out.mDataFormat.mFramesPerPacket = 1;
out.mDataFormat.mFormatFlags =
kLinearPCMFormatFlagIsBigEndian
| kLinearPCMFormatFlagIsSignedInteger
| kLinearPCMFormatFlagIsPacked;
out.frameSize = 735;
int err;
err = AudioQueueNewOutput(&out.mDataFormat,
AQOutputCallback,
&out,
CFRunLoopGetCurrent(),
kCFRunLoopCommonModes,
0,
&out.queue);
if (err != noErr) NSLog(@"AudioQueueNewOutput() error: %d", err);
for (int i=0; i<AUDIO_BUFFERS; i++) {
err = AudioQueueAllocateBuffer(out.queue, out.frameSize, &out.mBuffers[i]);
if (err != noErr) NSLog(@"Output AudioQueueAllocateBuffer() error: %d", err);
out.mBuffers[i]->mAudioDataByteSize = out.frameSize;
err = AudioQueueEnqueueBuffer(out.queue, out.mBuffers[i], 0, NULL);
if (err != noErr) NSLog(@"Output AudioQueueEnqueueBuffer() error: %d", err);
}
AudioQueueStart(out.queue, nil);
Please have a look at this page: where to start with audio synthesis on iPhone.
The BleepMachine sample code that's in there is what got me started with this.
I finally found out where the problem was: AudioQueue callback in simulator but not on device
The output callback fired properly in the simulator but not on my device, and I still don't know exactly why the AudioSession settings make the difference.
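For anyone hitting the same thing: on the device, an audio session generally has to be configured and activated before AudioQueueStart. A minimal sketch using the era-appropriate C AudioSession API; the category choice here is an assumption:
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical setup: initialize and activate an audio session before
// starting the queue; without this the device may stop invoking the
// callback after the initially primed buffers.
UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);
AudioSessionSetActive(true);
AudioQueueStart(out.queue, NULL);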
Test Case '-[TestParse testParsing]' started.
/Developer/Tools/RunPlatformUnitTests.include: line 415: 3256 Segmentation fault "${THIN_TEST_RIG}" "${OTHER_TEST_FLAGS}" "${TEST_BUNDLE_PATH}"
/Developer/Tools/RunPlatformUnitTests.include:451: error: Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.2.sdk/Developer/usr/bin/otest' exited abnormally with code 139 (it may have crashed).
I get this segmentation fault intermittently when building and running the test case (sometimes it builds and runs successfully, sometimes it segfaults). I'm not sure how to fix this error.
The only thing I test here is a class named Parse with a class-level method. In the test case I just call it like this:
var = [Parse methodName:filepath];
The method looks like this:
NSMutableDictionary *tempBox = [[NSMutableDictionary alloc] init];
FILE *fd = fopen([filePath UTF8String], "r");
if(!fd){
NSLog(@"fail to open file\n");
}
char buf[4096], *ptr;
char name[512], description[4096];
int isNewInfo = 2, description_continue = 0;
// for (line = 0; line < [args objectAtIndex:1]; line++) {
// fgets(buf, 4096, fd);
// }
while(fgets(buf, sizeof(buf), fd) != NULL){
if(strcmp(buf, "\n") == 0){
isNewInfo -= 1;
if(isNewInfo == 0){
isNewInfo = 2;
description_continue = 0;
description[strlen(description)-1] = '\0';
[self saveDrinkandResetBuf:name
detail:description box:tempBox];
if(name[0] != 0 || description[0] != 0){
NSLog(@"fail to reset...");
}
}
}
if(description_continue){
strcat(description, buf);
continue;
}
if((ptr = strstr(buf, "Drink Name: "))){
memcpy(name, buf+12, strlen(buf));
name[strlen(name)] = '\0';
continue;
}
if((ptr = strstr(buf, "Description: "))){
memcpy(description, buf+13, strlen(buf));
description_continue = 1;
continue;
}
}
fclose(fd);
NSLog(@"finish parsing section\n");
//[tempBox release];
return tempBox;
Not sure what is going on here.
I suspect the problem is in array management.
In C, if an array is declared inside a function (and not declared global or static), the values of its elements are undefined. So your char description[4096] is filled with arbitrary values, and nothing guarantees that a '\0' is among them.
The result of strlen(...) on a non-null-terminated string is likewise undefined. It may cause a memory access violation, since it keeps counting until it reaches the first byte in memory whose value is 0.
Moreover, when you call description[strlen(description)-1], strlen can return 0 (imagine that the first byte initially stored there was '\0' and your file started with two empty lines, which is what it takes to reach this line of code): the array index will then be -1...
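A minimal sketch of the fix, assuming the rest of the parsing loop stays the same: zero-initialize the buffers so strlen() is well-defined from the first iteration, and guard the strlen()-1 index against empty strings:
char buf[4096];
char name[512] = {0};           /* zero-initialized: strlen() is now well-defined */
char description[4096] = {0};

/* ... later, inside the blank-line handling ... */
size_t dlen = strlen(description);
if (dlen > 0)
    description[dlen - 1] = '\0';   /* strip the trailing newline only if present */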