I am doing some socket IO, and using a bytearray object as a buffer. I would like to receive data with an offset into this buffer using csock.recv_into as shown below in order to avoid creating intermediary string objects. Unfortunately, it seems bytearrays can't be used this way, and the code below doesn't work.
buf = bytearray(b" " * toread)
read = 0
while toread:
    nbytes = csock.recv_into(buf[read:], toread)  # buf[read:] makes a copy, so recv_into writes into a temporary
    toread -= nbytes
    read += nbytes
So instead I am using the code below, which does use a temporary string (and works)...
buf = bytearray(b" " * toread)
read = 0
while toread:
    tmp = csock.recv(toread)
    nbytes = len(tmp)
    buf[read:read + nbytes] = tmp  # keep the preallocated length
    toread -= nbytes
    read += nbytes
Is there a more elegant way to do this that doesn't require copying intermediate strings around?
Use a memoryview to wrap your bytearray: slicing a bytearray makes a copy, but slicing a memoryview just gives a new zero-copy view into the same buffer.
buf = bytearray(toread)
view = memoryview(buf)
while toread:
    nbytes = csock.recv_into(view, toread)
    view = view[nbytes:]  # slicing views is cheap
    toread -= nbytes
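One caveat worth noting with this pattern: recv_into returns 0 once the peer closes the connection, so if the sender might disconnect before toread reaches zero, check for nbytes == 0 and break (or raise) to avoid looping forever.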
What I'm trying to do here is pack bytes like I can in C#, like this:
string symbol = "T" + "\0";
byte orderTypeEnum = (byte)OrderType.Limit;
int size = -10;
byte[] packed = new byte[symbol.Length + sizeof(byte) + sizeof(int)]; // byte = 1, int = 4
Encoding.UTF8.GetBytes(symbol, 0, symbol.Length, packed, 0); // add the symbol
packed[symbol.Length] = orderTypeEnum; // add order type
Array.ConstrainedCopy(BitConverter.GetBytes(size), 0, packed, symbol.Length + 1, sizeof(int)); // add size
client.Send(packed);
Is there any way to accomplish this in q?
As for the unpacking, in C# I can easily do this:
byte[] fillData = client.Receive();
long ticks = BitConverter.ToInt64(fillData, 0);
int fillSize = BitConverter.ToInt32(fillData, 8);
double fillPrice = BitConverter.ToDouble(fillData, 12);
new
{
    Timestamp = ticks,
    Size = fillSize,
    Price = fillPrice
}.Dump("Received response");
Thanks!
One way to do it is
symbol:"T\000"
orderTypeEnum: 123 / (byte)OrderType.Limit
size: -10i;
packed: "x"$symbol,("c"$orderTypeEnum),reverse 0x0 vs size / *
UPDATE:
To do the reverse you can use 1: function:
(8 4 8; "jif")1:0x0000000000000400000008003ff3be76c8b43958 / server data is big-endian
("jif"; 8 4 8)1:0x0000000000000400000008003ff3be76c8b43958 / server data is little-endian
/ ticks=1024j, fillSize=2048i, fillPrice=1.234
*) When using BitConverter.GetBytes() you should also check BitConverter.IsLittleEndian to make sure you send bytes over the wire in the proper order. Contrary to popular belief, .NET is not always little-endian. However, the internal representation in kdb+ (the value returned by 0x0 vs ...) is always big-endian. Depending on your needs you may or may not want to use reverse above.
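As an illustration of that point (my own C sketch, not from the original answer), one way to sidestep host endianness entirely is to serialize integers byte by byte in the order the receiver expects; put_be32 here is a hypothetical helper name:

#include <stdint.h>

/* Write a 32-bit value in big-endian (network) order,
   regardless of the host's native endianness. */
static void put_be32(uint8_t *dst, uint32_t v) {
    dst[0] = (uint8_t)(v >> 24);
    dst[1] = (uint8_t)(v >> 16);
    dst[2] = (uint8_t)(v >> 8);
    dst[3] = (uint8_t)v;
}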
I am currently in the process of building an application that reads in audio from my iPhone's microphone, and then does some processing and visuals. Of course I am starting with the audio stuff first, but am having one minor problem.
I am defining my sampling rate to be 44100 Hz and my buffer to hold 4096 samples, which it does. However, when I print this data out and copy it into MATLAB to double-check accuracy, the sample rate I have to use is half of my iPhone-defined rate, 22050 Hz, for it to be correct.
I think it has something to do with the following code: it puts 2 bytes per packet, and when I loop through the buffer, the buffer spits out a whole packet, which my code assumes is a single number. So what I am wondering is how to split up those packets and read them as individual numbers.
- (void)setupAudioFormat {
    memset(&dataFormat, 0, sizeof(dataFormat));
    dataFormat.mSampleRate = kSampleRate;
    dataFormat.mFormatID = kAudioFormatLinearPCM;
    dataFormat.mFramesPerPacket = 1;
    dataFormat.mChannelsPerFrame = 1;
    // dataFormat.mBytesPerFrame = 2;
    // dataFormat.mBytesPerPacket = 2;
    dataFormat.mBitsPerChannel = 16;
    dataFormat.mReserved = 0;
    dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame =
        (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
    dataFormat.mFormatFlags =
        kLinearPCMFormatFlagIsSignedInteger |
        kLinearPCMFormatFlagIsPacked;
}
If what I described is unclear, please let me know. Thanks!
EDIT
Adding the code that I used to print the data
float *audioFloat = (float *)malloc(numBytes * sizeof(float));
int *temp = (int *)inBuffer->mAudioData;
int i;
float power = pow(2, 31);
for (i = 0; i < numBytes; i++) {
    audioFloat[i] = temp[i] / power;
    printf("%f ", audioFloat[i]);
}
I found the problem with what I was doing. It was a C pointer issue, and since I have never really programmed in C before, I of course got it wrong.
You cannot directly cast inBuffer->mAudioData to an int array: each int is 4 bytes, so it swallows two 16-bit samples at a time, which is why the data only looked right at half the sample rate. So what I simply did was the following
// Interpret the raw audio data as 16-bit samples; no separate allocation or copy is needed.
SInt16 *buffer = (SInt16 *)inBuffer->mAudioData;
This worked out just fine, and now my data is of the correct length and represented properly.
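For completeness, here is a minimal sketch (my own illustration, not the poster's code) of printing those 16-bit samples as normalized floats; numFrames is just a local name for the sample count:

SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
int numFrames = inBuffer->mAudioDataByteSize / sizeof(SInt16); // 2 bytes per mono frame
for (int i = 0; i < numFrames; i++) {
    float value = samples[i] / 32768.0f; // scale the SInt16 range to [-1.0, 1.0)
    printf("%f ", value);
}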
I saw your answer; there is also an underlying issue that gives wrong sample data bytes, caused by an endianness problem where the bytes are swapped.
-(void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
    int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
    SAMPLE_TYPE *samples = (SAMPLE_TYPE *)audioData;
    //SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE)*sampleCount); // for swapping endians
    std::string shorts;
    double power = pow(2, 10);
    for (int i = 0; i < sampleCount; i++)
    {
        SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)); // endianness issue
        char dataInterim[30];
        sprintf(dataInterim, "%f ", sample_le / power); // normalize it
        shorts.append(dataInterim);
    }
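If you do need a swap like this, the system byte-order helpers express the same operation more clearly (a style suggestion on my part, equivalent to the manual shift-and-mask above):

#include <libkern/OSByteOrder.h>

SAMPLE_TYPE sample_le = OSSwapInt16(samples[i]); // same result as the manual swap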
av_register_all();
AVCodec *codec;
AVCodecContext *c = NULL;
int out_size, size, outbuf_size;
//FILE *f;
uint8_t *outbuf;

printf("Video encoding\n");

/* find the mpeg video encoder */
codec = avcodec_find_encoder(CODEC_ID_H264); // or avcodec_find_encoder_by_name("libx264")
NSLog(@"codec = %p", codec);
if (!codec) {
    fprintf(stderr, "codec not found\n");
    exit(1);
}
c = avcodec_alloc_context();

/* put sample parameters */
c->bit_rate = 400000;
c->bit_rate_tolerance = 10;
c->me_method = 2;
/* resolution must be a multiple of two */
c->width = 352;  //width;
c->height = 288; //height;
/* frames per second */
c->time_base = (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
//c->max_b_frames = 1;
c->pix_fmt = PIX_FMT_YUV420P;
c->me_range = 16;
c->max_qdiff = 4;
c->qmin = 10;
c->qmax = 51;
c->qcompress = 0.6f;
'avcodec_encode_video' always returns 0.
I guess that's because of the 'non-strictly-monotonic PTS' warning. Does anyone know about the same situation?
For me it also always returns 0, but it encodes fine. I don't think there is an issue if it returns 0. In avcodec.h, you can see this:
"On error a negative value is returned, on success zero or the number of bytes used from the output buffer."
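To illustrate, here is a sketch against the legacy API the question uses, reusing its variable names (c, outbuf, outbuf_size) and assuming an AVFrame *picture and a FILE *f opened for the output. A return of 0 just means the encoder buffered the frame (common with x264 lookahead and B-frames) and will emit it on a later call; only a negative value is an error:

out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
if (out_size < 0) {
    fprintf(stderr, "encoding error\n");   /* only negative values are errors */
} else if (out_size > 0) {
    fwrite(outbuf, 1, out_size, f);        /* got an encoded frame */
}
/* out_size == 0: the frame was buffered and will come out on a later call */

/* at end of stream, pass NULL to drain the buffered frames */
while ((out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL)) > 0)
    fwrite(outbuf, 1, out_size, f);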
I am working on a project in which I have used AudioUnitRender. It runs fine in the simulator but gives a -50 error on the device.
If anyone has faced a similar problem, please give me some solution.
RIOInterface *THIS = (RIOInterface *)inRefCon;
COMPLEX_SPLIT A = THIS->A;
void *dataBuffer = THIS->dataBuffer;
float *outputBuffer = THIS->outputBuffer;
FFTSetup fftSetup = THIS->fftSetup;
uint32_t log2n = THIS->log2n;
uint32_t n = THIS->n;
uint32_t nOver2 = THIS->nOver2;
uint32_t stride = 1;
int bufferCapacity = THIS->bufferCapacity;
SInt16 index = THIS->index;
AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;

renderErr = AudioUnitRender(rioUnit, ioActionFlags,
                            inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
NSLog(@"%d", renderErr);
if (renderErr < 0) {
    return renderErr;
}
Here is the data regarding sample size and frame format:
bytesPerSample = sizeof(SInt16);
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
//asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;
//asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mSampleRate = sampleRate;
Thanks in advance.
The length of the buffer (inNumberFrames) can be different on the device and the simulator. From my experience it is often larger on the device. When you use your own AudioBufferList this is something you have to take into account. I would suggest allocating more memory for the buffer in the AudioBufferList.
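For example, here is a minimal sketch of sizing your own AudioBufferList generously; maxFrames and the mono 16-bit format are assumptions, so adjust them to your stream format:

UInt32 maxFrames = 4096; // assume this exceeds any inNumberFrames the device delivers
AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;                        // mono
bufferList->mBuffers[0].mDataByteSize = maxFrames * sizeof(SInt16); // 16-bit samples
bufferList->mBuffers[0].mData = malloc(maxFrames * sizeof(SInt16));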
I know this thread is old, but I just found the solution to this problem.
The buffer duration on the device is different from that on the simulator, so you have to set the preferred buffer duration:
Float32 bufferDuration = ((Float32) <INSERT YOUR BUFFER DURATION HERE>) / sampleRate; // buffer duration in seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferDuration), &bufferDuration);
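For instance, to request roughly 1024 frames per callback at 44.1 kHz (the numbers here are just an illustration):

Float32 bufferDuration = 1024.0f / 44100.0f; // ~23 ms of audio per buffer
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(bufferDuration), &bufferDuration);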
Try adding kAudioFormatFlagsNativeEndian to your list of stream description format flags. Not sure if that will make a difference, but it can't hurt.
Also, I'm suspicious about the use of THIS for the userData member, which definitely does not fill that member with any meaningful data by default. Try running the code in a debugger and see if that instance is correctly extracted and cast. Assuming it is, just for fun try putting the AudioUnit object into a global variable (yeah, I know...) just to see if it works.
Finally, why use THIS->bufferList instead of the one passed into your render callback? That's probably not good.
I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract one audio channel (left or right) from an LPCM buffer based on the input format, to make it mono in the output format. Some logic code to convert is as follows.
This is to set the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this is to convert the buffer in a while loop
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    //frames = audioBuffer.mData;
    NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
    NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

    numBytesIO = audioBuffer.mDataByteSize;
    convertedBuf = malloc(sizeof(char) * numBytesIO);
    status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
    char errchar[10];
    NSLog(@"status audio converter convert %d", status);
    if (status != 0) {
        NSLog(@"Fail conversion");
        assert(0);
    }
    NSLog(@"Bytes converted %d", numBytesIO);

    status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
    NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
    free(convertedBuf);

    if (numBytesIO != audioBuffer.mDataByteSize) {
        NSLog(@"Something wrong in writing");
        assert(0);
    }
    countByteBuf = countByteBuf + numBytesIO;
}
But the 'insz' problem (kAudioConverterErr_InvalidInputSize) is still there, so it can't convert. I would appreciate any input.
Thanks in advance.
First, you cannot use AudioConverterConvertBuffer() to convert anything where the input and output byte sizes are different. You need to use AudioConverterFillComplexBuffer() instead. This includes performing any kind of sample rate conversion, or adding/removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
Second, even if this could be done (which it can't), you are passing the same number of bytes for output as you had for input, despite actually requiring half that number (because you reduce the channel count from 2 to 1).
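To give a feel for the shape of that API, here is a minimal C sketch under my own naming (MyInputProc, ConverterSource and the packet sizes are illustrative assumptions, not code from this thread). AudioConverterFillComplexBuffer() pulls input through a callback you supply, so the converter itself decides how much input it needs:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical user data: tracks the source buffer we still have to hand out.
typedef struct {
    char  *srcData;
    UInt32 srcBytesLeft;
    UInt32 srcBytesPerPacket; // 4 for the 16-bit stereo input above
} ConverterSource;

static OSStatus MyInputProc(AudioConverterRef inConverter,
                            UInt32 *ioNumberDataPackets,
                            AudioBufferList *ioData,
                            AudioStreamPacketDescription **outPacketDesc,
                            void *inUserData)
{
    ConverterSource *src = (ConverterSource *)inUserData;
    UInt32 packetsAvailable = src->srcBytesLeft / src->srcBytesPerPacket;
    if (packetsAvailable == 0) {
        *ioNumberDataPackets = 0; // no more input: the converter will wind down
        return noErr;
    }
    if (*ioNumberDataPackets > packetsAvailable)
        *ioNumberDataPackets = packetsAvailable;
    // Point the converter directly at our source bytes (no copy).
    ioData->mBuffers[0].mData = src->srcData;
    ioData->mBuffers[0].mDataByteSize = *ioNumberDataPackets * src->srcBytesPerPacket;
    src->srcData += ioData->mBuffers[0].mDataByteSize;
    src->srcBytesLeft -= ioData->mBuffers[0].mDataByteSize;
    return noErr;
}

Driving the conversion then looks like this, where outputBufferList is an AudioBufferList you allocated for the mono output (2 bytes per packet in the format above):

UInt32 outPacketCount = outputBufferList.mBuffers[0].mDataByteSize / 2;
status = AudioConverterFillComplexBuffer(converter, MyInputProc, &source,
                                         &outPacketCount, &outputBufferList, NULL);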
I'm actually working with AudioConverterConvertBuffer() right now; my test files are mono while I need to play stereo. I'm currently stuck with the converter performing conversion on only the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code; if I don't post it, please poke me in the comments.