How can I send a UDP packet with dynamic hex values - sockets

I need to send a UDP packet with hex values, like this example:
char buffer[4]={0x22,0x00,0x0d,0xf4};
However, I need to be able to change the hex values in code.
hex1 = "0x83";
hex2 = "0x11";
hex3 = "0x00";
hex4 = "0x01";
char buffer[4]={hex1, hex2, hex3, hex4}
I have tried the example above but it does not work. Can you show me the correct way to build the buffer for sending?

Declare and fill the buffer separately, and remove the quotes around your hex values - 0x83 is an integer, "0x83" is a string.
char buffer[4];
buffer[0] = 0x83;
buffer[1] = 0x11;
buffer[2] = 0x00;
buffer[3] = 0x01;
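If the part you are missing is the actual send, a minimal sketch (the address and port are placeholders) using a plain BSD UDP socket looks like this:
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP socket */

struct sockaddr_in dest;
memset(&dest, 0, sizeof(dest));
dest.sin_family = AF_INET;
dest.sin_port = htons(5000);                           /* placeholder port */
inet_pton(AF_INET, "192.168.1.10", &dest.sin_addr);    /* placeholder address */

char buffer[4];
buffer[0] = 0x83;
buffer[1] = 0x11;
buffer[2] = 0x00;
buffer[3] = 0x01;

sendto(sock, buffer, sizeof(buffer), 0, (struct sockaddr *)&dest, sizeof(dest));
close(sock);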

Related

How to pack/unpack a byte in q/kdb

What I'm trying to do here is pack a byte like I could in C#, like this:
string symbol = "T" + "\0";
byte orderTypeEnum = (byte)OrderType.Limit;
int size = -10;
byte[] packed = new byte[symbol.Length + sizeof(byte) + sizeof(int)]; // byte = 1, int = 4
Encoding.UTF8.GetBytes(symbol, 0, symbol.Length, packed, 0); // add the symbol
packed[symbol.Length] = orderTypeEnum; // add order type
Array.ConstrainedCopy(BitConverter.GetBytes(size), 0, packed, symbol.Length + 1, sizeof(int)); // add size
client.Send(packed);
Is there any way to accomplish this in q?
As for the unpacking, in C# I can easily do this:
byte[] fillData = client.Receive();
long ticks = BitConverter.ToInt64(fillData, 0);
int fillSize = BitConverter.ToInt32(fillData, 8);
double fillPrice = BitConverter.ToDouble(fillData, 12);
new
{
Timestamp = ticks,
Size = fillSize,
Price = fillPrice
}.Dump("Received response");
Thanks!
One way to do it is
symbol:"T\000"
orderTypeEnum: 123 / (byte)OrderType.Limit
size: -10i;
packed: "x"$symbol,("c"$orderTypeEnum),reverse 0x0 vs size / *
UPDATE:
To do the reverse you can use 1: function:
(8 4 8; "jif")1:0x0000000000000400000008003ff3be76c8b43958 / server data is big-endian
("jif"; 8 4 8)1:0x0000000000000400000008003ff3be76c8b43958 / server data is little-endian
/ ticks=1024j, fillSize=2048i, fillPrice=1.234
*) When using BitConverter.GetBytes() you should also check the value of BitConverter.IsLittleEndian to make sure you send bytes over the wire in the proper order. Contrary to popular belief, .NET is not always little-endian. However, the internal representation in kdb+ (a value returned by 0x0 vs ...) is always big-endian. Depending on your needs you may or may not want to use reverse above.
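As a side illustration of that byte-order point (a plain C sketch, not part of the q answer): writing an integer into a wire buffer in an explicit order avoids depending on the host's endianness at all.
#include <stdint.h>

/* Hypothetical helper: store a 32-bit value in big-endian (network) order,
   regardless of the host's native byte order. */
void put_int32_be(unsigned char *dst, uint32_t value) {
    dst[0] = (unsigned char)(value >> 24);
    dst[1] = (unsigned char)(value >> 16);
    dst[2] = (unsigned char)(value >> 8);
    dst[3] = (unsigned char)(value);
}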

UILabel: convert Unicode (Japanese) and display

After hours of research I gave up.
I receive text data from a web service. In some cases the text is in Japanese, and the service returns its Unicode-escaped version. For example: \U00e3\U0082\U008f
I know that this is a Japanese char.
I am trying to display this Unicode char or string inside a UILabel.
Since the simple setText method doesn't display the correct characters, I used this (copied) routine:
unichar unicodeValue = (unichar) strtol([[[p innerData] valueForKey:@"title"] UTF8String], NULL, 16);
char buffer[2];
int len = 1;
if (unicodeValue > 127) {
buffer[0] = (unicodeValue >> 8) & (1 << 8) - 1;
buffer[1] = unicodeValue & (1 << 8) - 1;
len = 2;
} else {
buffer[0] = unicodeValue;
}
[[cell title] setText:[[NSString alloc] initWithBytes:buffer length:len encoding:NSUTF8StringEncoding] ];
But no success: the UILabel is empty.
I know that one way could be to convert the chars to hex and then from hex to a String... is there a simpler way?
SOLVED
First, make sure your server is sending UTF-8 and not Unicode code points. The only way I found is to json_encode strings that contain Unicode characters.
Then, on iOS, unescape them as described in this link: Using Objective C/Cocoa to unescape unicode characters, ie \u1234
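As a small aside (a C sketch, not from the original post): the escaped values in the example are the three UTF-8 bytes of a single character, not three separate code points, so they have to be decoded together rather than one byte at a time.
#include <stdio.h>

int main(void) {
    /* 0xE3 0x82 0x8F is the UTF-8 encoding of the single code point U+308F */
    const char utf8[] = "\xE3\x82\x8F";
    printf("%s\n", utf8);   /* prints the character on a UTF-8 terminal */
    return 0;
}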

How to store an int in a char * on iPhone

Can anyone help with converting an int to a char array?
I have the buffer as
char *buffer = NULL;
int lengthOfComponent = -1;
char *obj;
buffer[index]= (char *)&lengthOfComponent;
If I do this it throws EXC_BAD_ACCESS during execution. How do I store the value of obj in the buffer using memcpy?
Of course you cannot write to buffer[index]; it is not allocated!
buffer = malloc(sizeof(char) * lengthOfBuffer);
should do it. After that you can write to the buffer with memcpy or with an assignment, like you are doing.
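For instance, a minimal sketch (variable names are illustrative) of copying the int's raw bytes into the allocated buffer with memcpy:
#include <stdlib.h>
#include <string.h>

int lengthOfComponent = -1;
char *buffer = malloc(sizeof(int));               /* room for the int's bytes */
memcpy(buffer, &lengthOfComponent, sizeof(int));  /* copy the 4 raw bytes */
/* ... use buffer ... */
free(buffer);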
buffer[index] = (char *)&lengthOfComponent;
buffer[index] is like dereferencing the pointer. But buffer is not pointing to any valid location. Hence the runtime error.
The C solution is to use snprintf. Try:
int i = 11;
char buffer[10];
snprintf(buffer, sizeof(buffer), "%d", i);

AudioQueue Recording Audio Sample

I am currently in the process of building an application that reads in audio from my iPhone's microphone, and then does some processing and visuals. Of course I am starting with the audio stuff first, but am having one minor problem.
I am defining my sampling rate to be 44100 Hz and defining my buffer to hold 4096 samples, which it does. However, when I print this data out and copy it into MATLAB to double-check accuracy, the sample rate I have to use is half of my iPhone-defined rate, 22050 Hz, for it to be correct.
I think it has something to do with the following code and how it puts 2 bytes per packet: when I loop through the buffer, it spits out the whole packet, which my code assumes is a single number. So what I am wondering is how to split up those packets and read them as individual numbers.
- (void)setupAudioFormat {
memset(&dataFormat, 0, sizeof(dataFormat));
dataFormat.mSampleRate = kSampleRate;
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mFramesPerPacket = 1;
dataFormat.mChannelsPerFrame = 1;
// dataFormat.mBytesPerFrame = 2;
// dataFormat.mBytesPerPacket = 2;
dataFormat.mBitsPerChannel = 16;
dataFormat.mReserved = 0;
dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
dataFormat.mFormatFlags =
kLinearPCMFormatFlagIsSignedInteger |
kLinearPCMFormatFlagIsPacked;
}
If what I described is unclear, please let me know. Thanks!
EDIT
Adding the code that I used to print the data
float *audioFloat = (float *)malloc(numBytes * sizeof(float));
int *temp = (int*)inBuffer->mAudioData;
int i;
float power = pow(2, 31);
for (i = 0;i<numBytes;i++) {
audioFloat[i] = temp[i]/power;
printf("%f ",audioFloat[i]);
}
I found the problem with what I was doing. It was a C pointer issue, and since I have never really programmed in C before, I of course got it wrong.
You cannot directly cast inBuffer->mAudioData to an int array. So what I simply did was the following:
SInt16 *buffer = (SInt16 *)inBuffer->mAudioData; // reinterpret the raw bytes as 16-bit samples; no separate malloc/copy is needed
This worked out just fine, and now my data is of the correct length and represented properly.
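For reference, a sketch (assuming the 16-bit mono format set up above) of printing the samples as normalized floats - note the divisor is 2^15, not 2^31:
SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
int sampleCount = inBuffer->mAudioDataByteSize / sizeof(SInt16);
for (int i = 0; i < sampleCount; i++) {
    float normalized = samples[i] / 32768.0f;   /* 16-bit signed range */
    printf("%f ", normalized);
}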
I saw your answer; there is also an underlying issue that gives wrong sample data bytes, caused by an endianness problem (the bytes are swapped).
-(void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
SAMPLE_TYPE *samples = (SAMPLE_TYPE*)audioData;
//SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE)*sampleCount );//for swapping endians
std::string shorts;
double power = pow(2,10);
for(int i = 0; i < sampleCount; i++)
{
SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)) ; //Endianess issue
char dataInterim[30];
sprintf(dataInterim,"%f ", sample_le/power); // normalize it.
shorts.append(dataInterim);
}
}

AudioConverterConvertBuffer problem with insz error

I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked|0 ;//| kAudioFormatFlagIsNonInterleaved |0;
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract an audio channel (left or right) from an LPCM buffer, based on the input format, to make it mono in the output format. Some logic code for the conversion follows.
This is to set the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this is to convert the buffer in a while loop
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
//frames = audioBuffer.mData;
NSLog(@"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
NSLog(@"The buffer size is %d",audioBuffer.mDataByteSize);
numBytesIO = audioBuffer.mDataByteSize;
convertedBuf = malloc(sizeof(char)*numBytesIO);
status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
char errchar[10];
NSLog(@"status audio converter convert %d",status);
if (status != 0) {
NSLog(@"Fail conversion");
assert(0);
}
NSLog(@"Bytes converted %d",numBytesIO);
status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
NSLog(@"status for writebyte %d, bytes written %d",status,numBytesIO);
free(convertedBuf);
if (numBytesIO != audioBuffer.mDataByteSize) {
NSLog(@"Something wrong in writing");
assert(0);
}
countByteBuf = countByteBuf + numBytesIO;
But the insz problem is there... so it can't convert. I would appreciate any input.
Thanks in advance
First, you cannot use AudioConverterConvertBuffer() to convert anything where the input and output byte sizes are different. You need to use AudioConverterFillComplexBuffer() instead. This includes performing any kind of sample rate conversion, or adding/removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
Second, even if this could be done (which it can't), you are passing the same number of bytes allocated for the output as you had for the input, despite actually requiring only half that many bytes (because the number of channels is reduced from 2 to 1).
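For orientation, here is a rough sketch of the AudioConverterFillComplexBuffer() shape for the stereo-to-mono case above. The state struct, callback name, and sizes are illustrative only (not tested code); converter and audioBuffer are the variables from the question.
typedef struct {
    AudioBuffer srcBuffer;   /* the interleaved 16-bit stereo input */
    int consumed;            /* hand the data to the converter only once */
} ConverterFeedState;

static OSStatus InputDataProc(AudioConverterRef inConverter,
                              UInt32 *ioNumberDataPackets,
                              AudioBufferList *ioData,
                              AudioStreamPacketDescription **outPacketDesc,
                              void *inUserData)
{
    ConverterFeedState *state = (ConverterFeedState *)inUserData;
    if (state->consumed) {            /* no more input: report zero packets */
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mBuffers[0].mData = state->srcBuffer.mData;
    ioData->mBuffers[0].mDataByteSize = state->srcBuffer.mDataByteSize;
    ioData->mBuffers[0].mNumberChannels = 2;
    *ioNumberDataPackets = state->srcBuffer.mDataByteSize / 4;   /* 4 bytes per stereo frame */
    state->consumed = 1;
    return noErr;
}

/* Driving the converter: the output needs only half the bytes (2 channels -> 1). */
ConverterFeedState state = { audioBuffer, 0 };
UInt32 outPackets = audioBuffer.mDataByteSize / 4;
AudioBufferList outList;
outList.mNumberBuffers = 1;
outList.mBuffers[0].mNumberChannels = 1;
outList.mBuffers[0].mDataByteSize = outPackets * 2;
outList.mBuffers[0].mData = malloc(outList.mBuffers[0].mDataByteSize);
OSStatus status = AudioConverterFillComplexBuffer(converter, InputDataProc, &state,
                                                  &outPackets, &outList, NULL);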
I'm actually working on using AudioConverterConvertBuffer() right now, and the test files are mono while I need to play stereo. I'm currently stuck with the converter performing conversion only of the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code. If I don't post it, please poke me in comments.