Is there some way to add multiple (2, 3 or 4) images to the same page of a PDF file using the LEADTOOLS C API?
The L_SaveBitmap() function takes a SAVEFILEOPTION parameter whose PageNumber member can be set, but setting it to a value greater than 1 appends a new page, while a value of 0 or 1 causes the file to be overwritten.
The L_SaveFile() function behaves similarly; setting the SAVEFILE_MULTIPAGE flag always appends a new page.
The L_PdfComp..() functions don't seem to handle pages at all.
Do the MRC functions support pages, i.e. can I specify which page each image is stored in? Also, is the generated file in standard PDF format, or is it LEAD-specific?
Any help would be highly appreciated.
To address the part about whether the generated file is standard PDF or LEAD-specific: any PDF file saved by any LEADTOOLS function is a standard PDF file, and it should open in any standard PDF viewer.
However, if you wish to append or replace pages using L_SaveBitmap(), the existing PDF file must be a raster-based PDF similar to the output of L_SaveBitmap() itself.
To work with more general types of PDF, other LEADTOOLS functions can be used, such as the .NET PDFFile class and PDFDocument class.
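As a rough illustration of the raster-PDF restriction, the existing file can be queried with L_FileInfo() before appending. This is a minimal sketch; the set of FILE_RAS_PDF* constants it checks is an assumption, so consult the headers of your toolkit version for the full list:
// Sketch: check whether an existing file was saved as a raster-based PDF.
// Only two FILE_RAS_PDF* variants are checked here; add the others defined
// in your LEADTOOLS version as needed.
L_BOOL IsRasterPdf(L_TCHAR* pszFile)
{
    FILEINFO fileInfo = { 0 };
    if (L_FileInfo(pszFile, &fileInfo, sizeof(FILEINFO), 0, NULL) != SUCCESS)
        return FALSE; // file is missing or unreadable
    return fileInfo.Format == FILE_RAS_PDF || fileInfo.Format == FILE_RAS_PDF_LZW;
}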
The C/C++ code below demonstrates overwriting (creating a new file), appending, and replacing a specific page. It also shows how to place 2 images on a single PDF page.
The code segments can be combined into one function and tested with any 4 input image files.
First, create a file with one page, overwriting it if it exists:
// Create file with first page
BITMAPHANDLE page1 = { 0 };
L_LoadBitmap(page1_file, &page1, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
page1.XResolution = page1.YResolution = BITMAPHEIGHT(&page1) / 11; // set the resolution so the page height is 11 inches
L_SaveBitmap(outputPdf_file, &page1, FILE_RAS_PDF_LZW, 24, 0, NULL);
L_FreeBitmap(&page1);
Next, load an image and append it as a second page to the same PDF file:
// Append second page
BITMAPHANDLE page2 = { 0 };
L_LoadBitmap(page2_file, &page2, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
SAVEFILEOPTION SaveOptions = { 0 };
L_GetDefaultSaveFileOption(&SaveOptions, sizeof(SAVEFILEOPTION));
SaveOptions.PageNumber = 2;
page2.XResolution = page2.YResolution = BITMAPHEIGHT(&page2) / 11; // set the resolution so the page height is 11 inches
L_SaveBitmap(outputPdf_file, &page2, FILE_RAS_PDF_LZW, 24, 0, &SaveOptions);
L_FreeBitmap(&page2);
Finally, load 2 images, combine them into one image, and replace the first page with the combined result:
BITMAPHANDLE page2_1 = { 0 }, page2_2 = { 0 };
// Load 2 images for one page
L_LoadBitmap(page2_1_file, &page2_1, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
L_LoadBitmap(page2_2_file, &page2_2, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
L_UINT w = max(BITMAPWIDTH(&page2_1), BITMAPWIDTH(&page2_2));
L_UINT h1 = BITMAPHEIGHT(&page2_1); // keep the first image's height; it is needed after that bitmap is freed
L_UINT h = h1 + BITMAPHEIGHT(&page2_2);
BITMAPHANDLE combinedPage = { 0 };
// Create empty bitmap
L_CreateBitmap(&combinedPage, sizeof(BITMAPHANDLE), TYPE_CONV, w, h, 24, ORDER_BGR, NULL, BOTTOM_LEFT, NULL, 0);
// Copy the first image into the empty bitmap
L_CombineBitmap(&combinedPage, 0, 0, BITMAPWIDTH(&page2_1), h1, &page2_1, 0, 0, CB_DST_0 | CB_OP_ADD | CB_RAWCOMBINE, 0);
L_FreeBitmap(&page2_1);
// Copy the second image below the first image
L_CombineBitmap(&combinedPage, 0, h1, BITMAPWIDTH(&page2_2), BITMAPHEIGHT(&page2_2), &page2_2, 0, 0, CB_DST_0 | CB_OP_ADD | CB_RAWCOMBINE, 0);
L_FreeBitmap(&page2_2);
SaveOptions.PageNumber = 1;
SaveOptions.Flags |= ESO_REPLACEPAGE; // add the replace flag to store the combined image in place of the old page 1
combinedPage.XResolution = combinedPage.YResolution = BITMAPHEIGHT(&combinedPage) / 11; // set the resolution so the page height is 11 inches
L_SaveBitmap(outputPdf_file, &combinedPage, FILE_RAS_PDF_LZW, 24, 0, &SaveOptions);
L_FreeBitmap(&combinedPage);
The following part was added after the answer was accepted, to address a comment:
It is possible to insert multiple images into a single PDF page without combining them first, but not with the L_SaveBitmap() function. Instead, the Document Writer functions need to be used, as shown in the following code sample.
The code below loads 2 images and paints them into an EMF memory metafile. It then draws an ellipse on top of them to show that any Windows GDI drawing can be added; for example, text can be added with TextOut() or other GDI functions (a small sketch of that follows the code). After that, the EMF page is saved as PDF using the LEADTOOLS Document Writer.
BITMAPHANDLE image1 = { 0 }, image2 = { 0 };
// Load 2 images for one page
L_LoadBitmap(image1_file, &image1, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
L_LoadBitmap(image2_file, &image2, sizeof(BITMAPHANDLE), 24, ORDER_BGR, NULL, NULL);
L_UINT w = max(BITMAPWIDTH(&image1), BITMAPWIDTH(&image2));
L_UINT h1 = BITMAPHEIGHT(&image1); // keep the first image's height; it is needed after that bitmap is freed
L_UINT h = h1 + BITMAPHEIGHT(&image2);
// Create a memory metafile and paint both bitmaps in it
HDC hdcEmf = CreateEnhMetaFile(NULL, NULL, NULL, NULL);
Rectangle(hdcEmf, 0, 0, w, h);
RECT rc1 = { 0, 0, BITMAPWIDTH(&image1), h1 };
L_PaintDC(hdcEmf, &image1, NULL, NULL, &rc1, NULL, SRCCOPY);
L_FreeBitmap(&image1);
RECT rc2 = { 0, h1, BITMAPWIDTH(&image2), h1 + BITMAPHEIGHT(&image2) };
L_PaintDC(hdcEmf, &image2, NULL, NULL, &rc2, NULL, SRCCOPY);
L_FreeBitmap(&image2);
Ellipse(hdcEmf, w / 4, h / 4, w * 3 / 4, h * 3 / 4);
HENHMETAFILE hemf = CloseEnhMetaFile(hdcEmf);
DOCWRTPDFOPTIONS pdf = { 0 };
L_DOUBLE dTextScale = 0.5;
DOCUMENTWRITER_HANDLE hDocument = 0;
DOCWRTEMFPAGE Page = { 0 };
pdf.PdfProfile = DOCWRTPDFPROFILE_PDF;
pdf.Options.uStructSize = sizeof(pdf);
pdf.Options.nDocumentResolution = 300;
// Setup empty page size based on images size
pdf.Options.dEmptyPageWidth = w / 300.0;
pdf.Options.dEmptyPageHeight = h / 300.0;
pdf.Options.nEmptyPageResolution = 300;
L_DocWriterInit(&hDocument, outputPdf_file, DOCUMENTFORMAT_PDF, &pdf, NULL, NULL);
Page.hEmf = hemf;
Page.pdwTextScale = &dTextScale;
L_DocWriterAddPage(hDocument, DOCWRTPAGETYPE_EMF, (L_VOID*)&Page);
L_DocWriterFinish(hDocument);
DeleteEnhMetaFile(hemf);
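As mentioned above, any GDI drawing can be added to the metafile before it is closed. A minimal illustration with a hypothetical caption string (it would go before the CloseEnhMetaFile() call):
// Illustration only: text drawn with GDI ends up in the PDF page just like
// the Ellipse() above. The caption text and its position are hypothetical.
LPCTSTR caption = TEXT("Combined page");
SetBkMode(hdcEmf, TRANSPARENT); // draw the text without a background box
TextOut(hdcEmf, 10, 10, caption, lstrlen(caption));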
Related
I am trying to port an application to an embedded system that I am designing. The system is based on a Raspberry Pi Zero W and uses a custom Yocto build.
The application being ported is, to my understanding, written with SDL/OpenGL ES. I have a hard time understanding how to make a connection like the following:
SDL APP -----> XServer ($DISPLAY) -------> Framebuffer /dev/fb1 ($FRAMEBUFFER)
The system has two displays: one HDMI on /dev/fb0 and one TFT on /dev/fb1. I am trying to run the SDL application on the TFT. These are the steps I take:
First, start an XServer on DISPLAY=:1 that is connected to /dev/fb1:
FRAMEBUFFER=/dev/fb1 xinit /etc/X11/Xsession -- /usr/bin/Xorg :1 -br -pn -nolisten tcp -dpi 100
The first step seems to be working: I can see LXDE boot up on my TFT screen. Checking the display, I get the correct resolution:
~/projects# DISPLAY=:1 xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 240, current 320 x 240, maximum 320 x 240
default connected 320x240+0+0 0mm x 0mm
320x240 0.00*
Second, I would like to start the SDL application using x11; I expect that to make the application appear on the TFT. To do so, I try:
SDL_VIDEODRIVER=x11 SDL_WINDOWID=1 DISPLAY=:1 ./SDL_App
No matter which display number I choose, it starts on my HDMI display and not on the TFT. So now I am thinking the person who wrote the application hardcoded some things in the code:
void init_ogl(void)
{
int32_t success = 0;
EGLBoolean result;
EGLint num_config;
static EGL_DISPMANX_WINDOW_T nativewindow;
DISPMANX_ELEMENT_HANDLE_T dispman_element;
DISPMANX_DISPLAY_HANDLE_T dispman_display;
DISPMANX_UPDATE_HANDLE_T dispman_update;
VC_DISPMANX_ALPHA_T alpha;
VC_RECT_T dst_rect;
VC_RECT_T src_rect;
static const EGLint attribute_list[] =
{
EGL_RED_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_BLUE_SIZE, 8,
EGL_ALPHA_SIZE, 8,
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_NONE
};
EGLConfig config;
// Get an EGL display connection
display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
assert(display!=EGL_NO_DISPLAY);
// Initialize the EGL display connection
result = eglInitialize(display, NULL, NULL);
assert(EGL_FALSE != result);
// Get an appropriate EGL frame buffer configuration
result = eglChooseConfig(display, attribute_list, &config, 1, &num_config);
assert(EGL_FALSE != result);
// Create an EGL rendering context
context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
assert(context!=EGL_NO_CONTEXT);
// Create an EGL window surface
success = graphics_get_display_size( 0 /* LCD */ , &screen_width, &screen_height);
printf ("Screen width= %d\n", screen_width);
printf ("Screen height= %d\n", screen_height);
assert( success >= 0 );
int32_t zoom = screen_width / GAMEBOY_WIDTH;
int32_t zoom2 = screen_height / GAMEBOY_HEIGHT;
if (zoom2 < zoom)
zoom = zoom2;
int32_t display_width = GAMEBOY_WIDTH * zoom;
int32_t display_height = GAMEBOY_HEIGHT * zoom;
int32_t display_offset_x = (screen_width / 2) - (display_width / 2);
int32_t display_offset_y = (screen_height / 2) - (display_height / 2);
dst_rect.x = 0;
dst_rect.y = 0;
dst_rect.width = screen_width;
dst_rect.height = screen_height;
src_rect.x = 0;
src_rect.y = 0;
src_rect.width = screen_width << 16;
src_rect.height = screen_height << 16;
dispman_display = vc_dispmanx_display_open( 0 /* LCD */ );
dispman_update = vc_dispmanx_update_start( 0 );
alpha.flags = DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS;
alpha.opacity = 255;
alpha.mask = 0;
dispman_element = vc_dispmanx_element_add ( dispman_update, dispman_display,
0/*layer*/, &dst_rect, 0/*src*/,
&src_rect, DISPMANX_PROTECTION_NONE, &alpha, 0/*clamp*/, DISPMANX_NO_ROTATE/*transform*/);
nativewindow.element = dispman_element;
nativewindow.width = screen_width;
nativewindow.height = screen_height;
vc_dispmanx_update_submit_sync( dispman_update );
surface = eglCreateWindowSurface( display, config, &nativewindow, NULL );
assert(surface != EGL_NO_SURFACE);
// Connect the context to the surface
result = eglMakeCurrent(display, surface, surface, context);
assert(EGL_FALSE != result);
eglSwapInterval(display, 1);
glGenTextures(1, &theGBTexture);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, theGBTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*) NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, screen_width, screen_height, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0.0f, 0.0f, screen_width, screen_height);
quadVerts[0] = display_offset_x;
quadVerts[1] = display_offset_y;
quadVerts[2] = display_offset_x + display_width;
quadVerts[3] = display_offset_y;
quadVerts[4] = display_offset_x + display_width;
quadVerts[5] = display_offset_y + display_height;
quadVerts[6] = display_offset_x;
quadVerts[7] = display_offset_y + display_height;
glVertexPointer(2, GL_SHORT, 0, quadVerts);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, kQuadTex);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClear(GL_COLOR_BUFFER_BIT);
}
void init_sdl(void)
{
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0)
{
Log("SDL Error Init: %s", SDL_GetError());
}
theWindow = SDL_CreateWindow("Gearboy", 0, 0, 0, 0, 0);
if (theWindow == NULL)
{
Log("SDL Error Video: %s", SDL_GetError());
}
...
}
At first glance, I found two suspicious lines: vc_dispmanx_display_open( 0 /* LCD */ ); and graphics_get_display_size( 0 /* LCD */ , &screen_width, &screen_height);. I tried changing the display parameter to 1, thinking it refers to DISPLAY=:1, but that did not change anything. I added logs for the screen resolution, and I get 1920x1080, which is the resolution of the HDMI display. I think there must be something in the EGL portion of the code that I'm missing. What should I do now? Is my reasoning sound, or am I missing something?
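(For reference: the first parameter of vc_dispmanx_display_open() is a Broadcom display ID, not an X display number. A small diagnostic sketch like the one below can show which IDs actually open; treat the exact ID list as firmware-dependent, and note that a SPI TFT exposed only through /dev/fb1 may not be a dispmanx display at all.)
// Diagnostic sketch: probe which dispmanx display IDs can be opened.
// ID meanings follow the Broadcom userland headers (vc_dispmanx_types.h):
// 0=MAIN_LCD 1=AUX_LCD 2=HDMI 3=SDTV 4..6=FORCE_*
#include <stdio.h>
#include <stdint.h>
#include "bcm_host.h"

int main(void)
{
    bcm_host_init();
    for (uint32_t id = 0; id <= 6; id++) {
        DISPMANX_DISPLAY_HANDLE_T d = vc_dispmanx_display_open(id);
        if (d == DISPMANX_NO_HANDLE) {
            printf("display %u: not available\n", id);
            continue;
        }
        DISPMANX_MODEINFO_T info;
        if (vc_dispmanx_display_get_info(d, &info) == 0)
            printf("display %u: %dx%d\n", id, info.width, info.height);
        vc_dispmanx_display_close(d);
    }
    return 0;
}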
If you need any more details, please let me know. Any guidance regarding the issue is much appreciated.
EDIT: I saw that some people use the following, but the Raspberry Pi Zero cannot find EGL/eglvivante.h for the fb functions, so I am unable to compile it:
int fbnum = 1; // index for /dev/fb1
EGLNativeDisplayType native_display = fbGetDisplayByIndex(fbnum);
EGLNativeWindowType native_window = fbCreateWindow(native_display, 0, 0, 0, 0);
display = eglGetDisplay(native_display);
I am working on a mod player (a mod is an audio file with 4 different tracks, or channels) using web audio and an AudioWorkletNode.
I got it working correctly using a 2-channel (stereo) audio node:
channels (tracks) 0 & 3 are mixed into the left channel
channels (tracks) 1 & 2 are mixed into the right channel
The problem is that I'd like to analyse and show a waveform display for each of the tracks (so there need to be 4 separate analysers).
I had the idea of creating an AudioWorkletNode with outputChannelCount set to [4], connecting an analyser to each of the node's four channels, and then using a channelMerger to mix them down to 2 stereo channels.
So I used the following code, expecting it to create a node with 4 channels:
let node = new AudioWorkletNode(context, 'processor', { outputChannelCount: [4] });
But the outputChannelCount parameter seems to be ignored: no matter what I specify, the node ends up with 2 channels.
Is there another way to do it, or must I handle the analysis myself, using my own analyser?
I finally found a way to mix all four channels and pass each channel to its own analyser, as follows:
this.context.audioWorklet.addModule(`js/${soundProcessor}`).then(() =>
{
this.splitter = this.context.createChannelSplitter(4);
// Use 4 inputs that will be used to send each track's data to a separate analyser
// NOTE: what should we do if we support more channels (and different mod formats)?
this.workletNode = new AudioWorkletNode(this.context, 'mod-processor', {
outputChannelCount: [1, 1, 1, 1],
numberOfInputs: 0,
numberOfOutputs: 4
});
this.workletNode.port.onmessage = this.handleMessage.bind(this);
this.postMessage({
message: 'init',
mixingRate: this.mixingRate
});
this.workletNode.port.start();
// create four analysers and connect each worklet's input to one
this.analysers = new Array();
for (let i = 0; i < 4; ++i) {
const analyser = this.context.createAnalyser();
analyser.fftSize = 256;// Math.pow(2, 11);
analyser.minDecibels = -90;
analyser.maxDecibels = -10;
analyser.smoothingTimeConstant = 0.65;
this.workletNode.connect(analyser, i, 0);
this.analysers.push(analyser);
}
this.merger = this.context.createChannelMerger(4);
// merge the channel 0+3 in left channel, 1+2 in right channel
this.workletNode.connect(this.merger, 0, 0);
this.workletNode.connect(this.merger, 1, 1);
this.workletNode.connect(this.merger, 2, 1);
this.workletNode.connect(this.merger, 3, 0);
this.merger.connect(this.context.destination);
});
I basically create a new node with 4 outputs and use each output as a single channel. To produce a stereo output I can then use a channel merger. And voila!
Complete source code of the app can be found here: https://warpdesign.github.io/modplayer-js/
What I'm trying to do here is pack bytes like I can in C#:
string symbol = "T" + "\0";
byte orderTypeEnum = (byte)OrderType.Limit;
int size = -10;
byte[] packed = new byte[symbol.Length + sizeof(byte) + sizeof(int)]; // byte = 1, int = 4
Encoding.UTF8.GetBytes(symbol, 0, symbol.Length, packed, 0); // add the symbol
packed[symbol.Length] = orderTypeEnum; // add order type
Array.ConstrainedCopy(BitConverter.GetBytes(size), 0, packed, symbol.Length + 1, sizeof(int)); // add size
client.Send(packed);
Is there any way to accomplish this in q?
As for unpacking, in C# I can easily do this:
byte[] fillData = client.Receive();
long ticks = BitConverter.ToInt64(fillData, 0);
int fillSize = BitConverter.ToInt32(fillData, 8);
double fillPrice = BitConverter.ToDouble(fillData, 12);
new
{
Timestamp = ticks,
Size = fillSize,
Price = fillPrice
}.Dump("Received response");
Thanks!
One way to do it is
symbol:"T\000"
orderTypeEnum: 123 / (byte)OrderType.Limit
size: -10i;
packed: "x"$symbol,("c"$orderTypeEnum),reverse 0x0 vs size / *
UPDATE:
To do the reverse you can use the 1: function:
(8 4 8; "jif")1:0x0000000000000400000008003ff3be76c8b43958 / server data is big-endian
("jif"; 8 4 8)1:0x0000000000000400000008003ff3be76c8b43958 / server data is little-endian
/ ticks=1024j, fillSize=2048i, fillPrice=1.234
*) When using BitConverter.GetBytes() you should also check the value of BitConverter.IsLittleEndian to make sure you send bytes over the wire in the proper order. Contrary to popular belief, .NET is not always little-endian. However, the internal representation in kdb+ (the value returned by 0x0 vs ...) is always big-endian. Depending on your needs, you may or may not want to use reverse above.
I have seen answers to the question of how to add a text watermark to an existing PDF document using iTextSharp. My question is how to make the watermark multiline. Is there a way to do this without defining multiple PdfContentByte objects? I have tried inserting a newline character, with no luck.
Here is the code from the internet. I just added
pdfData.ShowTextAligned(Element.ALIGN_CENTER, editDate, (pageRectangle.Width / 2) + 100, (pageRectangle.Height / 2) - 100, 45);
as the second ShowTextAligned call to get the second line of the watermark. It works, but it uses the same parameters (color, size, etc.) as the first line.
iTextSharp.text.Rectangle pageRectangle = PDFreader.GetPageSizeWithRotation(1);
//pdfcontentbyte object contains graphics and text content of page returned by pdfstamper
PdfContentByte pdfData = stamper.GetOverContent(1);
//create fontsize for watermark
pdfData.SetFontAndSize(BaseFont.CreateFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED), 120);
//create new graphics state and assign opacity
PdfGState graphicsState = new PdfGState();
graphicsState.FillOpacity = 0.2F;
//set graphics state to pdfcontentbyte
pdfData.SetGState(graphicsState);
//set color of watermark
pdfData.SetColorFill(iTextSharp.text.Color.BLUE);
//indicates start of writing of text
pdfData.BeginText();
//show text as per position and rotation
pdfData.ShowTextAligned(Element.ALIGN_CENTER, "E D I T E D" , (pageRectangle.Width / 2), (pageRectangle.Height / 2), 45);
pdfData.ShowTextAligned(Element.ALIGN_CENTER, editDate, (pageRectangle.Width / 2) + 100, (pageRectangle.Height / 2) - 100, 45);
//indicates end of writing of text
pdfData.EndText();
I have a problem with the AudioConverterConvertBuffer function. Basically I want to convert from this format
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract one audio channel (the left or the right) from the LPCM buffer in the input format, to make it mono in the output format. Some of the conversion logic is as follows.
This is to set the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this is to convert the buffer in a while loop:
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
//frames = audioBuffer.mData;
NSLog(#"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
NSLog(#"The buffer size is %d",audioBuffer.mDataByteSize);
numBytesIO = audioBuffer.mDataByteSize;
convertedBuf = malloc(sizeof(char)*numBytesIO);
status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
NSLog(@"status audio converter convert %d", status);
if (status != 0) {
NSLog(#"Fail conversion");
assert(0);
}
NSLog(#"Bytes converted %d",numBytesIO);
status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
NSLog(#"status for writebyte %d, bytes written %d",status,numBytesIO);
free(convertedBuf);
if (numBytesIO != audioBuffer.mDataByteSize) {
NSLog(#"Something wrong in writing");
assert(0);
}
countByteBuf = countByteBuf + numBytesIO;
But the 'insz' problem (kAudioConverterErr_InvalidInputSize) is there... so it can't convert. I would appreciate any input.
Thanks in advance
First, you cannot use AudioConverterConvertBuffer() to convert anything where the input and output byte sizes differ. You need to use AudioConverterFillComplexBuffer() instead. This includes any kind of sample rate conversion and adding or removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
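For reference, the general shape of the AudioConverterFillComplexBuffer() pull model is sketched below for the stereo-to-mono case. This is a minimal, illustrative sketch; the FeedState type and all names are made up for the example, and real code needs proper buffer management and error handling:
// Sketch of the AudioConverterFillComplexBuffer() pull model. The converter
// calls FeedInput() whenever it needs more input packets.
#include <AudioToolbox/AudioToolbox.h>
#include <stdbool.h>

typedef struct {
    AudioBuffer src;  // one interleaved stereo input chunk (4 bytes per frame)
    bool consumed;    // whether the chunk has already been handed out
} FeedState;

static OSStatus FeedInput(AudioConverterRef conv, UInt32 *ioNumPackets,
                          AudioBufferList *ioData,
                          AudioStreamPacketDescription **outDesc, void *user)
{
    FeedState *feed = (FeedState *)user;
    if (feed->consumed) {
        *ioNumPackets = 0; // no more input: the converter returns what it has
        return noErr;
    }
    ioData->mBuffers[0] = feed->src;             // hand over the input bytes
    *ioNumPackets = feed->src.mDataByteSize / 4; // packets == frames for PCM
    feed->consumed = true;
    return noErr;
}

// Usage sketch: the mono output needs only half the bytes of the stereo input.
// FeedState feed = { audioBuffer, false };
// UInt32 outPackets = audioBuffer.mDataByteSize / 4;
// ...build an AudioBufferList with one mono buffer of outPackets * 2 bytes...
// AudioConverterFillComplexBuffer(converter, FeedInput, &feed,
//                                 &outPackets, &outList, NULL);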
Second, even if this could be done (which it can't), you are passing the same number of bytes for the output buffer as you had for the input, even though the output actually requires only half as many bytes (because the channel count drops from 2 to 1).
I'm actually working with AudioConverterConvertBuffer() right now, and my test files are mono while I need to play stereo. I'm currently stuck with the converter converting only the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code. If I don't post it, please poke me in the comments.