How to use multiple USB webcams in MATLAB simultaneously? - matlab

I would like to capture live video with two USB webcams (Philips SPC 900NC), but I found that they cannot work simultaneously on my laptop. Either of the two USB webcams works fine alone, or together with the webcam built into my laptop.
When I use the Simulink block 'From Video Device', MATLAB gives the error message 'Multiple VIDEOINPUT objects cannot access the same device simultaneously.' When I then check the video input devices with the command 'imaqhwinfo', only one of the USB Philips webcams is detected.
I would like to know:
What is the reason for this? Is it a hardware limitation (USB bus bandwidth), or do MATLAB video input objects simply not support multiple identical video devices?
What is the solution? Could anyone give me some suggestions?

You may be interested in this link:
http://opencv.willowgarage.com/wiki/faq#How_to_use_2_cameras_.28multiple_cameras.29_with_cvCam_library
which contains the following:
First, init the cvcam library and get the number of cams by:
int ncams = cvcamGetCamerasCount( ); //returns the number of available cameras in the system
Show a dialog to choose which cameras to use:
int* out; int nselected = cvcamSelectCamera(&out);
Get the selected cams and enable them.
int cam1 = out[0];
int cam2 = out[1];
cvcamSetProperty(cam1, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam1, CVCAM_PROP_RENDER, CVCAMTRUE); //We'll render stream from this source
cvNamedWindow("Cam1", 1);
cvcamWindow MyWin1 = (cvcamWindow)cvGetWindowHandle("Cam1");
cvcamSetProperty(cam1, CVCAM_PROP_WINDOW, &MyWin1); // Selects a window for video rendering
//Same code for camera 2
cvcamSetProperty(cam2, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam2, CVCAM_PROP_RENDER, CVCAMTRUE);
cvNamedWindow("Cam2", 1);
cvcamWindow MyWin2 = (cvcamWindow)cvGetWindowHandle("Cam2");
cvcamSetProperty(cam2, CVCAM_PROP_WINDOW, &MyWin2); // Selects a window for rendering camera 2
//If you want to open the property dialog for setting the video format parameters, uncomment these lines
//cvcamGetProperty(cam1, CVCAM_VIDEOFORMAT, NULL);
//cvcamGetProperty(cam2, CVCAM_VIDEOFORMAT, NULL);
Enable the stereo mode (two cameras working at the same time):
cvcamSetProperty(cam1, CVCAM_STEREO_CALLBACK, stereocallback); // stereocallback is the function that processes every pair of frames
cvcamInit();
cvcamStart();
//Your app is working
while (1)
{
int key = cvWaitKey(5);
if (key == 27) break;
}
cvcamStop( );
cvcamExit( );
Define the stereocallback function outside of the function above.
void stereocallback(IplImage* image1, IplImage* image2) {
//Process 2 images here
}
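The wiki snippet leaves the callback body empty. Purely as an illustration (this part is not from the wiki), a filled-in version of the callback might convert both frames to grayscale and difference them with the same legacy OpenCV C API, assuming both cameras deliver same-size BGR frames:
void stereocallback(IplImage* image1, IplImage* image2)
{
    IplImage* gray1 = cvCreateImage(cvGetSize(image1), IPL_DEPTH_8U, 1);
    IplImage* gray2 = cvCreateImage(cvGetSize(image2), IPL_DEPTH_8U, 1);
    IplImage* diff  = cvCreateImage(cvGetSize(image1), IPL_DEPTH_8U, 1);

    cvCvtColor(image1, gray1, CV_BGR2GRAY);
    cvCvtColor(image2, gray2, CV_BGR2GRAY);
    cvAbsDiff(gray1, gray2, diff); // crude per-pixel comparison of the two views

    // ... use diff (or the original frames) here ...

    cvReleaseImage(&gray1);
    cvReleaseImage(&gray2);
    cvReleaseImage(&diff);
}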

Related

Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious if it has the capability to take a picture and then manipulate it. Specifically my desire is to have the user take a selfie and then have them trace around their face to create a PNG that would then be texture mapped onto a model.
I know that the face mapping onto a model is simple, but I'm wondering if I need to write the photo/carving functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it, just asking if it's something that is possible.
Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, allows you to get the pixels via a GetPixels() call. This lets you take a snapshot whenever you like, and you can save it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation. I have not tested it. You might have to select the correct device if more than one is available.
using UnityEngine;
using System.Collections;
using System.IO;
public class WebCamPhotoCamera : MonoBehaviour
{
WebCamTexture webCamTexture;
void Start()
{
webCamTexture = new WebCamTexture();
GetComponent<Renderer>().material.mainTexture = webCamTexture; // the GameObject this script is attached to needs a Renderer (e.g. a Mesh Renderer)
webCamTexture.Play();
}
IEnumerator TakePhoto() // Start this Coroutine on some button click
{
// NOTE - you almost certainly have to do this here:
yield return new WaitForEndOfFrame();
// it's a rare case where the Unity doco is pretty clear,
// http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
// be sure to scroll down to the SECOND long example on that doco page
Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
photo.SetPixels(webCamTexture.GetPixels());
photo.Apply();
//Encode to a PNG
byte[] bytes = photo.EncodeToPNG();
//Write out the PNG. Of course you have to substitute your_path for something sensible
File.WriteAllBytes(your_path + "photo.png", bytes);
}
}
For those trying to get the camera to render live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture would be assigned on Update rather than just on Start:
void Start()
{
webCamTexture = new WebCamTexture();
webCamTexture.Play();
}
void Update()
{
GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one via Right Click -> UI -> Raw Image in the Hierarchy in the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, webcams are supported in the free Personal edition of Unity 5.
I hope this helps anyone looking for a good way to capture live camera feed in Unity.
It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns pixel color at coordinates (x, y).
GetPixels() -- Get a block of pixel colors.
GetPixels32() -- Returns the pixels data in raw format.
MarkNonReadable() -- Marks WebCamTexture as unreadable
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
Bart's answer needs one modification. I used his code and the picture I was getting was black. The required modification is that we have to
convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy @fafase)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output[Unity3D]
Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video and save it to the desired location on your device with just a few lines of code.
You need to find your webcam's device index by searching for it in the devices list, and select it for the WebCamTexture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;
public class GetCam : MonoBehaviour
{
WebCamTexture webCam;
string your_path = "C:\\Users\\Jay\\Desktop";// any path you want to save your image
public RawImage display;
public AspectRatioFitter fit;
public void Start()
{
if(WebCamTexture.devices.Length==0)
{
Debug.LogError("can not found any camera!");
return;
}
int index = -1;
for (int i = 0; i < WebCamTexture.devices.Length; i++)
{
if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
{
Debug.LogError("WebCam Name:" + WebCamTexture.devices[i].name + " Webcam Index:" + i);
index = i;
}
}
if (index == -1)
{
Debug.LogError("can not found your camera name!");
return;
}
WebCamDevice device = WebCamTexture.devices[index];
webCam = new WebCamTexture(device.name);
webCam.Play();
StartCoroutine(TakePhoto());
display.texture = webCam;
}
public void Update()
{
if (webCam == null || !webCam.isPlaying) return; // no camera selected or not started yet
float ratio = (float)webCam.width / (float)webCam.height;
fit.aspectRatio = ratio;
float ScaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
display.rectTransform.localScale = new Vector3(1f, ScaleY, 1f);
int orient = -webCam.videoRotationAngle;
display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
}
public void callTakePhoto() // call this function in button click event
{
StartCoroutine(TakePhoto());
}
IEnumerator TakePhoto() // Start this Coroutine on some button click
{
// NOTE - you almost certainly have to do this here:
yield return new WaitForEndOfFrame();
// it's a rare case where the Unity doco is pretty clear,
// http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
// be sure to scroll down to the SECOND long example on that doco page
Texture2D photo = new Texture2D(webCam.width, webCam.height);
photo.SetPixels(webCam.GetPixels());
photo.Apply();
//Encode to a PNG
byte[] bytes = photo.EncodeToPNG();
//Write out the PNG. Of course you have to substitute your_path for something sensible
File.WriteAllBytes(your_path + "\\photo.png", bytes);
}
}
There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 and while the functionality provided is geared towards mobile it contains a demo of how you can use the WebCamTexture to take a still image.
If you want to do that without using a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), it's not possible within Unity alone; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a summary blog post that will guide you through it:
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: Note that the above post describes picking an image from the gallery, but the same process applies to saving an image to the gallery.

Packing Speex with Ogg on iOS

I'm using libogg and libspeex. I've succeeded in adding those libraries to my iPhone Xcode project and encoding my voice with Speex. The problem is that I cannot figure out how to pack those audio packets into Ogg. Does someone know how a packet of that kind should look, or have reference code I can use?
I know in Java it's pretty simple (you have a dedicated function for that), but not on iOS. Please help.
UPD 10.09.2013: Please see the demo project, which basically takes PCM audio data from a wave container, encodes it with the Speex codec and packs everything into an Ogg container. Maybe later I'll create a full-fledged library/framework for all these Speex routines on iOS.
UPD 16.02.2015: The demo project is republished on GitHub.
I have also been experimenting with Speex on iOS recently, with varying success, but here is something I found. Basically, if you want to pack some Speex-encoded voice into an Ogg file, you need to follow three steps (assuming libogg and libspeex are already compiled and added to the project).
1) Add the first ogg page with the Speex header; libspeex provides built-in tools for that (the code below is from my project, not optimal, just for the sake of example):
// create speex header
SpeexHeader spxHeader;
SpeexMode spxMode = speex_wb_mode;
int spxRate = 16000;
int spxNumberOfChannels = 1;
speex_init_header(&spxHeader, spxRate, spxNumberOfChannels, &spxMode);
// set audio and ogg packing parameters
spxHeader.vbr = 0;
spxHeader.bitrate = 16;
spxHeader.frame_size = 320;
spxHeader.frames_per_packet = 1;
// wrap speex header in ogg packet
int oggPacketSize;
_oggPacket.packet = (unsigned char *)speex_header_to_packet(&spxHeader, &oggPacketSize);
_oggPacket.bytes = oggPacketSize;
_oggPacket.b_o_s = 1;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = 0;
// submit the packet to the ogg streaming layer
ogg_stream_packetin(&_oggStreamState, &_oggPacket);
free(_oggPacket.packet);
// form an ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
// write the page to file
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
2) Add the second ogg page with Vorbis comment:
// form any comment you like (I use custom struct with all fields)
vorbisCommentStruct *vorbisComment = calloc(sizeof(vorbisCommentStruct), sizeof(char));
...
// wrap Vorbis comment in ogg packet
_oggPacket.packet = (unsigned char *)vorbisComment;
_oggPacket.bytes = vorbisCommentLength;
_oggPacket.b_o_s = 0;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = _oggStreamState.packetno;
// the rest should be same as in previous step
...
3) Add subsequent ogg pages with your Speex-encoded audio in a similar manner.
First of all, decide how many frames of audio data you want on every ogg page (0-255; I chose 79 quite arbitrarily):
_framesPerOggPage = 79;
Then for each frame:
// calculate current granule position of audio data within ogg file
int curGranulePos = _spxSamplesPerFrame * _oggTotalFramesCount;
// wrap audio data in ogg packet
oggPacket.packet = (unsigned char *)spxFrame;
oggPacket.bytes = spxFrameLength;
oggPacket.granulepos = curGranulePos;
oggPacket.packetno = _oggStreamState.packetno;
oggPacket.b_o_s = 0;
oggPacket.e_o_s = 0;
// submit packets to streaming layer until their number reaches _framesPerOggPage
...
// if we've reached this limit, we're ready to create another ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
// finally, if this is the last frame, flush all remaining packets,
// which have been created but not packed into a page, to the last page
// (don't forget to set oggPacket.e_o_s to 1 for this frame)
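As a rough sketch of that final step (my own illustration based on the snippets above, using the documented ogg_page fields rather than reading the stream state directly; not verbatim from the post):
// mark the very last audio packet as end-of-stream before submitting it
oggPacket.e_o_s = 1;
ogg_stream_packetin(&_oggStreamState, &oggPacket);
// flush everything that has not yet been packed into a page
while (ogg_stream_flush(&_oggStreamState, &_oggPage) != 0) {
    [_oggFile appendBytes:_oggPage.header length:_oggPage.header_len];
    [_oggFile appendBytes:_oggPage.body length:_oggPage.body_len];
}
// tear down the ogg stream state when done
ogg_stream_clear(&_oggStreamState);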
That's it. Hope it will help. Any corrections or questions are welcome.

How to capture continuous images in Android

I'm trying to develop an Android application which should take continuous images, just like the native camera in continuous shooting mode, for 10 to 20 seconds.
I followed the sample program from this site:
http://marakana.com/forums/android/examples/39.html
Now I want to enhance this code to take continuous images (for 10 to 20 seconds).
First I tried to take 10 pics by using a for loop: I just put the takePicture() function in the loop, but that's not working.
Do I need to use threads? If yes, which part should I put in a thread: the image capturing, or saving the image to the SD card?
If anybody has some sample code for taking continuous images, please share.
Just put a counter in the jpegCallback function that decrements and calls your takePicture() again until the desired number of pictures is reached.
int pictureCounter = 10;
PictureCallback jpegCallback = new PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // save your picture here
        if (--pictureCounter >= 0) {
            takePicture();
        } else {
            pictureCounter = 10; // reset the counter
        }
    }
};
I know it is very late to reply, but I just came across this question and thought it would be helpful for future visitors.
PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // save the picture here
        preview.camera.stopPreview();
        // if more pictures are still needed, restart the preview (and call takePicture() again)
        preview.camera.startPreview();
    }
};

How do I program a stereo-capable graphics card to display stereo images?

I'd like to write my own stereo image viewer, because there are certain features I need which are missing from the one bundled with my NVidia/EVGA GTX 580.
I can't figure out how to program the card to enter "shutterglass" mode, where every other frame (at 120 Hz) alternates left and right.
I've looked at the OpenGL, Direct3D, and XNA APIs, as well as information from NVIDIA, and can't figure out how to get started. How do I set separate left and right images, how do I tell the screen to display them, and how do I tell the driver to activate the shutterglass transmitter?
(Another disconcerting thing is that whenever I use the bundled software to view stereo images and video in shutterglass mode, it's fullscreen, and the screen blinks when entering that mode, even though I run the screen at 120 Hz in 2D. Is there a way to have a 3D surface in a window without upsetting the rest of the screen on the NVidia "gamer" cards that are 3D capable (570, 580)?)
I'm a bit late to this, but I just got stereoscopic 3D working using nothing but a GTX 580 and OpenGL. No need for a Quadro card or DirectX.
I have the nVidia 3D Vision driver and IR emitter, and simply set the emitter to "Always on" in the nVidia control panel.
In my game engine, I switched to a full-screen mode at 120Hz and render the scene twice with a slight frustum offset (as per nVidia's own documentation PDF on the manual implementation, "2010_GTC2010.pdf").
No quad buffers or any other tricks needed, it works great. Plus, I am in control of all the settings, like convergence etc.
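For reference, here is a minimal sketch of what a "slight frustum offset" per eye can look like in fixed-function OpenGL (off-axis projection). The function name and the convergence/eyeSeparation parameters are my own illustration, not taken from this answer or from the NVIDIA PDF:
#include <GL/gl.h>
#include <math.h>

// eye = -1 for the left eye, +1 for the right eye
void applyEyeFrustum(double fovY, double aspect, double zNear, double zFar,
                     double convergence, double eyeSeparation, int eye)
{
    double top   = zNear * tan(fovY * 0.5); // fovY in radians
    double halfW = top * aspect;
    // shift the frustum horizontally so both views converge at 'convergence'
    double shift = eye * 0.5 * eyeSeparation * zNear / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfW - shift, halfW - shift, -top, top, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eye * 0.5 * eyeSeparation, 0.0, 0.0); // offset the camera for this eye
}
Call it once with eye = -1 and once with eye = +1 each frame, drawing the scene after each call into whichever buffer or half of the surface that eye is meant for.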
For NVidia 3D Vision with the GeForce range, you need to write a full-screen DirectX surface twice the width of the display, with the left image on the left and the right image on the right (duh).
Then you need to write a magic value into the bottom left of the image, which the NVision driver picks up and uses to turn on the glasses; you don't need nvapi.dll.
With the NVidia Pro glasses and a Quadro card you can use the regular OpenGL stereo API.
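As a rough illustration of that path (my own sketch, not code from this answer), quad-buffered stereo amounts to rendering each eye into its own back buffer; drawLeftEye() and drawRightEye() are placeholders for your own rendering:
// requires an OpenGL context / pixel format created with stereo (quad-buffer) support
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawLeftEye();

glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawRightEye();

// then swap buffers as usual (SwapBuffers, Qt, GLUT, ...)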
P.S. I did find some sample code that manages to do this with a normal window.
Edit: it was low-level USB code talking to the transmitter that I could never get to build; I think it eventually became this: http://sourceforge.net/projects/libnvstusb/
Here is some sample code for full screen with the NVision glasses.
I'm not a DirectX expert, so some of this might be less than optimal.
My app is also based on Qt; there might be some Qt bits left in the code.
-----------------------------------------------------------------
// header
void create3D();
void set3D();
IDirect3D9 *_d3d;
IDirect3DDevice9 *_d3ddev;
QSize _size; // full screen size
IDirect3DSurface9 *_imageBuf; //Source stereo image
IDirect3DSurface9 *_backBuf;
--------------------------------------------------------
// the code
#include <windows.h>
#include <windowsx.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <strsafe.h>
#pragma comment (lib, "d3d9.lib")
#define NVSTEREO_IMAGE_SIGNATURE 0x4433564e //NV3D
typedef struct _Nv_Stereo_Image_Header
{
unsigned int dwSignature;
unsigned int dwWidth;
unsigned int dwHeight;
unsigned int dwBPP;
unsigned int dwFlags;
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;
// OR'ed flags in the dwFlags field of the _Nv_Stereo_Image_Header structure above
#define SIH_SWAP_EYES 0x00000001
#define SIH_SCALE_TO_FIT 0x00000002
// call at start to set things up
void DisplayWidget::create3D()
{
_size = QSize(1680,1050); //resolution of my Samsung 2233z
_d3d = Direct3DCreate9(D3D_SDK_VERSION); // create the Direct3D interface
D3DPRESENT_PARAMETERS d3dpp; // create a struct to hold various device information
ZeroMemory(&d3dpp, sizeof(d3dpp)); // clear out the struct for use
d3dpp.Windowed = FALSE; // program fullscreen
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD; // discard old frames
d3dpp.hDeviceWindow = winId(); // set the window to be used by Direct3D
d3dpp.BackBufferFormat = D3DFMT_A8R8G8B8; // set the back buffer format to 32 bit // or D3DFMT_R8G8B8
d3dpp.BackBufferWidth = _size.width();
d3dpp.BackBufferHeight = _size.height();
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;
d3dpp.BackBufferCount = 1;
// create a device class using this information and information from the d3dpp stuct
_d3d->CreateDevice(D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
winId(),
D3DCREATE_SOFTWARE_VERTEXPROCESSING,
&d3dpp,
&_d3ddev);
//3D VISION uses a single surface 2x images wide and image high
// create the surface
_d3ddev->CreateOffscreenPlainSurface(_size.width()*2, _size.height(), D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &_imageBuf, NULL);
set3D();
}
// call to put 3d signature in image
void DisplayWidget::set3D()
{
// Lock the stereo image
D3DLOCKED_RECT lock;
_imageBuf->LockRect(&lock,NULL,0);
// write the stereo signature into the last row of the stereo image
LPNVSTEREOIMAGEHEADER pSIH = (LPNVSTEREOIMAGEHEADER)(((unsigned char *) lock.pBits) + (lock.Pitch * (_size.height()-1)));
// Update the signature header values
pSIH->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
pSIH->dwBPP = 32;
//pSIH->dwFlags = SIH_SWAP_EYES; // source image has left on left and right on right, that's why this flag is not needed
pSIH->dwFlags = SIH_SCALE_TO_FIT;
pSIH->dwWidth = _size.width() *2;
pSIH->dwHeight = _size.height();
// Unlock surface
_imageBuf->UnlockRect();
}
// call in display loop
void DisplayWidget::paintEvent()
{
// clear the window to a deep blue
//_d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);
_d3ddev->BeginScene(); // begins the 3D scene
// do 3D rendering on the back buffer here
RECT destRect;
destRect.left = 0;
destRect.top = 0;
destRect.bottom = _size.height();
destRect.right = _size.width();
// Get the Backbuffer then Stretch the Surface on it.
_d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &_backBuf);
_d3ddev->StretchRect(_imageBuf, NULL, _backBuf, &destRect, D3DTEXF_NONE);
_backBuf->Release();
_d3ddev->EndScene(); // ends the 3D scene
_d3ddev->Present(NULL, NULL, NULL, NULL); // displays the created frame
}
// my images come from a camera
// _left and _right are QImages but it should be obvious what the functions do
void DisplayWidget::getImages()
{
RECT srcRect;
srcRect.left = 0;
srcRect.top = 0;
srcRect.bottom = _size.height();
srcRect.right = _size.width();
RECT destRect;
destRect.top = 0;
destRect.bottom = _size.height();
if ( isOdd() ) {
destRect.left = _size.width();
destRect.right = _size.width()*2;
// get camera data for _left here, code not shown
D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect,_right.bits(),D3DFMT_A8R8G8B8,_right.bytesPerLine(),NULL,&srcRect,D3DX_DEFAULT,0);
} else {
destRect.left = 0;
destRect.right = _size.width();
// get camera data for _right here, code not shown
D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect,_left.bits(),D3DFMT_A8R8G8B8,_left.bytesPerLine(),NULL,&srcRect,D3DX_DEFAULT,0);
}
set3D(); // add NVidia signature
}
DisplayWidget::~DisplayWidget()
{
_d3ddev->Release(); // close and release the 3D device
_d3d->Release(); // close and release Direct3D
}

AudioQueue screws up output after modification

I am currently working on an audio DSP app. The project requires direct access to and modification of audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I encounter errors during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from the SpeakHere sample code. The rest remains unchanged.)
void AQPlayer::AQBufferCallback(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inCompleteAQBuffer)
{
AQPlayer *THIS = (AQPlayer *)inUserData;
if (THIS->mIsDone) return;
UInt32 numBytes;
UInt32 nPackets = THIS->GetNumPacketsToRead();
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
false,
&numBytes,
inCompleteAQBuffer->mPacketDescriptions,
THIS->GetCurrentPacket(),
&nPackets,
inCompleteAQBuffer->mAudioData);
if (result)
printf("AudioFileReadPackets failed: %d", (int)result);
if (nPackets > 0) {
inCompleteAQBuffer->mAudioDataByteSize = numBytes;
inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
//My modification starts from here
//Modifying audio data
SInt16 *testBuffer = (SInt16*)inCompleteAQBuffer->mAudioData;
for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize)/sizeof(SInt16); i++)
{
//printf("before modification %d", (int)*testBuffer);
*testBuffer = (SInt16) *testBuffer/2; //Say some simple modification
//printf("after modification %d", (int)*testBuffer);
testBuffer++;
}
AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
}
// (the rest of the original SpeakHere callback is unchanged)
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (add/subtract 0, multiply by 1), or the whole buffer is assigned to a constant (say 0, in which case the audio is muted), playback behaves normally (of course!). But if I do anything more than that, the output still turns into noise.
When I hard-code a single tone as test audio, the output noise spreads into the other channel as well.
So where is the bug in this code? Or if I am on the wrong track, what is the correct approach to modify the audio data and perform playback CORRECTLY? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You treat the audio as a single-channel stream of SInt16 samples, but suppose the format is actually dual-channel Float32 or similar and you do the modifications on that; then the effect would be exactly as you describe, including the noise on the other channel.
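A quick way to verify this is to read the file's data format before reinterpreting the buffer. A minimal sketch using Core Audio (audioFileID stands for the AudioFileID the callback above reads from; requires AudioToolbox):
#include <AudioToolbox/AudioToolbox.h>

AudioStreamBasicDescription fmt;
UInt32 size = sizeof(fmt);
OSStatus err = AudioFileGetProperty(audioFileID, kAudioFilePropertyDataFormat, &size, &fmt);
if (err == noErr) {
    printf("linear PCM: %d, float: %d, channels: %u, bits per channel: %u\n",
           fmt.mFormatID == kAudioFormatLinearPCM,
           (fmt.mFormatFlags & kAudioFormatFlagIsFloat) != 0,
           (unsigned)fmt.mChannelsPerFrame,
           (unsigned)fmt.mBitsPerChannel);
}
Casting mAudioData to SInt16* and halving samples is only valid for 16-bit signed-integer linear PCM; if the file is Float32 or a compressed format, modifying the raw bytes in place will produce exactly the noise described in the question.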