iOS: Get pixel-by-pixel data from camera

I'm aware of AVFoundation and its capture support (not too familiar though). However, I don't see any readily-accessible API to get pixel-by-pixel data (RGB-per-pixel or similar). I do recall reading in the docs that this is possible, but I don't really see how. So:
Can this be done? If so, how?
Would I be getting raw image data, or data that's been JPEG-compressed?

AV Foundation can give you back the raw bytes for an image captured by either the video or still camera. You need to set up an AVCaptureSession with an appropriate AVCaptureDevice, a corresponding AVCaptureDeviceInput, and an output such as an AVCaptureVideoDataOutput or AVCaptureStillImageOutput. Apple has some examples of this process in their documentation, and it requires a fair amount of boilerplate code to configure.
Once you have your capture session configured and you are capturing data from the camera, you implement the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, where one of the parameters is a CMSampleBufferRef. That contains a CVImageBufferRef, which you access via CMSampleBufferGetImageBuffer(). After locking the pixel buffer with CVPixelBufferLockBaseAddress(), calling CVPixelBufferGetBaseAddress() returns the base address of the byte array of raw pixel data for your camera frame. This can be in a few different formats, but the most common are BGRA and planar YUV.
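For illustration, a minimal sketch of that delegate method (the session setup is omitted, and the AVCaptureVideoDataOutput is assumed to be configured to deliver kCVPixelFormatType_32BGRA frames):

// Implemented by the object set as the video output's sampleBufferDelegate.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // BGRA: 4 bytes per pixel; rows may be padded, so step by bytesPerRow, not width * 4.
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = baseAddress + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            uint8_t blue  = row[x * 4 + 0];
            uint8_t green = row[x * 4 + 1];
            uint8_t red   = row[x * 4 + 2];
            // ... do something with red, green, blue ...
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}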
I have an example application that uses this here, but I'd recommend that you also take a look at my open source framework which wraps the standard AV Foundation boilerplate and makes it easy to perform image processing on the GPU. Depending on what you want to do with these raw camera bytes, I may already have something you can use there or a means of doing it much faster than with on-CPU processing.

// Example fragment shader for processing the camera texture on the GPU: keep a rectangular region of the frame and black everything else out.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721);

void main() {
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    mediump float luminance = dot(textureColor.rgb, W); // computed but unused here
    mediump vec2 p = textureCoordinate;
    if (p.x > 0.2 && p.x < 0.6 && p.y > 0.4 && p.y < 0.6) {
        gl_FragColor = vec4(textureColor.r, textureColor.g, textureColor.b, textureColor.a);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    }
}

Related

How to read out an .rgba16Float MTLTexture via getBytes()

I have problems reading out an MTLTexture that has a pixel format of .rgba16Float; the main reason is that Swift does not seem to have a corresponding SIMD4 element type.
For .rgba32Float I can simply use SIMD4<Float>, like so:
if let texture = texture {
    let region = MTLRegionMake2D(x, y, 1, 1)
    let texArray = Array<SIMD4<Float>>(repeating: SIMD4<Float>(repeating: 0), count: 1)
    texture.getBytes(UnsafeMutableRawPointer(mutating: texArray),
                     bytesPerRow: MemoryLayout<SIMD4<Float>>.size * texture.width,
                     from: region,
                     mipmapLevel: 0)
    let value = texArray[0]
}
This works fine since the Swift Float data type is 32-bit. How can I do the same for a 16-bit .rgba16Float texture?
You can use vImage to convert the buffer from 16-bit float to 32-bit float first. Check out vImageConvert_Planar16FtoPlanarF. Note, though, that the documentation for it on the site is wrong (it was copied from another function). I found this utility that demonstrates the process.
It would be more efficient, however, if you could use Metal to convert the texture into 32-bit float (or render into a 32-bit texture in the first place).
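For illustration, a sketch of the vImage route: read the four half-float components into a UInt16 buffer and let vImageConvert_Planar16FtoPlanarF widen them to Float. The helper name readRGBA16Float is mine, and the texture is assumed to be CPU-readable (shared or managed storage, not framebuffer-only or .private):

import Accelerate
import Metal

func readRGBA16Float(from texture: MTLTexture, x: Int, y: Int) -> SIMD4<Float> {
    // Pull the 4 half-float (16-bit) components of one pixel out of the texture.
    var raw16 = [UInt16](repeating: 0, count: 4)
    let region = MTLRegionMake2D(x, y, 1, 1)
    raw16.withUnsafeMutableBytes { ptr in
        texture.getBytes(ptr.baseAddress!,
                         bytesPerRow: 4 * MemoryLayout<UInt16>.stride,
                         from: region,
                         mipmapLevel: 0)
    }

    // Widen the four 16-bit floats to 32-bit floats with vImage.
    var result = [Float](repeating: 0, count: 4)
    raw16.withUnsafeMutableBytes { srcPtr in
        result.withUnsafeMutableBytes { dstPtr in
            var src = vImage_Buffer(data: srcPtr.baseAddress, height: 1, width: 4,
                                    rowBytes: 4 * MemoryLayout<UInt16>.stride)
            var dst = vImage_Buffer(data: dstPtr.baseAddress, height: 1, width: 4,
                                    rowBytes: 4 * MemoryLayout<Float>.stride)
            let err = vImageConvert_Planar16FtoPlanarF(&src, &dst, vImage_Flags(kvImageNoFlags))
            assert(err == kvImageNoError)
        }
    }
    return SIMD4<Float>(result[0], result[1], result[2], result[3])
}

You would then call it as let value = readRGBA16Float(from: texture, x: x, y: y).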

In Unity, how to segment the user's voice from microphone based on loudness?

I need to collect voice pieces from a continuous audio stream. I later need to process the piece of the user's voice that has just been said (not for speech recognition). What I am focusing on is only segmenting the voice based on its loudness.
If, after at least 1 second of silence, the voice becomes loud enough for a while and then goes silent again for at least 1 second, I treat that as a sentence and want to segment the audio there.
I just know I can get raw audio data from the AudioClip created by Microphone.Start(). I want to write some code like this:
void Start()
{
    audio = Microphone.Start(deviceName, true, 10, 16000);
}

void Update()
{
    audio.GetData(fdata, 0);
    for (int i = 0; i < fdata.Length; i++)
    {
        u16data[i] = Convert.ToUInt16(fdata[i] * 65535);
    }
    // ... Process u16data
}
But what I'm not sure is:
When I call audio.GetData(fdata, 0) every frame, do I get the latest 10 seconds of sound data if fdata is big enough, or a shorter span if it is not?
fdata is a float array, and what I need is a 16 kHz, 16-bit PCM buffer. Is it right to convert the data like u16data[i] = fdata[i] * 65535?
What is the right way to detect loud moments and silent moments in fdata?
No, you have to read starting at the current position within the AudioClip using Microphone.GetPosition:
Get the position in samples of the recording.
and pass the obtained index to AudioClip.GetData:
Use the offsetSamples parameter to start the read from a specific position in the clip
fdata = new float[clip.samples * clip.channels];
var currentIndex = Microphone.GetPosition(null);
audio.GetData(fdata, currentIndex);
I don't understand what exactly you are converting this for. fdata will contain
floats ranging from -1.0f to 1.0f (AudioClip.GetData)
so if for some reason you need signed values between short.MinValue (= -32768) and short.MaxValue (= 32767), then yes, you can do that (into a short[], say s16data) using
s16data[i] = Convert.ToInt16(fdata[i] * short.MaxValue);
Note however that Convert.ToInt16(float) returns
value, rounded to the nearest 16-bit signed integer. If value is halfway between two whole numbers, the even number is returned; that is, 4.5 is converted to 4, and 5.5 is converted to 6.
You might rather use Mathf.RoundToInt first so that e.g. 4.5 is rounded up:
s16data[i] = Convert.ToInt16(Mathf.RoundToInt(fdata[i] * short.MaxValue));
Your naming, however, suggests that you are actually trying to get unsigned ushort (UInt16) values. For those you cannot have negative values, so you have to shift the float values up to map the range (-1.0f | 1.0f) to the range (0.0f | 1.0f) before multiplying by ushort.MaxValue (= 65535):
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt((fdata[i] + 1f) / 2f * ushort.MaxValue));
What you receive from AudioClip.GetData are the amplitude values of the audio track, between -1.0f and 1.0f.
so a "loud" moment would be where
Mathf.Abs(fdata[i]) >= aCertainLoudThreshold;
a "silent" moment would be where
Mathf.Abs(fdata[i]) <= aCertainSilentThreshold;
where aCertainSilentThreshold might e.g. be 0.2f and aCertainLoudThreshold might e.g. be 0.8f.
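Putting this together, a rough sketch of the whole loop could look like the following. Everything here is illustrative (the class, the threshold values and OnSegmentCaptured are mine; only Microphone, AudioClip and Mathf are Unity API), and the required 1 second of leading silence is left out for brevity:

using System.Collections.Generic;
using UnityEngine;

public class VoiceSegmenter : MonoBehaviour
{
    const int SampleRate = 16000;
    const float LoudThreshold = 0.2f;   // |sample| above this counts as voice
    const float SilenceSeconds = 1f;    // required trailing silence to close a sentence

    AudioClip clip;
    int lastPosition;
    bool inSegment;
    float silenceTimer;
    readonly List<float> segment = new List<float>();

    void Start()
    {
        clip = Microphone.Start(null, true, 10, SampleRate);
    }

    void Update()
    {
        int position = Microphone.GetPosition(null);
        if (position == lastPosition) return;

        // Read only the samples recorded since the last frame; the clip is a 10 s ring buffer
        // and AudioClip.GetData wraps around its end.
        int count = (position - lastPosition + clip.samples) % clip.samples;
        var fdata = new float[count];
        clip.GetData(fdata, lastPosition);
        lastPosition = position;

        foreach (float sample in fdata)
        {
            if (Mathf.Abs(sample) >= LoudThreshold)
            {
                inSegment = true;
                silenceTimer = 0f;
            }
            else if (inSegment)
            {
                silenceTimer += 1f / SampleRate;
            }

            if (inSegment) segment.Add(sample);

            // Enough trailing silence -> treat the collected samples as one sentence.
            if (inSegment && silenceTimer >= SilenceSeconds)
            {
                OnSegmentCaptured(segment.ToArray());
                segment.Clear();
                inSegment = false;
                silenceTimer = 0f;
            }
        }
    }

    void OnSegmentCaptured(float[] samples)
    {
        Debug.Log("Captured a segment of " + samples.Length + " samples");
        // ... convert to 16-bit PCM / process the sentence here ...
    }
}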

Does GLKit limit me to two attributes?

I've been working with some GLKit code for the past few days that has a color attribute and a position attribute, but when I try to add a normal attribute it crashes every time.
Vertex Shader:
attribute vec4 SourceColor;
attribute vec4 aVertexPosition;
attribute vec4 aVertexNormal;
varying vec4 DestinationColor;
uniform mat4 uPMatrix; /* perspectiveView matrix */
uniform mat4 uVMatrix; /* view matrix */
uniform mat4 uOMatrix; /* object matrix */
uniform mat4 Projection;
uniform mat4 ModelView;
uniform float u_time;
void main(void) {
    DestinationColor = SourceColor;
    gl_Position = aVertexPosition * Projection;
}
Code:
self.colorSlot = glGetAttribLocation(programHandle, "SourceColor")
self.positionSlot = glGetAttribLocation(programHandle, "aVertexPosition")
self.normalSlot = glGetAttribLocation(programHandle, "aVertexNormal")
glEnableVertexAttribArray(GLuint(self.positionSlot))
glEnableVertexAttribArray(GLuint(self.colorSlot))
glEnableVertexAttribArray(GLuint(self.normalSlot)) // <- crashes here
As found through the comments, the reason for this crash is that self.normalSlot was -1, which glGetAttribLocation returns when it cannot find an active attribute with the specified name. That -1 was then cast with GLuint(self.normalSlot), producing a huge unsigned value that is not a valid attribute index, and enabling that vertex attribute array crashed. So before using an attribute or uniform location you should check that it was actually retrieved: valid locations are non-negative, so test location >= 0.
Still, the attribute was present in the shader source yet its location was not retrieved. The reason is that the attribute is never used in the shader, so the compiler is free to optimize it away. In other words, you cannot force the attribute to exist by merely declaring it in the vertex shader; you also need to use it. Another way to force a location is glBindAttribLocation (called before linking), but I would not expect that to guarantee the attribute survives, so you should still check for GL errors after using it to avoid additional issues. A minimal guard is sketched after the note below.
Note:
If you are using glBindAttribLocation make sure you completely understand its documentation. It is very easy to lose track of these indices and you should have a smart system or personal standards on how you index the attributes.
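For example, a minimal guard in the questioner's Swift code (a sketch, reusing the property names from the question) would be:

self.normalSlot = glGetAttribLocation(programHandle, "aVertexNormal")
if self.normalSlot >= 0 {
    glEnableVertexAttribArray(GLuint(self.normalSlot))
} else {
    // -1 means the attribute is not active, most likely optimized away because it is unused.
    print("aVertexNormal not found; is it actually used in the shader?")
}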
Absolutely agree with Matik, but a limit on the number of attributes really does exist. It can be checked via GL_MAX_VERTEX_ATTRIBS:
int max;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &max);
NSLog(#"GL_MAX_VERTEX_ATTRIBS=%#", #(max));
I got GL_MAX_VERTEX_ATTRIBS=16 for my iPod 5. This is much more than two.

Repeated Scene items in iOS YUV video capturing output

I capture a video and handle the resulting YUV frames.
The output looks like the screenshot I attached (not reproduced here): every item is repeated and shifted by some amount horizontally and vertically. It appears normal on my phone's screen; it is my peer who receives it like that.
My captured video is 352x288, with YPixelCount = 101376 and UVPixelCount = YPixelCount / 4.
Any clue how to solve this, or a starting point for understanding how to handle YUV video frames on iOS?
This is how I configure the capture session:
NSNumber* recorderValue = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
[videoRecorderSession setSessionPreset:AVCaptureSessionPreset352x288];
And this is the captureOutput function
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (CMSampleBufferIsValid(sampleBuffer) && CMSampleBufferDataIsReady(sampleBuffer) && ([self isQueueStopped] == FALSE))
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        UInt8 *baseAddress[3] = {NULL, NULL, NULL};

        // yPointer, uPointer and vPointer are preallocated buffers (declarations not shown).
        uint8_t *yPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        UInt32 yPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 0) * CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
        uint8_t *uvPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        UInt32 uvPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 1) * CVPixelBufferGetHeightOfPlane(imageBuffer, 1);

        UInt32 p, q, r;
        p = q = r = 0; // (unused)

        memcpy(uPointer, uvPlaneAddress, uvPixelCount);
        memcpy(vPointer, uvPlaneAddress + uvPixelCount, uvPixelCount);
        memcpy(yPointer, yPlaneAddress, yPixelCount);

        baseAddress[0] = (UInt8 *)yPointer;
        baseAddress[1] = (UInt8 *)uPointer;
        baseAddress[2] = (UInt8 *)vPointer;

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
}
Is there anything wrong with the above code ?
Your code doesn't look too bad. I can see two mistakes and one potential problem:
The uvPixelCount is incorrect. The YUV 420 format means that there is color information for each 2 by 2 pixel block. So the correct count is:
uvPixelCount = (width / 2) * (height / 2);
You write something about yPixelCount / 4, but I cannot see that in your code.
The UV information is interleaved, i.e. the second plane alternatingly contains a U and a V value. Or put differently: there's a U value on all even byte addresses and a V value on all odd byte addresses. If you really need to separate the U and V information, memcpy won't do.
There can be some extra bytes after each pixel row. You should use CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) to get the number of bytes between two rows. As a consequence, a single memcpy won't do. Instead you need to copy each pixel row separately to get rid of the extra bytes between the rows.
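To illustrate the last two points (the UV de-interleaving and the per-row padding), here is a rough sketch, reusing imageBuffer and the yPointer/uPointer/vPointer buffers from your code. It assumes those are uint8_t * buffers preallocated to width*height bytes for Y and (width/2)*(height/2) bytes each for U and V:

// Copy the Y plane row by row, dropping any padding bytes at the end of each row.
size_t yWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t yHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
uint8_t *ySrc  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
for (size_t row = 0; row < yHeight; row++) {
    memcpy(yPointer + row * yWidth, ySrc + row * yStride, yWidth);
}

// De-interleave the CbCr plane: even bytes are U, odd bytes are V.
size_t uvWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);   // chroma samples per row (width / 2)
size_t uvHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);  // height / 2
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
uint8_t *uvSrc  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
for (size_t row = 0; row < uvHeight; row++) {
    uint8_t *src = uvSrc + row * uvStride;
    for (size_t i = 0; i < uvWidth; i++) {
        uPointer[row * uvWidth + i] = src[2 * i];
        vPointer[row * uvWidth + i] = src[2 * i + 1];
    }
}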
All these things only explain part of the resulting image. The remaining artifacts are probably due to differences between your code and what the receiving peer expects, and you didn't write anything about that. Does the peer really need separate U and V planes? Does it use 4:2:0 subsampling as well? Does it use video range instead of full range as well?
If you provide more information, I can give you more hints.

iPhone audio analysis

I'm looking into developing an iPhone app that will potentially involve a "simple" analysis of audio it is receiving from the standard phone mic. Specifically, I am interested in the highs and lows the mic picks up; everything in between is irrelevant to me. Is there an app that does this already (just so I can see what it's capable of)? And where should I look to get started on such code? Thanks for your help.
Look in the Audio Queue framework. This is what I use to get a high water mark:
AudioQueueRef audioQueue; // Imagine this is correctly set up
UInt32 dataSize = sizeof(AudioQueueLevelMeterState) * recordFormat.mChannelsPerFrame; // recordFormat: the recording AudioStreamBasicDescription
AudioQueueLevelMeterState *levels = (AudioQueueLevelMeterState *)malloc(dataSize);
float channelAvg = 0;
OSStatus rc = AudioQueueGetProperty(audioQueue, kAudioQueueProperty_CurrentLevelMeter, levels, &dataSize);
if (rc) {
    NSLog(@"AudioQueueGetProperty(CurrentLevelMeter) returned %d", (int)rc);
} else {
    for (int i = 0; i < recordFormat.mChannelsPerFrame; i++) {
        channelAvg += levels[i].mPeakPower;
    }
}
free(levels);
// This works because one channel always has an mAveragePower of 0.
return channelAvg;
You can get peak power in either dB Free Scale (with kAudioQueueProperty_CurrentLevelMeterDB) or simply as a float in the interval [0.0, 1.0] (with kAudioQueueProperty_CurrentLevelMeter).
Don't forget to activate level metering for AudioQueue first:
UInt32 d = 1;
OSStatus status = AudioQueueSetProperty(mQueue, kAudioQueueProperty_EnableLevelMetering, &d, sizeof(UInt32));
Check the 'SpeakHere' sample code. It will show you how to record audio using the Audio Queue API, and it also contains some code that analyzes the audio in real time to show a level meter.
You might actually be able to use most of that level meter code to respond to 'highs' and 'lows'.
The aurioTouch example code performs Fourier analysis on the mic input. Could be a good starting point:
https://developer.apple.com/iPhone/library/samplecode/aurioTouch/index.html
Probably overkill for your application.