I recently found that glDrawArrays is allocating and releasing huge amounts of memory on every frame.
I suspect it's related to the "Shaders compiled outside of initialization" issue reported by the OpenGL Profiler, which occurs on every frame! Shouldn't it happen only once and disappear after the shaders are compiled?
EDIT: I also double-checked that my vertices are properly aligned, so I'm really confused about what memory the driver needs to allocate on every frame.
EDIT #2: I'm using VBOs and degenerate triangle strips to render sprites. I'm passing the geometry on every frame (GL_STREAM_DRAW).
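For reference, the per-frame upload is just the usual GL_STREAM_DRAW pattern, roughly like this (a simplified sketch, not my exact code; Vertex, vertexCount and vertices are placeholders):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Orphan the previous storage so the driver doesn't have to stall on data
// that may still be in use by the last frame.
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), NULL, GL_STREAM_DRAW);
// Upload this frame's sprite geometry.
glBufferSubData(GL_ARRAY_BUFFER, 0, vertexCount * sizeof(Vertex), vertices);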
EDIT #3:
I think I'm close to the issue but still unable to solve it. The problem disappears if I pass the same texture id value to the shader (see the comment in the source code). I think this issue is somehow related to the fragment shader.
In my sprite batch I have a list of sprites, and I render them by texture id in FIFO order.
Here's the source code of my sprite batch class:
void spriteBatch::renderInRange(shader& prog, int start, int count){
    int curTexture = textures[start];
    int startFrom = start;
    //Looping through all vertexes and rendering them by texture id's
    for(int i = start; i < start + count; ++i){
        if(textures[i] != curTexture || i == (start + count) - 1){
            //Problem occurs after uncommenting this line
            // prog.setUniform("texture", curTexture - 1);
            prog.setUniform("texture", 0); // if I pass the same texture id everything is OK
            int startVertex = startFrom * vertexesPerSprite;
            int cnt = ((i - startFrom) * vertexesPerSprite);
            //If the last one has the same texture we just add it
            //to the last render call
            if(i == (start + count) - 1 && textures[i] == curTexture)
                cnt = ((i + 1) - startFrom) * vertexesPerSprite;
            render(vbo, GL_TRIANGLE_STRIP, startVertex + 1, cnt - 1);
            //if the last element has a different texture
            //we need to render it separately
            if(i == (start + count) - 1 && textures[i] != curTexture){
                // prog.setUniform("texture", textures[i] - 1);
                render(vbo, GL_TRIANGLE_STRIP, (i * vertexesPerSprite) + 1, 5);
            }
            curTexture = textures[i];
            startFrom = i;
        }
    }
}
inline GLint getUniformLocation(GLuint shaderID, const string& name) {
    GLint iLocation = glGetUniformLocation(shaderID, name.data());
    if(iLocation == -1){ // shader variable not found
        stringstream errorText;
        errorText << "Uniform \"" << name << "\" was not found!";
        throw logic_error(errorText.str());
    }
    return iLocation;
}
void shader::setUniform(const string& name, const matrix& value) {
    GLint location = getUniformLocation(this->programID, name);
    glUniformMatrix4fv(location, 1, GL_FALSE, &(value[0]));
}

void shader::setUniform(const string& name, int value) {
    GLint iLocation = getUniformLocation(this->programID, name);
    //GLenum error = glGetError();
    glUniform1i(iLocation, value);
    // error = glGetError();
}
EDIT #4: I tried to profile the app on iOS 6 and an iPhone 5, and the allocations are much bigger, but the methods involved are different in this case. I'm attaching a new screenshot.
The issue is resolved by creating a separate shader for each texture.
It looks like a bug in the driver implementation that happens on all iOS devices (I tested on iOS 5/6). However, on newer iPhone models it's not that noticeable.
On the iPhone 4 the performance hit was very significant: from 60 FPS down to 38!
More code would help, but have you checked to see if the amount of memory involved is comparable to the amount of geometry you're updating? (although that would seem like a lot of geometry!) It looks like GL is holding your update until glDrawArrays, releasing it when it can be pulled into internal GL state.
If you can run the code in a Mac OS app, the OpenGL Profiler tool may be able to further isolate the condition (look in the Xcode documentation for more info if you're not familiar with this tool). I'd also suggest looking at texture use, given the amount of memory involved.
The easiest thing to do might be to conditionally break on malloc() for a large allocation, note the address, and examine what's been loaded there.
Try to query the texture uniform location just once (during initialization) and cache it. Calling glGetUniformLocation many times in one frame will hammer performance (depending on the sprite count).
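For example, something along these lines (a rough sketch; the map member and method name are hypothetical, not from the code above):
// Hypothetical cached lookup: resolve each uniform name once, then reuse it.
std::unordered_map<std::string, GLint> _uniformCache;

GLint shader::getCachedUniformLocation(const std::string& name) {
    auto it = _uniformCache.find(name);
    if (it != _uniformCache.end())
        return it->second;
    GLint location = glGetUniformLocation(this->programID, name.c_str());
    _uniformCache[name] = location; // cache even -1 so failed lookups aren't repeated
    return location;
}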
I hope there will be nothing confusing in what I'm going to talk about, because my mother tongue is not English and my grammar is poor. :p
I'm working on a mipmap analysis tool which needs to do calculations with the pixels of a render texture. Here's part of the C# code:
private IEnumerator CSGroupColor(RenderTexture rt, GroupColor[] groupColors)
{
    var outputBuffer = new ComputeBuffer(groupColors.Length, 8);
    csKernelID = cs.FindKernel("CSGroupColor");
    cs.SetTexture(csKernelID, "rt", rt);
    cs.SetBuffer(csKernelID, "groupColorOut", outputBuffer);
    cs.Dispatch(csKernelID, rt.width / 8, rt.height / 8, 1);

    var req = AsyncGPUReadback.Request(outputBuffer);
    yield return new WaitUntil(() => req.done);
    req.GetData<GroupColor>().CopyTo(groupColors);

    foreach (var color in groupColors)
    {
        if (!m_staticsDatas.TryGetValue(color.groupindex, out var vl))
            continue;
        if (color.value > 0)
            vl.allColors.Add(color.value);
    }
}
What I want to implement next is to make every buffer smaller (e.g. with a length of 4096), like we usually do in other asynchronous communication. Maybe I can pass the first buffer to the CPU right away when it's full, then replace it with a second buffer, and so on.
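Something like the following is the kind of ping-pong scheme I have in mind (just a rough sketch to show the idea; chunkSize, groupsX/groupsY and DispatchChunk are names I made up, not code I know works):
// Rough sketch of the ping-pong idea.
ComputeBuffer bufferA;
ComputeBuffer bufferB;
ComputeBuffer current;

void InitBuffers(int chunkSize)
{
    bufferA = new ComputeBuffer(chunkSize, 8);
    bufferB = new ComputeBuffer(chunkSize, 8);
    current = bufferA;
}

void DispatchChunk(int groupsX, int groupsY)
{
    // Point the kernel at whichever buffer is "current" for this chunk.
    cs.SetBuffer(csKernelID, "groupColorOut", current);
    cs.Dispatch(csKernelID, groupsX, groupsY, 1);

    // Kick off an async readback of the finished chunk while the other
    // buffer is filled by the next dispatch.
    AsyncGPUReadback.Request(current, request =>
    {
        if (!request.hasError)
        {
            var data = request.GetData<GroupColor>();
            // consume this chunk on the CPU here
        }
    });

    // Swap to the other buffer for the next chunk.
    current = (current == bufferA) ? bufferB : bufferA;
}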
As I see it, calling SetBuffer() again after req.done must be permitted for that to be viable. I have been searching the Internet all day for a sample usage, but have found nothing.
Can anyone give some help? Thanks very much.
My goal was to create an import script (AssetPostprocessor) that handles a few things for me whenever I drag a spritesheet (a picture that contains multiple frames for an animation) into a specific Unity folder.
I want the script to split the picture up into multiple frames (similar to when it's done manually, with Sprite Mode = Multiple), and then create an animation out of it. I parse the instructions (name, sprite frame size, frame holds) from the file name that gets handled.
Example: "Player_Walk_200x100_h3-3-3-3-3.png"
So far, I managed to accomplish all of those points, but whenever I create an animation and link the given Sprites to it, the resulting animation is "empty". As far as I could figure out, that's because the Texture2D and Sprite[] given to OnPostprocessSprites() seem to be temporary. The object exists during creation but seemingly gets discarded later on. How can I solve this? How can I grab a reference to the Sprite[] that doesn't get dropped?
Here is a stripped-down version of my current code:
void OnPostprocessTexture( Texture2D texture ) {
    // the texture is split into frames here; code not included since this part works fine and would complicate things.
}

void OnPostprocessSprites( Texture2D texture, Sprite[] sprites ) {
    if( !this.assetPath.Contains( "spritetest" ) ) return;

    Debug.Log( "Number of Sprites: " + sprites.Length ); // shows the correct number of sprites
    if( sprites.Length == 0 ) {
        AssetDatabase.ImportAsset( this.assetImporter.assetPath );
        return;
    }

    int frameRate = 24;
    AnimationClip clip = new AnimationClip();
    clip.frameRate = frameRate;

    EditorCurveBinding spriteBinding = new EditorCurveBinding();
    spriteBinding.type = typeof( SpriteRenderer );
    spriteBinding.path = "";
    spriteBinding.propertyName = "m_Sprite";

    ObjectReferenceKeyframe[] spriteKeyFrames = new ObjectReferenceKeyframe[ sprites.Length ];
    for( int i = 0; i < sprites.Length; i++ ) {
        spriteKeyFrames[ i ] = new ObjectReferenceKeyframe();
        spriteKeyFrames[ i ].time = i;
        spriteKeyFrames[ i ].value = sprites[ i ]; // these sprites are empty in the editor
    }
    AnimationUtility.SetObjectReferenceCurve( clip, spriteBinding, spriteKeyFrames );

    AssetDatabase.CreateAsset( clip, "Assets/Assets/spritetest/Test.anim" );
    AssetDatabase.SaveAssets();
    AssetDatabase.Refresh();
}
I've looked up forum posts, documentation, and even questions here (like Unity Editor Pipeline Scripting: how to Pass Sprite to SpriteRenderer in OnPostprocessSprites() Event?), but none really managed to resolve this problem: whenever I try to grab the "real" sprite or texture via AssetDatabase, the resulting object is always just null.
For example, if I try doing this:
// attempt 1
Sprite sprite = AssetDatabase.LoadAssetAtPath<Sprite>( this.assetPath );
Debug.Log( "Real Sprite: " + sprite ); // sprite is null

// attempt 2
Object[] objects = AssetDatabase.LoadAllAssetRepresentationsAtPath( this.assetPath );
Debug.Log( "Real Objects: " + objects.Length ); // is always 0
Calling AssetDatabase.SaveAssets() or AssetDatabase.Refresh() beforehand doesn't change the result. I find plenty of people with similar issues, but no code or examples that seem to really resolve this issue.
Here is another person with a similar issue: ( source: https://answers.unity.com/questions/1080430/create-animation-clip-from-sprites-programmaticall.html )
The given link didn't resolve my problem either.
Essentially: How can I create an animation from an imported asset, by using my generated sprites, in AssetPostprocessor?
I have lots of identical simple objects that affect gameplay, thousands of them! Well, not thousands, but really a lot. If I make them GameObjects, the FPS drops, especially when spawning them, even with pooling. I should try a different approach.
You know that the particle system in Unity3D can render many particles very fast. It also controls particles automatically: it emits and removes them. But in my case, the positions and lifetimes of the objects are managed by the game logic, and the particle system is not allowed to do anything without my command, not even reorder particles.
I am trying to use the SetParticles method to control particles. It works in a test project, where I use GetParticles first. I can even remove particles by setting their lifetime to -1, but I can't spawn new ones. Also, it does not prevent the particle system from controlling the particles.
I can disable emission so no particles will be created automatically.
I can set the particles' speed to 0 so they will not move.
I can set the lifetime to a huge number so they will not be removed.
I have a pool of Particle instances to avoid unnecessary GC allocations. Objects receive a reference to a particle when they are spawned, change it when updated, and set its lifetime to -1 and return it to the pool when deleted. The pool stores this:
private ParticleSystem.Particle [] _unusedParticles;
private int _unusedCount;
private ParticleSystem.Particle [] _array;
The _unused array and counter are needed for pooling; _array stores all particles, used and unused alike, and is used in the SetParticles call.
The main disadvantage of this method is that it doesn't work, probably because SetParticles does not create new particles. Also, I guess it doesn't do anything about particle draw order, which is why it's poorly suited for bullet hell games where bullet patterns should look nice.
What should I do to properly disable automatic control of particles and properly set up direct control, with spawning and removing?
What you are looking for might be
List<Matrix4x4> matrixes = new List<Matrix4x4>();
for (...)
{
    matrixes.Add(Matrix4x4.TRS(position, rotation, scale));
}
Graphics.DrawMeshInstanced(mesh, 0, material, matrixes);
Each frame you can just update the positions, rotations and scales and get all the instances rendered on the GPU in one draw call (pretty darn fast compared to separate GameObjects). You can render up to 1023 instances per call this way.
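One thing to keep in mind (as far as I know) is that the material needs GPU instancing enabled, otherwise the instanced call won't render:
material.enableInstancing = true; // required for Graphics.DrawMeshInstanced
Graphics.DrawMeshInstanced(mesh, 0, material, matrixes); // call every frame, e.g. from Update()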
Create an empty GameObject, then add the ParticleSystem as a child. Set playOnAwake to true.
Now, when you need it, call GameObject.SetActive(true), otherwise SetActive(false).
To modify each particle, use ParticleSystem.GetParticles, change the particles, then call ParticleSystem.SetParticles.
I hope this is what you are looking for.
ParticleSystem m_System;
ParticleSystem.Particle[] m_Particles;
public float m_Drift = 0.01f;

private void LateUpdate()
{
    InitializeIfNeeded();

    // GetParticles is allocation free because we reuse the m_Particles buffer between updates
    int numParticlesAlive = m_System.GetParticles(m_Particles);

    // Change only the particles that are alive
    for (int i = 0; i < numParticlesAlive; i++)
    {
        m_Particles[i].velocity += Vector3.up * m_Drift;
    }

    // Apply the particle changes to the Particle System
    m_System.SetParticles(m_Particles, numParticlesAlive);
}

void InitializeIfNeeded()
{
    if (m_System == null)
        m_System = GetComponent<ParticleSystem>();

    if (m_Particles == null || m_Particles.Length < m_System.main.maxParticles)
        m_Particles = new ParticleSystem.Particle[m_System.main.maxParticles];
}
After creating a particle system in the editor, we should disable the Emission and Shape modules, so that only the main module and the renderer stay active.
The most important part is that Simulation Speed must be zero. The particle system will then no longer emit, remove or process particles automatically; only your code manages them.
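If you prefer to set this up from a script instead of the inspector, the equivalent should be roughly this (a sketch using the standard module accessors; the maxParticles value is just an example):
var ps = GetComponent<ParticleSystem>();

var main = ps.main;
main.simulationSpeed = 0f;  // the system no longer moves, ages or removes particles on its own
main.maxParticles = 10000;  // example upper bound, pick whatever you need

var emission = ps.emission;
emission.enabled = false;   // no automatic emission

var shape = ps.shape;
shape.enabled = false;      // shape module off as well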
I use the class below to control particles. Instead of binding a particle to an object, it has an API for registering objects, smoke in this case. It also stores a temporary array of particles to avoid GC allocations, and a particle count, to avoid using the particleCount property of the particle system.
In Update, which is called by my game logic, the following happens:
All objects that were despawned by the game logic are removed from the list.
If the object count is greater than the array length, the array is resized.
If the object count is greater than the live particle count, a _particleSystem.Emit call is made. If we called SetParticles without that, no new particles would appear: we must emit them first.
GetParticles is called. This is simple but probably not the most efficient solution; however, it preserves particle data. It could be optimized by setting all particle data on array creation and resizing. If you do so, remove the GetParticles call and uncomment the line above it. Your game logic will then have to manage particles even more carefully.
For each object, we let that object change its particle.
Each particle without an object should be removed, so its lifetime is set to a negative number.
SetParticles updates the particles in the system.
using System;
using System.Collections.Generic;
using UnityEngine;

public class SmokeSystem {
    private ParticleSystem _particleSystem;
    private List<Smoke> _smoke = new List<Smoke>();
    private ParticleSystem.Particle[] _particles = new ParticleSystem.Particle[256];
    private int _particleCount;

    public SmokeSystem (ParticleSystem particleSystem) {
        _particleSystem = particleSystem;
    }

    public void AddSmoke (Smoke smoke) => _smoke.Add (smoke);

    public void Update () {
        _smoke.RemoveAll (e => e.Despawned);
        if (_smoke.Count > _particles.Length) {
            int newSize = Math.Max (_smoke.Count, 2 * _particles.Length);
            Array.Resize (ref _particles, newSize);
        }
        int count = _smoke.Count;
        if (count > _particleCount) {
            _particleSystem.Emit (count - _particleCount);
            // _particleCount = count;
        }
        _particleCount = _particleSystem.GetParticles (_particles);
        for (int i = 0; i < count; i++) {
            _smoke [i].UpdateParticle (ref _particles [i]);
        }
        for (int i = count; i < _particleCount; i++) {
            _particles [i].remainingLifetime = -1;
        }
        _particleSystem.SetParticles (_particles, _particleCount);
        _particleCount = count;
    }
}
It does not depend on GPU instancing support so it will work on WebGL.
I have searched for this a lot but have never seen any satisfactory answers, so now this is my last hope.
I have an onPreviewFrame callback set up, which gives a byte[] of raw frames in the supported preview format (NV21, with H.264 encoding).
Now, the problem is that the callback always gives byte[] frames in a fixed orientation; whenever the device rotates, it is not reflected in the captured byte[] frames. I have tried setDisplayOrientation and setRotation, but these APIs only affect the preview that is being displayed, not the captured byte[] frames at all.
The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
Finally: is there a way, at any API level, to change the orientation of the byte[] frames?
One possible way, if you don't care about the format, is to use the YuvImage class to get a JPEG buffer, use this buffer to create a Bitmap, and rotate it to the corresponding angle. Something like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size previewSize = camera.getParameters().getPreviewSize();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] rawImage = null;

    // Decode image from the retrieved buffer to JPEG
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), YOUR_JPEG_COMPRESSION, baos);
    rawImage = baos.toByteArray();

    // This is the same image as the preview but in JPEG and not rotated
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
    ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();

    // Rotate the Bitmap
    Matrix matrix = new Matrix();
    matrix.postRotate(YOUR_DEFAULT_ROTATION);

    // We rotate the same Bitmap
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, previewSize.width, previewSize.height, matrix, false);

    // We dump the rotated Bitmap to the stream
    bitmap.compress(CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
    rawImage = rotatedStream.toByteArray();

    // Do something with this byte array
}
I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method.
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;

    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];

                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);

                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);

                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method "YUV_NV21_TO_BGR", which I found at this link.
Basically this method is used to resolve what I call "the Green Devil problem on Android". You can see other Android devs facing the same problem in other SO threads. Before adding the "YUV_NV21_TO_BGR" method, when I just took the transpose of the YuvIplImage, and more importantly a combination of transpose and flip (with or without resizing), there was greenish output in the resulting video. This "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the above Google Groups thread.
Also, you should know that all this processing (transpose, flip and resize) in onPreviewFrame takes a lot of time, which causes a very serious hit to your frames per second (FPS) rate. When I used this code inside the onPreviewFrame method, the resulting FPS of the recorded video was down to 3 frames/sec from 30 fps.
I would advise against this approach. Instead, you can do post-recording processing (transpose, flip and resize) of your video file using JavaCV in an AsyncTask. Hope this helps.
I'd like to write my own stereo image viewer, because there are certain features I need which are missing from the one bundled with my NVidia/EVGA GTX 580.
I can't figure out how to program the card to enter "shutterglass" mode, where every other frame (at 120 Hz) alternates left and right.
I've looked at the OpenGL, Direct3D, and XNA APIs, as well as information from NVIDIA, and can't figure out how to get started. How do I set separate left and right images, how do I tell the screen to display them, and how do I tell the driver to activate the shutterglass transmitter?
(Another disconcerting thing is that whenever I use the bundled software to view stereo images and video in shutterglass mode, it's in fullscreen, and the screen blinks when entering that mode, even though I run the screen at 120 Hz in 2D. Is there a way to have a 3D surface in a window without upsetting the rest of the screen on the NVidia "gamer" cards that are 3D capable (570, 580)?)
I'm a bit late to this, but I just got stereoscopic 3D to work using nothing but a GTX 580 and OpenGL. No need for a Quadro card or DirectX.
I have the nVidia 3D Vision driver and IR emitter and simply set the emitter to "Always on" in the nVidia control panel.
In my game engine, I switched to a full-screen mode at 120 Hz and render the scene twice with a slight frustum offset (as per nVidia's own documentation PDF on the manual implementation, "2010_GTC2010.pdf").
No quad buffers or any other tricks needed, it works great. Plus, I am in control of all the settings, like convergence etc.
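The frustum offset itself is just an asymmetric (off-axis) projection per eye; roughly something like this (a sketch of the idea rather than my engine code, with eyeSeparation and convergence as made-up tuning parameters):
// eye = -1 for the left eye, +1 for the right eye; fovY is in radians.
void setEyeProjection(int eye, double fovY, double aspect,
                      double zNear, double zFar,
                      double eyeSeparation, double convergence)
{
    double top    = zNear * tan(fovY * 0.5);
    double bottom = -top;
    double shift  = 0.5 * eyeSeparation * zNear / convergence; // frustum asymmetry
    double left   = -aspect * top - shift * eye;
    double right  =  aspect * top - shift * eye;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar); // asymmetric frustum

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eye * eyeSeparation * 0.5, 0.0, 0.0); // offset the camera sideways
}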
For NVidia 3D Vision with the GeForce range, you need to write a full-screen DirectX surface twice the width of the display, with the left image on the left and the right image on the right (duh).
Then you need to write a magic value into the bottom-left of the image, which the NVision driver picks up and uses to turn on the glasses; you don't need nvapi.dll.
With the Nvidia pro glasses and a Quadro card you can use the regular OpenGL stereo API.
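For reference, that path looks roughly like this (a minimal sketch, assuming a context created with a stereo-capable pixel format; drawScene and the eye constants are placeholders):
// Quad-buffered stereo with the standard OpenGL API (needs a stereo pixel format,
// which consumer GeForce cards normally don't expose; Quadro cards do).
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(LEFT_EYE);              // placeholder: render the left-eye view
glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(RIGHT_EYE);             // placeholder: render the right-eye view
SwapBuffers(hdc);                 // presents both eyes together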
P.S. I did find some sample code that manages to do this with a normal window.
Edit: it was low-level USB code talking to the transmitter that I could never get to build; I think it eventually became this: http://sourceforge.net/projects/libnvstusb/
Here is some sample code for full screen with the NVision glasses.
I'm not a DirectX expert so some of this might be less than optimal.
My app is also based on Qt, so there might be some Qt bits left in the code.
-----------------------------------------------------------------
// header
void create3D();
void set3D();
IDirect3D9 *_d3d;
IDirect3DDevice9 *_d3ddev;
QSize _size; // full screen size
IDirect3DSurface9 *_imageBuf; //Source stereo image
IDirect3DSurface9 *_backBuf;
--------------------------------------------------------
// the code
#include <windows.h>
#include <windowsx.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <strsafe.h>
#pragma comment (lib, "d3d9.lib")
#define NVSTEREO_IMAGE_SIGNATURE 0x4433564e //NV3D
typedef struct _Nv_Stereo_Image_Header
{
    unsigned int dwSignature;
    unsigned int dwWidth;
    unsigned int dwHeight;
    unsigned int dwBPP;
    unsigned int dwFlags;
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;

// OR'ed flags in the dwFlags field of the _Nv_Stereo_Image_Header structure above
#define SIH_SWAP_EYES 0x00000001
#define SIH_SCALE_TO_FIT 0x00000002
// call at start to set things up
void DisplayWidget::create3D()
{
    _size = QSize(1680, 1050); // resolution of my Samsung 2233z

    _d3d = Direct3DCreate9(D3D_SDK_VERSION); // create the Direct3D interface

    D3DPRESENT_PARAMETERS d3dpp;              // create a struct to hold various device information
    ZeroMemory(&d3dpp, sizeof(d3dpp));        // clear out the struct for use
    d3dpp.Windowed = FALSE;                   // program fullscreen
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD; // discard old frames
    d3dpp.hDeviceWindow = winId();            // set the window to be used by Direct3D
    d3dpp.BackBufferFormat = D3DFMT_A8R8G8B8; // set the back buffer format to 32 bit // or D3DFMT_R8G8B8
    d3dpp.BackBufferWidth = _size.width();
    d3dpp.BackBufferHeight = _size.height();
    d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;
    d3dpp.BackBufferCount = 1;

    // create a device class using this information and information from the d3dpp struct
    _d3d->CreateDevice(D3DADAPTER_DEFAULT,
                       D3DDEVTYPE_HAL,
                       winId(),
                       D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                       &d3dpp,
                       &_d3ddev);

    // 3D VISION uses a single surface 2x image wide and image high
    // create the surface
    _d3ddev->CreateOffscreenPlainSurface(_size.width() * 2, _size.height(), D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &_imageBuf, NULL);

    set3D();
}
// call to put 3d signature in image
void DisplayWidget::set3D()
{
    // Lock the stereo image
    D3DLOCKED_RECT lock;
    _imageBuf->LockRect(&lock, NULL, 0);

    // write the stereo signature in the last row of the stereo image
    LPNVSTEREOIMAGEHEADER pSIH = (LPNVSTEREOIMAGEHEADER)(((unsigned char *) lock.pBits) + (lock.Pitch * (_size.height() - 1)));

    // Update the signature header values
    pSIH->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
    pSIH->dwBPP = 32;
    //pSIH->dwFlags = SIH_SWAP_EYES; // Src image has left on left and right on right, that's why this flag is not needed.
    pSIH->dwFlags = SIH_SCALE_TO_FIT;
    pSIH->dwWidth = _size.width() * 2;
    pSIH->dwHeight = _size.height();

    // Unlock the surface
    _imageBuf->UnlockRect();
}
// call in display loop
void DisplayWidget::paintEvent()
{
    // clear the window to a deep blue
    //_d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);

    _d3ddev->BeginScene(); // begins the 3D scene

    // do 3D rendering on the back buffer here
    RECT destRect;
    destRect.left = 0;
    destRect.top = 0;
    destRect.bottom = _size.height();
    destRect.right = _size.width();

    // Get the Backbuffer then Stretch the Surface on it.
    _d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &_backBuf);
    _d3ddev->StretchRect(_imageBuf, NULL, _backBuf, &destRect, D3DTEXF_NONE);
    _backBuf->Release();

    _d3ddev->EndScene();                      // ends the 3D scene
    _d3ddev->Present(NULL, NULL, NULL, NULL); // displays the created frame
}
// my images come from a camera
// _left and _right are QImages, but it should be obvious what the functions do
void DisplayWidget::getImages()
{
    RECT srcRect;
    srcRect.left = 0;
    srcRect.top = 0;
    srcRect.bottom = _size.height();
    srcRect.right = _size.width();

    RECT destRect;
    destRect.top = 0;
    destRect.bottom = _size.height();

    if ( isOdd() ) {
        destRect.left = _size.width();
        destRect.right = _size.width() * 2;
        // get camera data for _right here, code not shown
        D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect, _right.bits(), D3DFMT_A8R8G8B8, _right.bytesPerLine(), NULL, &srcRect, D3DX_DEFAULT, 0);
    } else {
        destRect.left = 0;
        destRect.right = _size.width();
        // get camera data for _left here, code not shown
        D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect, _left.bits(), D3DFMT_A8R8G8B8, _left.bytesPerLine(), NULL, &srcRect, D3DX_DEFAULT, 0);
    }

    set3D(); // add NVidia signature
}
DisplayWidget::~DisplayWidget()
{
    _d3ddev->Release(); // close and release the 3D device
    _d3d->Release();    // close and release Direct3D
}