PlatformIO in VSCode

I downloaded the latest version of VS Code with PlatformIO, and I also installed the Mouse library from the PlatformIO Library Manager. Even so, after I upload the code to my Micro Pro, the mouse does not respond to the joystick, yet the same code works when I upload it via the Arduino IDE.
I compared the Mouse.h and Mouse.cpp from .platformio/lib with the Mouse.h and Mouse.cpp from Program Files\Arduino\libraries, and they have exactly the same code.
This is my code for my Micro Pro (32U4, 5 V):
/* HID Joystick Mouse Example
by: Jim Lindblom
date: 1/12/2012
license: MIT License - Feel free to use this code for any purpose.
No restrictions. Just keep this license if you go on to use this
code in your future endeavors! Reuse and share.
This is very simplistic code that allows you to turn the
SparkFun Thumb Joystick (http://www.sparkfun.com/products/9032)
into an HID Mouse. The select button on the joystick is set up
as the mouse left click.
*/
#include <Arduino.h>
#include <Mouse.h>
int horzPin = A0; // Analog output of horizontal joystick pin
int vertPin = A1; // Analog output of vertical joystick pin
int selPin = 9; // select button pin of joystick
int vertZero, horzZero; // Stores the initial value of each axis, usually around 512
int vertValue, horzValue; // Stores current analog output of each axis
const int sensitivity = 200; // Higher sensitivity value = slower mouse, should be <= about 500
int mouseClickFlag = 0;
void setup()
{
  pinMode(horzPin, INPUT);  // Set both analog pins as inputs
  pinMode(vertPin, INPUT);
  pinMode(selPin, INPUT);   // set button select pin as input
  digitalWrite(selPin, HIGH);  // Pull button select pin high
  delay(1000);  // short delay to let outputs settle
  vertZero = analogRead(vertPin);  // get the initial values
  horzZero = analogRead(horzPin);  // Joystick should be in neutral position when reading these
}
void loop()
{
  vertValue = analogRead(vertPin) - vertZero;  // read vertical offset
  horzValue = analogRead(horzPin) - horzZero;  // read horizontal offset
  //delay(3000);
  if (vertValue != 0)
    Mouse.move(0, vertValue / sensitivity, 0);  // move mouse on y axis
  if (horzValue != 0)
    Mouse.move((horzValue / sensitivity) * -1, 0, 0);  // move mouse on x axis
  if ((digitalRead(selPin) == 0) && (!mouseClickFlag))  // if the joystick button is pressed
  {
    mouseClickFlag = 1;
    Mouse.press(MOUSE_LEFT);  // click the left button down
  }
  else if ((digitalRead(selPin)) && (mouseClickFlag))  // if the joystick button is not pressed
  {
    mouseClickFlag = 0;
    Mouse.release(MOUSE_LEFT);  // release the left button
  }
}

Did you restart VS Code after installing the library? For me that fixes this kind of problem in some cases.
Edit: Did you run "Rebuild IntelliSense Index" by pressing "Ctrl + Shift + P"?
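It may also be worth checking that the library is declared for the project itself; in PlatformIO, project dependencies usually live in lib_deps of platformio.ini. A minimal example might look like this (the board id and the library's registry name below are assumptions on my part, so adjust them to your setup):
; minimal platformio.ini sketch - board id and library name are assumptions
[env:promicro]
platform = atmelavr
; "micro" or "sparkfun_promicro16", depending on which 32U4 board you have
board = micro
framework = arduino
lib_deps =
    arduino-libraries/Mouse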

Related

How to change the zoom centerpoint in an ILNumerics scene viewed with a camera

I would like to be able to zoom into an ILNumerics scene viewed by a camera (as in scene.Camera) with the center point of the zoom determined by where the mouse pointer is located when I start spinning the mouse scroll wheel. The default zoom behavior is for the zoom center to be at the scene.Camera.LookAt point. So I guess this would require the mouse to be tracked in (X,Y) continuously and for that point to be used as the new LookAt point? This seems to be like this post on getting the 3D coordinates from a mouse click, but in my case there's no click to indicate the location of the mouse.
Tips would be greatly appreciated!
BTW, this kind of zoom method is standard operating procedure in CAD software to zoom in and out on an assembly of parts. It's super convenient for the user.
One approach is to handle the Camera's MouseWheel event. The current coordinates of the mouse are available there, too.
Use the mouse screen coordinates to acquire (to "pick") the world coordinate corresponding to the primitive under the mouse.
Adjust the Camera.Position and Camera.ZoomFactor to 'move' the camera closer to the point under the mouse and to achieve the required 'directional zoom' effect.
Here is a complete example from the ILNumerics website:
using System;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
using static ILNumerics.Globals;
using static ILNumerics.ILMath;
namespace ILNumerics.Examples.DirectionalZoom {
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private void panel2_Load(object sender, EventArgs e) {
Array<float> X = 0, Y = 0, Z = CreateData(X, Y);
var surface = new Surface(Z, X, Y, colormap: Colormaps.Winter);
surface.UseLighting = true;
surface.Wireframe.Visible = false;
panel2.Scene.Camera.Add(surface);
// setup mouse handlers
panel2.Scene.Camera.Projection = Projection.Orthographic;
panel2.Scene.Camera.MouseDoubleClick += Camera_MouseDoubleClick;
panel2.Scene.Camera.MouseWheel += Camera_MouseWheel;
// initial zoom all
ShowAll(panel2.Scene.Camera);
}
private void Camera_MouseWheel(object sender, Drawing.MouseEventArgs e) {
// Update: added comments.
// the next conditionals help to sort out some calls not needed. Helpful for performance.
if (!e.DirectionUp) return;
if (!(e.Target is Triangles)) return;
// make sure to start with the SceneSyncRoot - the copy of the scene which receives
// user interaction and is eventually used for rendering. See: https://ilnumerics.net/scene-management.html
var cam = panel2.SceneSyncRoot.First<Camera>();
if (Equals(cam, null)) return; // TODO: error handling. (Should not happen in regular setup, though.)
// in case the user has configured limited interaction
if (!cam.AllowZoom) return;
if (!cam.AllowPan) return; // this kind of directional zoom "comprises" a pan operation, to some extent.
// find mouse coordinates. Works only if mouse is over a Triangles shape (surfaces, but not wireframes):
using (var pick = panel2.PickPrimitiveAt(e.Target as Drawable, e.Location)) {
if (pick.NextVertex.IsEmpty) return;
// acquire the target vertex coordinates (world coordinates) of the mouse
Array<float> vert = pick.VerticesWorld[pick.NextVertex[0], r(0, 2), 0];
// and transform them into a Vector3 for easier computations
var vertVec = new Vector3(vert.GetValue(0), vert.GetValue(1), vert.GetValue(2));
// perform zoom: we move the camera closer to the target
float scale = Math.Sign(e.Delta) * (e.ShiftPressed ? 0.01f : 0.2f); // adjust for faster / slower zoom
var offs = (cam.Position - vertVec) * scale; // direction on the line cam.Position -> target vertex
cam.Position += offs; // move the camera on that line
cam.LookAt += offs; // keep the camera orientation
cam.ZoomFactor *= (1 + scale);
// TODO: consider: the LookAt point has now moved away from the center / the surface due to our zoom.
// For better rotations it makes sense to place the LookAt point back onto the surface
// by adjusting cam.LookAt appropriately. Alternatively, one could use cam.RotationCenter.
e.Cancel = true; // don't execute common mouse wheel handlers
e.Refresh = true; // immediate redraw at the end of event handling
}
}
private void Camera_MouseDoubleClick(object sender, Drawing.MouseEventArgs e) {
var cam = panel2.Scene.Camera;
ShowAll(cam);
e.Cancel = true;
e.Refresh = true;
}
// Some sample data. Replace this with your own data!
private static RetArray<float> CreateData(OutArray<float> Xout, OutArray<float> Yout) {
using (Scope.Enter()) {
Array<float> x_ = linspace<float>(0, 20, 100);
Array<float> y_ = linspace<float>(0, 18, 80);
Array<float> Y = 1, X = meshgrid(x_, y_, Y);
Array<float> Z = abs(sin(sin(X) + cos(Y))) + .01f * abs(sin(X * Y));
if (!isnull(Xout)) {
Xout.a = X;
}
if (!isnull(Yout)) {
Yout.a = Y;
}
return -Z;
}
}
// See: https://ilnumerics.net/examples.php?exid=7b0b4173d8f0125186aaa19ee8e09d2d
public static double ShowAll(Camera cam) {
// Update: adjusts the camera Position too.
// this example works only with orthographic projection. You will need to take the view frustum
// into account if you want to make this method work with perspective projection as well; however,
// the general approach would be similar.
if (cam.Projection != Projection.Orthographic) {
throw new NotImplementedException();
}
// get the overall extent of the camera's scene content
var limits = cam.GetLimits();
// take the maximum of width/ height
var maxExt = limits.HeightF > limits.WidthF ? limits.HeightF : limits.WidthF;
// make sure the camera looks at the unrotated bounding box
cam.Reset();
// center the camera view
cam.LookAt = limits.CenterF;
cam.Position = cam.LookAt + Vector3.UnitZ * 10;
// apply the zoom factor: the zoom factor will scale the 'left', 'top', 'bottom', 'right' limits
// of the view. In order to fit exactly, we must take the "radius"
cam.ZoomFactor = maxExt * .50;
return cam.ZoomFactor;
}
}
}
Note that the new handler performs the directional zoom only when the mouse is located over an object held by this Camera. If, instead, the mouse is placed on the background of the scene or over some other Camera / plot cube object, no effect will be visible and the common zoom behavior is performed (zooming in/out to the look-at point).

Stage scaling affecting AS3

I have some code that controls a series of images to produce a 360° spin when dragging along the mouse X axis. This all worked fine with the code I have used.
I have since had to design for a different platform and enlarge the size of the stage; I did this with the scale-to-stage check box in the document settings.
While the mouse is down the spin works fine, dragging through the images as intended, but when you release and start to drag again it doesn't remember the last frame and jumps to another frame before dragging fine again. Why is it jumping like this when all I have done is change the scale of everything?
Please see the code used below:
//ROTATION OF CONTROL BODY X
spinX_mc.stop();
var spinX_mc:MovieClip;
var offsetFrame:int = spinX_mc.currentFrame;
var offsetX:Number = 0;
var percent:Number = 0;
//Listeners
spinX_mc.addEventListener(MouseEvent.MOUSE_DOWN, startDragging);
spinX_mc.addEventListener(MouseEvent.MOUSE_UP, stopDragging);
function startDragging(e:MouseEvent):void
{
    // start listening for mouse movement
    spinX_mc.addEventListener(MouseEvent.MOUSE_MOVE, drag);
    offsetX = stage.mouseX;
}
function stopDragging(e:MouseEvent):void
{
    trace("stopDrag");
    // STOP listening for mouse movement
    spinX_mc.removeEventListener(MouseEvent.MOUSE_MOVE, drag);
    // save the current frame number
    offsetFrame = spinX_mc.currentFrame;
    removeEventListener(MouseEvent.MOUSE_DOWN, startDragging);
}
// this function is called continuously while the mouse is being dragged
function drag(e:MouseEvent):void
{
    trace("Drag");
    // work out how far the mouse has been dragged, relative to the width of spinX_mc
    // value between -1 and +1
    percent = (mouseX - offsetX) / spinX_mc.width;
    // trace(percent);
    // work out which frame to go to. offsetFrame is the frame we started from
    var frame:int = Math.round(percent * spinX_mc.totalFrames) + offsetFrame;
    // reset when hitting the END of the spinX_mc timeline
    while (frame > spinX_mc.totalFrames)
    {
        frame -= spinX_mc.totalFrames;
    }
    // reset when hitting the START of the spinX_mc timeline
    while (frame <= 0)
    {
        frame += spinX_mc.totalFrames;
    }
    // go to the correct frame
    spinX_mc.gotoAndStop(frame);
}
By changing
spinX_mc.addEventListener(MouseEvent.MOUSE_MOVE,drag);
offsetX = stage.mouseX;
to
spinX_mc.addEventListener(MouseEvent.MOUSE_MOVE,drag);
offsetX = mouseX;
I seem to have solved the problem and everything runs smoothly again. The drag() handler measures movement with mouseX, which is in the local coordinate space of the timeline the code runs on, while offsetX was being taken from stage.mouseX, which is in stage coordinates; once the stage is scaled those two spaces no longer agree, so the first drag after a new press jumped. Reading both values from the same coordinate space fixes it.

Arduino sketch that reads serial characters as a command and does something

Currently I am trying to get a sketch working where the Arduino will read a series of characters as a command, and do something based on the series of characters sent from an iDevice. I am using an iPhone 3GS that is jailbroken to send characters to the Arduino. The method that sends the serial characters looks like the following,
- (IBAction)blinkFlow_A_LED:(id)sender {
    // Method to blink the Flow_A LED on the kegboard-mini Arduino shield (<https://github.com/Kegbot/kegboard>).
    NSLog(@"blink Flow_A btn pressed");
    // Open serial port / interface
    [serial open:B2400];
    NSLog(@"%d", [serial isOpened]);
    // Send serial data (TX)
    char buffer[7];
    buffer[0] = '{';
    buffer[1] = 'b';
    buffer[2] = 'l';
    buffer[3] = 'i';
    buffer[4] = 'n';
    buffer[5] = 'k';
    buffer[6] = '}';
    [serial write:buffer length:7];
}
I have created a simple sketch that blinks the LED on the shield I am using, but I want the LED to blink conditionally when the button is clicked in the iOS app. The sketch that blinks the LED looks like the following,
/*
Blink
Turns on an LED on for one second, then off for one second, repeatedly.
This sketch is specific to making the kegboard-mini shield.
http://arduino.cc/forum/index.php?topic=157625.new;topicseen#new
This example code is in the public domain.
*/
// Pin D4 - should be connected to the flow_A LED
// Give it a name
int led = 4;
// The setup routine runs once when you press reset:
void setup() {
  // Initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}
// The loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH);  // Turn the LED on (HIGH is the voltage level)
  delay(1000);              // Wait for one second
  digitalWrite(led, LOW);   // Turn the LED off by making the voltage LOW
  delay(1000);              // Wait for a second
}
That is a simple sketch!
You may want to begin by looking at the Arduino reference page for Serial.
In setup(), you need at least Serial.begin(2400);.
Now, I'll suggest that reading and decoding the string "{blink}" seems like overkill. Let me suggest you send one character (for example 'b'), and detect one character, at least to start. Check out .available() and .read() on the Serial reference page. With these you can determine if a character has arrived at the Arduino and read in a single character.
You can then use these if you want to build a string of characters one at a time and compare it to String("{blink}"). This is a bit more complicated, especially if you take into account exceptions (like lost or damaged characters).
You can easily test your program using the Serial monitor tool -- just be advised that you have to hit "send" to make the characters go out.
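To illustrate, a minimal single-character version could look something like the sketch below; the pin number and the 'b' command byte are assumptions for illustration, not something from the question:
const int led = 4;              // assumed: the flow_A LED pin from the question

void setup() {
  pinMode(led, OUTPUT);
  Serial.begin(2400);           // must match the baud rate used on the iPhone side
}

void loop() {
  if (Serial.available() > 0) { // has at least one byte arrived?
    char c = Serial.read();     // read a single character
    if (c == 'b') {             // assumed single-character "blink" command
      digitalWrite(led, HIGH);  // blink once
      delay(1000);
      digitalWrite(led, LOW);
    }
  }
}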
I ended up putting a simple sketch together like this, which lets me store an array of serial bytes into a String, thanks to the SerialEvent example.
The sketch I am currently working with looks like the following,
/*
* kegboard-serial-simple-blink07
* This code is public domain
*
* This sketch receives a multibyte String from the iPhone
* and performs functions on it.
*
* Examples:
* http://arduino.cc/en/Tutorial/SerialEvent
* http://arduino.cc/en/Serial/read
*/
// Global variables should be identified with "_"
// flow_A LED
int led = 4;
// relay_A
const int RELAY_A = A0;
// Variables from the sketch example
String inputString = ""; // A string to hold incoming data
boolean stringComplete = false; // Whether the string is complete
void setup() {
  Serial.begin(2400);  // Open a serial port. Sets data rate to 2400 bit/s
  Serial.println("Power on test");
  inputString.reserve(200);
  pinMode(led, OUTPUT);      // flow_A LED pin must be an output before digitalWrite() can drive it
  pinMode(RELAY_A, OUTPUT);
}
void open_valve() {
  digitalWrite(RELAY_A, HIGH);  // Turn RELAY_A on
}
void close_valve() {
  digitalWrite(RELAY_A, LOW);  // Turn RELAY_A off
}
void flow_A_blink() {
  digitalWrite(led, HIGH);  // Turn the LED on (HIGH is the voltage level)
  delay(1000);              // Wait for one second
  digitalWrite(led, LOW);   // Turn the LED off by making the voltage LOW
  delay(1000);              // Wait for a second
}
void flow_A_blink_stop() {
  digitalWrite(led, LOW);
}
void loop() {
  // When a complete line has arrived, handle it:
  if (stringComplete) {
    Serial.println(inputString);
    inputString.trim();  // strip the trailing newline before comparing
    if (inputString == "{blink_Flow_A}") {  // check the command before clearing the buffer
      flow_A_blink();
    }
    // Clear the string:
    inputString = "";
    stringComplete = false;
  }
}
// SerialEvent occurs whenever a new data comes in the
// hardware serial RX. This routine is run between each
// time loop() runs, so using delay inside loop can delay
// response. Multiple bytes of data may be available.
void serialEvent() {
  while (Serial.available()) {
    // Get the new byte:
    char inChar = (char)Serial.read();
    // Add it to the inputString:
    inputString += inChar;
    // If the incoming character is a newline, set a flag
    // so the main loop can do something about it:
    if (inChar == '\n') {
      stringComplete = true;
    }
  }
}

bufferUntil('\n') won't trigger serialEvent processing with Eclipse

I want to do the simplest thing: plot a graph from the Arduino's serial port with the Processing software. I use Eclipse.
I did as the tutorials say about the plugins. I also copied the code from the Arduino site, which is this:
import processing.serial.*;
Serial myPort; // The serial port
int xPos = 1; // Horizontal position of the graph
void setup () {
  // Set the window size:
  size(400, 300);
  // List all the available serial ports
  println(Serial.list());
  // I know that the first port in the serial list on my mac
  // is always my Arduino, so I open Serial.list()[0].
  // Open whatever port is the one you're using.
  myPort = new Serial(this, Serial.list()[0], 9600);
  // Don't generate a serialEvent() unless you get a newline character:
  myPort.bufferUntil('\n');
  // Set the initial background:
  background(0);
}
void draw () {
  // Everything happens in the serialEvent()
}
void serialEvent (Serial myPort) {
  // Get the ASCII string:
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    // Trim off any whitespace:
    inString = trim(inString);
    // Convert to an int and map to the screen height:
    float inByte = float(inString);
    inByte = map(inByte, 0, 1023, 0, height);
    // Draw the line:
    stroke(127, 34, 255);
    line(xPos, height, xPos, height - inByte);
    // At the edge of the screen, go back to the beginning:
    if (xPos >= width) {
      xPos = 0;
      background(0);
    }
    else {
      // Increment the horizontal position:
      xPos++;
    }
  }
}
The problem is that bufferUntil('\n') does not trigger the serialEvent.
I know that there was a bug where assigning an 8-bit int to a 32-bit int goes to hell.
The Processing IDE works great, though; under Eclipse the event does not trigger at all. Is there a solution?
Note that bufferUntil() takes an integer value and you're giving it a char. At the very least, try bufferUntil(10) just to see if there's some oddness going on there, but it might also be worth simply printing the values coming in on myPort and seeing what comes by when you send a newline.

How do I program a stereo-capable graphics card to display stereo images?

I'd like to write my own stereo image viewer, because there are certain features I need which are missing from the one bundled with my NVidia/EVGA GTX 580.
I can't figure out how to program the card to enter "shutterglass" mode, where every other frame (at 120 Hz) alternates left and right.
I've looked at the OpenGL, Direct3D, and XNA APIs, as well as information from NVIDIA, and can't figure out how to get started. How do I set separate left and right images, how do I tell the screen to display them, and how do I tell the driver to activate the shutterglass transmitter?
(Another disconcerting thing is that whenever I use the bundled software to view stereo images and video in shutterglass mode, it's in fullscreen, and the screen blinks when entering that mode, even though I run the screen at 120 Hz in 2D. Is there a way to have a 3D surface in a window without upsetting the rest of the screen on the NVidia "gamer" cards that are 3D capable (570, 580)?)
I'm a bit late to this, but I just got stereoscopic 3D to work using nothing but a GTX 580 and OpenGL. No need for a Quadro card or DirectX.
I have the nVidia 3D Vision driver and IR emitter and simply set the emitter to "Always on" in the nVidia control panel.
In my game engine, I switched to a full screen mode at 120 Hz and render the scene twice with a slight frustum offset (as per nVidia's own documentation PDF on the manual implementation, "2010_GTC2010.pdf").
No quad buffers or any other tricks needed; it works great. Plus, I am in control of all the settings, like convergence etc.
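For reference, the "slight frustum offset" this refers to is typically an asymmetric (off-axis) projection per eye, rendered on alternating frames. The following is only a sketch of that idea in fixed-function OpenGL, not the code from the engine above; eyeSep, focalDist and the other parameters are assumed names:
#include <GL/gl.h>
#include <cmath>

// Set up the projection and view offset for one eye (fovY in radians).
// Call once per frame, alternating leftEye = true / false, then draw the scene.
void setStereoProjection(bool leftEye, double eyeSep, double focalDist,
                         double fovY, double aspect, double zNear, double zFar)
{
    double top   = zNear * std::tan(fovY * 0.5);  // half height of the near plane
    double right = top * aspect;                  // half width of the near plane
    // shift the frustum horizontally, in opposite directions for the two eyes
    double shift = (leftEye ? 0.5 : -0.5) * eyeSep * zNear / focalDist;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right + shift, right + shift, -top, top, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // offset the view by half the eye separation (the scene shifts the other way)
    glTranslated(leftEye ? eyeSep * 0.5 : -eyeSep * 0.5, 0.0, 0.0);
}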
For NVidia 3D Vision with the GeForce range, you need to write a full screen DirectX surface twice the width of the display, with the left image on the left and the right image on the right (duh).
Then you need to write a magic value into the bottom left of the image, which the 3D Vision driver picks up and uses to turn on the glasses; you don't need nvapi.dll.
With the NVidia pro glasses and a Quadro card you can use the regular OpenGL stereo API, as sketched below.
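A minimal sketch of that quad-buffered path (this assumes the OpenGL context was created with a stereo-capable pixel format, which is what the Quadro driver exposes; drawScene is a placeholder for your own rendering):
#include <GL/gl.h>

void drawScene(bool leftEye);  // placeholder, implemented elsewhere

// Render one stereo frame into the left and right back buffers,
// then swap buffers through your windowing toolkit as usual.
void drawStereoFrame()
{
    glDrawBuffer(GL_BACK_LEFT);   // left eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(true);

    glDrawBuffer(GL_BACK_RIGHT);  // right eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(false);
}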
P.S. I did find some sample code that manages to do this with a normal window.
Edit: it was low-level USB code talking to the transmitter that I could never get to build; I think it eventually became this: http://sourceforge.net/projects/libnvstusb/
Here is some sample code for full screen with the 3D Vision glasses.
I'm not a DirectX expert, so some of this might be less than optimal.
My app is also based on Qt, so there might be some Qt bits left in the code.
-----------------------------------------------------------------
// header
void create3D();
void set3D();
IDirect3D9 *_d3d;
IDirect3DDevice9 *_d3ddev;
QSize _size; // full screen size
IDirect3DSurface9 *_imageBuf; //Source stereo image
IDirect3DSurface9 *_backBuf;
--------------------------------------------------------
// the code
#include <windows.h>
#include <windowsx.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <strsafe.h>
#pragma comment (lib, "d3d9.lib")
#define NVSTEREO_IMAGE_SIGNATURE 0x4433564e //NV3D
typedef struct _Nv_Stereo_Image_Header
{
unsigned int dwSignature;
unsigned int dwWidth;
unsigned int dwHeight;
unsigned int dwBPP;
unsigned int dwFlags;
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;
// OR'ed flags for the dwFlags field of the _Nv_Stereo_Image_Header structure above
#define SIH_SWAP_EYES 0x00000001
#define SIH_SCALE_TO_FIT 0x00000002
// call at start to set things up
void DisplayWidget::create3D()
{
_size = QSize(1680,1050); //resolution of my Samsung 2233z
_d3d = Direct3DCreate9(D3D_SDK_VERSION); // create the Direct3D interface
D3DPRESENT_PARAMETERS d3dpp; // create a struct to hold various device information
ZeroMemory(&d3dpp, sizeof(d3dpp)); // clear out the struct for use
d3dpp.Windowed = FALSE; // program fullscreen
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD; // discard old frames
d3dpp.hDeviceWindow = winId(); // set the window to be used by Direct3D
d3dpp.BackBufferFormat = D3DFMT_A8R8G8B8; // set the back buffer format to 32 bit // or D3DFMT_R8G8B8
d3dpp.BackBufferWidth = _size.width();
d3dpp.BackBufferHeight = _size.height();
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;
d3dpp.BackBufferCount = 1;
// create a device class using this information and information from the d3dpp struct
_d3d->CreateDevice(D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
winId(),
D3DCREATE_SOFTWARE_VERTEXPROCESSING,
&d3dpp,
&_d3ddev);
// 3D Vision uses a single surface that is twice the image width and one image high
// create the surface
_d3ddev->CreateOffscreenPlainSurface(_size.width()*2, _size.height(), D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &_imageBuf, NULL);
set3D();
}
// call to put 3d signature in image
void DisplayWidget::set3D()
{
// Lock the stereo image
D3DLOCKED_RECT lock;
_imageBuf->LockRect(&lock,NULL,0);
// write the stereo signature in the last row of the stereo image
LPNVSTEREOIMAGEHEADER pSIH = (LPNVSTEREOIMAGEHEADER)(((unsigned char *) lock.pBits) + (lock.Pitch * (_size.height()-1)));
// Update the signature header values
pSIH->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
pSIH->dwBPP = 32;
//pSIH->dwFlags = SIH_SWAP_EYES; // Src image has left on left and right on right, that's why this flag is not needed.
pSIH->dwFlags = SIH_SCALE_TO_FIT;
pSIH->dwWidth = _size.width() *2;
pSIH->dwHeight = _size.height();
// Unlock surface
_imageBuf->UnlockRect();
}
// call in display loop
void DisplayWidget::paintEvent()
{
// clear the window to a deep blue
//_d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);
_d3ddev->BeginScene(); // begins the 3D scene
// do 3D rendering on the back buffer here
RECT destRect;
destRect.left = 0;
destRect.top = 0;
destRect.bottom = _size.height();
destRect.right = _size.width();
// Get the Backbuffer then Stretch the Surface on it.
_d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &_backBuf);
_d3ddev->StretchRect(_imageBuf, NULL, _backBuf, &destRect, D3DTEXF_NONE);
_backBuf->Release();
_d3ddev->EndScene(); // ends the 3D scene
_d3ddev->Present(NULL, NULL, NULL, NULL); // displays the created frame
}
// my images come from a camera
// _left and _right are QImages but it should be obvious what the functions do
void DisplayWidget::getImages()
{
RECT srcRect;
srcRect.left = 0;
srcRect.top = 0;
srcRect.bottom = _size.height();
srcRect.right = _size.width();
RECT destRect;
destRect.top = 0;
destRect.bottom = _size.height();
if ( isOdd() ) {
destRect.left = _size.width();
destRect.right = _size.width()*2;
// get camera data for _right here, code not shown
D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect,_right.bits(),D3DFMT_A8R8G8B8,_right.bytesPerLine(),NULL,&srcRect,D3DX_DEFAULT,0);
} else {
destRect.left = 0;
destRect.right = _size.width();
// get camera data for _left here, code not shown
D3DXLoadSurfaceFromMemory(_imageBuf, NULL, &destRect,_left.bits(),D3DFMT_A8R8G8B8,_left.bytesPerLine(),NULL,&srcRect,D3DX_DEFAULT,0);
}
set3D(); // add NVidia signature
}
DisplayWidget::~DisplayWidget()
{
_d3ddev->Release(); // close and release the 3D device
_d3d->Release(); // close and release Direct3D
}