Drawing text on screen with GLUT not working at all - Perl

I don't know what I'm doing wrong. This is Perl, but the problem is language-agnostic (methinks):
This example draws a lot of snowmen; it comes from an example I found on the web and ported to Perl.
The problem is that there's no sign of the text being rendered.
sub renderScene{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    gluLookAt( $x, 1.0, $z,
               $lx, 1.0, $lz,
               0.0, 1.0, 0.0);
    glColor3f(0.5, 0.2, 0.4);
    glMatrixMode(GL_MODELVIEW);
    # Draw ground
    glBegin(GL_QUADS);
        glVertex3f(-100.0, 0.0, -100.0);
        glVertex3f(-100.0, 0.0,  100.0);
        glVertex3f( 100.0, 0.0,  100.0);
        glVertex3f( 100.0, 0.0, -100.0);
    glEnd();
    for ($i=-3; $i<3; $i++)
    {
        for ($j=-3; $j<3; $j++)
        {
            glPushMatrix();
            glTranslatef($i*10.0, 0, $j*10.0);
            drawSnowMan();
            glPopMatrix();
        }
    }
    glColor3f(0.5, 0.5, 0.0);
    glRasterPos2f(0.5, 0.5);
    my $strin = "Viva peron carajo!";
    my @string = split('', $strin);
    for my $char (@string) {
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, $char);
    }
    glutSwapBuffers();
}

Your code is incomplete. $x, $z, $lx and $lz are undefined and there are no calls to set up the projection matrix. As such, I can only give suggestions:
glRasterPos coordinates are world/model coordinates. Did you really mean to put the text close to the origin, possibly inside the center snowman?
If you want to work in screen coordinates, you should set up an appropriate orthographic "2D" projection matrix. Remember that you can push/pop/play around with the projection matrix in between drawing commands.
Also, as a debugging step, start by rendering a triangle or quad at the text position.
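A minimal sketch of those suggestions, assuming the same Perl OpenGL/GLUT bindings as the question; $win_w and $win_h are hypothetical globals holding the window size (e.g. updated in a reshape callback). The idea is to temporarily swap in a 2D orthographic projection so glRasterPos works in pixel coordinates, draw the string, then restore the matrices:
sub drawTextOverlay {
    my ($text) = @_;
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();                       # save the 3D projection
    glLoadIdentity();
    gluOrtho2D(0, $win_w, 0, $win_h);     # 1 unit == 1 pixel, origin at bottom-left
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glColor3f(0.5, 0.5, 0.0);
    glRasterPos2f(10, $win_h - 20);       # 10 px from the left, 20 px below the top
    for my $char (split //, $text) {
        # some bindings expect the character's ordinal rather than a 1-char string
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, ord($char));
    }
    glPopMatrix();                        # restore the modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                        # restore the 3D projection
    glMatrixMode(GL_MODELVIEW);
}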

Related

Adjusting contrast on camera preview and image widgets flutter

I need to display a CameraPreview widget and an image (separately) with the contrast adjusted on both. I've been looking into the ColorFiltered and ShaderMask widgets but I'm not sure what blend mode to use or if it will be helpful to change the blend mode. Does anyone have any examples of changing the contrast?
Hey, I would recommend using a color matrix to achieve your desired contrast.
You could use the following color matrix within a ColorFiltered widget:
class CustomSubFilters extends ColorFilter {
  CustomSubFilters.matrix(List<double> matrix) : super.matrix(matrix);

  factory CustomSubFilters.contrast(double c) {
    num t = (1.0 - (1 + c)) / 2.0 * 255;
    return CustomSubFilters.matrix(<double>[
      1 + c, 0,     0,     0, t,
      0,     1 + c, 0,     0, t,
      0,     0,     1 + c, 0, t,
      0,     0,     0,     1, 0,
    ]);
  }
}
Simply wrap your widget in a ColorFiltered widget and use this color matrix.
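For example, a hypothetical usage (camController stands in for your own camera controller, and the contrast amount is illustrative):
ColorFiltered(
  colorFilter: CustomSubFilters.contrast(0.2), // tweak the contrast to taste
  child: CameraPreview(camController),
)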
I used the Camera plugin, and there was no built-in feature in the package to set contrast/brightness, so I set it with a ColorFiltered widget:
ColorFiltered(
  colorFilter: const ColorFilter.mode(
    Colors.white,
    BlendMode.softLight,
    // BlendMode.overlay,
  ),
  child: CameraPreview(camController),
)
It worked for me. I hope this solution helps you too. Thanks a lot for asking this question.

Running SDL/OpenGLES application on a specific DISPLAY in XServer

I am trying to port an application to an embedded system that I am designing. The embedded system is Raspberry Pi Zero W-based and uses a custom Yocto build.
The application to be ported is written with SDL / OpenGL ES, to my understanding. I have a hard time understanding how to make a connection like the following depiction:
SDL APP -----> XServer ($DISPLAY) -------> Framebuffer /dev/fb1 ($FRAMEBUFFER)
The system has two displays: one HDMI on /dev/fb0 and one TFT on /dev/fb1. I am trying to run the SDL application on the TFT. These are the steps I take:
First, start an XServer on DISPLAY=:1 that is connected to /dev/fb1:
FRAMEBUFFER=/dev/fb1 xinit /etc/X11/Xsession -- /usr/bin/Xorg :1 -br -pn -nolisten tcp -dpi 100
The first step seems like it's working. I can see LXDE booting up on my TFT screen. Checking the display, I get the correct display resolution:
~/projects# DISPLAY=:1 xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 240, current 320 x 240, maximum 320 x 240
default connected 320x240+0+0 0mm x 0mm
320x240 0.00*
Second, I would like to start the SDL application using x11; I expect that should make the application show up on the TFT. In order to do so, I try:
SDL_VIDEODRIVER=x11 SDL_WINDOWID=1 DISPLAY=:1 ./SDL_App
No matter which display number I choose, it starts on my HDMI display and not on the TFT. So now I am thinking the person who wrote the application hardcoded some things in the application code:
void init_ogl(void)
{
    int32_t success = 0;
    EGLBoolean result;
    EGLint num_config;
    static EGL_DISPMANX_WINDOW_T nativewindow;
    DISPMANX_ELEMENT_HANDLE_T dispman_element;
    DISPMANX_DISPLAY_HANDLE_T dispman_display;
    DISPMANX_UPDATE_HANDLE_T dispman_update;
    VC_DISPMANX_ALPHA_T alpha;
    VC_RECT_T dst_rect;
    VC_RECT_T src_rect;

    static const EGLint attribute_list[] =
    {
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, 8,
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_NONE
    };

    EGLConfig config;

    // Get an EGL display connection
    display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assert(display != EGL_NO_DISPLAY);

    // Initialize the EGL display connection
    result = eglInitialize(display, NULL, NULL);
    assert(EGL_FALSE != result);

    // Get an appropriate EGL frame buffer configuration
    result = eglChooseConfig(display, attribute_list, &config, 1, &num_config);
    assert(EGL_FALSE != result);

    // Create an EGL rendering context
    context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
    assert(context != EGL_NO_CONTEXT);

    // Create an EGL window surface
    success = graphics_get_display_size(0 /* LCD */, &screen_width, &screen_height);
    printf("Screen width= %d\n", screen_width);
    printf("Screen height= %d\n", screen_height);
    assert(success >= 0);

    int32_t zoom = screen_width / GAMEBOY_WIDTH;
    int32_t zoom2 = screen_height / GAMEBOY_HEIGHT;
    if (zoom2 < zoom)
        zoom = zoom2;

    int32_t display_width = GAMEBOY_WIDTH * zoom;
    int32_t display_height = GAMEBOY_HEIGHT * zoom;
    int32_t display_offset_x = (screen_width / 2) - (display_width / 2);
    int32_t display_offset_y = (screen_height / 2) - (display_height / 2);

    dst_rect.x = 0;
    dst_rect.y = 0;
    dst_rect.width = screen_width;
    dst_rect.height = screen_height;

    src_rect.x = 0;
    src_rect.y = 0;
    src_rect.width = screen_width << 16;
    src_rect.height = screen_height << 16;

    dispman_display = vc_dispmanx_display_open(0 /* LCD */);
    dispman_update = vc_dispmanx_update_start(0);

    alpha.flags = DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS;
    alpha.opacity = 255;
    alpha.mask = 0;

    dispman_element = vc_dispmanx_element_add(dispman_update, dispman_display,
        0 /*layer*/, &dst_rect, 0 /*src*/,
        &src_rect, DISPMANX_PROTECTION_NONE, &alpha, 0 /*clamp*/, DISPMANX_NO_ROTATE /*transform*/);

    nativewindow.element = dispman_element;
    nativewindow.width = screen_width;
    nativewindow.height = screen_height;
    vc_dispmanx_update_submit_sync(dispman_update);

    surface = eglCreateWindowSurface(display, config, &nativewindow, NULL);
    assert(surface != EGL_NO_SURFACE);

    // Connect the context to the surface
    result = eglMakeCurrent(display, surface, surface, context);
    assert(EGL_FALSE != result);

    eglSwapInterval(display, 1);

    glGenTextures(1, &theGBTexture);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, theGBTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*) NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0.0f, screen_width, screen_height, 0.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0.0f, 0.0f, screen_width, screen_height);

    quadVerts[0] = display_offset_x;
    quadVerts[1] = display_offset_y;
    quadVerts[2] = display_offset_x + display_width;
    quadVerts[3] = display_offset_y;
    quadVerts[4] = display_offset_x + display_width;
    quadVerts[5] = display_offset_y + display_height;
    quadVerts[6] = display_offset_x;
    quadVerts[7] = display_offset_y + display_height;

    glVertexPointer(2, GL_SHORT, 0, quadVerts);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, kQuadTex);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glClear(GL_COLOR_BUFFER_BIT);
}

void init_sdl(void)
{
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0)
    {
        Log("SDL Error Init: %s", SDL_GetError());
    }

    theWindow = SDL_CreateWindow("Gearboy", 0, 0, 0, 0, 0);
    if (theWindow == NULL)
    {
        Log("SDL Error Video: %s", SDL_GetError());
    }
    ...
}
At first glance, I discovered two lines: vc_dispmanx_display_open( 0 /* LCD */ ); and graphics_get_display_size( 0 /* LCD */ , &screen_width, &screen_height);. I tried changing the display parameter to 1, thinking that it refers to DISPLAY=:1, but it did not do anything. I added logs for the screen resolution and I get 1920x1080, which is the resolution of the HDMI display. I think there must be something in the EGL portion of the code that I'm missing. What should I do now? Is my reasoning sound, or am I missing something?
If you need any more information, please let me know. Any guidance regarding the issue is much appreciated.
EDIT: I saw that some people use the following, but on the Raspberry Pi Zero EGL/eglvivante.h (needed for the fb functions) cannot be found, so I am unable to compile it:
int fbnum = 1; // fbnum = 1 selects /dev/fb1
EGLNativeDisplayType native_display = fbGetDisplayByIndex(fbnum);
EGLNativeWindowType native_window = fbCreateWindow(native_display, 0, 0, 0, 0);
display = eglGetDisplay(native_display);

How to write a three-view drawing shader (in Unity3D)?

I am attempting to implement a three-view drawing shader for simple geometry, and I have been testing with this shader, which is roughly:
float doCos = dot(viewDirection, normalDirection);
float4 texColor;

// 2. needs a border
bool isBackSide = doCos < 0;
if (isBackSide) {
    // _Dotted is a 2D texture whose border is a black dotted line
    texColor = tex2D(_Dotted, i.tex.xy * _Dotted_ST.xy + _Dotted_ST.zw);
} else {
    // _Entity is a 2D texture whose border is a black solid line
    texColor = tex2D(_Entity, i.tex.xy * _Entity_ST.xy + _Entity_ST.zw);
}

// ignore some colors
if (texColor.x > 0.5f) {
    discard;
}
return texColor;
There is a problem: the dotted line becomes more prominent than the solid line when a front-facing plane gets steep. (I also added a rim line, but it's beside the point, so I ignore it here.)
To clarify, I have two questions:
1. How should I modify the shader to solve the dotted-line problem?
2. Does a professional three-view drawing shader already exist? (I have not found one yet.)

Points or spheres in 3D cube with Perl

Let's say I have @points[$number][$x][$y][$z][$color], and just for debugging purposes I want the points visualized in a 3D cube, to better observe what I have. Typically I export them to *.txt and use R's 3D plotting, but maybe there is an easy way to do this in Perl?
It would be even better to have spheres with a radius.
My answer: use the OpenGL Perl bindings.
I haven't quite written an exact answer to your question, but I'm sure you can adapt this code.
I hadn't done OpenGL before, but this was a fun little evening project.
use OpenGL qw/ :all /;

use constant ESCAPE => 27;

# Global variable for our window
my $window;

my $CubeRot = 0;
my $xCord = 1;
my $yCord = 1;
my $zCord = 0;
my $rotSpeed = 0.02;

my ($width, $height) = (1366, 768);

my @points = ( [ 30,  40,  40,  [100, 0,   0  ] ],  # red
               [ 100, 100, 40,  [0,   100, 0  ] ],  # green
               [ 100, 10,  60,  [0,   100, 100] ],  # turquoise
               [ 200, 200, 100, [0,   0,   100] ]   # blue
             );

sub reshape {
    glViewport(0, 0, $width, $height);  # Set our viewport to the size of our window
    glMatrixMode(GL_PROJECTION);        # Switch to the projection matrix so that we can manipulate how our scene is viewed
    glLoadIdentity();                   # Reset the projection matrix to the identity matrix so that we don't get any artifacts (cleaning up)
    gluPerspective(60, $width / $height, 1.0, 100.0);  # Set the field-of-view angle (in degrees), the aspect ratio of our window, and the near and far planes
    glMatrixMode(GL_MODELVIEW);         # Switch back to the model view matrix, so that we can start drawing shapes correctly
    glOrtho(0, $width, 0, $height, -1, 1);  # Map abstract coords directly to window coords.
    glScalef(1, -1, 1);                 # Invert Y axis so increasing Y goes down.
    glTranslatef(0, -$height, 0);       # Shift origin up to upper-left corner.
}

sub keyPressed {
    # Shift the unsigned char key, and the x,y placement off @_, in
    # that order.
    my ($key, $x, $y) = @_;

    # If escape is pressed, kill everything.
    if ($key == ESCAPE)
    {
        # Shut down our window
        glutDestroyWindow($window);
        # Exit the program...normal termination.
        exit(0);
    }
}

sub InitGL {
    # Shift the width and height off of @_, in that order
    my ($width, $height) = @_;

    # Set the background "clearing color" to black
    glClearColor(0.0, 0.0, 0.0, 0.0);

    # Enables clearing of the depth buffer
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);

    # Enables depth testing with that type
    glEnable(GL_DEPTH_TEST);

    # Enables smooth color shading
    glShadeModel(GL_SMOOTH);

    # Reset the projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity;

    # Reset the modelview matrix
    glMatrixMode(GL_MODELVIEW);
}

sub display {
    glClearColor(1.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity;
    glTranslatef(0.0, 0.0, -5.0);  # Push everything 5 units back into the scene, otherwise we won't see the primitives
    #glPushMatrix();
    #glRotatef($CubeRot, $xCord, $yCord, $zCord);

    # This is where the drawing happens; adjust glTranslatef to match your coordinates.
    # The centre is 0,0,0.
    for my $sphere ( @points ) {
        glPushMatrix();
        glColor3b( @{ $sphere->[3] } );
        glRotatef($CubeRot, $xCord, $yCord, $zCord);
        glTranslatef($sphere->[0]/50 - 2, $sphere->[1]/50 - 2, $sphere->[2]/50 - 2);
        glutWireSphere(1.0, 24, 24);  # Render the primitive
        glPopMatrix();
    }
    $CubeRot += $rotSpeed;

    glFlush;  # Flush the OpenGL buffers to the window
}

# Initialize GLUT state
glutInit;

# Select the display mode (single-buffered)
glutInitDisplayMode(GLUT_SINGLE);

# The window starts at the upper left corner of the screen
glutInitWindowPosition(0, 0);

# Open the window
$window = glutCreateWindow("Press escape to quit");

# Register the function to do all our OpenGL drawing.
glutDisplayFunc(\&display);

# Go fullscreen. This is as soon as possible.
glutFullScreen;

glutReshapeFunc(\&reshape);

# Even if there are no events, redraw our gl scene.
glutIdleFunc(\&display);

# Register the function called when the keyboard is pressed.
glutKeyboardFunc(\&keyPressed);

# Initialize our window.
InitGL($width, $height);

# Start Event Processing Engine
glutMainLoop;

return 1;
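The spheres above all share radius 1.0. To get the "spheres with a radius" you asked for, one option is a sketch like the following, under the assumption that you append a radius as a hypothetical fifth element of each point record (glutSolidSphere gives filled spheres if the wireframes are too noisy):
# Hypothetical record layout: [x, y, z, [r, g, b], radius]
@points = ( [ 30,  40,  40,  [100, 0, 0  ], 0.5 ],
            [ 200, 200, 100, [0,   0, 100], 1.5 ] );

# ...and inside display(), pass the per-point radius along:
for my $sphere (@points) {
    glPushMatrix();
    glColor3b( @{ $sphere->[3] } );
    glRotatef($CubeRot, $xCord, $yCord, $zCord);
    glTranslatef($sphere->[0]/50 - 2, $sphere->[1]/50 - 2, $sphere->[2]/50 - 2);
    glutSolidSphere($sphere->[4], 24, 24);  # radius, slices, stacks
    glPopMatrix();
}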

Questions regarding my OpenGL - code

I started diving into OpenGL ES 2.0 over the last couple of days, but I still get really faulty results. One thing I do not quite understand is how I am supposed to set up my buffers correctly.
I would like to create a shape like this: a kind of tent, if you like, without the left and right sides.
3_______________________2
|\                     /|
| \_ _ _ _ _ _ _ _ _ _/ |
| /4                 5\ |
|/_____________________\|
0                       1
So let's start with my texture/index/vertex arrays. This is what I set up:
#define RECT_TOP_R {1, 1, 0}
#define RECT_TOP_L {-1, 1, 0}
#define RECT_BOTTOM_R {1, -1, 0}
#define RECT_BOTTOM_L {-1, -1, 0}
#define BACK_RIGHT {1, 0, -1.73}
#define BACK_LEFT {-1, 0, -1.73}
const GLKVector3 Vertices[] = {
    RECT_BOTTOM_L, //0
    RECT_BOTTOM_R, //1
    RECT_TOP_R,    //2
    RECT_TOP_L,    //3
    BACK_LEFT,     //4
    BACK_RIGHT     //5
};
const GLKVector4 Color[] = {
    {1,0,0,1},
    {0,1,0,1},
    {0,0,1,1},
    {0,1,0,1},
    {1,0,0,1},
    {0,1,0,1},
    {0,0,1,1},
    {0,1,0,1}
};
const GLubyte Indices[] = {
    0, 1, 3,
    2, 4, 5,
    0, 1
};
const GLfloat texCoords[] = {
    0,0,
    1,0,
    0,1,
    1,1,
    1,1,
    0,0,
    0,0,
    1,0
};
Here I generate/bind the buffers.
glGenBuffers(1, &vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexArray);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition,3,GL_FLOAT,GL_FALSE,sizeof(Vertices),0);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
glGenBuffers(1, &colArray);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Color), 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(Color), Color, GL_STATIC_DRAW);
glGenBuffers(1, &texArray);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(texCoords),0);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_STATIC_DRAW);
So I have a question regarding buffers: what is the difference between GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER?
Here is the delegate method, which is called whenever it redraws:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    self.contentScaleFactor = 2.0;
    self.opaque = NO;
    glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    [self.effect prepareToDraw];
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(Indices), GL_UNSIGNED_BYTE, 0);
}
So, the code obviously does not work as intended. Could you please help me? I have been trying to get it to work, but I am losing my nerve.
Ok, so I definitely did something wrong there. I reused code from a website that basically stored all the vertex data in one struct. I, however, changed the code by separating the individual attribute arrays (colors, texture coordinates) into individual arrays. Before, the struct was buffered on its own, so it was processed by the GPU as a whole, together with the texture array and the color array. Now, after my changes, I need to generate and bind those buffers individually.
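A minimal sketch of that per-attribute setup, using the color attribute as the example (the same pattern applies to positions and texture coordinates). Note two details the original snippet gets wrong: the buffer must be bound and filled before glVertexAttribPointer describes it, and the stride of a tightly packed attribute array is 0, not sizeof of the whole array:
GLuint colBuffer;
glGenBuffers(1, &colBuffer);
glBindBuffer(GL_ARRAY_BUFFER, colBuffer);                             // bind first...
glBufferData(GL_ARRAY_BUFFER, sizeof(Color), Color, GL_STATIC_DRAW);  // ...then upload...
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE,
                      0 /* tightly packed */, 0);                     // ...then describe the layout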
Another problem I could partly resolve was the one with the indices and texture mapping. I do not know whether I understood this right, but if I assign texture coordinates (x, y) to a certain vertex index and then reuse that index, with the aim of having another texture coordinate in that exact place, then I have no reason to wonder why everything is messed up: each index carries exactly one texture coordinate.
What I ended up doing did not exactly solve my problem, but it got me a whole lot nearer to my goal, and I am quite proud of my OpenGL learning curve so far.
This answer is intended for others who might face the same problems and I hope that I do not spread any wrong information here. Please feel free to edit/point out any mistakes.
In response to your own answer: vertex data stored in one struct per vertex, as you described, is an interleaved layout, also called an array of structs (rather than a struct of arrays). Apple recommends using this layout.
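A minimal sketch of that interleaved ("array of structs") layout; the names are illustrative, not from the original post. One buffer holds all attributes, the stride is sizeof(Vertex), and each attribute points at its member's offset within the struct:
#include <stddef.h>  // offsetof

typedef struct {
    GLfloat position[3];
    GLfloat color[4];
    GLfloat texCoord[2];
} Vertex;  // one struct per vertex, attributes side by side

glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (const GLvoid *)offsetof(Vertex, position));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (const GLvoid *)offsetof(Vertex, color));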