GTK3 - Create image object from stock

I want to create a grid which displays an error message if the file does not exist:
/* size */
s = get_file_size(recording->filename);
if (s > 0) {
    size = g_format_size_full(s, G_FORMAT_SIZE_LONG_FORMAT);
    gtk_label_set_text(GTK_LABEL(size_lbl), size);
    gtk_widget_hide(error);
    g_free(size);
} else {
    size = g_strdup(_("Import Errors"));
    gtk_widget_show(error);
}
In the GtkGrid I cannot work out what type the "error" element should be so that it displays the message as in the screenshot:
grid = gtk_grid_new();
gtk_grid_set_row_spacing(GTK_GRID(grid), 6);
gtk_grid_set_column_spacing(GTK_GRID(grid), 15);
gtk_container_set_border_width(GTK_CONTAINER(grid), 6);
s_lbl = gtk_label_new("Size:");
size_lbl = gtk_label_new("");
error = ?
error_pixmap = gtk_image_new_from_stock(GTK_STOCK_DIALOG_ERROR, GTK_ICON_SIZE_SMALL_TOOLBAR);
gtk_container_add(GTK_CONTAINER(error), error_pixmap);
gtk_grid_attach(GTK_GRID(grid), s_lbl, 0, 0, 1, 1);
gtk_grid_attach(GTK_GRID(grid), error, 1, 0, 1, 1);
gtk_grid_attach(GTK_GRID(grid), size_lbl, 2, 0, 1, 1);
Thanks for any help.
Screen shot:
[Size]:

Use a GtkBox with GTK_ORIENTATION_HORIZONTAL (create it with gtk_box_new(), pack the GtkImage and a message label into it, then show/hide the whole box), or put the image in an adjoining cell of the GtkGrid and show/hide it directly. Note that gtk_image_new_from_stock() has been deprecated since GTK 3.10; gtk_image_new_from_icon_name("dialog-error", GTK_ICON_SIZE_SMALL_TOOLBAR) is the modern replacement.

Related

I'm trying to make an animation trigger in Roblox Studio, can someone tell me how?

When I touch the trigger, the thing I'm trying to animate doesn't play the animation. I tried the animation ID and everything else. Can someone send me a model that has this? It would be nice if you could.
I made this with the GUI elements already done; fill in what you need.
local library = {}
local function onClicked(frame, gui)
    gui.Visible = false -- TextLabel has a Visible property; SetVisible() does not exist
    gui:ClearAllChildren()
end
function library:Create(parent)
    local frame = Instance.new("Frame")
    frame.Size = UDim2.new(1, 0, 1, 0)
    frame.BackgroundColor3 = Color3.new(1, 1, 1)
    frame.BorderSizePixel = 0

    local gui = Instance.new("TextLabel")
    gui.Size = UDim2.new(0, 200, 0, 50)
    gui.BackgroundColor3 = Color3.new(0.5, 0.5, 0.5)
    gui.Position = UDim2.new(0.5, -100, 0.5, -25)
    gui.Text = "Click the Button"
    gui.TextColor3 = Color3.new(1, 1, 1)
    gui.TextXAlignment = Enum.TextXAlignment.Center
    gui.TextYAlignment = Enum.TextYAlignment.Center
    gui.Font = Enum.Font.SourceSans
    gui.TextSize = 24
    gui.Parent = frame

    local button = Instance.new("TextButton")
    button.Size = UDim2.new(0, 50, 0, 25)
    button.BackgroundColor3 = Color3.new(1, 0, 0)
    button.Position = UDim2.new(0.5, -25, 0.85, 0)
    button.Text = "X"
    button.TextColor3 = Color3.new(1, 1, 1)
    button.Parent = frame

    button.MouseButton1Click:Connect(function()
        onClicked(frame, gui)
    end)

    frame.Parent = parent
    return frame
end

return library -- needed if this lives in a ModuleScript

Part Size seems to be ignored

I have a part, created with
local p = Instance.new("Part")
p.Size = Vector3.new(2, 2, 2)
That part uses a mesh like
local m = Instance.new("SpecialMesh", p)
m.MeshType = Enum.MeshType.FileMesh
m.MeshId = "rbxassetid://7974596857"
which is a cube with rounded corners that I created in Blender.
When I put those beside each other, it seems like the Size property actually is ignored.
Why?
With Size 2:
p1.Position = Vector3.new(0, 0, 0)
p1.Size = Vector3.new(2, 2, 2)
p2.Position = Vector3.new(5, 5, 0)
p2.Size = Vector3.new(2, 2, 2)
With Size 5:
p1.Position = Vector3.new(0, 0, 0)
p1.Size = Vector3.new(5, 5, 5)
p2.Position = Vector3.new(5, 5, 0)
p2.Size = Vector3.new(5, 5, 5)
That's because a FileMesh SpecialMesh ignores the part's Size; its rendered size is controlled by the mesh's own Scale property (e.g. m.Scale = Vector3.new(2, 2, 2)). If possible, use a MeshPart instead, which is resized through its Size property like any other part.

How to understand the origin in vtkImageData?

I don't know how to understand the origin in vtkImageData. The documentation says that the origin is the world coordinate of the point at index (0, 0, 0) in the image. However, I used vtkImageReslice to get two resliced images, and their origins are different but the images are the same. My code is:
from vtk.util.numpy_support import vtk_to_numpy, numpy_to_vtk
import vtk
import numpy as np
def vtkToNumpy(data):
    temp = vtk_to_numpy(data.GetPointData().GetScalars())
    dims = data.GetDimensions()
    numpy_data = temp.reshape(dims[2], dims[1], dims[0])
    numpy_data = numpy_data.transpose(2, 1, 0)
    return numpy_data

def numpyToVTK(data):
    flat_data_array = data.transpose(2, 1, 0).flatten()
    # Convert once, directly from the flat numpy array
    vtk_data = numpy_to_vtk(num_array=flat_data_array, deep=True, array_type=vtk.VTK_FLOAT)
    img = vtk.vtkImageData()
    img.GetPointData().SetScalars(vtk_data)
    img.SetDimensions(data.shape)
    return img
img = np.zeros(shape=[512,512,120])
img[0:300,0:100,:] = 1
vtkImg = numpyToVTK(img)
reslice = vtk.vtkImageReslice()
reslice.SetInputData(vtkImg)
reslice.SetAutoCropOutput(True)
reslice.SetOutputDimensionality(2)
reslice.SetInterpolationModeToCubic()
reslice.SetSlabNumberOfSlices(1)
reslice.SetOutputSpacing(1.0,1.0,1.0)
axialElement = [
    1, 0, 0, 256,
    0, 1, 0, 100,
    0, 0, 1, 100,
    0, 0, 0, 1
]
resliceAxes = vtk.vtkMatrix4x4()
resliceAxes.DeepCopy(axialElement)
reslice.SetResliceAxes(resliceAxes)
reslice.Update()
reslicedImg = reslice.GetOutput()
print('case 1', reslicedImg.GetOrigin())
reslicedNpImg = vtkToNumpy(reslicedImg)
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(reslicedNpImg[:,:,0])
plt.show()
For another axialElement:
axialElement = [
    1, 0, 0, 356,
    0, 1, 0, 100,
    0, 0, 1, 100,
    0, 0, 0, 1
]
These two axialElement matrices generate the same image, but the origins of the output images are different. So I am confused about the origin in vtkImageData.
When you read in the image, you are using numpy-to-VTK helper functions that create a vtkImageData object. Do these functions set the origin to what you expect?
img.SetOrigin() needs to be called with the expected origin. Otherwise the origin will be left at a default value (the same for your two images). Printing out the image data and checking that the spacing, origin, and direction are what you expect is important.
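To make the Origin semantics concrete: it is nothing more than the world-space position of the voxel at index (0, 0, 0). Two images holding identical pixel arrays but different origins are the same picture placed at different world positions, which is what the two reslice matrices above produce. A minimal sketch (the origin values below are illustrative, not the exact ones vtkImageReslice reports):

```python
def index_to_world(origin, spacing, index):
    """Map an (i, j, k) voxel index to world coordinates:
    world = origin + spacing * index, per axis."""
    return tuple(o + s * i for o, s, i in zip(origin, spacing, index))

spacing = (1.0, 1.0, 1.0)

# Same pixel data, different origins -> different world positions.
origin_a = (256.0, 100.0, 0.0)   # hypothetical origin from the first reslice matrix
origin_b = (356.0, 100.0, 0.0)   # hypothetical origin from the second reslice matrix

print(index_to_world(origin_a, spacing, (0, 0, 0)))  # (256.0, 100.0, 0.0)
print(index_to_world(origin_b, spacing, (0, 0, 0)))  # (356.0, 100.0, 0.0)
```

So the pixel arrays can compare equal while the images still occupy different regions of world space; the origin (together with spacing and direction) is the geometric metadata that distinguishes them.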

Running SDL/OpenGLES application on a specific DISPLAY in XServer

I am trying to port an application to an embedded system that I am trying to design. The embedded system is Raspberry Pi Zero W - based, and uses a custom Yocto build.
The application to be ported is written with SDL / OpenGLES to my understanding. I have a hard time understanding how to make a connection similar to the following depiction:
SDL APP -----> XServer ($DISPLAY) -------> Framebuffer /dev/fb1 ($FRAMEBUFFER)
System has two displays: One HDMI on /dev/fb0 and One TFT on /dev/fb1. I am trying to run the SDL application on TFT. The following are the steps I do:
First, start an XServer on DISPLAY=:1 that is connected to /dev/fb1:
FRAMEBUFFER=/dev/fb1 xinit /etc/X11/Xsession -- /usr/bin/Xorg :1 -br -pn -nolisten tcp -dpi 100
The first step seems like it's working. I can see LXDE booting up on my TFT screen. Checking the display, I get the correct display resolution:
~/projects# DISPLAY=:1 xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 240, current 320 x 240, maximum 320 x 240
default connected 320x240+0+0 0mm x 0mm
320x240 0.00*
Second, I would like to start the SDL-written application using x11, which I expect should make the application appear on the TFT. In order to do so, I try:
SDL_VIDEODRIVER=x11 SDL_WINDOWID=1 DISPLAY=:1 ./SDL_App
No matter which display number I choose, it starts on my HDMI display and not on the TFT. So now I am thinking the person who wrote the application hardcoded some things in the application code:
void init_ogl(void)
{
    int32_t success = 0;
    EGLBoolean result;
    EGLint num_config;
    static EGL_DISPMANX_WINDOW_T nativewindow;
    DISPMANX_ELEMENT_HANDLE_T dispman_element;
    DISPMANX_DISPLAY_HANDLE_T dispman_display;
    DISPMANX_UPDATE_HANDLE_T dispman_update;
    VC_DISPMANX_ALPHA_T alpha;
    VC_RECT_T dst_rect;
    VC_RECT_T src_rect;

    static const EGLint attribute_list[] =
    {
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, 8,
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_NONE
    };
    EGLConfig config;

    // Get an EGL display connection
    display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assert(display != EGL_NO_DISPLAY);

    // Initialize the EGL display connection
    result = eglInitialize(display, NULL, NULL);
    assert(EGL_FALSE != result);

    // Get an appropriate EGL frame buffer configuration
    result = eglChooseConfig(display, attribute_list, &config, 1, &num_config);
    assert(EGL_FALSE != result);

    // Create an EGL rendering context
    context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
    assert(context != EGL_NO_CONTEXT);

    // Create an EGL window surface
    success = graphics_get_display_size(0 /* LCD */, &screen_width, &screen_height);
    printf("Screen width= %d\n", screen_width);
    printf("Screen height= %d\n", screen_height);
    assert(success >= 0);

    int32_t zoom = screen_width / GAMEBOY_WIDTH;
    int32_t zoom2 = screen_height / GAMEBOY_HEIGHT;
    if (zoom2 < zoom)
        zoom = zoom2;

    int32_t display_width = GAMEBOY_WIDTH * zoom;
    int32_t display_height = GAMEBOY_HEIGHT * zoom;
    int32_t display_offset_x = (screen_width / 2) - (display_width / 2);
    int32_t display_offset_y = (screen_height / 2) - (display_height / 2);

    dst_rect.x = 0;
    dst_rect.y = 0;
    dst_rect.width = screen_width;
    dst_rect.height = screen_height;

    src_rect.x = 0;
    src_rect.y = 0;
    src_rect.width = screen_width << 16;
    src_rect.height = screen_height << 16;

    dispman_display = vc_dispmanx_display_open(0 /* LCD */);
    dispman_update = vc_dispmanx_update_start(0);

    alpha.flags = DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS;
    alpha.opacity = 255;
    alpha.mask = 0;

    dispman_element = vc_dispmanx_element_add(dispman_update, dispman_display,
        0 /*layer*/, &dst_rect, 0 /*src*/,
        &src_rect, DISPMANX_PROTECTION_NONE, &alpha, 0 /*clamp*/, DISPMANX_NO_ROTATE /*transform*/);

    nativewindow.element = dispman_element;
    nativewindow.width = screen_width;
    nativewindow.height = screen_height;
    vc_dispmanx_update_submit_sync(dispman_update);

    surface = eglCreateWindowSurface(display, config, &nativewindow, NULL);
    assert(surface != EGL_NO_SURFACE);

    // Connect the context to the surface
    result = eglMakeCurrent(display, surface, surface, context);
    assert(EGL_FALSE != result);
    eglSwapInterval(display, 1);

    glGenTextures(1, &theGBTexture);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, theGBTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *) NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0.0f, screen_width, screen_height, 0.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, screen_width, screen_height);

    quadVerts[0] = display_offset_x;
    quadVerts[1] = display_offset_y;
    quadVerts[2] = display_offset_x + display_width;
    quadVerts[3] = display_offset_y;
    quadVerts[4] = display_offset_x + display_width;
    quadVerts[5] = display_offset_y + display_height;
    quadVerts[6] = display_offset_x;
    quadVerts[7] = display_offset_y + display_height;

    glVertexPointer(2, GL_SHORT, 0, quadVerts);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, kQuadTex);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glClear(GL_COLOR_BUFFER_BIT);
}
void init_sdl(void)
{
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0)
    {
        Log("SDL Error Init: %s", SDL_GetError());
    }

    theWindow = SDL_CreateWindow("Gearboy", 0, 0, 0, 0, 0);
    if (theWindow == NULL)
    {
        Log("SDL Error Video: %s", SDL_GetError());
    }
    ...
}
At first glance, I spotted two lines: vc_dispmanx_display_open( 0 /* LCD */ ); and graphics_get_display_size( 0 /* LCD */ , &screen_width, &screen_height);. I tried changing the display parameter to 1, thinking that it refers to DISPLAY=:1, but it did not change anything. I added logging for the screen resolution, and I get 1920x1080, which is the resolution of the HDMI display. I think there must be something in the EGL portion of the code that I'm missing. What should I do now? Is my reasoning sound, or am I missing something?
Any requirements, please let me know. Any guidance regarding the issue is much appreciated.
EDIT: I saw that some people use the following, but on the Raspberry Pi Zero I cannot find EGL/eglvivante.h for the fb functions, so I am unable to compile it:
int fbnum = 1; // index of the framebuffer device /dev/fb1
EGLNativeDisplayType native_display = fbGetDisplayByIndex(fbnum);
EGLNativeWindowType native_window = fbCreateWindow(native_display, 0, 0, 0, 0);
display = eglGetDisplay(native_display);

OfflineAudioContext - adding gain to multiple channels

I am trying to manipulate the gain of individual channels of a buffer in an OfflineAudioContext.
ac and data are defined earlier, after loading the audio:
var source = ac.createBufferSource();
source.buffer = data;
var splitter = ac.createChannelSplitter(2);
source.connect(splitter);
var merger = ac.createChannelMerger(2);
var gainNode = ac.createGain();
gainNode.gain.value = 0.5;
splitter.connect(gainNode, 0);
splitter.connect(gainNode, 1);
gainNode.connect(merger, 0, 1);
//error occurs here
gainNode.connect(merger, 1, 0);
var dest = ac.createMediaStreamDestination();
merger.connect(dest);
Error: Failed to execute 'connect' on 'AudioNode': output index (1) exceeds number of outputs (1)
I was not assigning the input correctly:
splitter.connect(gainNode, 0);
splitter.connect(gainNode, 1);
gainNode.connect(merger, 0, 0);
gainNode.connect(merger, 0, 1);
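The underlying rule: in connect(destination, outputIndex, inputIndex), the second argument selects an output of the source node and the third an input of the destination. A GainNode has exactly one output, so the only valid output index is 0, which is why gainNode.connect(merger, 1, 0) threw. A toy model of that index check (a hypothetical mock, not the real Web Audio API):

```javascript
// Mock node: just enough of AudioNode.connect() to show the index validation.
class MockNode {
  constructor(numberOfOutputs, numberOfInputs) {
    this.numberOfOutputs = numberOfOutputs;
    this.numberOfInputs = numberOfInputs;
  }
  connect(dest, output = 0, input = 0) {
    if (output >= this.numberOfOutputs)
      throw new RangeError(`output index (${output}) exceeds number of outputs (${this.numberOfOutputs})`);
    if (input >= dest.numberOfInputs)
      throw new RangeError(`input index (${input}) exceeds number of inputs (${dest.numberOfInputs})`);
    return dest;
  }
}

const gain = new MockNode(1, 1);   // GainNode: one output, one input
const merger = new MockNode(1, 2); // ChannelMerger(2): one output, two inputs

gain.connect(merger, 0, 0);        // ok: output 0 -> merger input 0
gain.connect(merger, 0, 1);        // ok: output 0 -> merger input 1

let failed = false;
try {
  gain.connect(merger, 1, 0);      // invalid: a 1-output node has no output 1
} catch (e) {
  failed = true;
  console.log(e.message);          // mirrors the browser's error text
}
console.log(failed);               // true
```

In the corrected code above, both connect calls therefore use output index 0 and vary only the merger's input index.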