There was an interface (called setColorKey) for controlling the framebuffer, which made a specific color transparent. Is there something similar for a Wayland client?
Transparency must be handled by the client, depending on the pixel format used.
So the behavior you're asking for depends on how you generate your buffers.
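If you want to emulate a color key, a minimal sketch (assuming an SHM buffer created with WL_SHM_FORMAT_ARGB8888 and mapped into the client; the helper name is made up) is to clear the matching pixels yourself before attaching the buffer:
#include <stdint.h>

/* Hypothetical helper: make every pixel that matches `key` (0xRRGGBB)
   fully transparent. Wayland ARGB8888 buffers are treated as
   premultiplied alpha, so a transparent pixel is all zeros. */
static void apply_color_key(uint32_t *pixels, int width, int height,
                            int stride_px, uint32_t key)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint32_t *px = &pixels[y * stride_px + x];
            if ((*px & 0x00FFFFFF) == key)
                *px = 0x00000000; /* fully transparent */
        }
    }
}
The compositor then blends the surface according to the per-pixel alpha; there is no protocol-level color key to set.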
When creating a diagram in Enterprise Architect, it is easy to modify all blocks within the diagram, and even the background color of the page itself. But I completely fail to find the setting that changes the background color of the actual frame itself. Is it really that hidden?
Side note: when changing the theme, this color changes as well, so there must be a setting somewhere...
UPDATE: when selecting the diagram, I would expect the background color to be modifiable from the toolbox; I can change the frame color, text color, and line width, but not the background color. Probably a bug...
I was mistaken and thought these were boundaries. But they are diagram frames, and those do not have coloring. I thought about applying a different theme to the underlying diagram, but that is not inherited by the frame. I tried using a colored boundary as a background, but that will not fill the whole frame:
So basically: you're out of luck here. You might send a feature request. But you need a veeeery long breath.
In Unity's Camera component, there is a property called Clear Flags which lets you choose from four options: Skybox, Solid Color, Depth Only, and Don't Clear.
As the documentation says:
Don’t clear
This mode does not clear either the color or the depth buffer. The result is that each frame is drawn over the next, resulting in a smear-looking effect. This isn’t typically used in games, and would more likely be used with a custom shader.
Note that on some GPUs (mostly mobile GPUs), not clearing the screen might result in the contents of it being undefined in the next frame. On some systems, the screen may contain the previous frame image, a solid black screen, or random colored pixels.
"This isn't typically used in games and would more likely be used with a custom shader"
So my question is: how do you use it with a custom shader, and what effects can be achieved by using it?
Has anyone ever used it, or does anyone have a good explanation of the basic concept?
Thanks
An idea would be those enemy encounter effects in Final Fantasy games. Look at the top edge of this gif to see the smearing effects of previous frames. This is probably combined with blur/rotation.
The question in this thread is a bit old; however, I had this problem and solved it.
I've made a screen image effect that reproduces this effect; you can see it here:
https://github.com/falconmick/ClearFlagsMobile
Hope this helps!
I just found something on the site iphoneexamples.com.
Looking at "Display images", I found something new to me.
myImage.opaque = YES; // explicitly opaque for performance
Could someone explain it to me, please? For which kinds (or use cases) of images does it work, and when does it not?
Would be great to know. Thanks for your time...
The iPhone GPU is a tile-based renderer. If an overlaying layer is completely opaque over an entire tile, the GPU can ignore setting up and processing any graphics commands related to the layer underneath for that particular tile, in addition to not having to do compositing of the pixels in that tile.
If your image doesn't cover a complete tile, the GPU may still have to process multiple layers for that tile. The size of a tile is implementation dependent, but tiny images are far less likely to cover a tile. Huge images that cover multiple tiles will show the greatest advantage from being opaque.
From the View Programming Guide for iOS:
Declare Views as Opaque Whenever Possible
UIKit uses the opaque property of each view to determine whether the view can optimize compositing operations. Setting the value of this property to YES for a custom view tells UIKit that it does not need to render any content behind your view. Less rendering can lead to increased performance for your drawing code and is generally encouraged. Of course, if you set the opaque property to YES, your view must fill its bounds rectangle completely with fully opaque content.
hotpaw2 points out the behind-the-scenes reason for this, which can be found in the OpenGL ES Programming Guide for iOS:
Another advantage of deferred rendering is that it allows the GPU to perform hidden surface removal before fragments are processed. Pixels that are not visible are discarded without sampling textures or performing fragment processing, significantly reducing the calculations that the GPU must perform to render the tile. To gain the most benefit from this feature, draw as much of the frame with opaque content as possible and minimize use of blending, alpha testing, and the discard instruction in GLSL shaders. Because the hardware performs hidden surface removal, it is not necessary for your application to sort primitives from front to back.
You get better performance when a view or layer is opaque than when it's not. If it's not opaque, the graphics system has to composite that layer with the layers below to produce the final image. If it is opaque, then it's just a matter of copying the pixels to the frame buffer.
A slightly tricky issue to be aware of is that UIImageView will reset the opaque property to FALSE any time the image property is changed to a new UIImage. You could explicitly check in your code and set the opaque property after any change to the image property, or you could subclass UIImageView and provide your own implementation of setImage that sets opaque after calling the super implementation. I only found out about this by accident.
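For example, a minimal sketch of that subclass (the class name is made up; only do this when your images genuinely have no transparent pixels):
#import <UIKit/UIKit.h>

// Hypothetical subclass that restores the opaque flag after every image change.
@interface OpaqueImageView : UIImageView
@end

@implementation OpaqueImageView
- (void)setImage:(UIImage *)image {
    [super setImage:image];  // let UIImageView update as usual
    self.opaque = YES;       // re-assert the flag it may have reset
}
@end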
I have an iPhone app that does image manipulation by blending two UIImage objects via CoreGraphics, specifically CGContextSetBlendMode. I am currently researching porting it to Android. I've gone through the process of combining two Bitmap objects on Android using PorterDuff modes. However, I want much more complicated compositing. For example, I'm using kCGBlendModeHardLight for many blends:
Either multiplies or screens colors, depending on the source image sample color. If the source image sample color is lighter than 50% gray, the background is lightened, similar to screening. If the source image sample color is darker than 50% gray, the background is darkened, similar to multiplying. If the source image sample color is equal to 50% gray, the source image is not changed. Image samples that are equal to pure black or pure white result in pure black or white. The overall effect is similar to what you’d achieve by shining a harsh spotlight on the source image. Use this to add highlights to a scene.
But I don't know of any way (if it's even possible) to emulate this via Porter-Duff. Does Android not support better image manipulation algorithms out of the box? Is it possible to use Porter-Duff in some way to emulate more advanced blend modes?
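For context, the CoreGraphics side currently looks roughly like this (a sketch; baseImage and overlayImage are placeholders for my actual images):
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
[baseImage drawInRect:rect];                        // destination layer
CGContextSetBlendMode(ctx, kCGBlendModeHardLight);  // applies to the next draw
[overlayImage drawInRect:rect];                     // source layer, hard-light blended
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();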
In addition to the 12 Porter-Duff blending equations, Android supports Lighten, Darken, Multiply, Screen and soon Overlay. Unfortunately this means HardLight is not available and you would have to implement it yourself.
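If you do roll it yourself, the per-channel math in the description quoted above boils down to something like this (a sketch with 0–255 components; the same arithmetic translates directly into a per-pixel loop on Android):
#include <stdint.h>

/* Sketch of the hard-light equation: src is the overlay sample,
   dst is the background sample, both 0..255, one color channel. */
static inline uint8_t hard_light(uint8_t src, uint8_t dst)
{
    if (src < 128)                                  /* darker than 50% gray: multiply */
        return (uint8_t)((2 * src * dst) / 255);
    /* lighter than 50% gray: screen */
    return (uint8_t)(255 - (2 * (255 - src) * (255 - dst)) / 255);
}
Run that over the R, G, and B channels of each pixel (leaving alpha alone) to get the HardLight result that Porter-Duff alone cannot express.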
Being a complete noob in iPhone development, I was wondering what the best way would be to define regions in an image (for interaction). So far I've got two ideas:
use CGPath to basically draw the areas I'm interested in, but I can see that quickly becoming tedious on complex graphics;
use a color-coded layer with regions containing different RGB values and return those as my regions.
Are those sensible approaches?
It depends on what you mean by interaction and whether you want the regions to be visible to the user.
A simple approach would be to just add UIButtons above your image. They can be transparent and any (rectangular) size you like. Or they can contain images or colors so that they are visible to the user.
If you need arbitrary shapes then this solution won't be useful to you.
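For example, a minimal sketch of that button approach (the frame values are placeholders, and handleRegionTap: is an assumed action method on your view controller):
UIButton *region = [UIButton buttonWithType:UIButtonTypeCustom];
region.frame = CGRectMake(40.0, 80.0, 100.0, 60.0);   // rectangle over the area of interest
region.backgroundColor = [UIColor clearColor];        // invisible but still hit-testable
[region addTarget:self
           action:@selector(handleRegionTap:)
 forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:region];                        // placed above the image view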