sws_scale PAL8 to RGBA returns an image that isn't transparent - PNG

I'm using sws_scale to convert images and videos from every format to RGBA, using an SWSContext created thus:
auto context = sws_getContext(width, height, pix_fmt, width, height, AV_PIX_FMT_RGBA,
SWS_BICUBIC, nullptr, nullptr, nullptr);
but when converting a PNG with color type Palette (pix_fmt = AV_PIX_FMT_PAL8), sws_scale doesn't seem to take the transparent palette entry into account, and the resulting RGBA raster isn't transparent. Is this a bug in sws_scale, or am I making a wrong assumption about the result?
palette image:
https://drive.google.com/file/d/1CIPkYeHElNSsH2TAGMmr0kfHxOkYiZTK/view?usp=sharing
RGBA image:
https://drive.google.com/open?id=1GMlC7RxJGLy9lpyKLg2RWfup1nJh-JFc

I was making a wrong assumption: sws_scale doesn't promise to return premultiplied-alpha colors, so the values I was getting were r:255, g:255, b:255, a:0 - a fully transparent straight-alpha pixel, not opaque white.
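If a downstream renderer expects premultiplied alpha, the straight-alpha output of sws_scale can be converted after the fact. A minimal sketch in C, assuming rgba points at the destination plane that was passed to sws_scale (the function and variable names are illustrative):

#include <stdint.h>

/* Multiply each color channel by its alpha, rounding to nearest. */
static void premultiply_rgba(uint8_t *rgba, int width, int height, int stride)
{
    for (int y = 0; y < height; y++) {
        uint8_t *p = rgba + y * stride;
        for (int x = 0; x < width; x++, p += 4) {
            uint8_t a = p[3];
            p[0] = (uint8_t)((p[0] * a + 127) / 255);
            p[1] = (uint8_t)((p[1] * a + 127) / 255);
            p[2] = (uint8_t)((p[2] * a + 127) / 255);
        }
    }
}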


How can I convert a hex color code to a color name or RGB value?

I have a hex color code that is passed to another screen, and on that screen I want to convert it to a color name or to an RGB value. Is that possible? I need some help here.
This is my hex color code: #ff373334
The package color_models will help you convert a hex color to CMYK, HSI, HSL, HSP, HSB, LAB, Oklab, RGB, and XYZ.
It's straightforward to use, and if you need a custom implementation, you can check the package's repository to see how each model works to convert your hex color to RGB.
It's simple:
Color(0xFF373334)
should work, or you can read the individual channels:
Color(0xFF373334).red
Color(0xFF373334).green
Color(0xFF373334).blue
You can use it like this.
hexStringToColor(String hexColor) {
  hexColor = hexColor.toUpperCase().replaceAll("#", "");
  if (hexColor.length == 6) {
    hexColor = "FF" + hexColor;
  }
  return Color(int.parse(hexColor, radix: 16));
}
Now you can call this function anywhere like this.
Color color = hexStringToColor(hexString);
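For the color-name half of the question, one option is to keep a small table of named colors and pick the nearest one by RGB distance. A rough sketch in Dart; the table and the helper name closestColorName are assumptions, not a standard API:

import 'dart:ui';

// Hand-picked table of named colors; extend as needed.
const Map<String, Color> namedColors = {
  'black': Color(0xFF000000),
  'white': Color(0xFFFFFFFF),
  'red': Color(0xFFFF0000),
  'green': Color(0xFF00FF00),
  'blue': Color(0xFF0000FF),
  'grey': Color(0xFF808080),
};

String closestColorName(Color c) {
  String best = namedColors.keys.first;
  int bestDist = 1 << 30;
  namedColors.forEach((name, named) {
    // Squared distance in RGB space is good enough for a rough lookup.
    final dr = c.red - named.red;
    final dg = c.green - named.green;
    final db = c.blue - named.blue;
    final dist = dr * dr + dg * dg + db * db;
    if (dist < bestDist) {
      bestDist = dist;
      best = name;
    }
  });
  return best;
}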

CGContext.init returns nil

I'm making a simple graphics app with moving objects and some animation for macOS.
What I'm trying to do is create a bitmap in memory that is rendered to the actual context at the end of a frame (I'm not sure whether Core Graphics is the proper way to do this):
let tempBitmap = CGContext(data: nil,
                           width: width,
                           height: height,
                           bitsPerComponent: 8,
                           bytesPerRow: 0,
                           space: CGColorSpaceCreateDeviceRGB(),
                           bitmapInfo: CGImageAlphaInfo.none.rawValue)
but it always returns nil.
What is the proper way to create an in-memory bitmap?
That combination is not supported: you can't have RGB color with no alpha. The list of supported pixel formats is in the Quartz 2D Programming Guide.
You can ignore the alpha channel ("none skip first" or "none skip last"), but you can't work directly on 24-bit (3x8) packed pixels.
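For example, keeping everything else the same but switching to a supported alpha layout should return a non-nil context (a minimal sketch; premultipliedLast is one of several layouts that work with an 8-bit RGB color space):

import CoreGraphics

let tempBitmap = CGContext(data: nil,
                           width: width,
                           height: height,
                           bitsPerComponent: 8,
                           bytesPerRow: 0,          // 0 lets Quartz choose the row stride
                           space: CGColorSpaceCreateDeviceRGB(),
                           bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)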

Unity is returning the material color slightly wrong

I have a mini task in my game where you need to click trophies to change the color of the wood on them. I have two arrays of colors: one contains all possible colors, and the other contains the four colors that form the correct answer.
I've double-checked that the colors are equal between the two arrays. For example, the purple in the Colors array has exactly the same r, g, b & a values as the purple in the Right Order array.
To check whether the trophies have the correct color, I loop through them and grab their material color, then check that color against the Right Order array, but it's not quite working. For example, when my first trophy is purple it should be correct, but it isn't, because for some reason Unity returns a slightly different material color than expected.
Hope somebody knows why this is happening.
When you say they are exactly the same color, I assume you are referring to the RGB values shown in the color inspector, which are not precise values.
I don't know what is causing the difference in values, but you can write an extension method that compares the channels after rounding them to the nearest integer.
public static class Extensions
{
    public static bool CompareRGB(this Color thisColor, Color otherColor)
    {
        return
            Mathf.RoundToInt(thisColor.r * 255) == Mathf.RoundToInt(otherColor.r * 255) &&
            Mathf.RoundToInt(thisColor.g * 255) == Mathf.RoundToInt(otherColor.g * 255) &&
            Mathf.RoundToInt(thisColor.b * 255) == Mathf.RoundToInt(otherColor.b * 255);
    }
}
usage:
Color red = Color.red;
red.CompareRGB(Color.red);   // true
red.CompareRGB(Color.green); // false
Hope this helps.
I would use a palette. This is simply an array of all the possible colors you use (it sounds like you already have this). For each trophy, record the index into this array at the same time you assign the color to the renderer, and do the same for each button.
Then you can simply compare the palette index values (plain integers) to see whether the colors match.
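A rough sketch of that idea in C# (the component and field names here are made up for illustration):

using UnityEngine;

public class Trophy : MonoBehaviour
{
    // Index into the shared palette, recorded when the color is assigned.
    public int paletteIndex;

    public void SetColor(Color[] palette, int index)
    {
        paletteIndex = index;
        GetComponent<Renderer>().material.color = palette[index];
    }
}

// Checking the answer is then an integer comparison, immune to float drift:
// bool correct = trophies[i].paletteIndex == rightOrderIndices[i];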

Issue reading screen pixels for color detection

I am making a game with freehand drawing and sprites that animate when they pass over the drawing. So I need color detection: an event should be raised when a sprite encounters a change in color at the screen position it passes over. For this I am using glReadPixels() with RGBA_8888 on GLES20 and reading the value back as red, green and blue, but every time it returns 0 for everything. I tried changing the pixel format and various other trial-and-error fixes, but with no success. Can you please help?
My code:
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
int mTemp = 0;
GLES20.glReadPixels(100, 100, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
byte b[] = new byte[4];
PixelBuffer.get(b);
Log.e("COLOR", "R:" + PixelBuffer.get(0) + PixelBuffer.get(1) + PixelBuffer.get(2));
Result
Logcat : COLOR R: 000.
I tried using a non-black background with red at the screen coordinate provided.
Thanks in advance
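One common cause of all-zero reads is calling glReadPixels off the GL thread, or before the frame has been drawn. A minimal sketch, assuming a GLSurfaceView.Renderer; the readRequested flag and drawScene() are placeholders:

@Override
public void onDrawFrame(GL10 gl) {
    drawScene(); // issue all draw calls first

    if (readRequested) {
        ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
        // Read after drawing, on the GL thread. Note that the y coordinate
        // is measured from the bottom of the surface, not the top.
        GLES20.glReadPixels(100, 100, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
        Log.d("COLOR", "R:" + (pixel.get(0) & 0xFF)
                + " G:" + (pixel.get(1) & 0xFF)
                + " B:" + (pixel.get(2) & 0xFF));
        readRequested = false;
    }
}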

Memory Map UIImage

I have a UIImage and I would like to put its data in a file and then use a memory-mapped file to save some memory. Apparently the UIImage data is private and it's not possible to access it. Do you have any suggestions for solving this?
Thanks!
If you want to memory map the encoded image data, then mmap a file and provide a reference to the data by passing a CGDataProviderRef to CGImageCreate.
mapped = mmap( NULL , length , ... );
provider = CGDataProviderCreateWithData( mapped , mapped , length , munmap_wrapper );
image = CGImageCreate( ... , provider , ... );
uiimage = [UIImage imageWithCGImage:image];
...
Where munmap_wrapper is something like this:
// conform to CGDataProviderReleaseDataCallback
void munmap_wrapper( void *p , const void *cp , size_t l ) { munmap( p , l ); }
If you want to memory map the actual pixels, instead of the encoded source data, you would do something similar with a CGBitmapContext. You would also create the provider and image so the image refers to the same pixels as the context. Whatever is drawn in the context will be the content of the image. The width, height, color space and other parameters should be identical for the context and image.
context = CGBitmapContextCreate( mapped , ... );
In this case, length will be at least bytes_per_row*height bytes so the file must be at least that large.
If you have an existing image and you want to mmap the pixels, then create the bitmap context with the size and color space of your image and use CGContextDrawImage to draw the image in the context.
You did not say where your image comes from, but if you are creating it at runtime it would be more efficient to create it directly in the bitmap context. Any image creation requires a bitmap context behind the scenes, so it might as well be the memory-mapped one from the start.
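Putting the second approach together, a rough sketch in C (error handling omitted; the helper name CreateMappedImage and the RGBA layout are assumptions, not part of the original answer):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <CoreGraphics/CoreGraphics.h>

// Same release callback as above: unmap when the image is destroyed.
static void munmap_wrapper(void *info, const void *data, size_t size) {
    munmap(info, size);
}

// Create a CGImage whose 8-bit RGBA pixels live in a memory-mapped file.
CGImageRef CreateMappedImage(const char *path, size_t width, size_t height) {
    size_t bytesPerRow = width * 4;
    size_t length = bytesPerRow * height;   // the file must be at least this large

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    ftruncate(fd, (off_t)length);
    void *mapped = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                              // the mapping keeps the pages valid

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    // Anything drawn into this context lands in the mapped file...
    CGContextRef context = CGBitmapContextCreate(mapped, width, height, 8,
                                                 bytesPerRow, space,
                                                 kCGImageAlphaPremultipliedLast);
    // e.g. CGContextDrawImage(context, CGRectMake(0, 0, width, height), existingImage);

    // ...and the image reads the very same pixels.
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(mapped, mapped, length, munmap_wrapper);
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, space,
                                     kCGImageAlphaPremultipliedLast, provider,
                                     NULL, false, kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGContextRelease(context);
    CGColorSpaceRelease(space);
    return image;
}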