Trying to write 16-bit PNG

I'm capturing images from a camera, and I have two functions for saving a 16-bit(!) image, one in PNG and one in TIFF format.
Could you please explain why the PNG comes out as a very noisy image?
PNG function:
bool save_image_png(const char *file_name, const mono16bit& img)
{
[...]
    /* write header */
    if (setjmp(png_jmpbuf(png_ptr)))
        abort_("[write_png_file] Error during writing header");
    png_set_IHDR(png_ptr, info_ptr, width, height,
                 bit_depth, PNG_COLOR_TYPE_GRAY, PNG_INTERLACE_NONE,
                 PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
    png_write_info(png_ptr, info_ptr);
    /* write bytes */
    if (setjmp(png_jmpbuf(png_ptr)))
        abort_("[write_png_file] Error during writing bytes");
    row_pointers = (png_bytep*) malloc(sizeof(png_bytep) * height);
    for (y = 0; y < height; y++)
    {
        /* each row points straight into the 16-bit image buffer (2 bytes per pixel) */
        row_pointers[y] = (png_bytep)img.getBuffer() + y * width * 2;
    }
    png_write_image(png_ptr, row_pointers);
    /* end write */
[...]
}
and the TIFF function:
bool save_image(const char *fname, const mono16bit& img)
{
[...]
    for (y = 0; y < height; y++) {
        if ((err = TIFFWriteScanline(tif, (tdata_t)(img.getBuffer() + width*y), y, 0)) == -1)
            break;
    }
    TIFFClose(tif);
    if (err == -1) {
        fprintf(stderr, "Error writing to %s file\n", fname);
        return false;
    }
    return true;
//#endif //USE_LIBTIFF
}
Thank you!

png_set_swap does nothing. You have to actually flip the bytes in each pixel of the image yourself, as in the sketch below.
If you're on a PC and have SSSE3 or newer, a good way is the _mm_shuffle_epi8 instruction; build the permute vector with _mm_setr_epi8.
If you're on ARM and have NEON, use the vrev16q_u8 instruction instead.
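A minimal sketch of that byte swap in plain C with an SSSE3 fast path; the function name swap16_buffer and the in-place design are my own choices for illustration, not from the original post:

#include <stddef.h>
#include <stdint.h>
#ifdef __SSSE3__
#include <tmmintrin.h>
#endif

/* Swap the two bytes of every 16-bit pixel in place. */
static void swap16_buffer(uint16_t *pixels, size_t count)
{
    size_t i = 0;
#ifdef __SSSE3__
    /* Permute vector that exchanges the bytes within each 16-bit lane. */
    const __m128i mask = _mm_setr_epi8(1, 0, 3, 2, 5, 4, 7, 6,
                                       9, 8, 11, 10, 13, 12, 15, 14);
    for (; i + 8 <= count; i += 8) {
        __m128i v = _mm_loadu_si128((const __m128i *)(pixels + i));
        _mm_storeu_si128((__m128i *)(pixels + i), _mm_shuffle_epi8(v, mask));
    }
#endif
    /* Scalar tail, and portable fallback on other targets. */
    for (; i < count; i++)
        pixels[i] = (uint16_t)((pixels[i] >> 8) | (pixels[i] << 8));
}

You would call swap16_buffer on a copy of the image (width * height pixels) before handing the rows to png_write_image; swapping the original buffer in place would leave it byte-swapped for the TIFF writer.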

Perhaps you have a byte-order problem: PNG stores 16-bit samples most significant byte first (big-endian), so little-endian camera data written as-is comes out with the bytes of every pixel swapped, which looks like noise.
Try adding:
png_set_swap(png_ptr);
before saving the image
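In the write function above, that call belongs between png_write_info and png_write_image. A minimal sketch of the ordering, assuming the camera delivers little-endian 16-bit samples:

png_write_info(png_ptr, info_ptr);
png_set_swap(png_ptr); /* swap little-endian data to PNG's big-endian order on output */
png_write_image(png_ptr, row_pointers);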

Related

Is it possible to make changes to a source file in a Yocto recipe and have them take effect?

Please let me know if this question is invalid.
I have included http://cgit.openembedded.org/meta-openembedded/tree/meta-oe/recipes-crypto/botan/botan_2.14.0.bb?h=master in my Yocto build.
I'm just curious: is it possible for me to add a line of code in one of the library source files? For example, I'd like to add a print to stdout
in the function
void CBC_Decryption::finish(secure_vector<uint8_t>& buffer, size_t offset) located in
/home/kjlau/yocto/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/botan/2.14.0-r0/package/usr/src/debug/botan/2.14.0-r0/Botan-2.14.0/src/lib/modes/cbc/cbc.cpp
something as shown below:
void CBC_Decryption::finish(secure_vector<uint8_t>& buffer, size_t offset)
{
    std::cout << " CBC_Decryption::finish" << std::endl;
    BOTAN_STATE_CHECK(state().empty() == false);
    BOTAN_ASSERT(buffer.size() >= offset, "Offset is sane");
    const size_t sz = buffer.size() - offset;
    const size_t BS = block_size();
    if(sz == 0 || sz % BS)
        throw Decoding_Error(name() + ": Ciphertext not a multiple of block size");
    update(buffer, offset);
    const size_t pad_bytes = BS - padding().unpad(&buffer[buffer.size()-BS], BS);
    buffer.resize(buffer.size() - pad_bytes); // remove padding
    if(pad_bytes == 0 && padding().name() != "NoPadding")
    {
        throw Decoding_Error("Invalid CBC padding");
    }
}
If I can make these changes, how do I compile so they take effect? I tried bitbake botan, and bitbake on the application side, but I don't observe the changes taking effect.
Let me know if this is an invalid question, thanks.

How to modify a Texture's pixels from a compute shader in Unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the image returned is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I have to first rotate the image and then flip it sideways. The approach I am using is simply iterating over the pixels of the image and modifying them, which is very poor performance-wise. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);
Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); //rotate 180deg
image.SetPixels32(colors); //apply rotation
image = FlipTexture(image); //flip sideways

//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int width = original.width;
    int height = original.height;
    for (int col = 0; col < width; col++)
    {
        for (int row = 0; row < height; row++)
        {
            if (upSideDown)
            {
                flipped.SetPixel(row, (width - 1) - col, original.GetPixel(row, col));
            }
            else
            {
                flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
            }
        }
    }
    flipped.Apply();
    return flipped;
}
To improve the performance I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little. The following works for flipping a texture vertically (rotating 180° and then flipping horizontally amounts to the same thing as a single vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with shader.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Mirror the y coordinate to flip vertically; 1024 is the hardcoded
    // texture height here, and subtracting from height - 1 keeps the
    // index in range.
    uint2 flip = uint2(id.x, 1023 - id.y);
    Result[id.xy] = float4(ImageInput[flip].xyz, 1.0);
}
and called from any script:
public void FlipImage()
{
    // assumes fields: ComputeShader shader; Texture2D myTexture; Texture2D result;
    int kernelHandle = shader.FindKernel("CSMain");
    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();
    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);
    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
}
This takes an input Texture2D, flips it in the shader, and applies the result to a RenderTexture and then to a Texture2D, whatever you need.
Note that the image sizes (512x1024) are hardcoded in my example and should be replaced by whatever size you need (to pass a size into the shader, use shader.SetInt()).

How to write AVFrame out as JPEG image

I'm writing a program to extract images from a video stream. So far I have figured out how to seek to the correct frames, decode the video stream, and gather the relevant data into an AVFrame struct. I'm now trying to write the data out as a JPEG image, but my code isn't working. The code I got is from here: https://gist.github.com/RLovelett/67856c5bfdf5739944ed
int save_frame_as_jpeg(AVCodecContext *pCodecCtx, AVFrame *pFrame, int FrameNo) {
    AVCodec *jpegCodec = avcodec_find_encoder(AV_CODEC_ID_JPEG2000);
    if (!jpegCodec) {
        return -1;
    }
    AVCodecContext *jpegContext = avcodec_alloc_context3(jpegCodec);
    if (!jpegContext) {
        return -1;
    }
    jpegContext->pix_fmt = pCodecCtx->pix_fmt;
    jpegContext->height = pFrame->height;
    jpegContext->width = pFrame->width;
    if (avcodec_open2(jpegContext, jpegCodec, NULL) < 0) {
        return -1;
    }
    FILE *JPEGFile;
    char JPEGFName[256];
    AVPacket packet = {.data = NULL, .size = 0};
    av_init_packet(&packet);
    int gotFrame;
    if (avcodec_encode_video2(jpegContext, &packet, pFrame, &gotFrame) < 0) {
        return -1;
    }
    sprintf(JPEGFName, "dvr-%06d.jpg", FrameNo);
    JPEGFile = fopen(JPEGFName, "wb");
    fwrite(packet.data, 1, packet.size, JPEGFile);
    fclose(JPEGFile);
    av_free_packet(&packet);
    avcodec_close(jpegContext);
    return 0;
}
If I use that code, the first error I got was about the time_base on the AVCodecContext not being set. I set that to the time base of my video-decoding AVCodecContext. Now I'm getting another error:
[jpeg2000 @ 0x7fd6a4015200] dimensions not set
[jpeg2000 @ 0x7fd6a307c400] dimensions not set
[jpeg2000 @ 0x7fd6a5800000] dimensions not set
[jpeg2000 @ 0x7fd6a307ca00] dimensions not set
[jpeg2000 @ 0x7fd6a3092400] dimensions not set
and the images still aren't being written. From that GitHub Gist, one commenter claimed that the metadata isn't being written to the JPEG image, but how should I write this metadata? I did set the width and height of the encoding context, so I'm not sure why it claims the dimensions are not set.
JPEG 2000 isn't JPEG. To encode JPEG images, use AV_CODEC_ID_MJPEG. MJPEG stands for "motion JPEG", which is the usual name for a video stream made up of a sequence of JPEG pictures.
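A minimal sketch of the changed lines, assuming the decoded frame is in (or has been converted with libswscale to) a pixel format the MJPEG encoder accepts; AV_PIX_FMT_YUVJ420P and the time_base value here are illustrative choices, not taken from the original code:

AVCodec *jpegCodec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
[...]
jpegContext->pix_fmt = AV_PIX_FMT_YUVJ420P; /* full-range 4:2:0, the classic JPEG layout */
jpegContext->height = pFrame->height;
jpegContext->width = pFrame->width;
jpegContext->time_base = (AVRational){1, 25}; /* the encoder requires a time base, even for a single image */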

How to change orientation of captured byte[] frames through onPreviewFrame callback?

I have searched for this question a lot, but have never seen any satisfactory answers, so now I have a last hope here.
I have an onPreviewFrame callback set up, which gives a byte[] of raw frames in a supported preview format (NV21 with H.264 encoded type).
Now, the problem is that the callback always gives byte[] frames in a fixed orientation; whenever the device rotates, this is not reflected in the captured byte[] frames. I have tried setDisplayOrientation and setRotation, but these APIs only affect the preview being displayed, not the captured byte[] frames.
The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
Finally: is there a way, at any API level, to change the orientation of the byte[] frames?
One possible way, if you don't care about the format, is to use the YuvImage class to get a JPEG buffer, then use this buffer to create a Bitmap and rotate it to the corresponding angle. Something like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size previewSize = camera.getParameters().getPreviewSize();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] rawImage = null;
    // Decode image from the retrieved buffer to JPEG
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), YOUR_JPEG_COMPRESSION, baos);
    rawImage = baos.toByteArray();
    // This is the same image as the preview but in JPEG and not rotated
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
    ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();
    // Rotate the Bitmap
    Matrix matrix = new Matrix();
    matrix.postRotate(YOUR_DEFAULT_ROTATION);
    // We rotate the same Bitmap
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, previewSize.width, previewSize.height, matrix, false);
    // We dump the rotated Bitmap to the stream
    bitmap.compress(CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
    rawImage = rotatedStream.toByteArray();
    // Do something with this byte array
}
I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method:
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;
    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }
    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }
            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);
                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);
                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);
                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);
                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }
        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method, YUV_NV21_TO_BGR, which I found from this link.
Basically this method is used to resolve what I call "the Green Devil problem on Android". You can see other Android devs facing the same problem in other SO threads. Before adding the YUV_NV21_TO_BGR method, when I just took the transpose of the YuvIplImage, and more importantly a combination of transpose and flip (with or without resizing), there was greenish output in the resulting video. The YUV_NV21_TO_BGR method saved the day. Thanks to @David Han from the above Google Groups thread.
Also, you should know that all this processing (transpose, flip, and resize) in onPreviewFrame takes considerable time, which causes a very serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the FPS of the recorded video dropped from 30 fps to 3 frames/sec.
I would advise against this approach. Instead, you can do post-recording processing (transpose, flip, and resize) of your video file using JavaCV in an AsyncTask. Hope this helps.

Can't get PNG transparency to work in JOGL

I'm trying to write text using transparent PNGs in JOGL, but I can't for the life of me figure out how to make it work. I've been everywhere on the internet, but proper documentation for JOGL is scarce.
Here's how I load the texture:
private void loadTEXTure() //Har har, get it?
{
    File file = new File(fontMap);
    try
    {
        TextureData data = TextureIO.newTextureData(file, GL.GL_RGBA, GL.GL_SRGB8_ALPHA8, false, TextureIO.PNG);
        textTexture = TextureIO.newTexture(data);
    }
    catch (GLException e) { e.printStackTrace(); }
    catch (IOException e) { e.printStackTrace(); }
}
And this is how the PNG is displayed:
public void displayCharacter(GL gl, int[] textureBounds, int x1, int y1, int x2, int y2)
{
    float texCordsx1 = ((float) textureBounds[0]) / ((float) textTexture.getWidth());
    float texCordsy1 = ((float) textureBounds[1]) / ((float) textTexture.getHeight());
    float texCordsx2 = ((float) textureBounds[2]) / ((float) textTexture.getWidth());
    float texCordsy2 = ((float) textureBounds[3]) / ((float) textTexture.getHeight());
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
    textTexture.enable();
    textTexture.bind();
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(texCordsx1, texCordsy1);
    gl.glVertex2f(x1, y1);
    gl.glTexCoord2f(texCordsx1, texCordsy2);
    gl.glVertex2f(x1, y2);
    gl.glTexCoord2f(texCordsx2, texCordsy2);
    gl.glVertex2f(x2, y2);
    gl.glTexCoord2f(texCordsx2, texCordsy1);
    gl.glVertex2f(x2, y1);
    gl.glEnd();
    textTexture.disable();
}
Any help would be greatly appreciated!
Your blending configuration seems to be fine; it is exactly like mine, which actually works. However, I think the error lies in the newTextureData(...) call. Your code passes a File as the first argument, but that overload of newTextureData() doesn't accept File objects; it expects a GLProfile as the first argument, as described in the documentation: http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/com/jogamp/opengl/util/texture/TextureIO.html
I suggest you change these lines:
TextureData data = TextureIO.newTextureData(file, GL.GL_RGBA, GL.GL_SRGB8_ALPHA8, false, TextureIO.PNG);
textTexture = TextureIO.newTexture(data);
to
textTexture = TextureIO.newTexture(file, mipmap);
or
textTexture = TextureIO.newTexture(cl.getResource("/my/file/path/myimage.png"), false, null);
instead. If your file variable is correct, it should work.
For further JOGL readings you should consider these tutorials: http://www3.ntu.edu.sg/home/ehchua/programming/opengl/JOGL2.0.html
For JOGL documentation you should consider reading: http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc