Can't get PNG transparency to work in JOGL

I'm trying to render text using transparent PNGs in JOGL, but I can't for the life of me figure out how to make it work. I've been everywhere on the internet, but proper documentation for JOGL is scarce.
Here's how I load the texture:
private void loadTEXTure() // Har har, get it?
{
    File file = new File(fontMap);
    try
    {
        TextureData data = TextureIO.newTextureData(file, GL.GL_RGBA, GL.GL_SRGB8_ALPHA8, false, TextureIO.PNG);
        textTexture = TextureIO.newTexture(data);
    }
    catch (GLException e) { e.printStackTrace(); }
    catch (IOException e) { e.printStackTrace(); }
}
And this is how the png is displayed:
public void displayCharacter(GL gl, int[] textureBounds, int x1, int y1, int x2, int y2)
{
    float texCordsx1 = ((float) textureBounds[0]) / ((float) textTexture.getWidth());
    float texCordsy1 = ((float) textureBounds[1]) / ((float) textTexture.getHeight());
    float texCordsx2 = ((float) textureBounds[2]) / ((float) textTexture.getWidth());
    float texCordsy2 = ((float) textureBounds[3]) / ((float) textTexture.getHeight());
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
    textTexture.enable();
    textTexture.bind();
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(texCordsx1, texCordsy1);
    gl.glVertex2f(x1, y1);
    gl.glTexCoord2f(texCordsx1, texCordsy2);
    gl.glVertex2f(x1, y2);
    gl.glTexCoord2f(texCordsx2, texCordsy2);
    gl.glVertex2f(x2, y2);
    gl.glTexCoord2f(texCordsx2, texCordsy1);
    gl.glVertex2f(x2, y1);
    gl.glEnd();
    textTexture.disable();
}
Any help would be greatly appreciated!

Your blending configuration seems fine; it is exactly like mine, which works. I think the error lies in the newTextureData(...) call. Your code passes a File as the first argument, but the newTextureData() method doesn't accept File objects there; it expects a GLProfile as the first argument, as described in the documentation: http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/com/jogamp/opengl/util/texture/TextureIO.html
I suggest you change these lines:
TextureData data = TextureIO.newTextureData(file, GL.GL_RGBA, GL.GL_SRGB8_ALPHA8, false, TextureIO.PNG);
textTexture = TextureIO.newTexture(data);
to
textTexture = TextureIO.newTexture(file, mipmap);
or
textTexture = TextureIO.newTexture(cl.getResource("/my/file/path/myimage.png"), false, null);
instead. If your file variable is correct, it should work.
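For reference, here is a minimal sketch of the suggested loading code (JOGL 2; the field names fontMap and textTexture are taken from the question, and hard-coding the mipmap flag to false is my assumption):

private void loadTEXTure()
{
    File file = new File(fontMap);
    try
    {
        // newTexture(File, boolean) infers the image format from the file
        // suffix; the boolean controls mipmap generation.
        textTexture = TextureIO.newTexture(file, false);
    }
    catch (GLException e) { e.printStackTrace(); }
    catch (IOException e) { e.printStackTrace(); }
}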
For further JOGL reading, consider these tutorials: http://www3.ntu.edu.sg/home/ehchua/programming/opengl/JOGL2.0.html
For JOGL documentation, see: http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc


How can I convert an FBX embedded texture to TArray<uint8> data?

I want to make a function that returns the textures embedded in an FBX file as TArray<uint8> or UTexture2D, using the Assimp library. This is my code:
void UAIScene::GetTexture2D(bool& IsValid, UTexture2D*& ReturnedTexture)
{
    IsValid = false;
    UTexture2D* LoadedT2D = NULL;
    TArray<uint8> RawFileData;
    IImageWrapperModule& ImageWrapperModule = FModuleManager::LoadModuleChecked<IImageWrapperModule>(FName("ImageWrapper"));
    aiString texture_file;
    scene->mMaterials[0]->Get(AI_MATKEY_TEXTURE(aiTextureType_DIFFUSE, 0), texture_file);
    if (const aiTexture* texture = scene->GetEmbeddedTexture(texture_file.C_Str())) {
        // Here is the problem
        RawFileData = *(TArray<uint8>*) & (texture->pcData[0]);
        TSharedPtr<IImageWrapper> ImageWrapper = ImageWrapperModule.CreateImageWrapper(ImageWrapperModule.DetectImageFormat(RawFileData.GetData(), RawFileData.Num()));
        if (ImageWrapper.IsValid() && ImageWrapper->SetCompressed(RawFileData.GetData(), RawFileData.Num()))
        {
            TArray<uint8> UncompressedBRGA;
            if (ImageWrapper->GetRaw(ERGBFormat::BGRA, (int32)8, UncompressedBRGA))
            {
                LoadedT2D = UTexture2D::CreateTransient(ImageWrapper->GetWidth(), ImageWrapper->GetHeight(), PF_B8G8R8A8);
                if (!LoadedT2D)
                {
                    IsValid = false;
                }
                void* TextureData = LoadedT2D->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
                FMemory::Memcpy(TextureData, UncompressedBRGA.GetData(), UncompressedBRGA.Num());
                LoadedT2D->PlatformData->Mips[0].BulkData.Unlock();
                LoadedT2D->UpdateResource();
            }
        }
        ReturnedTexture = LoadedT2D;
        IsValid = true;
    }
    else {
    }
}
But Assimp's raw image data doesn't cast to TArray. What do I have to fix to convert Assimp's aiTexel data to a TArray?
This comment is my attempt:
//TextureBinary = *Cast<TArray<uint8>>(texture->pcData);
If I read the documentation for TArray right, you need to copy the data with memcpy. You need the starting address of the embedded texture, the start pointer of the TArray instance (via GetData() or something similar), and the size of the embedded texture data to be copied:
memcpy(RawFileData.GetData(), texture->pcData, texSize);
Just make sure that the buffer-size for the TArray instance is big enough. Hope that helps.
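A minimal sketch of that copy, under the assumption that the embedded texture is compressed (for those, Assimp stores the total byte count in mWidth and sets mHeight to 0, with pcData pointing at the raw file bytes):

if (const aiTexture* texture = scene->GetEmbeddedTexture(texture_file.C_Str()))
{
    if (texture->mHeight == 0) // compressed: mWidth is the total byte count
    {
        const int32 texSize = static_cast<int32>(texture->mWidth);
        RawFileData.SetNumUninitialized(texSize); // size the buffer first
        FMemory::Memcpy(RawFileData.GetData(), texture->pcData, texSize);
        // RawFileData now holds the raw PNG/JPG bytes for IImageWrapper.
    }
}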

AR camera distance measurement

I have a question about AR (Augmented Reality).
I want to know how to show the distance (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How do I write the code?
I tried finding some related code (below), but it seems to just print the distance between two objects, nothing about the "AR camera"...
var other : Transform;
if (other) {
    var dist = Vector3.Distance(other.position, transform.position);
    print("Distance to other: " + dist);
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the depth texture's center and works both with ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in inspector
    [SerializeField] AROcclusionManager manager = null;

    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value if ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }
        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }
        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }
            yield return null;
        }
    }

    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}
I'm working on AR depth images as well, and the basic idea is:
1. Acquire an image using the API; normally it's in Depth16 format.
2. Read the image into a short buffer, since Depth16 means each pixel is 16 bits.
3. Get the distance value, which is stored in the lower 13 bits of each short: (shortValue & 0x1FFF) gives the distance for that pixel, normally in millimeters.
By doing this for all the pixels, you can create a depth image and store it as JPG or another format. Here's sample code that uses AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value, here it is
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
While the answers are great, they may be too complicated and advanced for this question, which is about the distance between the ARCamera and another object, and not about the depth of pixels and their occlusion.
transform.position gives you the position of whatever game object you attach the script to in the hierarchy. So attach the script to the ARCamera object. And obviously, other should be the target object.
Alternatively, you can get references to the two game objects using Inspector variables or GetComponent.
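For example, a minimal sketch of that approach (the class name is hypothetical; attach the script to the AR Camera and assign other in the Inspector):

using UnityEngine;

public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform other; // the target object

    void Update()
    {
        if (other != null)
        {
            // transform.position is the AR Camera's position, because the
            // script is attached to the AR Camera object.
            float dist = Vector3.Distance(other.position, transform.position);
            Debug.Log("Distance to other: " + dist);
        }
    }
}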
// Raycasting should be in Update()
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out info, 50f, layerMaskAR)) // 50f gives a 50-meter detection range
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
This function does what you need; distanca is a UI text element, and the layer is assigned to the object/prefab.
int layerMaskAR = 1 << 6; // 6 because "layerMaskAR" is my 6th custom layer
This raycasts only onto objects in that layer; all other objects are ignored. (If you don't want to ignore anything, remove the layer mask from the raycast, and it will print the name of anything that has a collider.)
Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

Unable to draw in Processing when integrated with Eclipse

My setup method looks like the one below. I want to read a location file (city names with x and y coordinates) and then create a hash map of all cities so that I can draw them all (as points) on the canvas.
public void setup() {
    background(0);
    PFont title = createFont("Georgia", 16);
    textFont(title);
    text("This is a visualization of A* algorithm", 240, 20);
    stroke(255);
    line(0, 25, 800, 25);
    selectInput("Select a file for Locations:", "locFileSelected");
}
The locFileSelected method (locFilePath is a global variable):
public void locFileSelected(File locFile) {
    locFilePath = locFile.toString();
    this.readLocFileAndDraw();
}
Now control is transferred to readLocFileAndDraw (each line in the file has 3 space-separated tokens: the city name followed by the x and y coordinates):
private void readLocFileAndDraw() {
    try (Stream<String> lines = Files.lines(Paths.get(locFilePath))) {
        for (String line : (Iterable<String>) lines::iterator) {
            // Last line in file is END, skip it
            if (!line.equalsIgnoreCase("END")) {
                List<Double> list = new ArrayList<Double>();
                String[] arr = line.split(" ");
                // adding coordinates into the list
                list.add(Double.valueOf(arr[1]));
                list.add(Double.valueOf(arr[2]));
                // adding the list into the map with key as city name
                locationsMap.put(arr[0], list);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
        System.exit(0);
    }
    // Draw cities on map
    // Draw graph of all cities
    int w = 1, h = 1;
    Set<Entry<String, List<Double>>> locationKeyEntries = locationsMap.entrySet();
    for (Entry<String, List<Double>> currEntry : locationKeyEntries) {
        String currCity = currEntry.getKey();
        List<Double> currLocationList = currEntry.getValue();
        int x = currLocationList.get(0).intValue();
        int y = currLocationList.get(1).intValue();
        stroke(255);
        ellipse(x, y, w, h);
        if (x > 755)
            x = x - (8 * currCity.length());
        if (y > 755)
            y = y - (8 * currCity.length());
        text(currCity, x, y);
    }
    return;
}
I tried to debug it; control reaches the ellipse method, but nothing gets drawn. Any idea? As far as I understand, I'm missing passing a reference to the PApplet, but I don't know how to do it...
Like you've mentioned, you really need to debug your program. Verifying that you're calling the ellipse() function is a great first step, but now you should be asking yourself more questions:
What is the value of x, y, w, and h being passed into the ellipse() function?
What is the value of currEntry in the for loop? What is the value of line when you're reading it in?
What are the fill, stroke, and background colors when you're drawing?
Note that I'm not asking you to tell me the answer to these questions. I'm pointing out these questions because they're what you should be asking yourself when you debug your program.
If you still can't figure it out, I really recommend breaking your problem down into smaller pieces and approaching each of those steps one at a time. For example, can you just show a single circle at a hard-coded point? Then work your way up from there. Can you read a single point in from a file and draw that to the screen? Then read two points. Work your way forward in small incremental steps, and post an MCVE if you get stuck. Good luck.
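For instance, the first incremental step might look like this minimal Processing sketch (the coordinates and label are hypothetical; it just proves a hard-coded point and its label show up):

public void setup() {
    size(800, 800);
    background(0);
    stroke(255);
    fill(255);
    ellipse(400, 400, 5, 5);    // a single hard-coded city point
    text("TestCity", 408, 404); // its label
}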

Processing button class

The exercise is as follows:
Rewrite programming exercise 3 from lecture 6 by creating a class called Button to replace the arrays.
a) Create the class and define class variables that hold information about position, dimensions and color. In addition, a class variable should be made which contains the title of the particular button. Use the constructor to set the initial values of the class variables.
So basically, I have to convert a previous exercise I have done into a class.
This is how I did the previous exercise, in case you need it: http://pastebin.com/RqM6hj6K
So I tried to convert it into a class, but apparently it gives me an error and I cannot see how to fix it.
My teacher also said that I don't have to keep it as an array, and could instead use several variables in place of the array.
The language is Processing, and the error is a NullPointerException.
class Button
{
    int[] nums;

    Button(int n1, int n2, int n3, int n4)
    {
        nums[0] = n1;
        nums[1] = n2;
        nums[2] = n3;
        nums[3] = n4;
    }

    void display()
    {
        fill(255, 0, 0);
        rect(nums[0], nums[1], nums[2], nums[3]);
    }
};

void setup()
{
    size(800, 800);
    Button butt = new Button(75, 250, 200, 200);
    butt.display();
}
You've only declared nums, but not initialized it.
This results in a NullPointerException: in the constructor you're accessing nums[0], but nums doesn't have a length yet. Try this:
class Button
{
    // remember to initialize/allocate the array
    int[] nums = new int[4];

    Button(int n1, int n2, int n3, int n4)
    {
        nums[0] = n1;
        nums[1] = n2;
        nums[2] = n3;
        nums[3] = n4;
    }

    void display()
    {
        fill(255, 0, 0);
        rect(nums[0], nums[1], nums[2], nums[3]);
    }
};

void setup()
{
    size(800, 800);
    Button butt = new Button(75, 250, 200, 200);
    butt.display();
}
In the future, always make sure the variables whose properties you access (arrays/objects) are initialized/allocated first; otherwise you'll get the NullPointerException again, and it's no fun.
As @v.k. so nicely points out, it's better to have readable code and remove some of the redundancy.
Before, the x, y, width and height of your button were stored in an array. That is all the array could do: store data, and that's it! Your class, however, can not only store the same data as individual, easy-to-read properties, but can also do more: functions! (e.g. display())
So, the more readable version:
class Button
{
    // individual, readable properties instead of an array
    int x, y, width, height;

    Button(int x, int y, int width, int height)
    {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    void display()
    {
        fill(255, 0, 0);
        rect(x, y, width, height); // why don't we use this. here or everywhere?
    }
};

void setup()
{
    size(800, 800);
    Button butt = new Button(75, 250, 200, 200);
    butt.display();
}
Yeah, it's sorta easier to read, but what's the deal with this, you may ask?
Well, it's a keyword that allows you to access the object's instance (whichever that may be in the future, when you choose to instantiate) and therefore its properties (a class's version of variables) and methods (a class's version of functions). There's quite a lot of neat stuff to learn in terms of OOP in Java, but you can take one step at a time with a nice and visual approach in Processing.
If you haven't already, check out Daniel Shiffman's Objects tutorial
Best of luck learning OOP in Processing!
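For completeness, the exercise also asks for a color and a title; a minimal sketch of that extension (the field names are my own, not part of the original answer) could look like:

class Button
{
    int x, y, w, h;
    color c;      // button color
    String title; // button caption

    Button(int x, int y, int w, int h, color c, String title)
    {
        this.x = x;
        this.y = y;
        this.w = w;
        this.h = h;
        this.c = c;
        this.title = title;
    }

    void display()
    {
        fill(c);
        rect(x, y, w, h);
        fill(255);
        text(title, x + 10, y + h / 2); // draw the caption inside the button
    }
}

void setup()
{
    size(800, 800);
    Button butt = new Button(75, 250, 200, 200, color(255, 0, 0), "Click me");
    butt.display();
}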

Trying to write 16-bit PNG

I'm capturing images from a camera, and I have two functions for saving a 16-bit(!) image, one in PNG and one in TIFF format.
Could you please explain why the PNG comes out as a very noisy image?
PNG function:
bool save_image_png(const char *file_name, const mono16bit& img)
{
    [...]
    /* write header */
    if (setjmp(png_jmpbuf(png_ptr)))
        abort_("[write_png_file] Error during writing header");
    png_set_IHDR(png_ptr, info_ptr, width, height,
                 bit_depth, PNG_COLOR_TYPE_GRAY, PNG_INTERLACE_NONE,
                 PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
    png_write_info(png_ptr, info_ptr);
    /* write bytes */
    if (setjmp(png_jmpbuf(png_ptr)))
        abort_("[write_png_file] Error during writing bytes");
    row_pointers = (png_bytep*) malloc(sizeof(png_bytep) * height);
    for (y = 0; y < height; y++)
        row_pointers[y] = (png_byte*) malloc(png_get_rowbytes(png_ptr, info_ptr));
    for (y = 0; y < height; y++)
    {
        row_pointers[y] = (png_bytep)img.getBuffer() + y * width * 2;
    }
    png_write_image(png_ptr, row_pointers);
    /* end write */
    [...]
}
and TIFF function:
bool save_image(const char *fname, const mono16bit& img)
{
    [...]
    for (y = 0; y < height; y++) {
        if ((err = TIFFWriteScanline(tif, (tdata_t)(img.getBuffer() + width * y), y, 0)) == -1)
            break;
    }
    TIFFClose(tif);
    if (err == -1) {
        fprintf(stderr, "Error writing to %s file\n", fname);
        return false;
    }
    return true;
    //#endif //USE_LIBTIFF
}
Thank you!
png_set_swap does nothing. You have to actually flip the bytes in each pixel of the image.
If you're on a PC with SSSE3 or newer, a good way is the _mm_shuffle_epi8 instruction; build the permute vector with _mm_setr_epi8.
If you're on ARM with NEON, use the vrev16q_u8 instruction instead.
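A minimal sketch of that swap for one row (SSSE3; the helper name and the multiple-of-8 pixel count are my assumptions, and a scalar tail loop would handle any remainder):

#include <tmmintrin.h> /* SSSE3 */
#include <stddef.h>
#include <stdint.h>

/* Swap the two bytes of every 16-bit pixel in a row. */
static void swap_bytes_u16(uint16_t *row, size_t count)
{
    /* Permute vector: exchange bytes 0<->1, 2<->3, ... within each 16 bytes. */
    const __m128i mask = _mm_setr_epi8(1, 0, 3, 2, 5, 4, 7, 6,
                                       9, 8, 11, 10, 13, 12, 15, 14);
    for (size_t i = 0; i < count; i += 8) /* 8 pixels = 16 bytes per step */
    {
        __m128i v = _mm_loadu_si128((const __m128i *)(row + i));
        _mm_storeu_si128((__m128i *)(row + i), _mm_shuffle_epi8(v, mask));
    }
}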
Perhaps you have a byte-order problem.
Try adding:
png_set_swap(png_ptr);
before saving the image.
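In the posted write path, that call would go after png_write_info and before png_write_image (a sketch assuming the capture buffer is little-endian 16-bit grayscale; PNG stores 16-bit samples big-endian):

png_set_IHDR(png_ptr, info_ptr, width, height,
             16, PNG_COLOR_TYPE_GRAY, PNG_INTERLACE_NONE,
             PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
png_write_info(png_ptr, info_ptr);
png_set_swap(png_ptr); /* swap 16-bit sample bytes on write */
png_write_image(png_ptr, row_pointers);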