GWT Image original size

I use an Image object to load a PNG image as a thumbnail, calling its setPixelSize() method to resize the image. I also need to retrieve the original size of the image as integers at some point. How can I get the original size (width and height) of the image?
OK, I found a workaround: I use a dummy container (SimplePanel), load the image without scaling, save its real dimensions, and then remove the container from the parent and discard the new Image object. I don't know if this is a good workaround, but it works. Although I would like to know if there is another way...
Problem with the workaround: I have a drop-down list from which I can select logical folders (which contain images). When I select a new folder and the new set of images is loaded on the display, I get 0 for width and 0 for height.
private void getTrueSize(String fullUrl) {
    Image trueImage = new Image();
    this.tstCon.add(trueImage);
    trueImage.setUrl(fullUrl);
    this.trueHeight = trueImage.getHeight();
    this.trueWidth = trueImage.getWidth();
    //this.tstCon.remove(trueImage);
    //trueImage = null;
    GWT.log("Image [" + this.imgTitle + "] -> height=" + this.trueHeight + " -> width=" + this.trueWidth);
}

Extend the Image class and override its onAttach() method. This method is called when a widget is attached to the browser's document.
public class Thumb extends Image {
    public Thumb() {
        super();
    }

    protected void onAttach() {
        Window.alert(getOffsetHeight() + " " + getOffsetWidth()); // this should give you the original value
        setPixelSize(10, 10); // this should be the visible value
        super.onAttach();
    }
}
If this doesn't work, try overriding onLoad() instead of onAttach(): onLoad() is called after the image has been added to the DOM, so it should definitely work.
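A minimal sketch of that onLoad() variant (the 10x10 thumbnail size is carried over from the snippet above; everything else is an assumption about how the surrounding class looks):
public class Thumb extends Image {
    public Thumb(String url) {
        super(url);
    }

    @Override
    protected void onLoad() {
        super.onLoad();
        // The widget is now in the DOM, so the offset size should reflect the
        // image's natural dimensions (assuming no CSS scaling has been applied yet).
        int originalWidth = getOffsetWidth();
        int originalHeight = getOffsetHeight();
        GWT.log("original size: " + originalWidth + "x" + originalHeight);
        setPixelSize(10, 10); // shrink to thumbnail size afterwards
    }
}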

The easiest way is to create a new hidden Image (positioned absolutely outside the viewport) and read the size from it. There is a JavaScript library that can read the EXIF data of the image, but that would be overkill in this case.
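A rough sketch of that idea, reusing the fullUrl parameter from the question's snippet (the -9999px offset is just one common way to keep the element out of view while the browser still loads it):
final Image probe = new Image(fullUrl);
// Keep the probe out of sight but still attached, so the browser actually loads it.
probe.getElement().getStyle().setPosition(Style.Position.ABSOLUTE);
probe.getElement().getStyle().setLeft(-9999, Style.Unit.PX);
probe.addLoadHandler(new LoadHandler() {
    @Override
    public void onLoad(LoadEvent event) {
        int width = probe.getWidth();   // natural width once loaded
        int height = probe.getHeight(); // natural height once loaded
        GWT.log("original size: " + width + "x" + height);
        RootPanel.get().remove(probe);  // clean up the probe element
    }
});
RootPanel.get().add(probe);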

Read the naturalHeight and naturalWidth properties of the image element after the image has loaded.
public Panel() {
    image = new Image();
    image.addLoadHandler(this::onLoad);
}

private void onLoad(@SuppressWarnings("unused") LoadEvent event) {
    origImageWidth = getNaturalWidth(image);
    origImageHeight = getNaturalHeight(image);
}

private static int getNaturalHeight(Image img) {
    return img.getElement().getPropertyInt("naturalHeight"); //$NON-NLS-1$
}

private static int getNaturalWidth(Image img) {
    return img.getElement().getPropertyInt("naturalWidth"); //$NON-NLS-1$
}

Related

AndroidX Camera Core ImageAnalysis.Analyser results in distorted image

I am using the ImageAnalysis library to extract live preview frames to then run barcode scanning and OCR on.
I'm not having any issues with barcode scanning at all, but OCR is producing some weak results. I'm sure this could be for a few reasons. My current attempt at a solution is to send the frames to GCP Storage before I run OCR (or barcode scanning) on them, in order to look at them in bulk. All of them look very similar.
My best guess is that the way I'm processing the frames could be causing the pixels to be organized in the buffer incorrectly (I'm inexperienced with Android, sorry). Meaning rather than organizing 0,0 then 0,1... it's taking pixels and putting them in random areas. I can't figure out where this is happening, though. Once I can look at the image quality, I'll be able to analyze what the issue is with OCR, but this is my current blocker, unfortunately.
Extra note: I am uploading the image to GCP Storage prior to even running OCR, so for the sake of looking at this, we can ignore the OCR statement I made; I just wanted to give some background.
Below is the code where I initialize the camera and the analyzer and then observe the frames:
private void startCamera() {
    // make sure there isn't another camera instance running before starting
    CameraX.unbindAll();

    /* start preview */
    int aspRatioW = txView.getWidth();  // get width of screen
    int aspRatioH = txView.getHeight(); // get height
    Rational asp = new Rational(aspRatioW, aspRatioH); // aspect ratio
    Size screen = new Size(aspRatioW, aspRatioH);      // size of the screen

    // config obj for preview/viewfinder thingy.
    PreviewConfig pConfig = new PreviewConfig.Builder().setTargetResolution(screen).build();
    Preview preview = new Preview(pConfig); // lets build it

    preview.setOnPreviewOutputUpdateListener(
            new Preview.OnPreviewOutputUpdateListener() {
                // to update the surface texture we have to destroy it first, then re-add it
                @Override
                public void onUpdated(Preview.PreviewOutput output) {
                    ViewGroup parent = (ViewGroup) txView.getParent();
                    parent.removeView(txView);
                    parent.addView(txView, 0);
                    txView.setSurfaceTexture(output.getSurfaceTexture());
                    updateTransform();
                }
            });

    /* image capture */
    // config obj, selected capture mode
    ImageCaptureConfig imgCapConfig = new ImageCaptureConfig.Builder()
            .setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
            .setTargetRotation(getWindowManager().getDefaultDisplay().getRotation())
            .build();
    final ImageCapture imgCap = new ImageCapture(imgCapConfig);

    findViewById(R.id.imgCapture).setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.d("image taken", "image taken");
        }
    });

    /* image analyser */
    ImageAnalysisConfig imgAConfig = new ImageAnalysisConfig.Builder()
            .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
            .build();
    ImageAnalysis analysis = new ImageAnalysis(imgAConfig);

    analysis.setAnalyzer(
            Executors.newSingleThreadExecutor(), new ImageAnalysis.Analyzer() {
                @Override
                public void analyze(ImageProxy imageProxy, int degrees) {
                    Log.d("analyze", "just analyzing");
                    if (imageProxy == null || imageProxy.getImage() == null) {
                        return;
                    }
                    Image mediaImage = imageProxy.getImage();
                    int rotation = degreesToFirebaseRotation(degrees);
                    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(toBitmap(mediaImage));
                    if (!isMachineLearning) {
                        Log.d("analyze", "isMachineLearning is about to be true");
                        isMachineLearning = true;
                        String haha = MediaStore.Images.Media.insertImage(getContentResolver(), toBitmap(mediaImage), "image", "theImageDescription");
                        Log.d("uploadingimage: ", haha);
                        extractBarcode(image, toBitmap(mediaImage));
                    }
                }
            });

    // bind to lifecycle:
    CameraX.bindToLifecycle(this, analysis, imgCap, preview);
}
Below is how I structure my detection (pretty straightforward and simple):
FirebaseVisionBarcodeDetectorOptions options = new FirebaseVisionBarcodeDetectorOptions.Builder()
.setBarcodeFormats(FirebaseVisionBarcode.FORMAT_ALL_FORMATS)
.build();
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options);
detector.detectInImage(firebaseVisionImage)
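For completeness, detectInImage() returns a Task, so the results are normally consumed through success/failure listeners; a minimal hedged sketch (the handler bodies are placeholders, not the poster's actual code):
detector.detectInImage(firebaseVisionImage)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionBarcode>>() {
            @Override
            public void onSuccess(List<FirebaseVisionBarcode> barcodes) {
                // Inspect the detected barcodes here.
                for (FirebaseVisionBarcode barcode : barcodes) {
                    Log.d("barcode", String.valueOf(barcode.getRawValue()));
                }
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                Log.e("barcode", "detection failed", e);
            }
        });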
Finally, when I'm uploading the image to GCP - Storage, this is what it looks like:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos); //bmp being the image that I ran barcode scanning on - as well as OCR
byte[] data = baos.toByteArray();
UploadTask uploadTask = storageRef.putBytes(data);
Thank you all for your kind help (:
My problem was that I was trying to convert to a bitmap AFTER barcode scanning. The conversion wasn't properly written, but I found a way around it without having to write my own bitmap conversion function (though I plan on going back to it, as I can see myself needing it, and genuine curiosity makes me want to figure it out).
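The answer doesn't spell out the workaround, but one plausible reading (an assumption, not something the poster confirms) is to skip the manual Bitmap conversion entirely and build the FirebaseVisionImage straight from the camera frame:
@Override
public void analyze(ImageProxy imageProxy, int degrees) {
    if (imageProxy == null || imageProxy.getImage() == null) {
        return;
    }
    Image mediaImage = imageProxy.getImage();
    int rotation = degreesToFirebaseRotation(degrees);
    // Let ML Kit read the YUV frame directly; no hand-written Bitmap conversion
    // (and no chance of mis-ordered pixels) is involved.
    FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
    detector.detectInImage(image); // then handle the Task result as before
}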

How to get the path and file size of some Unity built-in assets?

Background
I am developing a Unity editor plugin that enables users to send a selected image file to a REST API endpoint in the cloud for processing (e.g. adding transforms and optimizations). The plugin also shows a comparison of the selected image's details before and after processing (e.g. width/height/size before vs after).
The user selects the desired image through the following piece of code:
selected_texture = (Texture2D) EditorGUI.ObjectField(drawing_rect, selected_texture, typeof(Texture2D), false);
Once it's selected, I can then get the respective file size by doing this:
file_size = new FileInfo(AssetDatabase.GetAssetPath(selected_texture)).Length;
Problem
This works for most textures selected, but I encounter an error when I choose a built-in Unity texture. Any guidance would be greatly appreciated.
FileNotFoundException: Could not find file 'Resources/unity_builtin_extra'
There are two built-in asset libraries in Unity:
The built-in library at "Resources/unity_builtin_extra" contains the UGUI sprites, Default-Material, shaders and so on.
The built-in library at "Library/unity default resources" contains the built-in 3D meshes and OnGUI assets.
If you are using AssetDatabase.GetAssetPath on a built-in asset, you will always get one of the two paths above.
To solve the problem, you need to do something like the code below:
public const string BuiltinResources = "Resources/unity_builtin_extra";
public const string BuiltinExtraResources = "Library/unity default resources";

public static bool IsBuiltInAsset(string assetPath)
{
    return assetPath.Equals(BuiltinResources) || assetPath.Equals(BuiltinExtraResources);
}

public static long GetTextureFileLength(Texture texture)
{
    string texturePath = AssetDatabase.GetAssetPath(texture);
    if (IsBuiltInAsset(texturePath))
    {
        /*
         * You can get all built-in assets this way:
         *
         * var allAssets = AssetDatabase.LoadAllAssetsAtPath(BuiltinResources);
         * var allExtraAssets = AssetDatabase.LoadAllAssetsAtPath(BuiltinExtraResources);
         */

        // A file size is not supported for built-in assets (there is no file on disk),
        // so fall back to the runtime memory size instead.
        // return -1;
        return Profiler.GetRuntimeMemorySizeLong(texture);
    }
    else
    {
        return new FileInfo(texturePath).Length;
    }
}

Taking a screenshot of the simulation area

Hi, does anyone know a good way to take a screenshot of a simulation, in such a way that you can specify the resolution and get a higher-quality image?
The only way I can think of is zooming in and stitching multiple images together, but it takes a long time...
Update:
I've managed to successfully export the whole area; the magic parameter is: .setAnimationParameterEnabled(Panel.ANIM_BOUNDS_CLIPPING_XJAL, false)
It forces AnyLogic to draw the whole area, not just the visible area.
But it doesn't always work. I have to run the code, move around the area, zoom in/out and try again. At some point it gets really glitchy, probably because it starts to draw everything, and then the code works. The problem is that I can't figure out exactly what to do to make it work...
java.awt.Component alPanel = getExperiment().getPresentation().getPanel();

getExperiment().getPresentation().getPanel().setAnimationParameterEnabled(Panel.ANIM_BOUNDS_CLIPPING_XJAL, false);
getExperiment().getPresentation().setMaximized(false);
getExperiment().getPresentation().setPanelSize(5000, 5000);

java.awt.image.BufferedImage imageExperiment = new java.awt.image.BufferedImage(
        alPanel.getWidth(),
        alPanel.getHeight(),
        java.awt.image.BufferedImage.TYPE_INT_RGB
);
getExperiment().drawPresentation(getExperiment().getPresentation().getPanel(), imageExperiment.createGraphics(), false);

java.awt.Component component = getExperiment().getPresentation().getPanel();
// call the Component's paint method, using
// the Graphics object of the image.
component.paintAll(imageExperiment.getGraphics()); // alternately use .printAll(..)

try {
    // write the image as a PNG
    javax.imageio.ImageIO.write(
            imageExperiment,
            "png",
            new File("screenshotAnylogic.png"));
} catch (Exception e) {
    e.printStackTrace();
}
Okay... so after a lot of experimentation, I found that the "magic parameter" wasn't as magic as I thought.
But this piece of code should be able to create a screenshot that extends beyond the visible area:
public void capturePanel(ShapeGroup p, String fileName) {
    Panel argPanel = p.getPresentable().getPresentation().getPanel();
    BufferedImage capture = new BufferedImage(4000, 4000, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = capture.createGraphics();
    g.setClip(-200, -200, 4000, 4000);
    p.draw(argPanel, g, null, true);
    g.dispose();
    try {
        ImageIO.write(capture, "png", new File(fileName));
    } catch (IOException ioe) {
        System.out.println(ioe);
    }
}
Well, afaik there is no built-in method in AnyLogic for that. You could try to use Java to realize it, though. It is possible to get the Panel that contains the simulation via getExperiment().getPresentation().getPanel(), and you can create an image from that. This is explained here, for example, and the code would look like this:
public static BufferedImage getScreenShot(Component component)
{
    BufferedImage image = new BufferedImage(
            component.getWidth(),
            component.getHeight(),
            BufferedImage.TYPE_INT_RGB
    );
    // call the Component's paint method, using
    // the Graphics object of the image.
    component.paint(image.getGraphics()); // alternately use .printAll(..)
    return image;
}

public static void saveComponentScreenshot(Component component)
{
    try {
        // write the image as a PNG
        ImageIO.write(
                getScreenShot(component),
                "png",
                new File("screenshot.png"));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Unfortunately this does not give you the bigger viewport you probably want to have. Maybe the method public final void drawPresentation(Panel panel, java.awt.Graphics2D g, boolean publicOnly) that is available from the Experiment object returned from getExperiment() could help you to draw the simulation on a custom Panel with the wanted dimensions. Pretty hacky but it's all that I can come up with ^^
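A rough sketch of that drawPresentation idea, rendering into an off-screen image of an arbitrary size (the 4000x4000 target and the file name are assumptions; whether drawPresentation actually honors the larger bounds would need to be verified in AnyLogic):
java.awt.image.BufferedImage target = new java.awt.image.BufferedImage(
        4000, 4000, java.awt.image.BufferedImage.TYPE_INT_RGB);
java.awt.Graphics2D g = target.createGraphics();
// Draw the experiment's presentation onto our own, larger graphics context
// instead of painting the on-screen component.
getExperiment().drawPresentation(getExperiment().getPresentation().getPanel(), g, false);
g.dispose();
try {
    javax.imageio.ImageIO.write(target, "png", new java.io.File("bigScreenshot.png"));
} catch (Exception e) {
    e.printStackTrace();
}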

Images getting cut off using Swing

I am writing a tile-based platform game. At the moment I am trying to get 400 tiles to display at once. This is my panel. On the top and left sides everything is working great, but on the right and bottom sides the images are cut off by a few pixels. Each image is 32*32. All of the blocks are initialized; none are null. What is wrong here?
public class Pane extends JPanel implements ActionListener {
    private static final long serialVersionUID = 1L;
    Timer timer;
    boolean setup = false;
    Block[][] blocks;
    Level level;

    public Pane() {
        level = new Level();
        level.Generate();
        blocks = level.Parse();
        setBackground(Color.WHITE);
        timer = new Timer(25, this);
        timer.start();
    }

    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2d = (Graphics2D) g;
        for (Block[] b : blocks) {
            for (Block bx : b) {
                // Debug code:
                // if (bx.letter.equals("D"))
                //     System.out.println(bx.y*32 + " = " + bx.x*32);
                g2d.drawImage(bx.bpic, bx.x * 32, bx.y * 32, this);
            }
        }
        Toolkit.getDefaultToolkit().sync();
        g.dispose();
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        repaint();
    }
}
on the right and bottom sides the images are cut off by a few pixels
If you mean the right and bottom sides of the whole panel (not of the single tiles), then it's probably a LayoutManager-related problem. The solution depends on the layout manager used by the component your JPanel is added to.
You could try to specify the minimum/preferred size of your JPanel with:
Pane pane = new Pane();
pane.setPreferredSize(...);
pane.setMinimumSize(...);
In order to specify its minimum dimensions according to the size of the generated image (32 * COL, 32 * ROW).
Unfortunately, the effectiveness of the setPreferredSize call depends on the layout manager of your Pane's parent component.
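A minimal sketch of that idea inside the Pane class itself, assuming the grid's row/column counts can be taken from the blocks array (which dimension holds rows versus columns is an assumption); overriding getPreferredSize() is usually preferred over calling setPreferredSize() from outside:
@Override
public Dimension getPreferredSize() {
    // Report the full pixel size of the tile grid so the parent's layout manager
    // reserves enough room for every 32x32 tile.
    int rows = blocks.length;
    int cols = blocks[0].length;
    return new Dimension(cols * 32, rows * 32);
}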
A JComponent can basically do this itself by returning an appropriate minimum or preferred size, which is honored by the majority of the standard Swing LayoutManagers; examples here.

using GWTCanvas with ImageResource

I want to use the clientBundle capability of GWT to load only 1 image, which is composed of many sprites, with GWTCanvas. My initial take was to just convert the ImageResource into an ImageElement, but apparently that doesn't seem to work:
public interface Bundle extends ClientBundle {
    public static Bundle INSTANCE = GWT.create(Bundle.class);

    @Source("/img/tile1.png")
    public ImageResource tile1();
}
final GWTCanvas canvas = new GWTCanvas(400,400);
canvas.drawImage(ImageElement.as(new Image(Bundle.INSTANCE.tile1()).getElement()), 0, 0);
I tried adding the Image to the RootPanel first (to force a load), but that doesn't seem to work either. Perhaps the timing is wrong. Does anyone have a clue as to how I can draw an ImageResource using GWTCanvas?
This works: (GWT 2.4)
// load:
SafeUri uri = imageResource.getSafeUri();
ImageElement imgElement = ImageElement.as((new Image(uri)).getElement());
// render
context.drawImage(imgElement, 0, 0);
You can get the image from a bundle using a data URI, but you'll need to manage the asynchrony as you would with a remote image.
final Image image = new Image(resources.imageResource().getURL());
RootPanel.get().add(image);
image.setVisible(false);
image.addLoadHandler(new LoadHandler() {
    public void onLoad(LoadEvent event) {
        context.drawImage(ImageElement.as(image.getElement()), 0, 0);
    }
});
Using a ClientBundled image the way you want isn't possible. Images combined into one big image are displayed as background images, relying on the browser's ability to show only part of an image; GWT calls this 'clipped' mode. So when you get the element of such an image, the actual src attribute is empty because the image is a background image. If you want to display an image on the Canvas, it must be an actual link to an image.
You might try using the ImageResource's getURL() method when you create the image:
canvas.drawImage(ImageElement.as(new Image(Bundle.INSTANCE.tile1().getURL()).getElement()), 0, 0);
I was having the same problem when using a ClientBundle with GWT 2.2.0's Canvas type and this fixed it for me.
They are correct; just use getSafeUri() instead of getURL().
The solution of Tom Fishman didn't work for me, because the images weren't loaded by the time I called canvas.drawImage(). This solution works (also for big images):
// Create the Image.
SafeUri uri = resource.getSafeUri();
Utils.console("URI: " + uri);
final Image img = new Image(uri);

// Add the image to the RootPanel to force the image to be loaded.
RootPanel.get().add(img);

// The image has to be added to the canvas after the image has been loaded.
// Otherwise the bounding box of the image is 0,0,0,0 and nothing will be drawn.
img.addLoadHandler(new LoadHandler() {
    @Override
    public void onLoad(LoadEvent event) {
        // Create the image element.
        ImageElement imgElement = ImageElement.as(img.getElement());
        // Render the image on the canvas.
        context2d.drawImage(imgElement, offsetX, offsetY);
        // Delete the image from the root panel.
        RootPanel.get().remove(img);
    }
});
});