Android: How to split a Picture captured from a WebView into multiple images

I have a WebView and I capture the view like this:

Picture picture = m_browser.capturePicture();
Bitmap b = Bitmap.createBitmap(picture.getWidth(),
        picture.getHeight(), Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(b);
picture.draw(c);
FileOutputStream fos = null;
try {
    fos = new FileOutputStream("/sdcard/temp_1.jpg");
    //fos = openFileOutput("samsp_1.jpg", MODE_WORLD_WRITEABLE);
    if (fos != null) {
        b.compress(Bitmap.CompressFormat.JPEG, 90, fos);
        fos.close();
    }
} catch (Exception e) {
    e.printStackTrace();
}
Now if the loaded web site is long, e.g. times.com, then this image is also very big.
I want to split this image into multiple images without reducing the quality.
I would like to get these images in the highest quality possible.
Please help. Thanks.

It's a bit of a hack, but this will work on VERY long webpages without running out of memory. First, put your WebView inside another layout (RelativeLayout rl). Size the RelativeLayout for the image size you want. Then call:

rl.draw(canvas);
// do something here to save your bitmap
webView.scrollBy(0, rl.getHeight());

and repeat that until you're at the bottom of the page!
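A rough, untested sketch of that loop might look like this (rl, webView, and saveBitmap stand in for your own layout, WebView, and save helper; getContentHeight() * getScale() is only one common way to estimate the full page height in pixels):

int pageHeight = (int) Math.ceil(webView.getContentHeight() * webView.getScale());
int captured = 0;
int part = 0;
while (captured < pageHeight) {
    // Render the currently visible slice of the page into a bitmap.
    Bitmap slice = Bitmap.createBitmap(rl.getWidth(), rl.getHeight(), Bitmap.Config.ARGB_8888);
    rl.draw(new Canvas(slice));
    saveBitmap(slice, "/sdcard/page_part_" + part++ + ".png"); // your own save helper
    slice.recycle();
    // Scroll down by one slice height and capture again.
    webView.scrollBy(0, rl.getHeight());
    captured += rl.getHeight();
}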

Bitmap.createBitmap(source, x, y, width, height)

I always use this method to cut a large image into multiple images. It is easy to use.
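For completeness, a minimal sketch of that idea (sliceVertically is a hypothetical helper name, not from the original post): it cuts a tall bitmap into vertical chunks without recompressing, so the pixel data keeps its original quality.

import java.util.ArrayList;
import java.util.List;
import android.graphics.Bitmap;

public static List<Bitmap> sliceVertically(Bitmap source, int sliceHeight) {
    List<Bitmap> slices = new ArrayList<>();
    for (int y = 0; y < source.getHeight(); y += sliceHeight) {
        int h = Math.min(sliceHeight, source.getHeight() - y);
        // createBitmap(source, x, y, width, height) copies the region pixel-for-pixel
        slices.add(Bitmap.createBitmap(source, 0, y, source.getWidth(), h));
    }
    return slices;
}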


AndroidX Camera Core ImageAnalysis.Analyser results in distorted image

I am using the ImageAnalysis library to extract live preview frames to then run barcode scanning and OCR on.
I'm not having any issues with barcode scanning at all, but OCR is resulting in some weak results. I'm sure this could be for a few reasons. My current attempt at working on the solution is to send the frames to GCP Storage before I run OCR (or barcode scanning) on them, in order to look at them in bulk. All of the uploaded frames look very similar:
My best guess is that the way I'm processing the frames could be causing the pixels to be organized in the buffer incorrectly (I'm inexperienced with Android, sorry). Meaning, rather than organizing them 0,0 then 0,1 and so on, it's taking pixels and putting them in random areas. I can't figure out where this is happening, though. Once I can look at the image quality, I'll be able to analyze what the issue is with OCR, but this is my current blocker, unfortunately.
Extra note: I am uploading the image to GCP Storage prior to even running OCR, so for the sake of looking at this, we can ignore the OCR statement I made; I just wanted to give some background.
Below is the code where I initiate the camera and analyzer, then observe the frames:
private void startCamera() {
    // make sure there isn't another camera instance running before starting
    CameraX.unbindAll();

    /* start preview */
    int aspRatioW = txView.getWidth(); // get width of screen
    int aspRatioH = txView.getHeight(); // get height
    Rational asp = new Rational(aspRatioW, aspRatioH); // aspect ratio
    Size screen = new Size(aspRatioW, aspRatioH); // size of the screen

    // config obj for preview/viewfinder thingy.
    PreviewConfig pConfig = new PreviewConfig.Builder().setTargetResolution(screen).build();
    Preview preview = new Preview(pConfig); // lets build it

    preview.setOnPreviewOutputUpdateListener(
            new Preview.OnPreviewOutputUpdateListener() {
                // to update the surface texture we have to destroy it first, then re-add it
                @Override
                public void onUpdated(Preview.PreviewOutput output) {
                    ViewGroup parent = (ViewGroup) txView.getParent();
                    parent.removeView(txView);
                    parent.addView(txView, 0);
                    txView.setSurfaceTexture(output.getSurfaceTexture());
                    updateTransform();
                }
            });

    /* image capture */
    // config obj, selected capture mode
    ImageCaptureConfig imgCapConfig = new ImageCaptureConfig.Builder()
            .setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
            .setTargetRotation(getWindowManager().getDefaultDisplay().getRotation())
            .build();
    final ImageCapture imgCap = new ImageCapture(imgCapConfig);

    findViewById(R.id.imgCapture).setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.d("image taken", "image taken");
        }
    });

    /* image analyser */
    ImageAnalysisConfig imgAConfig = new ImageAnalysisConfig.Builder()
            .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
            .build();
    ImageAnalysis analysis = new ImageAnalysis(imgAConfig);

    analysis.setAnalyzer(
            Executors.newSingleThreadExecutor(), new ImageAnalysis.Analyzer() {
                @Override
                public void analyze(ImageProxy imageProxy, int degrees) {
                    Log.d("analyze", "just analyzing");
                    if (imageProxy == null || imageProxy.getImage() == null) {
                        return;
                    }
                    Image mediaImage = imageProxy.getImage();
                    int rotation = degreesToFirebaseRotation(degrees);
                    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(toBitmap(mediaImage));
                    if (!isMachineLearning) {
                        Log.d("analyze", "isMachineLearning is about to be true");
                        isMachineLearning = true;
                        String haha = MediaStore.Images.Media.insertImage(getContentResolver(),
                                toBitmap(mediaImage), "image", "theImageDescription");
                        Log.d("uploadingimage: ", haha);
                        extractBarcode(image, toBitmap(mediaImage));
                    }
                }
            });

    // bind to lifecycle:
    CameraX.bindToLifecycle(this, analysis, imgCap, preview);
}
Below is how I structure my detection (pretty straightforward and simple):
FirebaseVisionBarcodeDetectorOptions options = new FirebaseVisionBarcodeDetectorOptions.Builder()
        .setBarcodeFormats(FirebaseVisionBarcode.FORMAT_ALL_FORMATS)
        .build();
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options);
detector.detectInImage(firebaseVisionImage);
Finally, when I'm uploading the image to GCP Storage, this is what it looks like:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos); //bmp being the image that I ran barcode scanning on - as well as OCR
byte[] data = baos.toByteArray();
UploadTask uploadTask = storageRef.putBytes(data);
Thank you all for your kind help (:
My problem was that I was trying to convert to a bitmap AFTER barcode scanning. The conversion wasn't properly written, but I found a way around it without having to write my own bitmap conversion function (though I plan on going back to it, as I can see myself needing it, and genuine curiosity makes me want to figure it out).
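For anyone hitting the same wall: below is a commonly used sketch for converting a YUV_420_888 Image from CameraX into a Bitmap via NV21 and YuvImage. It assumes the chroma planes have a pixel stride of 2 and a row stride equal to the image width, which holds on most devices but is not guaranteed, so treat it as a starting point rather than a definitive conversion.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.media.Image;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

private static Bitmap yuv420ToBitmap(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    byte[] nv21 = new byte[ySize + uSize + vSize];
    yBuffer.get(nv21, 0, ySize);
    // NV21 expects interleaved VU after the Y plane; with pixel stride 2
    // the V and U buffers are overlapping views, so this copy works out.
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 100, out);
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}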

iTextSharp get coordinates of [duplicate]

In my project, I want to find the coordinates of an image in a PDF. I tried searching iText and PDFBox, but I was not successful. Using these coordinates and the extracted image, I want to verify whether the extracted image is the same as the image present in a database, and whether the coordinates of the image match those in the database.
When you say that you've tried with iText, I assume that you've used the ExtractImages example as the starting point for your code. This example uses the helper class MyImageRenderListener, which implements the RenderListener interface.
In that helper class the renderImage() method is implemented like this:
public void renderImage(ImageRenderInfo renderInfo) {
    try {
        String filename;
        FileOutputStream os;
        PdfImageObject image = renderInfo.getImage();
        if (image == null) return;
        filename = String.format(path, renderInfo.getRef().getNumber(), image.getFileType());
        os = new FileOutputStream(filename);
        os.write(image.getImageAsBytes());
        os.flush();
        os.close();
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
It uses the ImageRenderInfo object to obtain a PdfImageObject instance and it creates an image file using that object.
If you inspect the ImageRenderInfo class, you'll discover that you can also ask for other info about the image. What you need, is the getImageCTM() method. This method returns a Matrix object. This matrix can be interpreted using ordinary high-school algebra. The values I31 and I32 give you the X and Y position. In most cases I11 and I22 will give you the width and the height (unless the image is rotated).
If the image is rotated, you'll have to consult your high-school textbooks, more specifically the ones discussing analytic geometry.
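For illustration, a hedged sketch of how renderImage() could log the position and size from the CTM, based on iText 5's Matrix class (the rotated case needs the full matrix math and is not handled here):

public void renderImage(ImageRenderInfo renderInfo) {
    Matrix ctm = renderInfo.getImageCTM();
    float x = ctm.get(Matrix.I31);      // horizontal position of the lower-left corner
    float y = ctm.get(Matrix.I32);      // vertical position of the lower-left corner
    float width = ctm.get(Matrix.I11);  // width, assuming no rotation
    float height = ctm.get(Matrix.I22); // height, assuming no rotation
    System.out.printf("Image at (%.2f, %.2f), size %.2f x %.2f%n", x, y, width, height);
}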

Taking a screenshot of the simulation area

Does anyone know a good way to take a screenshot of a simulation, in such a way that you can specify the resolution and get a higher-quality image?
The only way I can think of is zooming in and stitching multiple images together, but that takes a long time...
Update:
I've managed to successfully export the whole area; the magic parameter is: .setAnimationParameterEnabled(Panel.ANIM_BOUNDS_CLIPPING_XJAL, false)
It forces AnyLogic to draw the whole area, not just the visible area.
But it doesn't always work. I have to run the code, move around the area, zoom in/out and try again. At some point it gets really glitchy, probably because it starts to draw everything, and then the code works. The problem is that I can't figure out exactly what to do to make it work...
java.awt.Component alPanel = getExperiment().getPresentation().getPanel();
getExperiment().getPresentation().getPanel()
        .setAnimationParameterEnabled(Panel.ANIM_BOUNDS_CLIPPING_XJAL, false);
getExperiment().getPresentation().setMaximized(false);
getExperiment().getPresentation().setPanelSize(5000, 5000);

java.awt.image.BufferedImage imageExperiment = new java.awt.image.BufferedImage(
        alPanel.getWidth(),
        alPanel.getHeight(),
        java.awt.image.BufferedImage.TYPE_INT_RGB);

getExperiment().drawPresentation(getExperiment().getPresentation().getPanel(),
        imageExperiment.createGraphics(), false);

java.awt.Component component = getExperiment().getPresentation().getPanel();
// call the Component's paint method, using
// the Graphics object of the image.
component.paintAll(imageExperiment.getGraphics()); // alternately use .printAll(..)

try {
    // write the image as a PNG
    javax.imageio.ImageIO.write(
            imageExperiment,
            "png",
            new File("screenshotAnylogic.png"));
} catch (Exception e) {
    e.printStackTrace();
}
Okay... so after a lot of experimentation, I found that the "magic parameter" wasn't as magic as I thought.
But this piece of code should be able to create a screenshot that extends beyond the visible area:
public void capturePanel(ShapeGroup p, String fileName) {
    Panel argPanel = p.getPresentable().getPresentation().getPanel();
    BufferedImage capture = new BufferedImage(4000, 4000, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = capture.createGraphics();
    g.setClip(-200, -200, 4000, 4000);
    p.draw(argPanel, g, null, true);
    g.dispose();
    try {
        ImageIO.write(capture, "png", new File(fileName));
    } catch (IOException ioe) {
        System.out.println(ioe);
    }
}
Well, AFAIK there is no built-in method in AnyLogic for that. You could try to use Java to achieve it, though. It is possible to get the Panel that contains the simulation via getExperiment().getPresentation().getPanel(), and you can create an image from that. This is explained here, for example, and the code would look like this:
public static BufferedImage getScreenShot(Component component) {
    BufferedImage image = new BufferedImage(
            component.getWidth(),
            component.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    // call the Component's paint method, using
    // the Graphics object of the image.
    component.paint(image.getGraphics()); // alternately use .printAll(..)
    return image;
}

public static void saveComponentScreenshot(Component component) {
    try {
        // write the image as a PNG
        ImageIO.write(
                getScreenShot(component),
                "png",
                new File("screenshot.png"));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Unfortunately this does not give you the bigger viewport you probably want. Maybe the method public final void drawPresentation(Panel panel, java.awt.Graphics2D g, boolean publicOnly), available from the Experiment object returned by getExperiment(), could help you draw the simulation on a custom Panel with the wanted dimensions. Pretty hacky, but it's all that I can come up with ^^
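A minimal, untested sketch of that drawPresentation idea, assuming the AnyLogic API behaves as described above (the 4000x4000 size and file name are arbitrary, and the exact behavior may vary between AnyLogic versions):

// Render the presentation into an off-screen image larger than the visible panel.
java.awt.image.BufferedImage big = new java.awt.image.BufferedImage(
        4000, 4000, java.awt.image.BufferedImage.TYPE_INT_RGB);
java.awt.Graphics2D g = big.createGraphics();
getExperiment().drawPresentation(getExperiment().getPresentation().getPanel(), g, false);
g.dispose();
try {
    javax.imageio.ImageIO.write(big, "png", new java.io.File("bigScreenshot.png"));
} catch (java.io.IOException e) {
    e.printStackTrace();
}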

How to change orientation of captured byte[] frames through onPreviewFrame callback?

I have tried to search for this question a lot but have never seen any satisfactory answers, so now I have a last hope here.
I have an onPreviewFrame callback set up, which gives a byte[] of raw frames in a supported preview format (NV21, with H.264 encoded type).
Now, the problem is that the callback always starts giving byte[] frames from a fixed orientation; whenever the device rotates, it isn't reflected in the captured byte[] frames. I have tried setDisplayOrientation and setRotation, but these APIs only affect the preview being displayed, not the captured byte[] frames at all.
The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
Finally, is there a way, at any API level, to change the orientation of the byte[] frames?
One possible way, if you don't care about the format, is to use the YuvImage class to get a JPEG buffer, use that buffer to create a Bitmap, and rotate it to the corresponding angle. Something like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size previewSize = camera.getParameters().getPreviewSize();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] rawImage = null;

    // Decode image from the retrieved buffer to JPEG
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), YOUR_JPEG_COMPRESSION, baos);
    rawImage = baos.toByteArray();

    // This is the same image as the preview but in JPEG and not rotated
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
    ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();

    // Rotate the Bitmap
    Matrix matrix = new Matrix();
    matrix.postRotate(YOUR_DEFAULT_ROTATION);

    // We rotate the same Bitmap
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, previewSize.width, previewSize.height, matrix, false);

    // We dump the rotated Bitmap to the stream
    bitmap.compress(CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
    rawImage = rotatedStream.toByteArray();

    // Do something with this byte array
}
I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method:

IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);

This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;

    if (FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if (FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized (FragmentCamera.mVideoRecordLock)
    {
        if (FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null
                && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if (lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height,
                        opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width,
                        yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height,
                        yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);

                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch (com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method "YUV_NV21_TO_BGR", which I found at this link.
Basically this method is used to resolve what I call "the Green Devil problem" on Android. You can see other Android devs facing the same problem in other SO threads. Before adding the "YUV_NV21_TO_BGR" method, when I just took the transpose of the YuvIplImage, and more importantly a combination of transpose and flip (with or without resizing), there was greenish output in the resulting video. This "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the above Google Groups thread.
Also, you should know that all this processing (transpose, flip, and resize) in onPreviewFrame takes a lot of time, which causes a very serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the resulting FPS of the recorded video was down to 3 fps from 30 fps.
I would advise against this approach. Instead, you can do post-recording processing (transpose, flip, and resize) of your video file using JavaCV in an AsyncTask. Hope this helps.
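If the JPEG round trip above and the heavy OpenCV pipeline are both too slow, one further option (not from either answer, just a sketch of a well-known technique) is to rotate the raw NV21 buffer directly on the CPU. NV21 is a full-resolution Y plane followed by an interleaved VU plane at quarter resolution, so both planes can be rotated in one pass:

// Rotates an NV21 frame 90 degrees clockwise; the output frame has
// dimensions height x width.
public static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int frameSize = width * height;
    // Rotate the Y plane.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            output[x * height + (height - 1 - y)] = input[y * width + x];
        }
    }
    // Rotate the interleaved VU plane (half resolution, two bytes per sample).
    for (int y = 0; y < height / 2; y++) {
        for (int x = 0; x < width / 2; x++) {
            int srcIndex = frameSize + y * width + x * 2;
            int dstIndex = frameSize + x * height + (height / 2 - 1 - y) * 2;
            output[dstIndex] = input[srcIndex];         // V
            output[dstIndex + 1] = input[srcIndex + 1]; // U
        }
    }
    return output;
}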

GWT Image original size

I use an Image object to load a PNG image as a thumbnail by calling its setPixelSize() method to resize the image. I also need to retrieve the original size of the image as integers at some point. How can I get the original size (width, height) of the image?
OK, I found a workaround: I use a dummy container (SimplePanel), load the image without scaling, save its real dimensions, and then remove the container from the parent and discard the new Image object. I don't know if this is a good workaround, but it works. Although I would like to know if there is another way...
Problem with the workaround: I have a drop-down list from which I can select logical folders (which contain images). When I select a new folder, and the new set of images is loaded on display, I get 0 for width and 0 for height.
private void getTrueSize(String fullUrl) {
    Image trueImage = new Image();
    this.tstCon.add(trueImage);
    trueImage.setUrl(fullUrl);
    this.trueHeight = trueImage.getHeight();
    this.trueWidth = trueImage.getWidth();
    //this.tstCon.remove(trueImage);
    //trueImage = null;
    GWT.log("Image [" + this.imgTitle + "] -> height=" + this.trueHeight + " -> width=" + this.trueWidth);
}
Extend the Image class and implement the onAttach() method. This method is called when a widget is attached to the browser's document.
public class Thumb extends Image {

    public Thumb() {
        super();
    }

    @Override
    protected void onAttach() {
        Window.alert(getOffsetHeight() + " " + getOffsetWidth()); // this should give you the original value
        setPixelSize(10, 10); // this should be the visible value
        super.onAttach();
    }
}
If this doesn't work, try implementing onLoad() instead of onAttach(), since onLoad() is called after the image has been added to the DOM, so it definitely should work.
The easiest way is to create a new hidden Image (positioned absolutely outside the viewport) and read its size. There is a JavaScript lib that can read the EXIF data of the image, but that would be overkill in this case.
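A small sketch of that hidden-image approach (probe is a hypothetical name; the LoadHandler fires once the browser has loaded the image, which only happens while it is attached):

final Image probe = new Image();
probe.addLoadHandler(new LoadHandler() {
    @Override
    public void onLoad(LoadEvent event) {
        // naturalWidth/naturalHeight hold the original, unscaled size.
        int w = probe.getElement().getPropertyInt("naturalWidth");
        int h = probe.getElement().getPropertyInt("naturalHeight");
        GWT.log("original size: " + w + " x " + h);
        probe.removeFromParent(); // discard the probe once measured
    }
});
// Park the image off-screen so it never becomes visible.
probe.getElement().getStyle().setPosition(Style.Position.ABSOLUTE);
probe.getElement().getStyle().setLeft(-10000, Style.Unit.PX);
RootPanel.get().add(probe); // must be attached for the browser to load it
probe.setUrl(fullUrl);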
Read naturalHeight and naturalWidth of the image element after the image has loaded.
public Panel() {
    image = new Image();
    image.addLoadHandler(this::onLoad);
}

private void onLoad(@SuppressWarnings("unused") LoadEvent event) {
    origImageWidth = getNaturalWidth(image);
    origImageHeight = getNaturalHeight(image);
}

private static int getNaturalHeight(Image img) {
    return img.getElement().getPropertyInt("naturalHeight"); //$NON-NLS-1$
}

private static int getNaturalWidth(Image img) {
    return img.getElement().getPropertyInt("naturalWidth"); //$NON-NLS-1$
}