Ionic Capacitor camera: how to reduce image size before uploading

I have set up the Capacitor Camera plugin with the following options:
const image = await Camera.getPhoto({
  quality: 20,
  width: 200,
  height: 200,
  allowEditing: false,
  source: CameraSource.Camera,
  resultType: CameraResultType.Base64
});
After that I take the base64 string and upload it to the server. However, I am still getting images in the 2-5 MB range (on Android). Is there any way of reducing/compressing the size further?
I have already checked this and other similar postings, but they all seem to adjust the size by reducing the quality parameter and the width and height, which are already low in my case.
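One generic workaround (a browser-side sketch, not part of the Camera plugin; the helper name and target values below are illustrative) is to re-encode the returned base64 image through an off-screen canvas before uploading:
// Sketch: re-compress a base64 image via an off-screen canvas.
// Assumes the plugin returned a JPEG; names and values are illustrative.
function compressBase64(base64, maxSide, quality) {
  return new Promise(function (resolve, reject) {
    var img = new Image();
    img.onload = function () {
      var scale = Math.min(1, maxSide / Math.max(img.width, img.height));
      var canvas = document.createElement('canvas');
      canvas.width = Math.round(img.width * scale);
      canvas.height = Math.round(img.height * scale);
      canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
      // toDataURL re-encodes as JPEG at the given quality (0..1);
      // strip the "data:image/jpeg;base64," prefix before uploading
      resolve(canvas.toDataURL('image/jpeg', quality).split(',')[1]);
    };
    img.onerror = reject;
    img.src = 'data:image/jpeg;base64,' + base64;
  });
}
// e.g. const smaller = await compressBase64(image.base64String, 200, 0.5);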

Related

Azure Media Services - v3 Overlay position issue

I am working on an encoding flow that adds an overlay image to the video. I want this watermark to be in the bottom right corner, but no matter what I try with the position parameter, the watermark is positioned in the top left.
Can someone point me to a sample that covers this scenario? The Microsoft docs and samples are vague about positioning the overlay and setting its opacity.
Here is a snippet of the code that contains my transform definition. Regardless of the overlay settings, I can use any values for the rectangle position element and nothing changes: the watermark ends up in the upper left of the video. It's as if the position and opacity properties are being ignored.
var outputs = new AMSModels.TransformOutput[] // reconstructed opening; the original snippet started mid-statement
{
    new AMSModels.TransformOutput(
        new AMSModels.StandardEncoderPreset(
            filters: new AMSModels.Filters
            {
                Overlays = new List<AMSModels.Overlay>
                {
                    new AMSModels.VideoOverlay()
                    {
                        InputLabel = "tbvvideooverlay",
                        Opacity = .5,
                        // I've tried all types of values here, including percentages;
                        // nothing changes when I re-encode the video.
                        Position = new AMSModels.Rectangle() { Left = "100", Top = "100", Width = "100", Height = "100" }
                    }
                }
            },
            codecs: new AMSModels.Codec[]
            {
                // Add an AAC audio layer for the audio encoding
                new AMSModels.AacAudio(
                    channels: 2,
                    samplingRate: 48000,
                    bitrate: 128000,
                    profile: AMSModels.AacAudioProfile.AacLc
                ),
                // Next, add an H264Video for the video encoding
                new AMSModels.H264Video(
                    // Set the GOP interval to 2 seconds for all H264Layers
                    keyFrameInterval: TimeSpan.FromSeconds(2),
                    // Add H264Layers. Assign a label that you can use for the output filename.
                    layers: new AMSModels.H264Layer[]
                    {
                        new AMSModels.H264Layer(
                            bitrate: 3600000, // Units are bits per second, not kbps or Mbps - 3.6 Mbps or 3,600 kbps
                            width: "1280",
                            height: "720",
                            label: "3600" // This label is used to modify the file name in the output formats
                        ),
                        new AMSModels.H264Layer(
                            bitrate: 1600000, // 1.6 Mbps or 1,600 kbps
                            width: "960",
                            height: "540",
                            label: "1600"
                        ),
                        new AMSModels.H264Layer(
                            bitrate: 600000, // 0.6 Mbps or 600 kbps
                            width: "640",
                            height: "360",
                            label: "600"
                        ),
                    }
                ),
                // Also generate a set of PNG thumbnails
                new AMSModels.PngImage(
                    start: "10%",
                    step: "10%",
                    range: "90%",
                    layers: new Microsoft.Azure.Management.Media.Models.PngLayer[]
                    {
                        new AMSModels.PngLayer(
                            width: "100%",
                            height: "100%"
                        )
                    }
                ),
                new AMSModels.JpgImage(
                    start: "10%",
                    step: "10%",
                    range: "90%",
                    layers: new Microsoft.Azure.Management.Media.Models.JpgLayer[]
                    {
                        new AMSModels.JpgLayer(
                            quality: 100,
                            width: "100%",
                            height: "100%"
                        )
                    }
                )
            },
            // Specify the format for the output files - one for video+audio, and another for the thumbnails
            formats: new AMSModels.Format[]
            {
                // Mux the H.264 video and AAC audio into MP4 files, using basename, label, bitrate and extension macros.
                // Since there are multiple H264Layers defined above, use a macro that produces unique names per layer:
                // either {Label} or {Bitrate} should suffice.
                new AMSModels.Mp4Format(
                    filenamePattern: "{Basename}_{Resolution}_{Bitrate}{Extension}"
                ),
                new AMSModels.PngFormat(
                    filenamePattern: "Thumbnail-{Basename}-{Index}{Extension}"
                ),
                new AMSModels.JpgFormat(
                    filenamePattern: "Thumbnail-{Basename}-{Index}{Extension}"
                )
            }
        ),
        onError: AMSModels.OnErrorType.StopProcessingJob,
        relativePriority: AMSModels.Priority.Normal
    )
};
string description = "A simple custom encoding transform with 2 MP4 bitrates";
// Create the custom Transform with the outputs defined above
transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, transformName, outputs, description);
}
Thank you!
Thank you very much for identifying this concern and pointing it out. We took a look and have confirmed that there is indeed a bug here.
Now the bad news: we are going to have to fix this and redeploy to production, which could take some time on our side. If you need a quick solution, the only thing I can suggest is to use the older v2 API (yes, the one we announced deprecation on) to work around the issue until we can get a code fix out to production.
Here is the older method of doing this in v2, if that works for you: https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-advanced-encoding-with-mes#overlay
Thanks for confirming this. I've already migrated and committed to v3, so turning back would be messy.
I am still working on it, but I do have a workaround that will get me by: if I make an image the same size as my largest video frame, I can position the watermark in the bottom right of that image. As long as it's a transparent PNG, it "looks" like the watermark is bottom-right justified.
I still have to put this solution to the test across all the video assets I have, but so far I think it will work until the fix above ships. Once that is fixed, I plan on allowing our users to pick from a variety of watermarks.
Thanks again!
Eric

How to increase the resolution and the quality of the CanvasRecorder in RecordRTC?

I have tried to increase the quality and resolution of the CanvasRecorder in HTML element recording using RecordRTC, but nothing changes. I need the highest possible video quality to record CSS animations inside an HTML element. Did I miss something?
Update: I have also tried to increase the dpi/scale in the html2canvas library by passing { scale: 2 }, but the video is still blurry.
Updated code:
https://jsfiddle.net/ztgqbu6x/
var options = {
    type: 'canvas', // Mandatory STRING
    video: {
        width: 1920,
        height: 1280
    },
    canvas: {
        width: 1920,
        height: 1280
    },
    timeSlice: 10,
    // used by CanvasRecorder and WhammyRecorder
    // it is kind of a "frameRate"
    frameInterval: 90,
    // used by CanvasRecorder and WhammyRecorder
    // you can pass {width: 640, height: 480} as well
    //video: HTMLVideoElement,
    // used by WebAssemblyRecorder
    frameRate: 90,
    // used by WebAssemblyRecorder
    bitrate: 128000
};
var recorder = new RecordRTC(canvas2d, options);
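One thing worth checking, independent of the RecordRTC options above: the recorder can only capture the pixels the canvas actually has, so if the canvas backing store is smaller than 1920x1280, the recording will be upscaled and blurry. A minimal sketch of the usual fix, rendering the canvas at a higher pixel density (the 2x factor and the selector are illustrative, not from the fiddle):
// Sketch: enlarge the canvas backing store while keeping its on-screen size.
var scale = 2; // illustrative; window.devicePixelRatio is a common choice
var canvas = document.querySelector('canvas'); // the canvas passed to RecordRTC
canvas.width = 1920 * scale;    // backing-store (recorded) resolution
canvas.height = 1280 * scale;
canvas.style.width = '1920px';  // CSS size stays the same on screen
canvas.style.height = '1280px';
var ctx = canvas.getContext('2d');
ctx.scale(scale, scale);        // keep drawing in CSS pixels, rendered at 2x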

How can I access a Google Home device's screen dimensions?

I am trying to add image views for my Google Action; however, the images I have supplied do not always fit the screen. I noticed that the Image reference says I can supply a height and width in pixels, but I need to know the device's screen dimensions to make the image fit! Is there a way I can access this?
Here is my current implementation:
'use strict';
const { dialogflow, Image } = require("actions-on-google"); // Image must be imported as well
const functions = require("firebase-functions");
const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
    conv.ask("testing....");
    conv.ask(
        new Image({
            url: 'https://myurl.com/someimage.jpg',
            alt: 'some image',
            // width: NEED THIS VALUE
            // height: NEED THIS VALUE
        })
    );
    conv.ask("bla bla bla. ");
});

exports.yourAction = functions.https.onRequest(app);
There is currently no way to get the screen dimensions of the device.
You can set the image to a reasonable size through device testing, then use ImageDisplayOptions to adjust how the image is resized on different screen sizes.
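For example, in the Node.js actions-on-google client the imageDisplayOptions setting is exposed (if I recall the API correctly) as the display property of a BasicCard; a sketch:
const { dialogflow, BasicCard, Image } = require('actions-on-google');
const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
    conv.ask('testing....'); // a simple response must accompany the card
    conv.ask(new BasicCard({
        image: new Image({
            url: 'https://myurl.com/someimage.jpg',
            alt: 'some image',
        }),
        // 'DEFAULT' pads with gray bars, 'WHITE' with white bars,
        // 'CROPPED' scales and crops the image to fill the container
        display: 'CROPPED',
    }));
});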

Can I limit the file size for an Attachinary/Cloudinary upload?

Using Attachinary in combination with Cloudinary on Rails: is there a way to limit the size (width and height) of the uploaded image file before uploading it?
You can use an incoming transformation to limit the dimensions of the uploaded image; larger images will be scaled down. For example:
<%= form.attachinary_file_field :image, cloudinary: { transformation: { width: 200, height: 200, crop: :limit } } %>
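Note that the :limit crop mode preserves the original aspect ratio and never upscales, so images that already fit within the 200x200 bounds are stored unchanged.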

Change padding on fancybox depending on image width

I am new to JavaScript and jQuery and am trying to make a slideshow using Fancybox.
The problem is that I want the images to display in a box of the same size regardless of whether they are portrait or landscape. My images are all either 700 x 525 for landscape or 525 x 700 for portrait.
The way I have it, the landscape images load as shown at the top of the image below, the portrait images load as shown in the middle, and I want the portrait images to load as shown at the bottom, in a box with the same dimensions as if it were landscape:
I think what I should do is change the left padding depending on the image dimensions, but I have no idea how.
Thank you for your help in advance.
I am using Fancybox version 2.1.4 and I have set the defaults as follows:
padding : 15,
margin : 20,
width : 800,
height : 600,
minWidth : 100,
minHeight : 100,
maxWidth : 9999,
maxHeight : 9999,
autoSize : true,
autoHeight : false,
autoWidth : false,
autoResize : true,
autoCenter : !isTouch,
fitToView : true,
aspectRatio : false,
topRatio : 0.5,
leftRatio : 0.5,
I know this post is 11 months old, but I thought I would share my solution in case it helps someone else in the future.
Basically, I set a min-width CSS rule on the fancybox element and compare it against the current image width; if the image is smaller, I add padding to the fancybox element so that the element stays the same width while the image inside is horizontally centered.
Step 1: in your CSS, set a min-width on the fancybox element. This is the width you want your fancybox element to stay at regardless of image width.
.fancybox-wrap { min-width: 1120px; }
Step 2: add an afterLoad callback when you call fancybox:
$(".fancybox").fancybox({
afterLoad: function(current) {
var $el = $(".fancybox-wrap").eq(0); // grab the main fancybox element
var getcurrwidth = current.width; // grab the currrent width of the element
var getdesiredwidth = $el.css('min-width'); // grab our min-width that we set from the css
var currwidth = Number(getcurrwidth);
var desiredwidth = Number(getdesiredwidth.replace('px', ''));
if (currwidth < desiredwidth)
{
var custompadding = (desiredwidth - currwidth) * 0.5; // if the width of the element is smaller than our desired width then set padding amount
this.skin.css({'padding-left': custompadding+'px', 'padding-right': custompadding+'px' }); // add equal padding to the fancybox skin element
}
}
});