I am working on an encoding flow that adds an overlay image to the video. I want this watermark to be in the bottom right corner. No matter what I try with the position parameter, the watermark is positioned in the top left.
Can someone point me to a sample that covers that scenario? The Microsoft docs and samples are vague about positioning the overlay and setting the opacity.
Here is a snippet of the code that contains my transform definition. No matter what values I use for the rectangle position element, nothing changes; the watermark still ends up in the top left of the video. It's like the position and opacity properties are being ignored.
new AMSModels.TransformOutput(
    new AMSModels.StandardEncoderPreset(
        filters: new AMSModels.Filters
        {
            Overlays = new List<AMSModels.Overlay>
            {
                new AMSModels.VideoOverlay()
                {
                    InputLabel = "tbvvideooverlay",
                    Opacity = .5,
                    // I've tried all types of values here, including percentages; nothing changes when I re-encode the video.
                    Position = new AMSModels.Rectangle() { Left = "100", Top = "100", Width = "100", Height = "100" }
                }
            }
        },
        codecs: new AMSModels.Codec[]
        {
            // Add an AAC audio layer for the audio encoding
            new AMSModels.AacAudio(
                channels: 2,
                samplingRate: 48000,
                bitrate: 128000,
                profile: AMSModels.AacAudioProfile.AacLc
            ),
            // Next, add an H264Video for the video encoding
            new AMSModels.H264Video(
                // Set the GOP interval to 2 seconds for all H264Layers
                keyFrameInterval: TimeSpan.FromSeconds(2),
                // Add H264Layers. Assign a label that you can use for the output filename.
                layers: new AMSModels.H264Layer[]
                {
                    new AMSModels.H264Layer(
                        bitrate: 3600000, // Units are bits per second, not kbps or Mbps - 3.6 Mbps or 3,600 kbps
                        width: "1280",
                        height: "720",
                        label: "3600" // This label is used to modify the file name in the output formats
                    ),
                    new AMSModels.H264Layer(
                        bitrate: 1600000, // 1.6 Mbps or 1,600 kbps
                        width: "960",
                        height: "540",
                        label: "1600"
                    ),
                    new AMSModels.H264Layer(
                        bitrate: 600000, // 0.6 Mbps or 600 kbps
                        width: "640",
                        height: "360",
                        label: "600"
                    )
                }
            ),
            // Also generate a set of PNG thumbnails
            new AMSModels.PngImage(
                start: "10%",
                step: "10%",
                range: "90%",
                layers: new AMSModels.PngLayer[]
                {
                    new AMSModels.PngLayer(
                        width: "100%",
                        height: "100%"
                    )
                }
            ),
            // ...and a set of JPG thumbnails
            new AMSModels.JpgImage(
                start: "10%",
                step: "10%",
                range: "90%",
                layers: new AMSModels.JpgLayer[]
                {
                    new AMSModels.JpgLayer(
                        quality: 100,
                        width: "100%",
                        height: "100%"
                    )
                }
            )
        },
        // Specify the format for the output files - one for video+audio, and others for the thumbnails
        formats: new AMSModels.Format[]
        {
            // Mux the H.264 video and AAC audio into MP4 files, using basename, resolution, bitrate and extension macros.
            // Since multiple H264Layers are defined above, use a macro that produces unique names per H264Layer;
            // either {Label} or {Bitrate} should suffice.
            new AMSModels.Mp4Format(
                filenamePattern: "{Basename}_{Resolution}_{Bitrate}{Extension}"
            ),
            new AMSModels.PngFormat(
                filenamePattern: "Thumbnail-{Basename}-{Index}{Extension}"
            ),
            new AMSModels.JpgFormat(
                filenamePattern: "Thumbnail-{Basename}-{Index}{Extension}"
            )
        }
    ),
    onError: AMSModels.OnErrorType.StopProcessingJob,
    relativePriority: AMSModels.Priority.Normal
)
};
string description = "A simple custom encoding transform with 2 MP4 bitrates";
// Create the custom Transform with the outputs defined above
transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, transformName, outputs, description);
}
Thank you!
Thank you for identifying this concern and pointing it out. We took a look and can confirm that there is indeed a bug here.
Now the bad news: we are going to have to fix this and redeploy to production, which could take some time on our side. If you need a quick solution in the meantime, the only thing I can suggest is using the older v2 API (yes, the one we announced deprecation for) to work around the issue until we can get a code fix out to production.
Here is the older method of doing this in v2 if that works for you - https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-advanced-encoding-with-mes#overlay
Thanks for confirming this. I've already migrated and committed to v3 so turning back would be messy.
I am still working on it, but I do have a workaround that will get me by. If I make an image the same size as my largest video rendition, I can position the watermark in the bottom right of that image. As long as it's a transparent PNG, it "looks" like the watermark is bottom-right justified.
I still have to put this solution to the test across all of the video assets I have, but so far I think it will work until the fix above ships. Once that is fixed, I plan on allowing our users to pick from a variety of watermarks.
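For anyone who wants to script that padding step, here is a rough sketch using the sharp npm package (the file names and the 1280x720 frame size are illustrative assumptions, not from the thread):

// Pad the watermark onto a transparent canvas matching the largest
// video rendition, with the logo composited into the bottom-right corner.
const sharp = require('sharp');

sharp({
  create: {
    width: 1280,
    height: 720,
    channels: 4,
    background: { r: 0, g: 0, b: 0, alpha: 0 } // fully transparent
  }
})
  .composite([{ input: 'watermark.png', gravity: 'southeast' }])
  .png()
  .toFile('overlay-1280x720.png');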
Thanks again!
Eric
Related
I have set the Capacitor camera plugin with the following options
const image = await Camera.getPhoto({
  quality: 20,
  width: 200,
  height: 200,
  allowEditing: false,
  source: CameraSource.Camera,
  resultType: CameraResultType.Base64
});
After that, I take the base64 and upload it to the server. However, I am getting images in the 2-5 MB range (on Android). Is there any way of reducing/compressing the size further?
I've already checked this and other similar postings, but they adjust the size by reducing the quality param and the width and height, which are already low in my case.
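If the plugin's own quality/width/height options aren't doing enough, one option is to re-encode the image on the web layer before uploading: draw the base64 photo onto a canvas and export it again at a smaller size and lower quality. A rough, untested sketch (the helper name and the 800/0.5 values are illustrative):

// Illustrative helper: downscale and re-encode a base64 JPEG via a canvas.
function recompressBase64(base64, maxDim, quality) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      const scale = Math.min(1, maxDim / Math.max(img.width, img.height));
      const canvas = document.createElement('canvas');
      canvas.width = Math.round(img.width * scale);
      canvas.height = Math.round(img.height * scale);
      canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
      // Strip the "data:image/jpeg;base64," prefix before uploading if needed.
      resolve(canvas.toDataURL('image/jpeg', quality));
    };
    img.onerror = reject;
    img.src = 'data:image/jpeg;base64,' + base64;
  });
}

// const smaller = await recompressBase64(image.base64String, 800, 0.5);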
I have tried to increase the quality and resolution of the CanvasRecorder for HTML element recording with RecordRTC, but nothing changes. I need the highest possible video quality to record CSS animations inside an HTML element. Did I miss something?
Update: I have also tried to increase the dpi/scale in the html2canvas library by adding {scale: 2}, but the video is still blurry.
Updated code:
https://jsfiddle.net/ztgqbu6x/
var options = {
  type: 'canvas', // mandatory string
  video: {
    width: 1920,
    height: 1280
  },
  canvas: {
    width: 1920,
    height: 1280
  },
  timeSlice: 10,
  // used by CanvasRecorder and WhammyRecorder;
  // it is kind of a "frameRate"
  frameInterval: 90,
  // used by CanvasRecorder and WhammyRecorder;
  // you can pass {width: 640, height: 480} as well
  //video: HTMLVideoElement,
  // used by WebAssemblyRecorder
  frameRate: 90,
  // used by WebAssemblyRecorder
  bitrate: 128000
};
var recorder = new RecordRTC(canvas2d, options);
Here is a full jsfiddle example
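One thing worth checking, as a hedged suggestion rather than a confirmed fix: with type: 'canvas', RecordRTC captures frames from the canvas's backing store, so the recording can only be as sharp as the canvas itself; the video/canvas sizes in the options don't upscale it. Rendering the source canvas at a higher pixel density before recording might help, e.g.:

// Sketch: enlarge the backing store of the canvas being recorded
// ('canvas2d' from the snippet above) so the captured frames are sharper,
// while keeping the on-screen size unchanged via CSS.
var scale = window.devicePixelRatio || 2;
canvas2d.width = 1920 * scale;
canvas2d.height = 1280 * scale;
canvas2d.style.width = '1920px';
canvas2d.style.height = '1280px';
canvas2d.getContext('2d').setTransform(scale, 0, 0, scale, 0, 0);

var recorder = new RecordRTC(canvas2d, options);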
I use a custom series and draw a polygon:
data = [
  [80.9251933067, 207.9047427038],
  [52.8853803102, 337.7443022089],
  [25.9926385814, 120.3586150136]
];
I use echarts.graphic.clipPointsByRect() (like in this echarts-example) to make sure the polygon is not drawn outside of the grid.
echarts.graphic.clipPointsByRect(points, {
  x: params.coordSys.x,
  y: params.coordSys.y,
  width: params.coordSys.width,
  height: params.coordSys.height
})
Initially the polygon is drawn correctly. But when I zoom in, the polygon gets distorted: e.g. if you click the zoom button below the chart to zoom from 40 to 60, I'd expect to see just that part of the shape, but instead the shape is drawn distorted.
Maybe this function is not meant for this use case, or is this a bug?
Is there another function for this use case, or does anyone know a workaround?
Update
Version 4.4.x contains a new clip feature, which makes it easy to avoid the distortion: in the render function we no longer need to clip our shapes (no need to call clipPointsByRect()); instead we just activate clip on the custom series.
New series definition with clip: true:
series: [{
  type: 'custom',
  clip: true,
  renderItem: renderItem,
  data: data
}]
Here is an updated jsfiddle example
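For reference, a matching renderItem could look roughly like this (a sketch assuming the polygon data shown above; the fiddle's version may differ):

function renderItem(params, api) {
  // Map each data point from data coordinates to pixel coordinates.
  var points = data.map(function (d) {
    return api.coord(d);
  });
  // With clip: true on the series, echarts clips the polygon for us.
  return {
    type: 'polygon',
    shape: { points: points },
    style: api.style()
  };
}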
Original Version
It seems that the function is really not working as expected - see this comment in the echarts source code:
export function clipPointsByRect(points, rect) {
// FIXME: this way migth be incorrect when grpahic clipped by a corner.
// and when element have border.
I've created issue #10222 for the echarts project.
A workaround for now is to use a custom clipping function; e.g. the lineclip library supports the Sutherland-Hodgman algorithm for polygon clipping.
Here is the updated jsfiddle example that shows the correct result when you zoom in.
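For illustration, the lineclip call might look like this (a sketch; lineclip.polygon takes the points plus a [xmin, ymin, xmax, ymax] bounding box):

// Clip the polygon against the grid rectangle using lineclip's
// Sutherland-Hodgman implementation, then draw the clipped points instead.
var bbox = [
  params.coordSys.x,
  params.coordSys.y,
  params.coordSys.x + params.coordSys.width,
  params.coordSys.y + params.coordSys.height
];
var clippedPoints = lineclip.polygon(points, bbox);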
I can create a theme and replace the default palette like this:
const theme = createMuiTheme({
  palette: {
    primary: {
      main: '#aa2222',
    },
    extra: {
      main: '#22aa22',
    },
  },
});
This automatically sets theme.palette.primary.light and theme.palette.primary.dark. However, it does not set the equivalent light and dark values for the extra object.
Is there a way to do this for custom keys like extra without having to manually calculate the RGB values? Or am I limited to only primary, secondary, and error getting calculated automatically?
Worked this one out. This is added on to the end of the code in the question above:
theme.palette.augmentColor(theme.palette.extra, 500, 300, 700);
The three numeric parameters are the mainShade, lightShade and darkShade values. These are the ones used for the default palettes:
augmentColor(primary, 500, 300, 700);
augmentColor(secondary, 'A400', 'A200', 'A700');
augmentColor(error, 500, 300, 700);
I think they are there so you can tweak how much the colours are lightened or darkened in case the default isn't readable enough.
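Putting it together, a minimal sketch (against the Material-UI version current at the time, where augmentColor mutated the passed color object in place; newer versions return an augmented copy instead, so check your version's behaviour):

import { createMuiTheme } from '@material-ui/core/styles';

const theme = createMuiTheme({
  palette: {
    primary: { main: '#aa2222' },
    extra: { main: '#22aa22' },
  },
});

// Fills in extra.light, extra.dark and extra.contrastText,
// derived from extra.main via the palette's tonal offset.
theme.palette.augmentColor(theme.palette.extra, 500, 300, 700);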
I'm manually porting an extension I wrote for Chrome over to Firefox. I'm attaching a panel to a widget and setting the content of that panel from an HTML file. How can I make the panel shrink and grow with its content? There are a lot of unsightly scroll bars and grey background right now.
var data = require("self").data;
var text_entry = require("panel").Panel({
width: 320,
height: 181,
contentURL: data.url("text-entry.html"),
contentScriptFile: data.url("get-text.js")
});
require("widget").Widget({
label: "Text entry",
id: "text-entry",
contentURL: "http://www.mozilla.org/favicon.ico",
panel: text_entry
});
Not setting the height property of the panel makes it quite tall.
You might want to check out this example, which resizes the panel based on the loaded document; it resizes to the content size at least on the initial load:
https://builder.addons.mozilla.org/package/150225/latest/
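A rough sketch of that idea, assuming the SDK's port messaging and that a panel's width/height can be updated after creation (the "resize" event name and wiring are illustrative):

// main.js - resize the panel when the content script reports its size.
text_entry.port.on("resize", function (size) {
  text_entry.width = size.width;
  text_entry.height = size.height;
});

// get-text.js (content script) - report the document's size once loaded.
window.addEventListener("load", function () {
  self.port.emit("resize", {
    width: document.documentElement.scrollWidth,
    height: document.documentElement.scrollHeight
  });
}, false);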
(Sorry for the delay in responding, I've been AFK travelling.)