Can I determine FLV dimensions using FlowPlayer?

I'm currently integrating a custom Flash video player plugin into a .NET CMS. The plugin editor currently requires the user to provide the video's width and height so that my code can push the relevant dimensions to FlowPlayer.
I was wondering: is there a way to determine the FLV's width and height automatically, rather than requiring the editor to provide this information each time? Ideally, the user would simply supply the FLV location and the new plugin would pass the width and height to FlowPlayer automatically.

Add this to your FlowPlayer configuration:
clip: {
    onMetaData: function (clip) {
        // clip.metaData holds the FLV's native dimensions
        var width = parseInt(clip.metaData.width, 10);
        var height = parseInt(clip.metaData.height, 10);
        // resize the player's container element to match
        $(this.getParent()).css({ width: width, height: height });
    }
},
The onMetaData event fires once the file's metadata, including its width and height, has been loaded. In the example above, the player is resized to match the movie's dimensions.
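For context, here is a minimal sketch of where that clip configuration sits in a complete FlowPlayer 3 setup; the element id, SWF filename, and FLV URL are placeholders, not values from the question:
flowplayer("player", "flowplayer-3.2.18.swf", {
    clip: {
        url: "videos/example.flv",
        onMetaData: function (clip) {
            // use the FLV's own metadata instead of editor-supplied dimensions
            var width = parseInt(clip.metaData.width, 10);
            var height = parseInt(clip.metaData.height, 10);
            $(this.getParent()).css({ width: width, height: height });
        }
    }
});
With something like this in place, the CMS plugin only needs the FLV location; the player reads the dimensions from the file's metadata itself.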

Related

Unity Agora screenshare blurry video quality

How can I improve the image quality when sharing the screen using the Agora SDK with Unity? I've used the settings below for the VideoEncoderConfiguration:
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    // Sets the video encoding bitrate (Kbps).
    minBitrate = 100,
    bitrate = 1130,
    // Sets the video frame rate.
    minFrameRate = 10,
    frameRate = FRAME_RATE.FRAME_RATE_FPS_24,
    // Sets the video resolution.
    dimensions = new VideoDimensions() { width = EncodeWidth, height = EncodeHeight },
    // Sets the video encoding degradation preference under limited bandwidth.
    // MAINTAIN_QUALITY means the frame rate is degraded to maintain video quality.
    degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY,
    // Note: if the remote user's video surface is set to flip horizontally, flip it before sending.
    mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED,
    // Sets the video orientation mode.
    orientationMode = ORIENTATION_MODE.ORIENTATION_MODE_FIXED_PORTRAIT
});
The output from the Editor to a device looks fine, but the output from a device to the Editor or to another device looks blurry.
I've tested over Wi-Fi on both devices, confirmed the connection quality is good, and also forced the settings to prefer image quality over frame rate:
mRtcEngine.SetVideoQualityParameters(false);
mRtcEngine.EnableDualStreamMode(false);
mRtcEngine.SetRemoteDefaultVideoStreamType(REMOTE_VIDEO_STREAM_TYPE.REMOTE_VIDEO_STREAM_HIGH);
Did I miss anything else that would improve the image quality?
Also, how can I share only a rectangular part of the screen, where the rectangle can be dragged by the user?
Blurry videos may be caused by low bitrates and resolution ratios. Check the following:
Check videoProfile. If possible, set videoProfile to a higher level to see whether the video is clearer (see the sketch at the end of this answer).
Check the stream type of the receiver. If the stream type is low, call the setRemoteVideoStreamType method to switch from a low stream to high stream. (You did this)
Switch to another WiFi network to ensure that the blurry video is not caused by poor Internet connections.
Turn off all pre-processing options.
If this issue persists, contact Agora customer support (via ticket system) with the following information:
The uid of the user who sees the blurry video.
The time frame during which the blurry video appears.
SDK logs and screen recording files of the user.
You can check the statistics of every call in Agora Analytics in Dashboard.
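To give a rough idea of what "a higher level" could look like in the Unity SDK, here is a hedged sketch of a more aggressive encoder configuration; the 1920x1080 resolution and 2080 Kbps bitrate are illustrative values of mine, not figures from this answer:
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    // Higher resolution and bitrate generally reduce blur, at the cost of bandwidth.
    dimensions = new VideoDimensions() { width = 1920, height = 1080 },
    frameRate = FRAME_RATE.FRAME_RATE_FPS_15,
    bitrate = 2080,
    // Prefer image quality over frame rate when bandwidth is limited.
    degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY
});
Raising the resolution and bitrate increases bandwidth usage, so test against your target network conditions.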

Bad video quality while using custom video source in Unity

I am trying to use SetExternalVideoSource and PushVideoFrame to send custom video frames to the RTC engine using the methods described here. However, the video quality is not as good as with the default video streaming options, even though I am pushing frames at the same resolution. Has anyone noticed this difference before? Is it expected, or is there a way to set the custom video quality that I overlooked?
Hope the slack channel answer helped you on this. For others reading this, please see what the discussion was:
"You should give a resolution configuration. Here is the config I use in my advanced demo app:"
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    bitrate = 1130,
    frameRate = FRAME_RATE.FRAME_RATE_FPS_15,
    dimensions = new VideoDimensions() { width = Screen.width, height = Screen.height },
    // Note: if the remote user's video surface is set to flip horizontally, flip it before sending.
    mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED
});

Camera Zoom Issue on iPhone X, iPhone XS etc

The main issue I am having is with my camera: it is zoomed in too far on the iPhone X, XS, XS Max, and XR models.
My camera is full screen, which is fine on the smaller iPhones, but on the models mentioned above it seems to be stuck at the maximum zoom level. What I really want is behavior similar to Instagram's camera: full screen on all models up to the iPhone X series, and then either respecting the safe-area insets or, if it stays full screen, not being zoomed in as far as it is now.
My thought process is to use something like this.
Determine the device. I figure I can use something like Device Guru which can be found here to determine the type of device.
GitHub repo can be found here --> https://github.com/InderKumarRathore/DeviceGuru
Using this tool or a similar tool I should be able to get the screen dimensions for the device. Then I can do some type of math to determine the proper screen size for the camera view.
Assuming DeviceGuru didn't work I would just use something like this to get the width and height of the screen.
// Screen width.
public var screenWidth: CGFloat {
    return UIScreen.main.bounds.width
}

// Screen height.
public var screenHeight: CGFloat {
    return UIScreen.main.bounds.height
}
This is the block of code I am using to fill the screen with the camera preview. However, I want to turn it into something based on the device size, rather than simply filling the screen regardless of the phone.
import Foundation
import UIKit
import AVFoundation

class PreviewView: UIView {

    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check PreviewView.layerClass implementation.")
        }
        layer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        layer.connection?.videoOrientation = .portrait
        return layer
    }

    var session: AVCaptureSession? {
        get {
            return videoPreviewLayer.session
        }
        set {
            videoPreviewLayer.session = newValue
        }
    }

    // MARK: UIView

    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }
}
I want my camera to look something like this
or this
Not this (what my current camera looks like)
I have looked at many questions and nobody has a concrete solution, so please don't mark this as a duplicate, and please don't say it's just an issue with the iPhone X series.
Firstly, you should include the relevant capture-device setup in your question; without it, anyone with less experience will only have a vague idea of what you are doing.
Looking at the images, it is clear that the camera you are currently accessing is not the one you want. With the introduction of the iPhone 7 Plus and iPhone X, Apple added several different camera devices, all of which are accessible through AVCaptureDevice.DeviceType.
Given what you want to achieve, you clearly want a wider field of view on screen. That is what the .builtInWideAngleCamera device type provides, so switching to it should solve your problem.
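As a minimal sketch of that change, assuming a standard AVCaptureSession setup (the function name and session preset below are my own illustration, not code from this answer):
import AVFoundation

func makeWideAngleSession() -> AVCaptureSession? {
    // Ask for the built-in wide-angle back camera rather than a telephoto device.
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: device) else {
        return nil
    }
    let session = AVCaptureSession()
    // .photo gives a full-sensor 4:3 preview; .high may crop to 16:9.
    session.sessionPreset = .photo
    if session.canAddInput(input) {
        session.addInput(input)
    }
    return session
}
The returned session can then be assigned to the PreviewView's session property shown in the question.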
Cheers

JW Player width can be 160 px?

I am trying to embed JW Player in a small space, so I need the player to be 160 px wide.
It looks fine in a desktop browser, but it is broken in iPhone Safari: the height ends up larger than expected, even larger than the width.
The JavaScript code looks like the following.
jwplayer("small_player1").setup({
    width: 160,
    height: 90,
    image: "/assets/videos/thumbnails/empty-523x320.png",
    file: "longtail sample video",
    title: ""
});
So can JW Player be 160x90 and still work on the iPhone? Thank you.

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio-reactive is pretty simple. Here's an open-source site with lots of audio-reactive examples.
As for how to do it, you basically use the Web Audio API to stream the music and its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);

function start() {
  ctx.canvas.removeEventListener('click', start);

  // make a Web Audio context
  const context = new AudioContext();
  const analyser = context.createAnalyser();

  // make a buffer to receive the audio data
  const numPoints = analyser.frequencyBinCount;
  const audioDataArray = new Uint8Array(numPoints);

  function render() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    // get the current audio data
    analyser.getByteFrequencyData(audioDataArray);

    const width = ctx.canvas.width;
    const height = ctx.canvas.height;
    const size = 5;

    // draw a point every `size` pixels
    for (let x = 0; x < width; x += size) {
      // compute the index into the audio data for this point
      const ndx = x * numPoints / width | 0;

      // get the audio data and normalize it to the range 0 to 1
      const audioValue = audioDataArray[ndx] / 255;

      // draw a rect `size` by `size` big
      const y = audioValue * height;
      ctx.fillRect(x, y, size, size);
    }
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);

  // make an audio element
  const audio = new Audio();
  audio.loop = true;
  audio.autoplay = true;

  // This line is only needed if the music you are trying to play is on a
  // different server than the page trying to play it.
  // It asks the server for permission to use the music. If the server says "no"
  // then you will not be able to play the music.
  // Note: if you are using music from the same domain
  // **YOU MUST REMOVE THIS LINE** or your server must give permission.
  audio.crossOrigin = "anonymous";

  // call `handleCanplay` when the music can be played
  audio.addEventListener('canplay', handleCanplay);
  audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
  audio.load();

  function handleCanplay() {
    // connect the audio element to the analyser node and the analyser node
    // to the main Web Audio context
    const source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
  }
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some problems you'll likely run into.
At this point in time (2017/1/3) neither Android Chrome nor iOS Safari supports analysing streaming audio data; instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio inside an input event based on user input, like 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain or you ask for CORS permission and the server grants it. AFAIK only Soundcloud gives permission, and it's on a per-song basis: it's up to the individual artist's song settings whether or not audio analysis is allowed for a particular song.
To try to explain this part:
The default is you have permission to access all data from the same domain but no permission from other domains.
When you add
audio.crossOrigin = "anonymous";
That basically says "ask the server for permission for user 'anonymous'". The server can give permission or not. It's up to the server. This includes asking even the server on the same domain which means if you're going to request a song on the same domain you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission so if you add that line, even if the server is the same domain, if it does not give CORS permission then trying to analyse the audio will fail.
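As a small illustration of that advice, here is a hedged sketch that only sets crossOrigin when the track is actually on another origin; the URL check and the example URL are my own additions, not part of the original answer:
const audio = new Audio();
const songUrl = "https://example.com/music/track.mp3"; // hypothetical URL

// Only request CORS permission when the file is on a different origin;
// for same-origin files, leaving crossOrigin unset avoids a needless CORS check.
if (new URL(songUrl, location.href).origin !== location.origin) {
  audio.crossOrigin = "anonymous";
}
audio.src = songUrl;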
music: DOCTOR VOX - Level Up
By WebGL being "not ready", I'm assuming you're referring to its market penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. Here are some videos of the visualisations (click to watch):
I used SoundManager2 to pull the waveform data from the MP3 file. That feature requires Flash 9, so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on the complexity, you might be interested in trying out Processing (http://www.processing.org). It has really easy tools for making web-based apps, and it has tools to get the FFT and waveform of an audio file.
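For instance, a small sketch using the Minim library that ships with Processing can draw a basic FFT spectrum; the file name "track.mp3" is a placeholder for an audio file in the sketch's data folder:
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  // load and loop a local audio file (placeholder name)
  player = minim.loadFile("track.mp3", 1024);
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  // compute the spectrum of the currently playing buffer
  fft.forward(player.mix);
  for (int i = 0; i < fft.specSize(); i++) {
    // draw one vertical line per frequency band
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}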