TinyMCE can't resize YouTube videos embedded on Plone 4.3.10rc1

I've upgraded to Products.TinyMCE 1.3.25 on my Plone 4.3.10rc1 installation. When I add an embedded video in edit mode, I can't resize the frame. It occurs only with YouTube videos; it works fine with Vimeo, for instance.
I have also asked at https://github.com/tinymce/tinymce/issues/3614?_pjax=%23js-repo-pjax-container, but there is no answer yet.
Any ideas about this? Thanks in advance.

You cannot resize the video because its dimensions are being explicitly set by the media plugin.
I am on TinyMCE version 3.5.12 (2016-10-31). I tried to debug the JavaScript, and in the media plugin there is a piece of code that compares the URL against a pattern; if the URL is a YouTube one, it sets the size to exactly 425x350.
The relevant code is this:
// YouTube Embed
if (src.match(/youtube\.com\/embed\/\w+/)) {
    data.width = 425;
    data.height = 350;
    data.params.frameborder = '0';
    data.type = 'iframe';
    setVal('src', src);
    setVal('media_type', data.type);
}
...
setVal('width', data.width || (data.type == 'audio' ? 300 : 320));
setVal('height', data.height || (data.type == 'audio' ? 32 : 240));
I don't fully understand the intent of this code yet, but it is clearly there on purpose, not just broken. It may be meant to set the dimensions only the first time, as an initial size, but as written it sets them every time, which is why a manual resize never sticks. A possible workaround is sketched below.
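One possible workaround (a sketch against the 3.x media plugin code shown above, not a tested patch) is to apply 425x350 only as a fallback when no size has been entered yet:
// YouTube Embed
if (src.match(/youtube\.com\/embed\/\w+/)) {
    // Only fall back to the defaults when the dialog has no size yet,
    // so a manually entered size is no longer overwritten.
    data.width = data.width || 425;
    data.height = data.height || 350;
    data.params.frameborder = '0';
    data.type = 'iframe';
    setVal('src', src);
    setVal('media_type', data.type);
}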

Related

Camera doesn't capture anything in AirSim

In my code I have the following lines:
imgs = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)],
    vehicle_name=name)
imgs doesn't contain anything. Do we need to enable a camera in AirSim? I am using Unreal Engine 4. Please help me out, as very little documentation about AirSim is available on the web.
Did you modify the settings file to give a specific description of the camera? And did you hit the Play button to run the simulation? A minimal settings sketch is shown below.
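For reference, camera capture settings live in AirSim's settings.json. A minimal sketch might look like this (the vehicle name "Drone1" and the capture values are assumptions, not taken from the question; ImageType 0 corresponds to Scene):
{
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "Vehicles": {
    "Drone1": {
      "VehicleType": "SimpleFlight",
      "Cameras": {
        "0": {
          "CaptureSettings": [
            { "ImageType": 0, "Width": 1280, "Height": 720 }
          ]
        }
      }
    }
  }
}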

Kinect V2 - Loading XEF files recorded in Kinect Studio, accessing the Color and Depth frames

I need to get the Color and Depth frames from an XEF file recorded using Kinect Studio.
My code for accessing the Color and Depth frames when using the Kinect directly looks like this:
_sensor = KinectSensor.GetDefault();

if (_sensor != null)
{
    _sensor.Open();
    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Depth | FrameSourceTypes.Infrared | FrameSourceTypes.Body);
    _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
    _coordinateMapper = _sensor.CoordinateMapper;
}
In private void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e) I do my magic, which works.
Now how do I go about that using a pre-recorded XEF file?
I got that I can load an XEF file like this:
var kStudioClient = KStudio.CreateClient();
var eventFile = kStudioClient.OpenEventFile(@"D:\Kinect Studio Recordings\20170922_083134_00.xef");
But how can I get a MultiSourceFrame from that?
Any help is greatly appreciated! Thanks!
You are on the right track with the KStudioClient API. If you haven't implemented it yourself already, there is also a KStudioPlayback class you should use to play back XEF clips asynchronously. I will not walk through exact playback code at this stage - the API is very easy to understand. Correct usage of this class will raise MultiSourceFrameArrived events automatically, so you do not need to change the way you handle them.
Here is everything you need to know to get up to speed with the KStudioPlayback class - KStudioPlayback class API. If you need code samples, post a comment, and I will get back to you.
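For orientation, here is a minimal sketch of the playback flow, assuming the Microsoft.Kinect.Tools API and the sensor/reader setup from the question (so the existing MultiSourceFrameArrived handler keeps working unchanged):
using (var client = KStudio.CreateClient())
{
    client.ConnectToService();

    // Playback pushes the recorded frames through the Kinect service,
    // so the normal frame events fire as if a real sensor delivered them.
    using (var playback = client.CreatePlayback(@"D:\Kinect Studio Recordings\20170922_083134_00.xef"))
    {
        playback.Start();

        while (playback.State == KStudioPlaybackState.Playing)
        {
            System.Threading.Thread.Sleep(100);
        }
    }

    client.DisconnectFromService();
}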

IMFSourceReader giving error 0x80070491 for some resolutions

I'm trying to capture video from a 5MP UVC camera using an IMFSourceReader from Microsoft Media Foundation (on Windows 7 x64). Everything works just like the documentation, with no errors on any API calls, until the first callback into OnReadSample(), which has "0x80070491 There was no match for the specified key in the index" as its hrStatus parameter.
When I set the resolution down to 1080p it works fine even though 5MP is the camera's native resolution and 5MP (2592x1944) enumerates as an available format.
I can't find anything in the Microsoft documentation to say that this behaviour is by design, but it seems consistent so far. Has anyone else got IMFSourceReader to work at more than 1080p?
I see the same effects on the Microsoft MFCaptureToFile example when it's forced to select the native resolution:
HRESULT hr = S_OK;
HRESULT nativeTypeErrorCode = S_OK;
DWORD count = 0;
DWORD streamIndex = 0;
UINT32 requiredWidth = 2592;
UINT32 requiredHeight = 1944;

while ( nativeTypeErrorCode == S_OK )
{
    IMFMediaType * nativeType = NULL;
    nativeTypeErrorCode = m_pReader->GetNativeMediaType( streamIndex, count, &nativeType );
    if ( nativeTypeErrorCode != S_OK ) continue;

    // get the media type
    GUID nativeGuid = { 0 };
    hr = nativeType->GetGUID( MF_MT_SUBTYPE, &nativeGuid );
    if ( FAILED( hr ) ) return hr;

    UINT32 width, height;
    hr = ::MFGetAttributeSize( nativeType, MF_MT_FRAME_SIZE, &width, &height );
    if ( FAILED( hr ) ) return hr;

    if ( nativeGuid == MFVideoFormat_YUY2 && width == requiredWidth && height == requiredHeight )
    {
        // found native config, set it
        hr = m_pReader->SetCurrentMediaType( streamIndex, NULL, nativeType );
        SafeRelease( &nativeType );
        if ( FAILED( hr ) ) return hr;
        break;
    }

    SafeRelease( &nativeType );
    count++;
}
Is there some undocumented maximum resolution in Media Foundation?
It turns out that the problem was with the camera I was using, NOT the media streaming framework or UVC cameras generally.
I have switched back to using DirectShow sample grabbing which seems to work ok so far.
I ran into this same problem on Windows 7 with a USB camera module I got from Amazon.com (ELP-USBFHD01M-L21). The default resolution of 1920x1080x30fps (MJPEG) works fine, but when I try to select 1280x720x60fps (also MJPEG, NOT H.264) I get the 0x80070491 error in the ReadSample callback. Various other resolutions work OK, such as 640x480x120fps. 1280x720x9fps (YUY2) also works.
The camera works fine at 1280x720x60fps in DirectShow.
Unfortunately, 1280x720x60fps is the resolution I want to use to do some fairly low-latency augmented reality stuff with the Oculus Rift.
Interestingly, 1280x720x60fps works fine with the MFCaptureD3D sample in the Windows 10 Technical Preview. I tried copying the ksthunk.sys and usbvideo.sys drivers from my Windows 10 installation to my Windows 7 machine, but they failed to load even when I booted in "Disable Driver Signing" mode.
After looking around on the web, it seems like various people with various webcams have run into this problem. I'm going to have to use DirectShow to do my video capture, which is annoying since it is a very old API that can't be used with app store applications.
I know this is a fairly obscure problem, but since Microsoft seems to have fixed it in Windows 10, it would be great if they backported the fix to Windows 7. As it is, I can't use their recommended Media Foundation API because it won't work on most of the machines I have to run it on.
In any case, if you are having this problem, and Windows 10 is an option, try that as a fix.
Max Behensky

How to merge MP4 videos using mp4parser when they are taken alternately with the front and back cameras

I am developing an app to merge n videos using mp4parser. The videos to be merged are taken with both the front camera and the back camera. If I merge these videos into a single one, all the videos are merged fine, but the videos taken via the front camera end up inverted.
What can I do? Please, anyone, help me.
This is my code to merge the videos:
try {
    String f1, f2, f3;
    f1 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo1.mp4"; // video taken with the back camera
    f2 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo2.mp4"; // video taken with the front camera
    f3 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo3.mp4"; // video taken with the front camera
    Movie[] inMovies = new Movie[] {
            MovieCreator.build(f1),
            MovieCreator.build(f2),
            MovieCreator.build(f3)
    };
    List<Track> videoTracks = new LinkedList<Track>();
    List<Track> audioTracks = new LinkedList<Track>();
    for (Movie m : inMovies) {
        for (Track t : m.getTracks()) {
            if (t.getHandler().equals("soun")) {
                audioTracks.add(t);
            }
            if (t.getHandler().equals("vide")) {
                videoTracks.add(t);
            }
        }
    }
    Movie result = new Movie();
    if (audioTracks.size() > 0) {
        result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
    }
    if (videoTracks.size() > 0) {
        result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
    }
    BasicContainer out = (BasicContainer) new DefaultMp4Builder().build(result);
    WritableByteChannel fc = new RandomAccessFile(
            Environment.getExternalStorageDirectory() + "/DCIM/CombinedVideo.mp4", "rw").getChannel();
    out.writeContainer(fc);
    fc.close();
} catch (Exception e) {
    Log.d("Rvg", "exception: " + e);
    Toast.makeText(getApplicationContext(), "" + e, Toast.LENGTH_LONG).show();
}
Explanation:
This is caused by an orientation issue, explained here. In short, videos recorded with the front camera and the back camera carry different rotation metadata, so when merging them with mp4parser every second video ends up upside down. A quick way to confirm this is to inspect each video track's matrix, as in the sketch below.
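A small diagnostic sketch, assuming the com.googlecode.mp4parser API already used in the question (f2 refers to the front-camera clip from the code above):
Movie movie = MovieCreator.build(f2);
for (Track t : movie.getTracks()) {
    if (t.getHandler().equals("vide")) {
        // Back-camera clips typically report ROTATE_0 here, while
        // front-camera clips report ROTATE_90/180/270; this mismatch
        // is what makes every second merged segment appear inverted.
        Log.d("Rvg", "rotation matrix: " + t.getTrackMetaData().getMatrix());
    }
}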
How to solve this issue:
I have managed to solve this issue in two different ways.
1. Re-encoding the videos - using FFmpeg, I managed to merge the videos in the correct orientation.
How to use it, and my suggestion for using FFmpeg on Android
Cons:
The main problem with this solution is that it requires re-encoding the videos into the merged video, and this process can take a pretty long time.
This method is also lossy, due to the re-encoding.
Pro:
This method works even if the videos' resolution, frame rate, and bitrate are not the same.
2. Using the mp4parser/demuxer method - to merge the videos in the correct orientation using mp4parser or the FFmpeg demuxer method, the input videos must all have the same orientation, or no rotation metadata at all. Using CameraView's takeVideoSnapshot() allows recording a video with no rotation metadata, so when merging it with mp4parser it is oriented correctly.
Con:
The main problem with this solution is that for the merge to succeed and come out with the correct orientation, the input videos must have the same resolution, frame rate, bit rate, and rotation metadata.
Pros:
This method requires no re-encoding, so it is very fast.
It is also lossless.

Playing media with GWT

I have a simple mail system developed with GWT, and I am trying to add a feature to play audio and video files when they arrive as attachments.
I have been trying to get BST Player and the HTML video tag to work, but I cannot play some video formats such as .avi, .mpeg, .mpg, and so on.
What else can be done to play those kinds of video formats?
I am also thinking of converting the video file in a Java servlet and then giving the resulting URL to the player, but I do not know if that makes sense. Is that the way to go?
One last thing: is there a general format (maybe .flv?) that a video file first has to be converted into so it can be played by VlcPlayerPlugin or another video player? Any other tips would be helpful.
Thank you for your help.
The HTML5 video tag can only play certain formats. You can find a list of the formats each browser supports here.
I also had some problems with BST Player, but at least it worked with the following code:
public YoutubeVideoPopup( String youtubeUrl )
{
    // PopupPanel's constructor takes 'auto-hide' as its boolean parameter.
    // If this is set, the panel closes itself automatically when the user
    // clicks outside of it.
    super( true );
    this.setAnimationEnabled( true );

    Widget player = null;
    try
    {
        player = new YouTubePlayer( youtubeUrl, "500", "375" );
        player.setStyleName( "player" );
    }
    catch ( PluginVersionException e )
    {
        // catch plugin version exception and alert user to download plugin first.
        // An option is to use the utility method in the PlayerUtil class.
        player = PlayerUtil.getMissingPluginNotice( Plugin.Auto, "Missing Plugin",
                "You have to install a Flash player first!",
                false );
    }
    catch ( PluginNotFoundException e )
    {
        // catch PluginNotFoundException and tell user to download plugin, possibly providing
        // a link to the plugin download page.
        player = new HTML( "You have to install a Flash player first!" );
    }
    setWidget( player );
}
As you can see, we used the YouTube player here, which has the positive effect that the video can be hosted on YouTube and does not have to be pushed to the server every time the GWT app is redeployed.
You can also play Flash and other formats; simply use the correct player class in the try block. An example for Flash:
player = new com.bramosystems.oss.player.core.client.ui.FlashMediaPlayer( GWT.getHostPageBaseURL( ) +
f4vFileName, true, "375", "500" );
player.setWidth( 500 + "px" );
player.setHeight( "100%" );
Sorry for the delay, I did not have the chance to reply. Because VlcPlayer was behaving strangely and showing different control buttons on Ubuntu and Windows, I decided to use the FlashPlayerPlugin of BST Player. I first convert the file to FLV using JAVE, as described in its documentation (a rough sketch of that conversion step is below), then serve the converted video to the Flash player. It works without problems now; thank you all for your help.
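For reference, the FLV conversion step might look roughly like this (a sketch assuming the classic it.sauronsoftware.jave API; the class and method names below are made up for illustration):
import it.sauronsoftware.jave.AudioAttributes;
import it.sauronsoftware.jave.Encoder;
import it.sauronsoftware.jave.EncodingAttributes;
import it.sauronsoftware.jave.VideoAttributes;
import java.io.File;

public class FlvConverter
{
    public static void toFlv( File source, File target ) throws Exception
    {
        AudioAttributes audio = new AudioAttributes();
        audio.setCodec( "libmp3lame" );

        VideoAttributes video = new VideoAttributes();
        video.setCodec( "flv" );

        EncodingAttributes attrs = new EncodingAttributes();
        attrs.setFormat( "flv" );
        attrs.setAudioAttributes( audio );
        attrs.setVideoAttributes( video );

        // JAVE wraps a bundled ffmpeg binary; this re-encodes the
        // attachment into an FLV that the Flash-based player can use.
        new Encoder().encode( source, target, attrs );
    }
}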