I have a simple mail system developed with GWT, and I am trying to add a feature to play audio and video files when they arrive as attachments.
I have been trying to get BST Player and the HTML video tag to work, but I cannot play some video formats such as .avi, .mpeg, .mpg and so on.
What else can be done to play those kinds of video formats?
I am also thinking of converting the video file in a Java servlet and then giving that URL to the player, but I do not know if that makes sense. Is that the way it should be done?
Lastly, is there a general format (maybe .flv?) that a video file first has to be converted into so it can be played by VlcPlayerPlugin or another video player? Any other tips would be helpful.
Thank you for your help.
The HTML5 video tag can only play certain formats. You can find a list of the formats supported by each browser here.
I also had some problems with BST Player, but at least it worked with the following code:
public YoutubeVideoPopup( String youtubeUrl )
{
    // PopupPanel's constructor takes 'auto-hide' as its boolean parameter.
    // If this is set, the panel closes itself automatically when the user
    // clicks outside of it.
    super( true );
    this.setAnimationEnabled( true );

    Widget player = null;
    try
    {
        player = new YouTubePlayer( youtubeUrl, "500", "375" );
        player.setStyleName( "player" );
    }
    catch ( PluginVersionException e )
    {
        // catch the plugin version exception and alert the user to download the plugin first.
        // An option is to use the utility method in the PlayerUtil class.
        player = PlayerUtil.getMissingPluginNotice( Plugin.Auto, "Missing Plugin",
                "You have to install a Flash player first!",
                false );
    }
    catch ( PluginNotFoundException e )
    {
        // catch PluginNotFoundException and tell the user to download the plugin, possibly
        // providing a link to the plugin download page.
        player = new HTML( "You have to install a Flash player first!" );
    }
    setWidget( player );
}
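For completeness, here is a minimal sketch of how such a popup might be shown from calling code. It assumes the class above extends GWT's PopupPanel (which the super(true), setAnimationEnabled and setWidget calls suggest); the URL is just a placeholder:
YoutubeVideoPopup popup = new YoutubeVideoPopup( "http://www.youtube.com/v/VIDEO_ID" );
// PopupPanel.center() positions the auto-hide popup in the middle of the page and shows it
popup.center();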
As you can see, we used the YouTube player here, which has the positive effect that the video can be hosted on YouTube and does not have to be pushed to the server every time the GWT app is redeployed.
You can also play other formats; simply use the correct player class in the try block. Here is an example for Flash:
player = new com.bramosystems.oss.player.core.client.ui.FlashMediaPlayer( GWT.getHostPageBaseURL( ) +
f4vFileName, true, "375", "500" );
player.setWidth( 500 + "px" );
player.setHeight( "100%" );
Sorry for the delay, I did not have a chance to reply. Because VlcPlayer was behaving strangely and showing different control buttons on Ubuntu and Windows, I decided to use the FlashPlayerPlugin of BST Player. I first convert the file to FLV using JAVE, as described in its documentation, then serve the converted video to the FlashPlayer. It works without problems now. Thank you all for your help.
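For reference, my conversion step follows the pattern of the JAVE documentation example. Below is a rough sketch of that servlet-side step, assuming the classic it.sauronsoftware.jave API; the file names, codecs and bit rates here are placeholders, not necessarily the exact values I use:
import it.sauronsoftware.jave.AudioAttributes;
import it.sauronsoftware.jave.Encoder;
import it.sauronsoftware.jave.EncodingAttributes;
import it.sauronsoftware.jave.VideoAttributes;
import it.sauronsoftware.jave.VideoSize;
import java.io.File;

// Convert an arbitrary attachment (e.g. .avi) to FLV so FlashMediaPlayer can play it.
public static void convertToFlv( File source, File target ) throws Exception
{
    AudioAttributes audio = new AudioAttributes();
    audio.setCodec( "libmp3lame" );
    audio.setBitRate( new Integer( 64000 ) );
    audio.setChannels( new Integer( 1 ) );
    audio.setSamplingRate( new Integer( 22050 ) );

    VideoAttributes video = new VideoAttributes();
    video.setCodec( "flv" );
    video.setBitRate( new Integer( 160000 ) );
    video.setFrameRate( new Integer( 15 ) );
    video.setSize( new VideoSize( 400, 300 ) );

    EncodingAttributes attrs = new EncodingAttributes();
    attrs.setFormat( "flv" );
    attrs.setAudioAttributes( audio );
    attrs.setVideoAttributes( video );

    new Encoder().encode( source, target, attrs );
}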
Related
I've been playing around with Flutter, trying to record the front-facing camera (using the camera plugin [https://pub.dev/packages/camera]) while playing a video to the user (using the video_player plugin [https://pub.dev/packages/video_player]).
Next I use ffmpeg to stack the videos horizontally. This all works fine, but when I play back the final output the audio is slightly out of sync. I'm calling Future.wait([cameraController.startVideoRecording(filePath), videoController.play()]); but there is a slight delay before these tasks actually start. I don't even need them to fire at exactly the same time (which I'm realising is probably impossible); instead, if I knew exactly when each task began, I could use the time difference to sync the audio with ffmpeg or similar.
I've tried adding a listener on the videoController to see when isPlaying first returns true, and also watching the output directory for when the recorded video appears on the filesystem:
var listener;
listener = () {
  if (videoController.value.isPlaying) {
    // capture the first moment playback is reported as started
    isPlaying = DateTime.now().microsecondsSinceEpoch;
    log('isPlaying ' + isPlaying.toString());
    videoController.removeListener(listener);
  }
};
videoController.addListener(listener);

var watcher = DirectoryWatcher('${extDir.path}/');
watcher.events.listen((event) {
  if (event.type == ChangeType.ADD) {
    // capture when the recorded video first appears on the filesystem
    fileAdded = DateTime.now().microsecondsSinceEpoch;
    log('added ' + fileAdded.toString());
  }
});
Then likewise for checking if the camera is recording:
var listener;
listener = () {
  if (cameraController.value.isRecordingVideo) {
    log('isRecordingVideo ' + DateTime.now().microsecondsSinceEpoch.toString());
    //cameraController.removeListener(listener);
  }
};
cameraController.addListener(listener);
This results in (for example) the following order and microseconds for each event:
is playing: 1606478994797247
is recording: 1606478995492889 (695642 microseconds later)
added: 1606478995839676 (346787 microseconds later)
However, when I play back the video, the syncing is off by approx 0.152 seconds, so it doesn't marry up with the time differences reported above.
Does anyone have any idea how I could accomplish near perfect syncing when combining 2 videos? Thanks.
I was wondering if it is possible to not include text in my SSML. Since my audio file already says 'Are you ready to play?', I don't need any speech from the Google Assistant itself.
app.intent('Default Welcome Intent', (conv) => {
    const reply = `<speak>
        <audio src="intro.mp3"></audio>
    </speak>`;
    conv.ask(reply);
});
The code above produces an error since I do not have any text input.
The error you probably got was something like
expected_inputs[0].input_prompt.rich_initial_prompt.items[0].simple_response: 'display_text' must be set or 'ssml' must have a valid display rendering.
As it notes, there are conditions where the Assistant runs on a device with a display (such as your phone), and it should show a message that is substantively the same as what the audio plays.
You have a couple of options that are appropriate for these cases.
First, you can provide optional text inside the <audio> tag that will be shown, but not read out (unless the audio file couldn't be loaded for some reason).
<speak>
<audio src="intro.mp3">Are you ready to play?</audio>
</speak>
Alternatively, you can provide separate strings that represent the SSML version and the plain text version of what you're saying.
const ssml = `<speak><audio src="intro.mp3"></audio></speak>`;
const text = "Are you ready to play?";
conv.ask( new SimpleResponse({
speech: ssml,
text: text
}) );
Found a hacky workaround for this: add a very short string and put it inside a prosody tag with a silent volume:
app.intent('Default Welcome Intent', (conv) => {
    const reply = `<speak>
        <audio src="intro.mp3"></audio>
        <prosody volume="silent">a</prosody>
    </speak>`;
    conv.ask(reply);
});
This plays the audio and does not speak the 'a' text.
Another trick is to send a blank space first, so you don't get the No Response error ("... is not responding now"):
conv.ask(new SimpleResponse(" "))
const reply = `<speak>
<audio src="intro.mp3"></audio>
</speak>`;
conv.ask(reply);
I've upgraded to Products.TinyMCE 1.3.25 on my Plone 4.3.10rc1 installation. When I add an embedded video in edit mode, I can't resize the frame. This occurs only with YouTube videos; it works fine with Vimeo, for instance.
I have raised this at https://github.com/tinymce/tinymce/issues/3614?_pjax=%23js-repo-pjax-container, but there is no answer yet.
Are there any known issues about this? Thanks in advance.
You cannot resize the video because its dimensions are being explicitly set by the media plugin.
I am on TinyMCE version 3.5.12 (2016-10-31). I tried to debug the JavaScript, and in the media plugin there is a piece of code that compares the URL against a pattern; if the URL is a YouTube one, it sets the size to exactly 425x350.
The code in question is this:
// YouTube Embed
if (src.match(/youtube\.com\/embed\/\w+/)) {
    data.width = 425;
    data.height = 350;
    data.params.frameborder = '0';
    data.type = 'iframe';
    setVal('src', src);
    setVal('media_type', data.type);
}
...
setVal('width', data.width || (data.type == 'audio' ? 300 : 320));
setVal('height', data.height || (data.type == 'audio' ? 32 : 240));
I don't fully understand the purpose of this code yet, but it is clearly there on purpose, not just some broken leftover.
Maybe it is meant to set the dimensions only the first time, as an initial size, but it sets them every time.
I am trying to load some sounds in OGG format into my game at runtime in a WebGL build. I use a WWW class to fetch the file which has an ".ogg" extension and then I call www.audioClip to get the downloaded file. This works on other platforms but fails in WebGL.
Unity throws up this error message: "Streaming of 'ogg' on this platform is not supported". Strange since I am not trying to stream it, and I have tried explicitly calling GetAudioClip(false, false, AudioType.OGGVORBIS) and got the same result.
I have tried converting my OGG file to AAC (with M4A and MP4 extensions) and loading this with www.audioClip (error that it cannot determine the file type from the URL) and www.GetAudioClip(false, false, AudioType.MPEG) (no error but also no sound). The closest thing to a solution I've seen online is to use MP3 instead but I don't want to do this for licensing reasons.
Is WebGL in Unity restricted to audio assets that are built into the application?
try:
WWW data = new WWW(url);
yield return data;
AudioClip ac = data.GetAudioClipCompressed(false, AudioType.AUDIOQUEUE) as AudioClip;
if (ac != null)
{
    ac.name = "mySoundFile.ogg";
    gameObject.GetComponent<AudioSource>().clip = ac;
}
else
{
    gameObject.GetComponent<AudioSource>().clip = null;
    Debug.Log("no audio found.");
}
works for me with .ogg files.
I am developing an app to merge n videos using mp4parser. The videos to be merged are taken with both the front camera and the back camera. If I merge these videos into a single one, all videos are merged fine, but the videos taken with the front camera end up inverted.
What can I do?
Please, can anyone help me?
This is my code to merge the videos:
try {
    String f1, f2, f3;
    f1 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo1.mp4"; // video taken with the back camera
    f2 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo2.mp4"; // video taken with the front camera
    f3 = Environment.getExternalStorageDirectory() + "/DCIM/testvideo3.mp4"; // video taken with the front camera
    Movie[] inMovies = new Movie[] {
            MovieCreator.build(f1),
            MovieCreator.build(f2),
            MovieCreator.build(f3)
    };
    List<Track> videoTracks = new LinkedList<Track>();
    List<Track> audioTracks = new LinkedList<Track>();
    for (Movie m : inMovies) {
        for (Track t : m.getTracks()) {
            if (t.getHandler().equals("soun")) {
                audioTracks.add(t);
            }
            if (t.getHandler().equals("vide")) {
                videoTracks.add(t);
            }
        }
    }
    Movie result = new Movie();
    if (audioTracks.size() > 0) {
        result.addTrack(new AppendTrack(audioTracks
                .toArray(new Track[audioTracks.size()])));
    }
    if (videoTracks.size() > 0) {
        result.addTrack(new AppendTrack(videoTracks
                .toArray(new Track[videoTracks.size()])));
    }
    BasicContainer out = (BasicContainer) new DefaultMp4Builder().build(result);
    WritableByteChannel fc = new RandomAccessFile(
            Environment.getExternalStorageDirectory() + "/DCIM/CombinedVideo.mp4", "rw").getChannel();
    out.writeContainer(fc);
    fc.close();
} catch (Exception e) {
    Log.d("Rvg", "exception" + e);
    Toast.makeText(getApplicationContext(), "" + e, Toast.LENGTH_LONG)
            .show();
}
Explanation:
This is caused by an orientation issue explained here. In short, because videos recorded with the front camera and the back camera carry different rotation metadata, when you merge them with mp4parser every second video ends up upside down.
How to solve this issue:
I have managed to solve this issue in two different ways.
1. Re-encoding video - using FFmpeg I managed to merge the videos in the correct orientation.
How to use, and my suggestion for using FFmpeg in Android
Cons:
The main problem with this solution is that it requires re-encoding the videos into the merged video, and this process can take a pretty long time.
Also, this method is lossy due to the re-encoding.
Pro:
This method works even if the videos' resolution, frame rate, and bitrate are not the same.
2. Using the mp4parser/demuxer method - in order to use mp4parser or FFmpeg's demuxer method to merge the videos in the correct orientation, the videos must be provided with the same orientation, or with no rotation metadata. Using CameraView's takeVideoSnapshot() lets you record a video with no rotation metadata, so when merging it with mp4parser it is oriented correctly (a small pre-merge check is sketched after the pros and cons below).
Con:
The main problem with this solution is that, for the output to have the correct orientation and not fail, the input videos must have the same resolution, frame rate, bit rate and also the same rotation metadata.
Pros:
This method requires no re-encoding, so it is very fast.
Also it is lossless.
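If you go with option 2, a cheap sanity check before calling AppendTrack is to compare the rotation matrix of every video track and refuse to merge when they differ. This is only a sketch: the helper name is mine, and it assumes an mp4parser version whose TrackMetaData exposes getMatrix() (com.googlecode.mp4parser.util.Matrix).
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.util.Matrix;

// Returns true when all video tracks carry the same rotation matrix,
// i.e. the precondition of the mp4parser merge described above holds.
private static boolean haveSameOrientation(Movie[] movies) {
    Matrix reference = null;
    for (Movie m : movies) {
        for (Track t : m.getTracks()) {
            if (!"vide".equals(t.getHandler())) {
                continue; // only video tracks carry the orientation we care about
            }
            Matrix current = t.getTrackMetaData().getMatrix();
            if (reference == null) {
                reference = current;
            } else if (!reference.equals(current)) {
                return false; // mixed front/back recordings: every other clip would flip
            }
        }
    }
    return true;
}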