Does the Brightcove 508 compliant player support the JavaScript API? - section508

First, the Brightcove forums are useless, since you must have a Brightcove account to contribute. I'm just a dev, no account.
I have been given the source for the Brightcove 508 compliant player and need to do some simple things (stop, start, jump) using the JavaScript API.
No need for code here: a standard player accepts the calls fine, but the 508 player does not. Why is that?
Does API support have to be specifically turned on when the player is generated?

Yes, there's a flag in the player (template, if I remember correctly) that enables API access.
I think the point there is that if you don't own the player, you shouldn't control how it behaves.
BC support docs
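For reference, this is roughly how the legacy Smart Player JavaScript API was driven once API access was enabled; the param and module names below are from memory of the old docs, so treat this as a sketch rather than gospel:

    // Player/publishing params needed alongside playerID and playerKey
    // (confirm these against your player template):
    //   includeAPI = true
    //   templateLoadHandler = onTemplateLoad
    //   templateReadyHandler = onTemplateReady

    var player, modVP;

    function onTemplateLoad(experienceID) {
      // grab a reference to the player (the "experience") once it loads
      player = brightcove.api.getExperience(experienceID);
    }

    function onTemplateReady(evt) {
      // the VideoPlayer module exposes playback control
      modVP = player.getModule(brightcove.api.modules.APIModules.VIDEO_PLAYER);
    }

    // Later, from your own controls:
    function startVideo()    { modVP.play(); }
    function stopVideo()     { modVP.pause(); }
    function jumpTo(seconds) { modVP.seek(seconds); }

If the 508 player was generated without that flag, the template-ready handler never fires and calls like these silently go nowhere, which matches the behaviour you describe.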

Related

Enable screen capture on Widevine video

Using Azure Media Services, I am presenting sensitive medical videos to three types of users. In order to be HIPAA compliant, the videos are encrypted using DRM. I need one type of user to be able to record the video in the browser and add a small section in which they play their comments. I can't do that, because DRM prevents screen capture. How do I create a policy which enables screen capture?
It's very unlikely that you need full DRM (PlayReady, Widevine) to be HIPAA compliant. I would recommend that you look into using just AES-128 clear key encryption and avoid DRM altogether.
DRM is built to disable a lot of things, like analog and digital outputs as well as screen recording. That's the purpose of DRM: to prevent piracy. You can check the documentation for PlayReady and Widevine to see whether that restriction can be loosened in the license template.
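If you do switch to AES-128, playback in the browser needs no DRM module at all; the player just presents a token and fetches the clear key, so nothing on the client blocks screen capture. A minimal sketch using Azure Media Player, where the element id, stream URL and token are placeholders:

    // Azure Media Player playing an AES-128 protected stream (no DRM).
    var player = amp("azuremediaplayer", { controls: true, autoplay: false });

    player.src([{
      src: "//<account>.streaming.mediaservices.windows.net/<locator>/video.ism/manifest",
      type: "application/vnd.ms-sstr+xml",
      protectionInfo: [{
        type: "AES",
        authenticationToken: "Bearer=<token issued to this user>"
      }]
    }]);

You would still gate key delivery with a token restriction per user type, so only the authorized group gets a playable (and capturable) stream.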

How to use Google Play services in Unity 3D?

I'm working on my new mobile game. The work is going well, but when I tried to use Google Play services in the game, it didn't work, for several reasons I couldn't understand.
There are many guides about using Google Play services, but they are all scattered, so I can't fully understand the process.
Could you help me use Google Play services step by step?
Or link to something that can solve my problem.
You may check this link for Google Play Games plugin for Unity. This plugin allows you to access the Google Play Games API through Unity's social interface. The plugin provides support for the following features of the Google Play Games API:
sign in
unlock/reveal/increment achievement
post score to leaderboard
cloud save read/write
show built-in achievement/leaderboards UI
events
video recording of gameplay
nearby connections
turn-based multiplayer
real-time multiplayer
To use the plugin, you must first configure your game in the Google Play Developer Console. Follow the instructions on creating a client ID. Be particularly careful when entering your package name and your certificate fingerprints, since mistakes on those screens can be difficult to recover from.
Also check the play-unity-plugins repository, which covers:
- Play Billing
- Play Instant
- Play Asset Delivery
- and so on

Google Assistant for a game

I'm interested in using Actions and the Assistant to create dynamic dialog for a video game.
Specifically, I would want players to be able to speak (literally) to characters and for the characters' responses to be determined by Actions, just like the Assistant.
Is there any version of the Assistant available that can be integrated into a game? As far as I can see, they offer a lot of the building-block services to developers through the cloud, but nothing as fully featured as the Google Assistant.
Sounds like a cool scenario. Not something Actions on Google supports directly, but if you want to experiment, you could use the Google Assistant SDK to host the Assistant in your game and respond to queries that are meant for your players.
https://developers.google.com/assistant/sdk/
I'd love to see what you come up with.
It pretty much comes down to which framework you use when building your game. If you use Unity, for instance, you can use API.AI's Unity SDK.
There are also a lot of other SDKs provided. I don't think you really have to include the complete Google Assistant SDK, since you will most likely want to write your own responses (?). Some SDKs have speech recognition included; for others you will need a speech recognition framework, for instance the Google Cloud Speech API.

Personalized Video / Facebook App - What is the best approach?

I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph and integrates these assets within the timeline. I am thinking of using Flash; however, a key stipulation is that the app works on mobile, and so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest, but there are a few purely HTML5-based options available to you that I know of. Personally, I'd stay away from Flash here, as it will just end up getting in the way:
1- The cleanest method is to use a video compositing tool on the server side which can be programmed to accept variables. Personally I have only ever done this using ffmpeg; however, there are a couple of alternatives out there.
The basic process would be to grab the media from FB, then composite it at certain points on top of/below/around a base video sitting on the server, using a shell script to which you pass the media assets as variables. There are so many options as to how you might want this done; it's probably best to have a look at some of these examples:
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks are now deprecated.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust across devices etc.
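To give a flavour of option 1, here's a minimal sketch of the compositing step driven from Node rather than a raw shell script (file names, coordinates and timings are made up, and it assumes ffmpeg is installed on the server):

    // Overlay one Facebook asset on top of a base video between t=5s and t=10s.
    var execFile = require("child_process").execFile;

    function compositeVideo(baseVideo, userImage, output, done) {
      var args = [
        "-y",                      // overwrite the output file if it exists
        "-i", baseVideo,           // input 0: the pre-rendered base video
        "-i", userImage,           // input 1: the asset grabbed from FB
        "-filter_complex",
        "[0:v][1:v]overlay=20:20:enable='between(t,5,10)'", // show the image top-left from 5s to 10s
        "-c:a", "copy",            // keep the base video's audio untouched
        output
      ];
      execFile("ffmpeg", args, function (err) { done(err, output); });
    }

    compositeVideo("base.mp4", "fb_profile.png", "personalized.mp4", function (err) {
      if (err) throw err;
      console.log("rendered personalized.mp4");
    });

In practice you would queue these jobs, since each render ties up CPU for the length of the output video.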
2- Use Popcorn.js and let the processing be done on the client side. You could hand-code it using CSS/JS/HTML, but Popcorn is pretty stable, although I haven't seen how it runs on devices; in theory it should work (all standardized technologies). Basically, the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points. Popcorn already has all of the methods and means for you to do this.
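A minimal sketch of that approach, assuming popcorn.js plus its image plugin are loaded, a base video element with id "base" and an overlay div with id "overlay" (the ids and times are made up):

    // Fire Facebook images over the base video at preset cue points.
    var pop = Popcorn("#base");

    // The image plugin injects an <img> into the target element for the given range.
    pop.image({
      start: 5,                                          // seconds into the base video
      end: 10,
      src: "http://graph.facebook.com/USER_ID/picture",  // asset from the social graph
      target: "overlay"
    });

    // Plain cue points work too, if you'd rather manipulate the DOM yourself.
    pop.cue(15, function () {
      document.getElementById("overlay").innerHTML = "<p>Personalized caption here</p>";
    });

    pop.play();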
Hope this helps a bit. Good luck, sounds fun.
We have built some interactive video apps, and one recent project was quite like what your question describes.
We used Adobe Flash to track the motion and published the project via CreateJS. You could have an image sequence from within CreateJS or put a video in a layer behind it. This video would then control the playhead time of the CreateJS motion-tracked sequence via jQuery.
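In outline, the sync looks something like this (a rough sketch; exportRoot and stage are the names a Flash-to-CreateJS publish typically generates, and the frame rate is just an example):

    // Let the HTML5 video drive the playhead of the motion-tracked CreateJS timeline.
    var video = document.getElementById("baseVideo"); // the video layer behind the canvas
    var fps = 24;                                     // frame rate of the published timeline

    exportRoot.stop(); // stop the MovieClip from advancing on its own

    $(video).on("timeupdate", function () {
      var frame = Math.floor(video.currentTime * fps);
      exportRoot.gotoAndStop(frame); // jump the tracked sequence to the matching frame
      stage.update();                // redraw the canvas
    });

Note that timeupdate only fires a few times per second, so for tighter sync you can instead poll video.currentTime inside a requestAnimationFrame loop.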
It worked fine; here is a link to a test setup with an image sequence.
Video Integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(German text, sorry, but there's nothing important to read there.
Just click the images and go for it.)
You can download the sources from the link; if you need it, I can also upload the Flash file to show you the motion tracking.

Audio recording with HTML5 and Javascript

I'm trying to build a web application for iPhone and Android that deals with audio input.
Is this possible?
Apparently... yes, or at least it should be possible once the spec is finished. It will supposedly become possible using the Device API, which is due to be part of HTML5 when it's finished and released (HTML5 isn't finalised yet, however, and this information is subject to change).
W3C Device API Requirements (camera section)
Sony Ericsson community blog posting, with examples (pre-final API)
While audio recording isn't explicitly mentioned on its own in the W3C spec, it is covered as part of (web)camera interactions, so it's definitely hopeful. There seems to be a shortage of good information at this stage, though. I'd expect to see more as HTML5 comes closer to being finalised.
As of now, HTML5 can't record audio, but in the future it will be able to, by using the device's native features.
HTML5 cannot record audio (at least currently). HTML is basically a markup language and therefore only declares how a browser should display certain content. Although HTML5 introduces new features that make some interaction possible, you can't record audio straight into... HTML (even saying that sounds wrong). When the HTML5 spec is finished, it might become a reality; until then, no way.
Web applications that record audio normally require a plugin like Flash or Silverlight, because those can access system resources like audio hardware. Both are a no-go on iOS; and although Flash is theoretically possible on Android, I don't know whether it supports audio input.
I would suggest you write a native app (for iOS and Android) that can access the audio hardware and connects to your web application in the background, so that the recording takes place natively and the recorded audio will be transmitted to your servers (think of Shazam, for example).
Here are the basic developer guides on recording audio in:
Android
OS X, iOS
A new MediaStream Recording API is being worked on. It is currently available only in the Firefox Nightly build, for demo purposes.
Here's the draft with the latest updates, directly from the W3C site:
https://dvcs.w3.org/hg/dap/raw-file/default/media-stream-capture/MediaRecorder.html
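To give an idea of the API's shape, here's a minimal sketch written against the current form of the draft (support is limited and details may still change, so feature-detect MediaRecorder before relying on it):

    // Record five seconds of microphone audio with the MediaStream Recording API.
    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      var recorder = new MediaRecorder(stream);
      var chunks = [];

      recorder.ondataavailable = function (e) {
        chunks.push(e.data); // encoded audio arrives as Blob chunks
      };

      recorder.onstop = function () {
        var blob = new Blob(chunks, { type: recorder.mimeType });
        // Upload the blob, or play it back locally:
        new Audio(URL.createObjectURL(blob)).play();
      };

      recorder.start();
      setTimeout(function () { recorder.stop(); }, 5000);
    });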
The following article also covers other attempts at recording audio and video directly in the browser:
http://hdfvr.com/html5-video-recording