Is WiFi Display Sink source code migrated into Google Cast?

This is about AOSP and maybe Google Cast. I posted this question to the Google group android-platform several days ago, but got no answer.
The AOSP git log says the obsolete Miracast sink code and friends were removed from frameworks/av/media/libstagefright/wifi-display/ in the commit below.
commit 6ea551fa13b69e5ce359a7dba7485d857a005304
Author: Andreas Huber <andih@google.com>
Date: Wed Oct 2 13:06:06 2013 -0700
Remove obsolete miracast sink code and friends.
Change-Id: I8bbb22fb0cfe2d73881d9f05bf8112ae86d8040b
related-to-bug: 11047222
Would anyone point me to where the latest WiFi Display Sink source code is? Is it integrated into Google Cast? Thanks.
xuebing wang

Would anyone point me to where the latest WiFi Display Sink source code is?
Two projects to look at:
WiFi Display Sink source code
Miracast Source Code by Kensuke
Is it integrated into Google Cast?
There is no mention that Google used Miracast in Cast, although there are similarities.
"Miracast is a wireless display standard designed for mirroring a smartphone, tablet, or PC’s screen to a television without requiring any physical HDMI cables". This mirroring feature is just one of the many things a Chromecast can do. More on Google Cast here.

Related

How long is data cached by the GA plugin in Unity apps?

We have an Android app built using Unity and the Google Analytics GAv4 plugin for Unity.
The use case is that users might use the app for days without internet, and when they do connect to the internet again our expectation is that the data has been cached on the device and will then be sent off.
Our initial tests seem to work, but we cannot find any reference to how long the data persists on the device, or whether it ever expires. Users have reported that they miss some data generated while offline.
I have tried exploring the plugin, but cannot seem to find anything about how it caches the data. Does anyone have experience with this?
It looks like it is using com.google.android.gms.analytics.GoogleAnalytics, which you can look up in the documentation. It might also depend on the API level/Android version of the device; older versions may not be able to queue hits while offline. I think this page explains a little about how it keeps checking for a connection and tries to dispatch the events:
https://developers.google.com/android/reference/com/google/android/gms/analytics/GoogleAnalytics#dispatchLocalHits()
You can see this in the plugin source code here:
https://github.com/googleanalytics/google-analytics-plugin-for-unity/blob/master/source/Plugins/GoogleAnalyticsV4/GoogleAnalyticsAndroidV4.cs

Kinect not detected on other computers after deploying a UWP app

I recently finished developing a UWP app based on the SDK example CameraFrames. On the second screen of this example (and also in my app) you can choose to preview the frames captured by the Kinect v2.0, as shown below.
At the top of the image, to the right of "Grupo de fontes" (source group), I am able to choose between the different cameras connected to my PC. The problem is that when I deploy my app to another PC, I am unable to see "Kinect V2 Video Sensor", which renders my app useless, as it needs to be portable between PCs. I have checked that the "Package.appxmanifest -> Capabilities -> Webcam" checkbox is ticked.
I am out of ideas, as I don't have a clue why my app works flawlessly on my computer but not on the others. If you need any extra info to help me with my problem, please let me know.
It is worth noting that the other PCs I've tried can read frames from the Kinect via Kinect Studio or MATLAB.
Under "Camera privacy settings", my app has permission to use the camera.
Thank you in advance.
Update due to Nico Zhu's comment
My OS is Windows 10 and my target version is "10.0.16299.0". I have tried downloading and deploying CameraFrames on the second computer that I'm working with, but it doesn't recognize the Kinect as an input source. Even though CameraFrames doesn't read anything, I can properly use the Kinect through Kinect Studio.
It seems that my problem comes down to my second computer not being able to use the Kinect from any deployed UWP app. What should I install to make sure I have everything needed to properly read from the Kinect?
After a lot of headaches, it turned out I just needed to upgrade my drivers.

How to support OTA video in custom web app (for LG TV)?

I've searched considerably for this answer (including all the webOS, Enyo, and LG-specific documentation, and countless Google searches), but have turned up nothing.
In short, the goal is to develop a WebOS application that can resize an existing video stream (coming from coax or HDMI), unfortunately ...
Update: This is a web-based app that resides on a server.
After specifying the IP address (via hidden config menu), the TV saves and exits, without any channel scanning, or additional setup. Upon reboot, it displays the index.html page at the specified IP address.
So, unless there's a way to explicitly configure OTA sources within the web application, I'm assuming that any OTA video will have to be provided via streaming.
WebOS offers a way to specify media sources from files -- so, is there any way to point to a video file that's perpetually streaming an OTA source?
Is Plex or Sling capable of this?
Any direction that can be offered is greatly appreciated.
original answer: I'm not sure this is going to be possible. I believe only system apps are going to have access to the video pipeline.
new answer:
If you can configure your OTA stream to go over HLS, then you can use an m3u8 playlist for your <source> and it should "just work" on a webOS TV.
It's been a while since I posted this, and I've learned a great deal about this topic since then. So, for anyone who may run across this with the same question -- here's the answer ...
When it comes to LG Smart TVs, there are two configuration options specific to commercial TVs (Lodging and Hospital models) ...
"Pro:Centric Wizard" and "EZ-Manager"
The Pro:Centric Wizard is intended for hospitals and won't automatically import TV configurations from a server, but it can scan for channels.
The EZ-Manager option is intended for lodging/hotels and provides a great deal more control, but the TV configuration and channel listing must all be obtained from a server (or USB drive).
Modification of the TV settings and channel listing is performed by using the FTG File Manager.
Downloadable Stand-alone Application:
http://www.lg.com/us/business/display-solutions/ftg
Online Version:
http://cct.procentric.tv/ftg_manager
Channels must be either manually programmed within the FTG File Manager, or imported from a TV clone file (created after the TV has performed a channel scan).
In order to create a clone file of scanned channels ...
1. Initiate the "Pro:Centric Wizard" setup process.
2. Scan the channels.
3. Complete setup.
4. Once the TV restarts, hit "Menu".
5. Highlight the "General" or "Settings" icon (should be "Settings") and hit "7" seven times (or until the clone menu appears).
The option to export to USB will only appear after pressing "9876" on the remote.
Once the file is exported to USB, it can then be opened within the FTG File Manager and edited for server delivery through the "EZ-Manager" setup.
Hopefully this will help someone. :)

Artifact issues with Wowza and GoogleTV: PE's result check pointer is null

I'm streaming live feeds from Wowza to GoogleTV boxes via HLS. I'm having a lot of issues with artifacts, the kind you see when packets are lost, where part of the screen lags behind and doesn't get updated properly until the next key frame arrives. It's the same on the Vizio, Hisense and Sony, with the latest updates (GoogleTV FW 3.2 and Wowza 3.5.2). Watching the same streams in VLC or on a Nexus 4/7 works great.
This is what I get from logcat:
[ 03-13 16:07:15.343 678:0xb3c W/MVBerlinVideoEngine36 ]
** empty vid_meta!!!
[ 03-13 16:07:15.423 678:0x2f1 E/MVBerlinVideoEngine36 ]
!! PE's result check pointer is null line = 536
The media is a live stream, encoded by VLC.
vlc udp://@239.0.0.2:10021 --sout "#transcode{deinterlace,deinterlace-mode=linear,acodec=aac,ab=160,vcodec=h264,venc=x264{profile=high,level=4.1,preset=fast}}:rtp{mux=ts,dst=192.168.100.10,port=10021}"
I have no idea where to go from here. Can I get more information somehow? Is it most likely a decoding issue, a network issue (I've tried wired plus several WiFi gateways), or a code issue?
This is a firmware bug on the device. You will not be able to work around this right now. Internal bug filed - you will need to wait for an update.
We also often face the problem of the bottom of the screen freezing, especially when we broadcast MP4 files with strong motion and frequent scene changes. When do you plan to update the software? Is there a downloadable beta version of GoogleTV in which this problem is fixed?

iPhone and printer integration: sample code/API for custom app development

My recent engagement requires printer integration on the iPhone. The app will:
• Generate a PDF with the collected data
• Print it when the user taps the print button
I am fairly new to iPhone development. There is hardly any reference or sample code for the printing feature, yet there are quite a few apps on the market that print from the iPhone!
Please help me with references or sample code to implement this feature within the custom app we are building.
Many Thanks
-Jeet
I haven't developed an app that supports printing; however, I'm pretty sure the ones that do must connect to a companion OS X application running on a computer on the same WiFi network, to which the iPhone app sends its data.
This means that you'll have to look at the printing docs and sample code for the Desktop and build a helper app that will receive connections from your iPhone app. There is sample code to show you how to discover a computer on your network using Bonjour. You can then just stream the data over the network using a socket to the Desktop app from the iPhone and have it pass along the print job.
You say that you're fairly new to iPhone development, so this comes with a warning that it's not for the faint of heart.
Here is a blog post on how to communicate between desktop and iPhone using Bonjour:
http://cocoa-nut.de/?p=27
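To make the Bonjour approach a little more concrete, here is a minimal sketch of the iPhone side in Swift (the original answer predates Swift, so treat this as an illustration rather than the answerer's code). The service type "_pdfprinthelper._tcp." is a made-up name; a hypothetical desktop helper app would have to advertise exactly the same type.

import Foundation

// A minimal sketch of discovering a desktop helper over Bonjour and opening
// a socket to it. "_pdfprinthelper._tcp." is a hypothetical service type;
// the companion desktop app must advertise the same type.
final class PrintHelperFinder: NSObject, NetServiceBrowserDelegate, NetServiceDelegate {
    private let browser = NetServiceBrowser()
    private var pending: [NetService] = []   // keep strong references while resolving

    func start() {
        browser.delegate = self
        browser.searchForServices(ofType: "_pdfprinthelper._tcp.", inDomain: "local.")
    }

    // Called once for every matching service found on the local network.
    func netServiceBrowser(_ browser: NetServiceBrowser,
                           didFind service: NetService,
                           moreComing: Bool) {
        pending.append(service)
        service.delegate = self
        service.resolve(withTimeout: 5)
    }

    // Once the helper's host/port are resolved, open streams to it.
    func netServiceDidResolveAddress(_ sender: NetService) {
        var input: InputStream?
        var output: OutputStream?
        guard sender.getInputStream(&input, outputStream: &output),
              let outStream = output else { return }
        outStream.schedule(in: .current, forMode: .default)
        outStream.open()
        // From here, write the generated PDF data to outStream as space becomes
        // available; the desktop helper passes it along to the OS print system.
    }
}

The desktop helper would publish the matching service (for example with NSNetService or dns-sd) and hand whatever arrives on the socket to the print system.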
Best Regards,
There is an official printing API from Apple. It is only supported on iOS 4.2 and above, though:
http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/Printing/Printing.html
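For reference, here is a minimal sketch of that API in Swift. The Swift names are the modern equivalents of the Objective-C API the answer points to; UIGraphicsPDFRenderer is a later (iOS 10+) convenience used here only to produce some PDF data, and the report text and job name are placeholders.

import UIKit

// Render collected data into a one-page PDF (US Letter at 72 dpi).
func makePDF(from reportText: String) -> Data {
    let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792)
    let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
    return renderer.pdfData { context in
        context.beginPage()
        reportText.draw(in: pageRect.insetBy(dx: 36, dy: 36),
                        withAttributes: [.font: UIFont.systemFont(ofSize: 14)])
    }
}

// Hand the PDF to the system print sheet (the UIKit printing API, iOS 4.2+).
func printPDF(_ pdfData: Data) {
    let printInfo = UIPrintInfo(dictionary: nil)
    printInfo.outputType = .general
    printInfo.jobName = "Collected data report"   // placeholder job name

    let controller = UIPrintInteractionController.shared
    controller.printInfo = printInfo
    controller.printingItem = pdfData             // PDF data can be printed directly

    _ = controller.present(animated: true) { _, completed, error in
        if let error = error {
            print("Print failed: \(error)")
        } else if completed {
            print("Print job submitted")
        }
    }
}

Note that this route only reaches AirPrint-capable printers; for anything else you are back to a desktop mediator like the apps mentioned in the other answers.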
I am also looking for the same solution, as I need to do this for the application I am working on. I am also generating a PDF that the user can then print.
You say that already-developed apps print without using an intermediate PC. But while searching the net I came across an application that is useful for printing to Epson printers:
http://mobile.eurosmartz.com/products/print.html
There the company itself mentions: "Install “Print” on your iPhone, download the free WePrint software for your desktop/laptop computer and then you can print directly from your iPhone."
There is also another application on iTunes called "PrinterShare - print from iPhone to anywhere". They also mention that the computer connected to the printer needs the PrinterShare software.
So I think there is a mediator involved in this kind of printing.
Let me know if you find more information on this topic, as this is very new and clients are asking for this feature in their applications.
Also let me know if you come across any sample application.
Regards,
Vishal.
There is currently no official printing API.
You could of course implement your own LPR printing code.
Or you can license a ready-made API from someone like www.e-workshop-dev.com.