media center extender - mediacenter

Where can I find resources related to writing your own Media Center Extender client - i.e., an application that will run on another machine at home and remotely render Vista's Media Center, allowing you to stream movies, live TV, etc.?
Thanks,
Gil

At least some of the services are based on DLNA protocols for the discovery of available services.

If you write your Media Center application and stick to MCML (Media Center Markup Language), it will run on the extender without any additional work.
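For illustration, a minimal MCML page might look something like the sketch below. This is from memory, not copied from the SDK: the schema URI, element names, and attributes should all be verified against the Media Center SDK documentation before use.

```xml
<!-- Hypothetical minimal MCML page; verify names against the Media Center SDK -->
<Mcml xmlns="http://schemas.microsoft.com/2006/mcml">
  <UI Name="HelloExtender">
    <Content>
      <Text Content="Hello from MCML" Color="White"
            Font="Segoe Media Center, 40"/>
    </Content>
  </UI>
</Mcml>
```

Because MCML is declarative and is rendered by the Media Center runtime itself, a page like this is exactly the kind of UI that gets remoted to the extender with no extra work.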

It is not possible. That would be called a softsled. Microsoft has not made any of the Media Center framework information available to anybody besides preferred partners (Linksys, Samsung, etc.), and has not done so since Vista.
The Media Center Extenders do not actually use DLNA; instead they create a Remote Desktop session to the Media Center server and stream video over it. This is why you see users listed like MCS-YOURCOMPNAME-0001.
The only media center extender left is Xbox 360. Sorry to be a downer.
http://www.geektonic.com/2009/05/microsoft-media-extenders-what.html

Related

Unable to record screen sharing in Skype for Business

My company recently migrated from Lync 2013 to Skype for Business Desktop. When I record a meeting in Skype, I am no longer able to record any screen sharing; it worked in Lync 2013. According to https://support.office.com/en-us/article/record-and-play-back-a-skype-for-business-meeting-6d1dd3c5-ded7-4935-8db0-d6d7173c482f, which says:
When you record a Skype for Business Meeting, you capture audio,
video, instant messaging (IM), screen sharing, PowerPoint slides,
whiteboard activity, and polling. Any of the presenters can record a
meeting and save it on their computers.
I have multiple monitors set up; I tried sharing my primary desktop and tried sharing a specific program on my primary desktop, and neither approach saved the shared screen. It just recorded audio.
[Update] Additional details originally added as a comment about a month after initial post:
I do have a Targus USB 3.0 ACA038US video adapter plugged into my docking station so I can have three screens. Skype recognizes all three screens when I share, but nothing is recorded. I tried sharing just a program, and that did not work either. I disabled the monitor that uses the USB video driver, and that did not help either. I have not yet tried unplugging the USB adapter.
I found out a month after my original post that I am able to successfully record screen sharing when working from home, where I am not using the Targus USB video driver. I do have two screens at home (the laptop, and a monitor plugged into the docking station), so the problem appears to be the Targus hardware or the Targus driver.

Enabling Kinect In a Browser using NaCl

While working on a project with the Kinect, I had the idea of integrating it into a web browser, driven directly from the device. I was wondering if someone has done this before, or if there exists some information that can shed some light on it.
In More Detail:
I've been dissecting the Kinect Fusion application that is provided with the Kinect, and I was wondering what it would take to have a browser do direct-to-device 3D scanning. I've discovered NaCl, which claims that it can run native code, but I don't know how well it would run Microsoft native code (from Kinect SDK version 2, which is what I'm using). Also, just looking at NaCl with no prior experience with it, I currently cannot imagine what steps to take to actually activate the Kinect and have it start feeding the rendered image to the browser.
I know there exist some libraries that allow the Kinect to work on other operating systems, and I was wondering if those libraries would allow me to get a general bitmap to send to NaCl's pp::Graphics2D (for the image display). I would then need to figure out how to actually present that in the browser, have it run the native code in the background to create the 3D image, and then save it to the local computer.
I figured "let me tap the power of the stack." I'm afraid of an overflow, but you can't break eggs without making a few omelettes. Any information would be appreciated! If more information is needed, ask and I shall try my best to answer.
This is unlikely to work, as Native Client doesn't allow you to access OS-specific libraries.
Here's a library which uses NPAPI to allow a web page to communicate with the native kinect library: https://github.com/doug/depthjs. NPAPI will be deprecated soon, so this is not a long-term solution.
It looks like there is an open-source library for communicating with the kinect: https://github.com/OpenKinect/libfreenect. It would be a decent amount of work, but it looks like it should be possible to reverse-engineer the protocol from this library and perform the communication in JavaScript, via the chrome.usb apis.
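As a rough sketch of what the chrome.usb route might look like: the code below enumerates USB devices and filters for the Kinect, and includes the widely circulated OpenKinect approximation for turning the original Kinect's 11-bit raw depth values into meters. The vendor/product IDs (0x045e/0x02ae, the original "Xbox NUI Camera") and the depth formula's constants are assumptions taken from the OpenKinect community, not verified against hardware here; chrome.usb itself is only available inside a Chrome packaged app with the "usb" permission.

```typescript
// Sketch: locating a Kinect via chrome.usb from a Chrome packaged app.
// The vendor/product IDs are assumed (original Kinect "Xbox NUI Camera");
// verify against your actual device with `lsusb` or Device Manager.
const KINECT_VENDOR_ID = 0x045e;  // Microsoft
const KINECT_PRODUCT_ID = 0x02ae; // Xbox NUI Camera (Kinect v1, assumed)

interface UsbDeviceInfo {
  vendorId: number;
  productId: number;
}

// Pure helper, so the matching logic is testable outside Chrome.
function isKinectCamera(d: UsbDeviceInfo): boolean {
  return d.vendorId === KINECT_VENDOR_ID && d.productId === KINECT_PRODUCT_ID;
}

// One widely circulated OpenKinect approximation for converting the
// Kinect v1's 11-bit raw depth reading to meters (an assumption here;
// calibrate per device if you need real accuracy).
function rawDepthToMeters(raw: number): number {
  if (raw >= 2047) return 0; // 2047 means "no reading"
  return 1.0 / (raw * -0.0030711016 + 3.3309495161);
}

// The Chrome-only part: enumerate matching devices and open the first one.
// Never called at module load, so this file also runs outside Chrome.
declare const chrome: any;
function openKinect(onOpen: (handle: unknown) => void): void {
  chrome.usb.getDevices(
    { vendorId: KINECT_VENDOR_ID, productId: KINECT_PRODUCT_ID },
    (devices: UsbDeviceInfo[]) => {
      const kinect = devices.find(isKinectCamera);
      if (kinect) chrome.usb.openDevice(kinect, onOpen);
    }
  );
}
```

Reimplementing the full streaming protocol on top of this (isochronous transfers, frame reassembly) is the "decent amount of work" the answer mentions; libfreenect's source is the reference for those details.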
Try EuphoriaNI. The library and some samples are available at http://kinectoncloud.com/. Currently, only the version for AS3 is posted on the site though. The version for the Web, of course, requires you to install a service on your computer (it's either that or a browser plug-in... and nobody likes those :)

Is it possible to build a smartphone app that streams a screen to a TV, while allowing you to remote-control it with the phone itself?

Is it possible to build an iPhone/iPad app (and an Android app) that can do two things: stream an interface and its content (particularly video) to a TV, and then let me use the phone itself as a remote control for that interface?
Basically the idea is, you don't need a smart TV anymore, or any kind of set-top box or other connected device - just the smartphone, which you carry around all the time anyway and which is connected to your local wireless network. Maybe add a docking station with an HDMI connection to the TV, so you are not draining your battery.
Do you know any comparable implementation or use?
If it is theoretically possible, can you anticipate any performance problems, bottlenecks and how those could be resolved?
If it's not possible, which links are missing? What technology would have to be developed first?
Thank you for your thoughts on this!
Jacob
The iPhone/iPad would work for this. It allows you to output to a second screen. You can stream video, audio, whatever. A cool example I saw was using the TV as the primary display and the phone as a controller for a game.
There are two ways to do it over a cable: an HDMI output or a VGA output. There is also AirPlay, which will let you do it wirelessly, though you would need an AirPlay-capable device (like an Apple TV) for it to work.

iPhone peer-to-peer voice chat

I see that Game Kit allows you to develop games with voice chat.
I want to build a more general, peer-to-peer voice chat application, that does not have to live in the Game Center. So a couple questions:
1. What peer to peer system/technologies could be used for this?
2. If I wanted to allow voice chat with a Flash client (i.e. iPhone app <--> Server <--> Flash client on PC), would the options from 1 work for this?
I have some experience with RTMFP for Flash-to-Flash client chat, but no iPhone dev experience, so I just want to test out some ideas.
Maybe one idea: build on the Ribbit platform - they have both Objective-C and Flash SDKs, but this looks more like traditional/SIP calling.
Anyway, would appreciate anything that points me in the right direction.
Thanks.
Now that Flash has access to raw microphone data, you could roll your own client and server. However, since AIR for mobile currently doesn't have UDP sockets, you would be forced to weigh audio quality against lag under even tighter restrictions than usual.
You can now roll your own native extension to make this work, but I am assuming you want something that only requires coding in AS3.
Therefore, considering your restrictions, the only real bet would be to use Flash's built-in communication capabilities (e.g. RTMP).
With the above being said, there are open-source alternatives to the array of Adobe's own Flash communication servers: the Red5 server and rtmpd.
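To make the quality-vs-lag tradeoff concrete, here is a toy jitter buffer sketch (in TypeScript rather than AS3, purely for illustration; the packet framing and buffer-depth policy are invented for this example, not taken from any Flash or iPhone API). Packets carry a sequence number; the receiver holds a few of them back so late arrivals can be reordered, trading added playback latency for fewer audible gaps.

```typescript
// Toy jitter buffer: hold up to `depth` packets so late arrivals can be
// reordered before playback. A deeper buffer smooths the stream but adds
// latency -- the tradeoff gets tighter when you can't use UDP.
interface AudioPacket {
  seq: number;        // monotonically increasing sequence number
  samples: number[];  // raw audio samples for this frame
}

class JitterBuffer {
  private buffer = new Map<number, AudioPacket>();
  private nextSeq = 0;

  constructor(private depth: number) {}

  push(pkt: AudioPacket): void {
    // Drop packets that arrive after their playback slot has passed.
    if (pkt.seq >= this.nextSeq) this.buffer.set(pkt.seq, pkt);
  }

  // Pop the next frame for playback. Returns null while still filling,
  // and an empty array (play silence) for a frame that never arrived.
  pop(): number[] | null {
    if (this.buffer.size < this.depth && !this.buffer.has(this.nextSeq)) {
      return null; // still buffering
    }
    const pkt = this.buffer.get(this.nextSeq);
    this.buffer.delete(this.nextSeq);
    this.nextSeq++;
    return pkt ? pkt.samples : [];
  }
}
```

With RTMP over TCP you get in-order delivery but head-of-line blocking, so in practice the "depth" here becomes a playback delay you tune by ear; with UDP you would instead be handling the reordering and loss yourself, exactly as this sketch does.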
IMHO Ribbit's services are kind of pointless.

WIA cannot find my internal camera in Windows 7

I am currently working on a project where I need to access a built-in camera (the software will run on a tablet), stream what the camera is showing, and allow the user to take a picture from the stream. I have a working version of what I am trying to accomplish on my laptop with its built-in camera. The major difference is that the laptop is running Windows XP while the tablet is running Windows 7.
Running the software on the tablet, I get an exception (with some research, it appears the exception is caused by no WIA device being found). Is it possible that the built-in camera is not WIA compatible? The device does show in Device Manager as a USB Camera Device, but unlike the camera on my laptop, I can't access it directly. I have to use third-party software installed by the tablet maker to get the camera to work.
Has anyone experienced similar problems? I have to believe that if the tablet maker can do what I need, I should be able to do something similar.
There is also the Windows Portable Devices (WPD) API, which can access cameras, but that appears to be written in C++, without a .NET wrapper. Does anyone know of a simple tutorial on how I could get .NET to play nice with it? EDIT: Just tried WPD; it didn't list any devices either. I am beginning to think this camera doesn't exist.
Any knowledge/pointers to resources would be appreciated. (So far Google has turned up the same few articles, no matter which way I approach the problem.)
Turns out my camera was not WIA compatible. I was able to get the tablet to do what I needed using DirectShow (actually DirectShow.NET).
Good links if others are trying to do something similar and hitting similar problems:
http://msdn.microsoft.com/en-us/library/dd375454%28VS.85%29.aspx
http://directshownet.sourceforge.net/faq.html