Intel Media SDK: Regarding mfxIMPL Usage

Can someone clarify the usage of the mfxIMPL data structure provided by the Intel Media SDK?
The decoding sample app has this line of code:
mfxIMPL impl = MFX_IMPL_HARDWARE;
Does this mean the decoder runs on the GPU only?
If I change MFX_IMPL_HARDWARE to MFX_IMPL_SOFTWARE, will the decoder run on the CPU only?

The Media SDK provides APIs that can execute either on the CPU cores (SW implementation) or on the GPU/fixed-function logic (HW-accelerated implementation), depending on the system and its capabilities.
mfxIMPL impl selects the software, hardware, or best-available implementation. We recommend MFX_IMPL_HARDWARE, or MFX_IMPL_AUTO if you are unsure of the underlying driver support. If MFX_IMPL_AUTO is specified on a system that does not support HW acceleration, the SW implementation is automatically used as the fallback. Hope this helps.

Related

Swift equivalent of InputStream, OutputStream and BluetoothSocket

I am looking for the equivalent of InputStream, OutputStream and BluetoothSocket in Swift.
What do we use in Swift instead of those Android classes?
I have an Android app, and now I want to create it for iOS as well.
Typically Android's BluetoothSocket is used with SPP, which is not available on iOS. How you need to redesign this depends on what you're doing, what your firmware offers, and how much control you have over the Bluetooth device's development.
For MFi(*) devices, iAP2 (via ExternalAccessory) is very similar to SPP, so the same basic protocol and designs can be used. But in most cases, you have to develop a completely different protocol (most commonly over BLE GATT). CoreBluetooth and its classes (CBPeripheral, etc.) are how you interact with BLE GATT.
You can also dig into L2CAP. It provides an interface similar to SPP, but I haven't worked with any firmware SDK that supports using it as an alternative.
(*) MFi means "Made for iPhone"; it requires an extra chip in the hardware, and the product has to go through Apple certification. It's worth looking into if you're building your own hardware: it gives you a few extra capabilities, though not nearly as many as people expect.
As @Sulthan already commented:
Simply said, there are no direct counterparts. You cannot port the functionality 1:1 by only changing the names of the classes.
For BLE devices, use the Core Bluetooth framework.
For Bluetooth Classic, you need to use the External Accessory framework (MFi accessories only).

How to Determine Metal Device VR Support

I am looking to create a macOS 10.13 application that tests for virtual reality support. What would be the best way to test a Mac for VR support, considering the CPU, GPU, and connectivity requirements?
Also, given a MTLDevice, is there a way to check for VR support using the Metal API?
I have tried checking the default system Metal device for macOS_GPUFamily1_v3 support, but that does not completely answer whether a device supports VR on macOS. The code below is what I use to test for Metal feature set support.
let defaultDevice = MTLCreateSystemDefaultDevice()
// Prints Optional(true)/Optional(false), or nil if no Metal device exists.
print(defaultDevice?.supportsFeatureSet(.macOS_GPUFamily1_v3))
There is no such thing as "Metal VR support". No special capability or GPU-level feature is required to render for VR. Furthermore, there is no such thing as a "spec good enough for VR": it depends entirely on the resolution and frame rate of the particular headset used, as well as on your application.
You can query the IOService layer to get the GPU model and specs, but you will have to extrapolate the capabilities yourself based on your own requirements.
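If it helps, here is a rough sketch of that IORegistry query as a small command-line C++ tool (link IOKit and CoreFoundation). Treat the "IOPCIDevice" matching class and "model" property name as assumptions to verify on your system; they are what PCI GPUs typically publish:

#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdio>

int main() {
    io_iterator_t iter;
    // Match PCI devices; discrete and integrated GPUs appear here.
    if (IOServiceGetMatchingServices(kIOMasterPortDefault,
                                     IOServiceMatching("IOPCIDevice"),
                                     &iter) != KERN_SUCCESS)
        return 1;

    io_object_t entry;
    while ((entry = IOIteratorNext(iter))) {
        // GPUs publish a "model" property holding their marketing name.
        CFTypeRef model = IORegistryEntryCreateCFProperty(
            entry, CFSTR("model"), kCFAllocatorDefault, 0);
        if (model && CFGetTypeID(model) == CFDataGetTypeID())
            printf("PCI device model: %s\n",
                   (const char *)CFDataGetBytePtr((CFDataRef)model));
        if (model) CFRelease(model);
        IOObjectRelease(entry);
    }
    IOObjectRelease(iter);
    return 0;
}

From there, mapping a model string to "VR capable or not" is up to your own requirements table, as noted above.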

How can I know the operating system of a device

If I know a device's model or vendor, is there any direct way to find out the operating system of that device (e.g. through the device driver or something like that)? For example, I will quote an answer to a previous question I asked, "What is the difference between the firmware and the operating system?"
Someone said:
Hardware vendors commonly use a derivative of linux (e.g. Cisco IOS)
How can I verify this? I know the name of one Cisco device, but I do not have the device, and I need to check what its operating system is (even if it is widely known that it is Linux, I need to check this myself). How can I get this piece of information? I checked the company's site and Google, and I cannot find any answer.
If the terms of the GNU General Public License are complied with, it should be reasonably clear whether a device is using any GPL code, including Linux; moreover, the source code should be available too.
If the device uses an OS that is not open source, then even if the information were available to you, it is unlikely to be particularly useful, except perhaps for applying the manufacturer's firmware updates.
Linux is by no means that common in embedded systems in general. It is commonly used in certain types of devices, such as routers, set-top boxes (STBs), and NAS boxes. Often these devices have a web-server interface through which version information is usually available, but there is no common method of accessing this information; you'd have to fetch the particular URL for the device and parse the HTML.
You need a serial cable to hack into the device, or to read the binary from the flash and examine the hexdump. I have an STB in my home whose provider doesn't reveal the OS; there are competitors out there who could use such information against them.
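If you do get a flash dump, a crude but effective trick is to scan it for the kernel's banner string (Linux images embed a "Linux version ..." banner). A minimal sketch; the filename is just an example:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Read the raw flash dump into memory (example filename).
    std::ifstream f("flash_dump.bin", std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(f)),
                     std::istreambuf_iterator<char>());

    // The kernel embeds a banner like "Linux version 2.6.x (gcc ...)".
    const std::string needle = "Linux version";
    auto pos = data.find(needle);
    if (pos == std::string::npos) {
        std::cout << "No Linux banner found\n";
        return 0;
    }
    // Print the banner up to the next newline or NUL byte.
    auto end = data.find_first_of(std::string("\n\0", 2), pos);
    std::cout << data.substr(pos, end - pos) << "\n";
    return 0;
}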

Can LLVM IR (Intermediate Representation) be used to create cross-platform (iPhone and Android) ARM executables?

I'm looking into possible means of efficiently creating an Android and iPhone targeted application from the same code base, be it in C/C++/C#/Objective-C or Java (using VMKit).
LLVM looks promising; however, I'm slightly confused about compatibility issues surrounding the differing ARM CPU implementations, mainly regarding how graphics and sound code are 'resolved' by the underlying chipsets (i.e. do I have to code for specific ARM chipsets, or will a higher-level API, like OpenGL, suffice?).
I do know a little about various cross-development products (i.e. Airplay SDK, MoSync (GPL GCC), Unity3d, XMLVM, etc.), but what I'd really like to do is either write in Java or use a C/C++ engine, emit LLVM IR, and create compatible ARM executables, if possible.
Apologies if any of the above is vague.
Thanks
Rich
The compiler is not the problem. To develop for both platforms, you need to create an abstraction layer that lets you write a single application against that layer, and then provide two implementations of it: one that makes Android API calls and one that makes iPhone API calls. There is nothing the compiler can do to help you with that.
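As a flavor of that abstraction layer, here is a tiny C++ sketch; all the names are made up, and the point is only that the shared code sees the interface, never the platform:

#include <string>

// Platform-neutral interface; shared application code sees only this.
class AudioOutput {
public:
    virtual ~AudioOutput() = default;
    virtual void play(const std::string &clip) = 0;
};

// Compiled only into the Android build (would wrap OpenSL ES / JNI calls).
class AndroidAudioOutput : public AudioOutput {
public:
    void play(const std::string &clip) override { /* Android API calls */ }
};

// Compiled only into the iOS build (would wrap Core Audio / AVFoundation).
class IosAudioOutput : public AudioOutput {
public:
    void play(const std::string &clip) override { /* iOS API calls */ }
};

// Shared logic, identical on both platforms.
void onGameOver(AudioOutput &audio) { audio.play("fanfare.ogg"); }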
Where LLVM IR might be interesting in its portability is for programs like:
int a,b;
a=7;
b=a-4;
Compile to IR then take the same IR and generate assembler for all the different processor types and examine the differences.
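(If you want to try that experiment yourself: with the LLVM toolchain, something like clang -S -emit-llvm test.c -o test.ll emits the IR, and llc -mtriple=<target> test.ll lowers that same .ll file to assembly for whichever target you name. Exact flags can vary between LLVM versions, so treat these invocations as a starting point.)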
In the case of real applications that, for example, need to write a pixel to a display, the registers, display sizes, and a whole host of other differences come into play. Those differences are not exposed between the IR and the assembler backend; they are exposed in the main C program and in the API calls defined by each platform's libraries, so you have to solve the problem in C, not in IR or assembler.
LLVM is no different from any other compiler in this respect, so I'm afraid the answer is no.
The LLVM IR is, in layman's terms, "partly compiled" code, and it can be used, say, to compile the rest on the end device. For example, if you have a graphically intensive app, you might ship parts of it as IR and then compile them on the device to get the most performance out of the specific hardware.
For what you want, you either need to use one of the products you mentioned, or build native UIs (using Cocoa/VMKit) but possibly share the data/logic code in the app.
For standard, App Store-legal development on stock OS devices, neither the sound nor the graphics code in an app has anything to do with the underlying chipset or specific ARM CPU architecture. Sound and graphics (and all other user IO) are abstracted by each OS through platform-dependent APIs and library linkages. You can either code against each platform's completely different APIs, or use an abstraction layer on top, such as Unity et al.
LLVM might allow you to optimize from intermediate code to machine code for certain differences between ARM architectures (armv6, armv7, FP support, etc.), but only in self-contained code that does no user IO or otherwise requires any higher-level interface to the OS.

Are either the iPad or iPhone capable of OpenCL?

With the push towards multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but I am curious how much processing capability the device itself has. OpenCL would make it a juicy hardware platform to develop on, even though the licensing seems like it kinda stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. ES 2.0 lets you create your own programmable shaders to run on the GPU, which would let you do high-performance parallel calculations. While not as elegant as OpenCL, it might let you solve many of the same problems.
Additionally, iOS 4.0 brought with it the Accelerate framework, which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202, "The Accelerate framework for iPhone OS", in the WWDC 2010 videos for more on this.
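As a taste of Accelerate, here is a minimal vDSP example (the C API compiles as C or C++ with -framework Accelerate; written from memory, so double-check the headers in your SDK):

#include <Accelerate/Accelerate.h>
#include <cstdio>

int main() {
    const vDSP_Length n = 4;
    float a[] = {1, 2, 3, 4};
    float b[] = {10, 20, 30, 40};
    float sum[4];

    // Vectorized elementwise add: sum[i] = a[i] + b[i] (stride 1 throughout).
    vDSP_vadd(a, 1, b, 1, sum, 1, n);

    for (vDSP_Length i = 0; i < n; ++i)
        printf("%.1f\n", sum[i]);
    return 0;
}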
Caution! This question ranks as the 2nd result on Google. However, most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one: https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release brings OpenCL APIs to us. :)
Following on from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be on the CPU (implementing parts of the graphics stack, perhaps, like on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get such a project into the App Store (Apple doesn't want you to use private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is to look at the private frameworks directory.
If you see what you are looking for there, then there's a chance.
If not, wait for the next release and look in the private stuff again.
I guess CoreImage is coming first, because OpenCL is too low-level ;)
Anyway, this is just a guess.