How to build AOSP for a device with no modem? - android-source

I am working on a tablet that has no modem (i.e. no cellular radio) and is not designed to send or receive voice calls, SMS, or MMS over CDMA/GSM/etc.; it does, however, have Wi-Fi. Unfortunately, this device is based on a device that does have a modem. I wonder if anyone has a complete list of changes that need to be made to the Android platform for a device like this? I am working on Nougat. Here's what I have so far:
switched my device mk file to inherit full_base.mk instead of full_base_telephony.mk
added an overlay/frameworks/base/core/res/res/values/config.xml with config_voice_capable set to false, config_sms_capable set to false, and config_telephonyHardware set to an empty string array
set PRODUCT_CHARACTERISTICS := tablet in my device mk file
copied tablet_core_hardware.xml to system/etc/permissions instead of handheld_core_hardware.xml
set the ro.radio.noril property to true, and commented out the ril-daemon service and any IMS services in the init .rc files (sketched below)
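For reference, here's a minimal sketch of what those changes might look like in the device tree (the vendor/device path names are placeholders; your tree will differ):

    # device/<vendor>/<device>/device.mk (illustrative fragment)

    # Inherit the non-telephony base product instead of full_base_telephony.mk
    $(call inherit-product, $(SRC_TARGET_DIR)/product/full_base.mk)

    # Mark the device as a tablet rather than a handheld
    PRODUCT_CHARACTERISTICS := tablet

    # Declare tablet (not handheld) core hardware features
    PRODUCT_COPY_FILES += \
        frameworks/native/data/etc/tablet_core_hardware.xml:system/etc/permissions/tablet_core_hardware.xml

    # Tell the platform there is no RIL
    PRODUCT_PROPERTY_OVERRIDES += \
        ro.radio.noril=true

    # Pick up the resource overlay below
    DEVICE_PACKAGE_OVERLAYS += device/<vendor>/<device>/overlay

And the overlay (overlay/frameworks/base/core/res/res/values/config.xml):

    <resources>
        <!-- Device cannot place or receive voice calls -->
        <bool name="config_voice_capable">false</bool>
        <!-- Device cannot send or receive SMS -->
        <bool name="config_sms_capable">false</bool>
        <!-- No telephony hardware present -->
        <string-array name="config_telephonyHardware" />
    </resources>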
Am I missing anything?
Some things I noticed: the following modules are still included in the system image even though I have selected the "no telephony" build mk files: BlockedNumberProvider, Telecom, TeleService, MmsService, TelephonyProvider, voip-common.jar, telecom.jar, telephony-common.jar, ims-common.jar. Are these modules all needed on a device that doesn't have telephony features? As an experiment I removed the APKs (I haven't tried removing the jars yet) and the device seems to boot and run just fine, so it seems they are just wasting resources.
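If you want to check what actually lands in the image without unpacking it, the build writes a manifest you can grep (paths assume a standard AOSP out tree; "mydevice" is a placeholder for your lunch target's product name):

    # List telephony-related artifacts recorded by the build
    grep -iE 'telephony|telecom|teleservice|mms|ims|voip|blockednumber' \
        out/target/product/mydevice/installed-files.txt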

Related

TensorFlow Lite: Update existing model or add new one in deployed app

I'm creating a mobile app (with Flutter, if that matters) that will need to do some offline inference using TensorFlow Lite models. Since the app needs to work offline, the models need to be shipped with it.
I know how to deploy the models with the app (see this tutorial, for example; there are plenty out there), but as these might change over time (for example, re-trained for better accuracy with new data, or new models added to analyse different things) it would be good to find a way of doing that without having to update the whole application.
So far, the only options I have found for dynamically updating or adding models require the app to connect to an external service hosting the models, like Firebase, but that's not good enough when the app needs to run offline.
Do you have any suggestion on how to do this?
Many thanks,
Diego
I'm sorry, but are you saying you want to update the app (or one part of the app) without an internet connection (or any network connection)? Updating the app from the Play Store/App Store requires an internet connection.
Manually install new application updates. This updates the entire app, not just the model file, but that's fine, and you don't have to make any code changes.
On Android: install a new APK with adb install (see the sketch after this list), and
On iOS: use Ad Hoc distribution.
Model file picker and manual model selection: alternatively, you can add a file picker to the application to select a local model file. This model file will need to be on the device, which you can copy onto it manually (e.g. adb push model.tflite on Android, AirDrop on iOS).
Local network hosted model: or you can download the model from the local network automatically (still "offline", i.e. no internet connection).
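For the Android side of the first two options, the manual steps look roughly like this (the file names are placeholders):

    # Option 1: side-load a full app update that bundles the new model
    adb install -r myapp-with-new-model.apk

    # Option 2: push only the model file, then select it
    # with the in-app file picker
    adb push model.tflite /sdcard/Download/model.tflite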

Kinect not being detected on other computers after deployment of UWP

I recently finished developing a UWP app based on the SDK example CameraFrames. On the second screen of this example (and also in my app) you can choose to preview the frames taken by the Kinect v2.0, as shown below.
At the top of the image, to the right of "Grupo de fontes" (Source group), I am able to choose between the different cameras connected to my PC. The problem comes when I deploy my app to another PC: I am unable to see "Kinect V2 Video Sensor", which renders my app useless, as it needs to be portable between PCs. I have checked that the "Package.appxmanifest -> Capabilities -> Web Cam" checkbox is ticked.
I am out of ideas, as I don't have a clue why my app works flawlessly on my computer but not on the rest. If you need any extra info to help me with my problem, please let me know.
It is worth noting that the other PCs I've tried can read Kinect frames via Kinect Studio or MATLAB.
Inside "Camera privacy settings" my app has privileges to use the camera.
Thank you in advance.
Update due to Nico Zhu's comment
My OS is Windows 10 and my target version is "10.0.16299.0". I have downloaded and deployed CameraFrames on the second computer that I'm working with, but it doesn't recognize the Kinect as an input source. Even though CameraFrames doesn't read anything, I can properly use the Kinect through Kinect Studio.
It seems that my problem comes down to my second computer not being able to use the Kinect from any deployed UWP app. What should I install in order to make sure I have everything needed to properly read from the Kinect?
After a lot of headaches, it turned out I just needed to upgrade my drivers.

How to support OTA video in custom web app (for LG TV)?

I've searched considerably for this answer (including all WebOS, Enyo, LG specific documentation, and countless Google searches), but have turned up nothing.
In short, the goal is to develop a WebOS application that can resize an existing video stream (coming from coax or HDMI), unfortunately ...
Update: This is a web-based app that resides on a server.
After specifying the IP address (via hidden config menu), the TV saves and exits, without any channel scanning, or additional setup. Upon reboot, it displays the index.html page at the specified IP address.
So, unless there's a way to explicitly configure OTA sources within the web application, I'm assuming that any OTA video will have to be provided via streaming.
WebOS offers a way to specify media sources from files -- so, is there any way to point to a video file that's perpetually streaming an OTA source?
Is Plex, or Sling capable of this?
Any direction that can be offered is greatly appreciated.
original answer: I'm not sure this is going to be possible. I believe only system apps are going to have access to the video pipeline.
new answer:
If you can configure your OTA stream to go over HLS, then you can use an m3u8 playlist for your <source> and it should "just work" on a webOS TV.
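For example, a page along these lines should hand the stream to the TV's built-in player (the stream URL is a placeholder for wherever your HLS encoder publishes its playlist):

    <video autoplay controls>
      <!-- HLS playlist produced from the OTA source -->
      <source src="http://192.168.0.10/ota/channel1.m3u8"
              type="application/x-mpegURL">
    </video>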
It's been a while since I posted this, and I've learned a great deal about this topic since then. So, for anyone who may run across this, with the same question -- here's the answer ...
When it comes to LG Smart-TVs, there are two configuration options specific to commercial TVs (Lodging and Hospital models) ...
"Pro:Centric Wizard" and "EZ-Manager"
The Pro:Centric Wizard is intended for hospitals and won't automatically import TV configurations from a server, but it can scan for channels.
The EZ-Manager option is intended for lodging/hotels and provides a great deal more control, but the TV configuration and channel listing must all be obtained from a server (or usb drive).
Modification of the TV settings and channel listing is performed by using the FTG File Manager.
Downloadable Stand-alone Application:
http://www.lg.com/us/business/display-solutions/ftg
Online Version:
http://cct.procentric.tv/ftg_manager
Channels must be either manually programmed within the FTG File Manager, or imported from a TV clone file (created after the TV has performed a channel scan).
In order to create a clone file of scanned channels ...
Initiate the "Pro:Centric Wizard" setup process
Scan the channels
Complete Setup
Once the TV restarts, hit "Menu"
Highlight the "General" or "Settings" icon (should be "Settings") and hit "7" seven times (or until clone menu appears).
The option to export to USB will only appear after pressing "9876" on the remote.
Once the file is exported to USB, it can then be opened within the FTG File Manager and edited for server delivery through the "EZ-Manager" setup.
Hopefully this will help someone. :)

AirConsole phone controller screen not loading

I've set up a project in Unity using AirConsole. I was able to put together a simple game pretty quickly, but I've since run into issues. I was initially developing on a Mac and then switched to my work computer which is a Dell PC. On my Mac I was able to test my game using the simulator in Unity. Since I switched to my PC I have been unable to connect from my phone to the webserver in the debug simulator. The phone gets through the "Enter the Code" screen, the developer console makes a connection noise, but then my phone just sits at the "Loading..." screen.
I've reverted back to the Pong sample project to confirm that my code is not the problem. I've tested the phone using both Chrome and the AirConsole app. I've tried temporarily disabling my firewall in case my firewall was blocking the ports. If I run the simulator in Browser Start Mode = "Virtual Controllers" the controllers on my PC load fine, but I'm still not able to connect with my phone. I'm using AirConsole v1.3.0.b which was still the most recent version when I posted this.
Steps to resolve this issue:
Start the game using the simulator.
Make sure the virtual controllers of the simulator load your controller.
If that is the case, try to load the controller.html directly in the phone's browser. Assuming you run your game in the simulator using http://www.airconsole.com/simulator/#http://192.168.0.2:7834/, take the part behind the # and append controller.html; in this example, try to load http://192.168.0.2:7834/controller.html on your smartphone. Note that this controller won't actually work; it's just to test whether the page that AirConsole loads internally is reachable. If it does not load, you are either not on the same network, or your network has client isolation activated (this needs to be deactivated). To resolve this, check: Using Airconsole on a standard University/Corporate network for dev
If the controller.html page does load, you probably have JavaScript errors before the AirConsole constructor is called (see the minimal sketch below). Use Chrome to remotely debug the phone browser: https://developers.google.com/web/tools/chrome-devtools/debug/remote-debugging/remote-debugging?hl=en
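One way to rule out such errors is to temporarily serve a stripped-down controller page that does nothing but construct AirConsole and surface any failure on screen. A rough sketch (the script URL is AirConsole's generic API include; verify it against the docs for the version you target):

    <!DOCTYPE html>
    <html>
      <head>
        <!-- AirConsole API include; check the URL/version in the docs -->
        <script src="https://www.airconsole.com/api/airconsole-latest.js"></script>
        <script>
          // Show any script error that would otherwise leave the
          // phone stuck on the "Loading..." screen
          window.onerror = function (msg, src, line) {
            document.body.innerHTML = "Error: " + msg + " (line " + line + ")";
          };
          var airconsole = new AirConsole();
          airconsole.onReady = function () {
            document.body.innerHTML = "Controller connected";
          };
        </script>
      </head>
      <body>Loading controller...</body>
    </html>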
Update: here is a detailed guide for Unity
https://developers.airconsole.com/#!/guides/unity-ngrok

Force fake location on device?

I need to demo my app to my supervisor. At the minute I have all the testing set up so that it works with the simulated highway drive in California. When I demo it I will be in an office (stationary), so the real location data from the phone won't show the demo results at all.
Is there any way to make the iPhone do the city drive when it's running natively, i.e. not connected to the machine?
There are various CLLocationManager simulators on GitHub that you could include in your demo build, such as the CLLocationManager_simulator here.
Alternatively, you can set up your app to record location data to a file and then create a CLLocationManager simulator that plays back the file. With that, testers can record test drives and devs can play them back in the office to debug, examine what happened, or retest with new builds.
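As a rough sketch of the playback half (the class name, the "lat,lon per line" file format, and the one-fix-per-second pacing are all assumptions of mine, not taken from any of the GitHub projects mentioned above):

    import CoreLocation

    // Hypothetical drop-in replacement that replays a recorded drive
    // instead of reading the real GPS.
    class ReplayLocationManager: CLLocationManager {
        private var recorded: [CLLocation] = []
        private var timer: Timer?
        private var index = 0

        // Load "lat,lon" lines recorded during a real test drive
        func loadRecording(from url: URL) throws {
            let text = try String(contentsOf: url)
            recorded = text.split(separator: "\n").compactMap { line in
                let parts = line.split(separator: ",")
                guard parts.count == 2,
                      let lat = Double(parts[0].trimmingCharacters(in: .whitespaces)),
                      let lon = Double(parts[1].trimmingCharacters(in: .whitespaces))
                else { return nil }
                return CLLocation(latitude: lat, longitude: lon)
            }
        }

        // Replay one recorded fix per second through the normal delegate,
        // so the app under demo doesn't need any code changes
        override func startUpdatingLocation() {
            timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
                guard let self = self, self.index < self.recorded.count else { return }
                self.delegate?.locationManager?(self, didUpdateLocations: [self.recorded[self.index]])
                self.index += 1
            }
        }

        override func stopUpdatingLocation() {
            timer?.invalidate()
        }
    }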
If you don't mind using the Simulator to demo it, there's simulated location available. Look in the Simulator menu under Debug -> Location.