Detecting Data Saver / Low Data Mode in Flutter

I'm currently developing a Flutter app which contains online videos. Since users don't want to be bothered by slow download speeds and high mobile data usage, we decided to offer the video content in two formats, where one is optimized for slower connections and for users of Power Saving Mode (Low Power Mode on iOS) and Data Saver (Low Data Mode on iOS).
While packages such as power already exist to detect Power Saving Mode (thanks to Flutter/iOS Low Power Mode), I can't seem to find any Flutter plugin that detects Data Saver or Low Data Mode, unless I spend some time building native bindings for:
Android: https://developer.android.com/training/basics/network-ops/data-saver
iOS: Is there any way to detect iOS 9 low power mode programmatically?
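For reference, here is a rough sketch of what the iOS half of such a binding could look like if I do go the native route. This is only a hedged sketch, not an existing plugin: on iOS 13+ the Network framework exposes Low Data Mode as NWPath.isConstrained, and the value would still need to be bridged to Dart over a platform channel (the channel name below is made up):

import Foundation
import Network

// Sketch only: observe Low Data Mode (iOS 13+).
// NWPath.isConstrained is true when the user enabled Low Data Mode
// for the current network; isExpensive flags cellular/hotspot links.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    let lowData = path.isConstrained
    // Forward lowData to Dart here, e.g. over a hypothetical
    // EventChannel named "app/low_data_mode".
    print("Low Data Mode: \(lowData), expensive: \(path.isExpensive)")
}
monitor.start(queue: DispatchQueue(label: "net.monitor"))

The Android half would presumably wrap ConnectivityManager.getRestrictBackgroundStatus() from the Data Saver guide linked above.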
Searching pub.dev for "low data mode", Android's "ConnectivityManager", and iOS's "ProcessInfo" doesn't work either. And no, connectivity_plus does not detect this feature either.
Even worse, searching the site for "data saver" and "ConnectivityManager" returns Flutter's material library for some reason 🙃
So, is there any package I can use for this? Or should I create one from scratch?

Related

How to control camera retake in Ziggeo for the Ionic 3 framework

How can I control camera retakes in Ziggeo for Ionic 3? Ziggeo takes the user to the camera and, depending on the device, the user can take any number of retakes. Is it possible to stop the camera retakes, or to send the user back to the Ionic app as soon as they finish taking the video (press the stop recording button)?
I tried to find this in the Ziggeo documentation but didn't succeed.
Let me first mention that I work at Ziggeo. Now with that being said, let's get cracking :)
When the camera is requested on desktop systems, the browser talks to the OS and the OS talks to the drivers. The drivers talk to the camera and provide the video data. On mobile devices this is slightly different.
The mobile browser will ask the system, which will reply by activating the camera app. The camera app differs across systems and system versions, but in general these apps refuse to honor any parameters sent to them from the browser.
This is why you might see the option to retake the recording on mobile devices.
Ziggeo's purpose, however, is to provide many ways of using the camera and mic. As such, there is a way to skip the native app and use a new way of recording videos.
This is accomplished by adding the webrtc_on_mobile parameter when you create your app:
var ziggeoApp = new ZiggeoApi.V2.Application({
    token: "APPLICATION_TOKEN",
    webrtc_streaming_if_necessary: true, // only fall back to WebRTC streaming when necessary (see the 2020 edit below)
    webrtc_on_mobile: true               // record in the browser instead of the native camera app
});
Now, the above is just the HTML version of it. Ionic is a bit different: currently it is not possible there, however it will be possible in the next update.
Edit 2020:
To support iOS, webrtc_streaming_if_necessary: true was created. This is because the WebRTC implementation on those systems is built for streaming, not standard WebRTC. By using it, you make sure that you are not using WebRTC streaming unless it is actually necessary to do so.
I have added the way you would use it to the code above.
You can always find the latest on Ziggeo's header-building page here: https://ziggeo.com/docs/sdks/javascript/browser-integration/header

Samsung Smart TV App to use HUE as Ambilight

I am trying to accomplish this task:
Run an app on a Samsung Smart TV (in the background, kind of).
This app should check the screen content at an interval and calculate the main color of the screen content, or the main colors of each border (let's say 20% of width and height from the border).
Use the remotely accessible HUE API to control n Philips HUE lights to accomplish a room-wide Ambilight.
Now, as I am an Android developer and do not have any experience with smart TVs, I would like to ask whether this can be accomplished (or whether there is a showstopper), and whether you have some tips for me before I dig into this very deeply. The actual "how to get started developing a Smart TV app" part will not be the main problem, and I am looking into that right now.
So my actual questions are:
What is the best pattern (or is it impossible) for something like a background job on a Samsung Smart TV? Maybe something like a ticker app with no visible overlay, or a very small one, would also be a solution?
Is there a way to access the picture currently shown on the TV, so that I get access to the RGB values of areas/pixels, or maybe a screenshot or thumbnail of the screen, no matter what the source of the signal is? I have to analyze it to get the colors.
It would be great if you could point me to some resources specific to these tasks, and advise me whether this will work or whether there are limitations or better concepts.
It seems the Huey app in the Play Store does what you want but accomplishes it in a different manner, using the camera of a device set in front of the TV to determine the colors.
Steve,
The Hue API is not fit to be used as an Ambilight control facility, since the Hue API does not run in real time.
The overheads generated by client and server mean that Hue API-based Ambilight apps can only support about 1-3 Hue lights, since hue, sat and bri are updated by server-side scripts, so updates are slow.
You need to run Ambilight in real time (5-10 updates per second) and have 8-10 or more Hue lights controlled in real time.
So I developed a real-time, hardware-based Ambilight demo for my students.
The Hue API alone is not heavy, but Hue API calls are processed server-side by API handlers, which send them via the Zigbee master to Hue lights with the Zigbee hardware and protocol embedded (see the sketch below).
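To make the overhead concrete, here is a minimal sketch of a single light-state update via the Hue v1 REST API (Swift shown; the bridge IP and API username are placeholders). Even with transitiontime set to 0, every color change is one HTTP round trip to the bridge plus a Zigbee hop, and the commonly cited guidance is to send at most about 10 light commands per second:

import Foundation

// Sketch: set one light's state via the Hue v1 REST API.
// "192.168.1.10" and "YOUR_API_USERNAME" are placeholders.
let url = URL(string: "http://192.168.1.10/api/YOUR_API_USERNAME/lights/1/state")!
var request = URLRequest(url: url)
request.httpMethod = "PUT"
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "hue": 46920,       // blue-ish
    "sat": 254,
    "bri": 200,
    "transitiontime": 0 // apply immediately, no fade
])
URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data {
        print(String(data: data, encoding: .utf8) ?? "")
    }
}.resume()

Multiply that round trip by 8-10 lights at 5-10 updates per second and it is clear why a REST-based approach falls behind.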
A Smart TV is a hardware-based solution, so it runs almost in real time, and you can get the video image updated frequently.
This may pique your interest: Build your own Ambilight clone with the Raspberry Pi

iPhone indoor location based app

I am researching how to create an app for my work that allows clients to download the app (preferably via the App Store) and, using some sort of WiFi triangulation/fingerprinting, determine their location for what is essentially an interactive tour.
Now, my question specifically is: what is the best route to take for the iPhone? None of the clients will be expected to have jailbroken iPhones.
To my understanding, this requires the use of WiFi data, which is a private API and therefore does not meet the App Store requirements. The biggest question I have is how the American Museum of Natural History gets away with using the same technology while still being available on the App Store.
If you're unfamiliar with the American Museum of Natural History's interactive tour app, see here:
http://itunes.apple.com/us/app/amnh-explorer/id381227123?mt=8
Thank you for any clarification you can provide.
I'm one of the developers of the AMNH Explorer app you're referencing.
Explorer uses the Cisco "Mobility Services Engine" (MSE) behind the scenes to determine its location. This is part of their Cisco wifi installation. The network itself listens for devices in the museum and estimates their position via Wifi triangulation. We do a bit of work in the app to "ask" the MSE for our current location.
Doing this work on the network side was (and still is) the only available option for iOS since, as you've found, the wifi scanning functions are considered to be private APIs.
If you'd like to build your own system and mobile app for doing something similar, you might start with the MSE.
Alternatively, we've built the same tech from Explorer into a new platform called Meridian which provides location-based services on both iOS and Android. Definitely get in touch with us via the website if you're interested in building on that.
Update 6/1/2017
Thought I would update this old answer - AMNH is no longer using the Wifi-based system I describe above, as of a few years ago. They now use an installation of a few hundred battery-powered Bluetooth Beacons (also provided by Meridian). The device (iOS or Android) scans for nearby beacons and, based on their known locations and RSSI values, triangulates a position. You can read more about it in this article.
Navizon offers an indoor positioning solution that works for iOS as well as any other platform. You can check it out here:
http://www.navizon.com/product-navizon-indoor-triangulation-system
It works by triangulating the WiFi signals transmitted by the device. Since it doesn't require an app to run on the phone, it bypasses the iOS limitations and can locate any other WiFi device for that matter.
Google recently launched an API called Maps Geolocation API. You can use it for indoor tracking of devices, which essentially can be used to achieve something similar to what AMNH's app does.
I would do this using augmented reality. There is a system sort of in place for this: the idea is that you place physical markers that have virtual information associated with them. I believe the system I saw used a type of bar code. When a user holds up the phone with the app, the app uses the camera to read the code and then displays the associated information.
This could easily be used to make a virtual-tour app, distributable through the App Store, that doesn't even require a WiFi or 3G/4G connection. This assumes you simply load your information and store it locally with your app; to update it, you push an update through the App Store.
Another solution is to use a SOAP/REST service and provide the information that way. This does not use private APIs, though it does require some form of internet connection. For this, you can see a question I asked about this topic a little while ago:
SOAP/XML Tutorials Question
In addition, you could load a map of your tour location and, based on which code is scanned, locate the user on the map and give suggested routes based on interests, etc.
I found this tutorial on augmented reality recently; I haven't gone through it, but if it's anything like the rest of Ray's tutorials, it will be extremely helpful.
http://www.raywenderlich.com/3997/introduction-to-augmented-reality-on-the-iphone
I'll stick around to clarify any questions or other concerns you may have with your app.
To augment the original answer for devs who were using Cisco MSE for indoor location: they now have iOS and Android SDKs which enable you to do indoor location using the MSE. A simulator can be used as well, to develop the app without implementing the infrastructure to start with: https://developer.cisco.com/site/cmx-mobility-services/downloads/
For indoor location you can use Bluetooth LE beacons, since it's a very accessible technology nowadays. There are several methods (see the sketch after this list):
Trilateration: it uses 3 beacons, but with the noise and attenuation of Bluetooth signals it gets quite difficult to determine the exact position, and it's also not easy to use more than 3 beacons to increase accuracy.
Levenberg-Marquardt method: used to solve non-linear least-squares problems; it has shown good results for indoor positioning.
Dead reckoning method: using the motion coprocessor of the device and a given initial position, you can calculate the path the device moves along. Not that easy to implement, anyway.
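As a rough illustration of why plain trilateration is fragile indoors, here is a minimal sketch of the usual log-distance path-loss conversion from RSSI to distance. The txPower (calibrated RSSI at 1 m) and the path-loss exponent n are beacon- and environment-specific; the values below are illustrative assumptions:

import Foundation

// Log-distance path-loss model: estimate distance (in metres) from RSSI.
// txPower = calibrated RSSI at 1 m; n = path-loss exponent
// (about 2.0 in free space, higher indoors).
func estimateDistance(rssi: Double, txPower: Double = -59, n: Double = 2.0) -> Double {
    return pow(10.0, (txPower - rssi) / (10.0 * n))
}

print(estimateDistance(rssi: -71)) // roughly 4 m with these defaults

A few dB of RSSI noise swings the estimate by metres, which is exactly the problem the Levenberg-Marquardt and dead-reckoning approaches try to compensate for.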
I wrote a post on the topic, you can find more info here: http://bits.citrusbyte.com/indoor-positioning-with-beacons/
And you can use this iOS app for your own indoor positioning experiments: https://github.com/citrusbyte/beacons-positioning
I doubt the American Museum is actually using private APIs; you'll probably find that the routers that have been set up serve different responses, so the app can detect its position in the museum.
If you are looking for a cheaper way to do the same task, you could put up signs with QR codes and use a library to let users scan these barcodes as they move through the museum, updating the on-screen content accordingly (a sketch follows below). On an even more low-tech level, you could just tag each area with a unique number and distinguish areas that way.
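For what it's worth, QR scanning no longer even needs a third-party library on iOS: AVFoundation has read QR codes natively since iOS 7. A minimal sketch (error handling and the code-to-area mapping are left out, and NSCameraUsageDescription must be set in Info.plist):

import AVFoundation
import UIKit

// Sketch: scan QR codes with AVFoundation's built-in metadata output.
class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr]
        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Each scanned code would map to a tour area; update the
        // on-screen content accordingly.
        if let code = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
            print("Scanned:", code.stringValue ?? "")
        }
    }
}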

What should I consider to ensure seamless port of my iPhone apps to iPad?

Following the iPad's announcement and its SDK (iPhone SDK 3.2), porting apps to the iPad has become an important issue. What guidelines should I follow in my iPhone apps to ensure I can port them to the iPad as seamlessly as possible?
The different resolution is a particularly important issue. While the iPad runs iPhone apps unmodified, that's not really the desirable behavior for a native app. How can we make our iPhone apps resolution-independent so that they run gracefully at all resolutions, like most desktop apps do?
If you've been using IB and setting the resize behaviors of elements properly, and also coding frame coordinates relative to each other, you are halfway to having a UI that can potentially scale to a larger screen.
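In code terms, "relative coordinates plus resize behaviors" amounts to something like the following (a modern-Swift sketch of the same idea; the views and sizes are hypothetical):

import UIKit

// Sketch: size subviews relative to the parent's bounds and set
// autoresizing masks so they track later size changes.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))

let banner = UIView(frame: CGRect(x: 0, y: 0,
                                  width: container.bounds.width, height: 44))
banner.autoresizingMask = [.flexibleWidth, .flexibleBottomMargin]
container.addSubview(banner)

// When the container later grows to an iPad-sized frame, the banner
// stretches with it instead of staying at iPhone width.
container.frame = CGRect(x: 0, y: 0, width: 768, height: 1024)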
From the screenshots, there are new kinds of action sheets as well, potentially attached to UI elements instead of floating. If you use overlays today, they will probably work about the same, but you may want to consider moving their placement away from the center on a larger display.
UPDATE:
Now that the event is over and registered developers can download the SDK: although we cannot talk about specific features here just yet, read through ALL of the documents related to the new OS version, as there are a number of things aimed at helping you transition to supporting both platforms. Also, before you start using custom libraries for things, take a look through the API changes to see what new abilities might be supported that are not today.
Generally speaking, what I said above about IB holds true, and you should also start thinking about how your apps today could use more space to present more information at once, instead of splitting it out over multiple screens. Also, if you are doing any projects right now that use images, make sure to initially design the images large enough that you can also use them for higher-resolution tablet applications.
It is far more reasonable to expect users to input text (and larger amounts of it) on an iPad than on a non-iPad device.
Nothing, it appears, although we don't have the SDK quite yet. It will run all existing iPhone apps without an issue, albeit at reduced resolution.
It remains to be seen how much of the existing iPhone SDK is shared with the iPad SDK, UI-wise.
Judging by what has been said, absolutely nothing. You will have to adapt to the new screen size and better hardware altogether if you want to take advantage of the features the improved device offers. The lack of a 3G module is also something to consider if your app(s) rely on that functionality.

Comparison of mobile devices as dev platforms: iPhone, BlackBerry, Windows Mobile

I was trying to compare the three above-mentioned platforms and what considerations one needs to think about when programming, in order to create some kind of code base that could run on all three.
This is what I have collected for the iPhone; it would be great if somebody else could write something similar for the other two.
1) Only one application can run at any given time. That is why the SQLite database is loaded as a file into the app instead of, as is traditional, having some kind of server to connect to.
2) Only one fixed-size window: 480x320 pixels.
3) Runs in a sandbox: when the app is deployed, a sandbox is created "around" it, and the app can only read/write files within that area. Low-level access to the phone is also restricted.
4) Since a program can be stopped at any time (see point 1), this needs to be considered when designing the app: at any moment the app must be able to write its current state to disk so that it can resume later. If this takes longer than five seconds, the app will be aborted (a sketch follows after this list).
5) 128 MB RAM, about half of which (64 MB) is available to the app. There is typically 4 GB of storage (depends on the model) and no virtual memory; if memory runs out, the app may be aborted.
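For point 4, a minimal sketch of the kind of fast state save that is meant (modern Swift/UIKit shown for brevity; the property and key are made up):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    var currentPageIndex = 0 // hypothetical piece of app state

    // Keep this fast: the OS aborts the app if teardown takes
    // longer than about five seconds.
    func applicationWillTerminate(_ application: UIApplication) {
        UserDefaults.standard.set(currentPageIndex, forKey: "lastPage")
    }
}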
Edit: just to be clear, I am not after which platform/OS is best for the developer. I am just interested in a spec comparison, to know what to expect if one has three target platforms and uses the native language for each (not web apps), and what the memory and other considerations are.
Edit: removed the language point, as it's assumed that the native language for each platform will be used.
There is an excellent article on CodeProject which would be of benefit to your question. Head on over here to read it.
Hope this helps,
Best regards,
Tom.
For Windows Mobile I want to add:
Windows Mobile, in comparison to the iPhone, allows multiple applications to run at the same time.
It comes with variable screen sizes and has different SDKs (Windows Mobile Professional for 'Windows Phones' (smartphones) with touchscreens, and Windows Mobile Standard for 'Windows Phones' with regular screens).
The framework generally used is the .NET Compact Framework; besides that, some people prefer OpenNETCF, an open-source framework.
Unlike the iPhone, Windows Mobile has no private APIs, which means it gives more power to developers.
The memory size allowed for a program is 32 MB.
You do not need a developer license for developing and shipping applications on Windows Mobile, although Windows Mobile itself prompts you to avoid installing applications from unknown publishers. (This is all the more interesting given that on the iPhone you need a license even just to debug your application on your own device, jailbroken devices aside.)
And for some of the bad things about Windows Mobile, see this link.
Thanks,
Madhup
I feel like the final list will be of little use, as all the data points collected will differ substantially in content, apart from your last one. Some corrections to your iPhone list:
1) Local databases such as SQLite are "not traditionally" implemented as a server on other mobile platforms either (they also use various file-oriented DBs).
2) Very soon, that single fixed-size assumption may well be inaccurate.
3) The app is in a sandbox but can write to some areas outside of the sandbox via API calls (for instance, the photo library or address book).
5) That number varies between the 3GS and the 3G/2G/Touch (the older models have half the memory).
6) MonoTouch is available, but I'm not sure there's anything that far along for Java-based iPhone development. There's also a Flash compiler from Adobe.
Basically, if you are thinking cross-platform, memory/screen size/system access/common databases will all differ, so the whole thing boils down to language AND LIBRARIES. And that is where you really have an issue with a cross-platform approach, because the libraries are very different per system. In the end you MIGHT be able to share data structures and some pure data-processing code across the platform binaries, with very different GUI code for each system. But is it really worth it to constrain the development of each client?
On a side note, BlackBerry is Java-based, so it presents yet another hurdle for such an attempt.
If you really want to see what cross-platform code ends up looking like, take a look at the codebase for Waze, a cross-platform open-source navigation app:
http://www.waze.com/wiki/index.php/Source_code
The client source for iPhone and Windows Mobile lives there.