I have an NFC tag that has integrated environmental sensors inside (an MLX90129, to be exact). I would like to make an iPhone app that can read the real-time data from the tag multiple times per second and graph it. I'm not looking for background tag reading, and you can assume that the app will be open and the phone is near the tag at all times.
From what I can see in Apple's documentation and other sources, the Swift support for NFC tags is mostly built for single-session interrogation. Has anyone succeeded in getting continuous and repeated NFC tag reading for this type of purpose?
As you pointed out, making continuous and repeated NFC readings is not the intended functionality.
While I think you could work around that, there's another thing that could be a headache: making multiple readings per second runs straight into the current implementation of NFC tag reading in iOS.
Every time you start a reading, iOS shows the native sheet informing the user that an NFC reading is in progress. Part of this process is the user interaction, and it is exactly that part which imposes a time constraint. Even when no user interaction is needed, there is an animation, and that animation has its own lifecycle events (start reading, reading, OK, KO, close...).
AFAIK you can't bypass that animation, which can easily take a couple of seconds even in the best case.
With that said, if you still want to try, keep a few things in mind:
NFCTagReaderSession can only have one active reading at a time, and when that reading ends (OK/KO), it should be invalidated. So if you want to make another reading, you'll need to create and configure a new instance.
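As a rough illustration only (not something Apple supports for high-frequency polling), here is a minimal sketch of re-creating the session after each invalidation so the tag can be polled repeatedly. The ISO 15693 block number and the restart delay are placeholder assumptions; check the MLX90129 datasheet for the actual register layout, and expect the system NFC sheet and its animation on every cycle.

import CoreNFC

class RepeatedTagReader: NSObject, NFCTagReaderSessionDelegate {
    private var session: NFCTagReaderSession?

    func startReading() {
        // An invalidated session cannot be reused; create a fresh one each time.
        session = NFCTagReaderSession(pollingOption: .iso15693, delegate: self)
        session?.alertMessage = "Hold your iPhone near the sensor tag."
        session?.begin()
    }

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {}

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: Error) {
        // Kick off the next reading after a short delay (the delay value is arbitrary).
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            self?.startReading()
        }
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let firstTag = tags.first, case let .iso15693(iso15693Tag) = firstTag else {
            session.invalidate(errorMessage: "Unsupported tag.")
            return
        }
        session.connect(to: firstTag) { error in
            if let error = error {
                session.invalidate(errorMessage: error.localizedDescription)
                return
            }
            // Hypothetical read of a single sensor block; block 0 is a placeholder,
            // not the real MLX90129 sensor register.
            iso15693Tag.readSingleBlock(requestFlags: [.highDataRate], blockNumber: 0) { data, error in
                // ...hand `data` to your graphing code here...
                session.invalidate()
            }
        }
    }
}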
We're attempting to track a streaming video with SiteCatalyst. The issue is that this video obviously has no end, so the s.Media module can't know how to set the seconds or milestone segment views. This results in no tracking calls except for the starting one. Could a possible solution be the use of s.Media.monitor custom functions? Here it's explained how to use them together with the basic Media module settings. Maybe a timed deployment of the "sendRequest()" method could help? I'll use this occasion to ask for a brief how-to example of the Media.monitor methods, because I've just been using the basic settings till now, as below:
s.loadModule("Media");
s.Media.autoTrack = false;          // no automatic tracking; open/play/stop calls are made manually
s.Media.trackMilestones = "25,50";  // fire milestone events at 25% and 50% of the video's duration
s.Media.segmentByMilestones = true; // segment the video views by those milestones
...
Thanks a lot
Yeah... I really, really dislike the Media module. Video tracking is getting more and more popular with clients, so it has become the biggest thorn in my side, because the nature of videos over the internet is a big mess with all kinds of moving parts internally, which makes it extremely difficult to get truly accurate tracking beyond a basic "start" and "stop". (Actually, I take that back... I think mobile/SDK tracking is quickly becoming the thing I shake my angry fist at the most, but that's a different post!)
I think Adobe has made some heroic efforts to automate video tracking, and it more or less works okay if you just have a regular (not Flash) object or HTML5 tag embedded on the page. But in practice, MOST of the time, sites implement their videos through 3rd party scripts (e.g. jwplayer, vimeo, youtube api), and the Media module automation basically goes down the drain at that point.
I understand that it needs to know how long a video is in order to know when to auto-pop the events, but I swear, 99% of the time in practice, the way the Media module expects things to pop in certain orders just doesn't align with how videos work in the real world. Even if you attempt to do it the "manual" way, more often than not it's still buggy, e.g. autoplay and buffering ALWAYS seem to screw up the open+play sequence that MUST happen in that order.
Basically, the Media module desperately needs to be rewritten to better handle streaming videos, and also just "manual" use in general. Anyways..
Two things I have done in your situation. Overall, neither of these options is a perfect 1:1 to normal videos with a duration, but then, streaming videos aren't really the same, so it doesn't really make sense to treat them the same.
Option #1: Use an estimated duration for your streaming video. So you said it yourself: your streaming videos have no end. Well as I mentioned, you can't calculate percent viewed unless you have a duration, pretty basic math. So, estimate a duration.
I have clients that have streaming webinars or whatever and it's true that there's technically no duration according to the player, but in reality they don't really conduct that webinar 24/7 forever. In reality it's for a set amount of time like 30 minutes or an hour or something. So, just specify the duration as that.
Yes, this will require extra custom work on your end to store/associate an estimated duration. And yes, this does have the potential for being misleading (e.g. if a webinar ends early or runs late). This option is generally good for sites that have set windows for the stream to actually be active.
Option #2: Ditch the notion of % viewed, and record it as time consumed. So the overall point of the milestones is to know how much of a video was actually watched, yes? Well, who said it has to be measured by % viewed?
How about instead, you just record n seconds consumed every n seconds. You can do this with an incrementor eVar, and/or counter event. (Part of the normal video tracking actually does include a counter event "Video Time", or a.media.timePlayed).
So basically, you'd just pop the events/props/eVars yourself and ignore the milestone/segment reports.
Note: This option only really works if you are using the older style video tracking that has events/props/eVars assigned for it. If you are using the newer style video tracking that does not use events/props/eVars.. well, AA does not currently offer an official way to manually pop that stuff directly. It is surely possible to unofficially do so, but I have not yet reverse engineered the latest Media module to figure out how to do that. So, in this case your only option is #1.
I am working on a radio application where I need to convert speech to text. For that I am using third-party APIs. To get better results I want to run two APIs at the same time and compare the output. This should happen when the user clicks the record button.
I know we can do this using GCD, but I'm not getting an exact idea of how we can achieve it.
Need suggestion.
Thank you.
The short answer is that you create two GCD queues, one for each speech-to-text task. Within each block, you call one of the two APIs with the same input data. Then you either wait for the results, or have each block invoke a callback/status method when it completes.
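For example, here is a minimal sketch using a DispatchGroup, assuming two hypothetical engine objects that expose a completion-handler-based transcribe call (wrap the real third-party APIs to fit something like this):

import Foundation

// Hypothetical interface; each real speech-to-text SDK would be wrapped to match it.
protocol SpeechEngine {
    func transcribe(_ audio: Data, completion: @escaping (String?) -> Void)
}

// Run both engines concurrently and report back once both have finished,
// so the two transcripts can be compared.
func transcribeWithBothEngines(audio: Data,
                               engineA: SpeechEngine,
                               engineB: SpeechEngine,
                               completion: @escaping (String?, String?) -> Void) {
    let group = DispatchGroup()
    var resultA: String?
    var resultB: String?

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        engineA.transcribe(audio) { text in
            resultA = text
            group.leave()
        }
    }

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        engineB.transcribe(audio) { text in
            resultB = text
            group.leave()
        }
    }

    // Runs on the main queue once both engines have called back.
    group.notify(queue: .main) {
        completion(resultA, resultB)
    }
}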
Note that you will need to ensure that the speech engines can safely run on background threads.
This is fairly straightforward if you want to record the audio first, then submit the data to two different engines for processing. But it sounds like you might want to start processing the audio as soon as the user clicks Record? In that case, it very much depends on the APIs as to how you feed them data in real time. You might want to just run them on separate threads explicitly and feed them data as it comes in.
I'm developing a navigation app which uses the UIBackgroundModes=location setting and receives CLLocationManager updates via didUpdateToLocation. That works fine.
The problem is that the intervals between location updates are very hard to predict and I need to make sure the app is called something like once every few seconds to do some other (tiny) amounts of work even if the location did not change significantly.
Can I do that? Am I allowed to do that? And how can I do that?
I found a blog post, but I'm not sure if this is really the way to proceed.
Permissible background operations are pretty limited in scope. You cannot, for example, just leave an NSTimer running to perform some arbitrary code while your application is in the background, so the simple answer to your question is no, you cannot. Definitely read the Apple documentation regarding what is and isn't allowed; most of what's allowed pertains to apps that "need" specific ongoing services, like the ability to play music or respond to location changes (GPS-type apps). You may be able to construct a viable solution by responding to location or significant-location-change notifications.
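As a rough sketch of that last idea (not a way to run arbitrary background timers), assuming the extra work is small enough to piggyback on whatever location updates do arrive; doTinyAmountOfWork is a placeholder, and allowsBackgroundLocationUpdates requires the location background mode and is available on iOS 9 and later:

import CoreLocation

class BackgroundLocationWorker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastWork = Date.distantPast

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.distanceFilter = kCLDistanceFilterNone      // deliver every update we can get
        manager.requestAlwaysAuthorization()
        manager.allowsBackgroundLocationUpdates = true      // needs UIBackgroundModes = location
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Do the extra (tiny) work at most once every few seconds, whenever an update arrives.
        if Date().timeIntervalSince(lastWork) > 3 {
            lastWork = Date()
            doTinyAmountOfWork()
        }
    }

    private func doTinyAmountOfWork() {
        // placeholder for the app-specific periodic work
    }
}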
I've decided to integrate OpenFeint into my new game to have achievements and leaderboards.
The game is dynamic and I would like the user to be rewarded immediately for some successful results, but it seems to me that OpenFeint's achievements are a bit sluggish: it shows the visual notification only when it receives confirmation from the server.
Is it possible to change something in the settings, or hack it a little bit, so that it shows the notification immediately after checking only the local database to see whether the achievement has already been unlocked?
Not sure if this relates to the Android version of the SDK (which seems even slower), but we couldn't figure out how to make it faster. It was so unacceptably slow that we started developing our own framework that fixes most of OpenFeint's shortcomings and then some. Check out Swarm; it might fit your needs better.
There are several things you can do to more tightly control the timing of these notifications. I'll explain one approach, and you can use this as a starting point to explore further on your own. These suggestions apply specifically to iOS apps. One caveat is that they refer to internal APIs in OFSDK 2.8 for iOS, which are not ordinarily recommended for high-level use and are subject to change in future versions.
The first thing I recommend is that you build the sample app with your own product key. Use the standard sample app to experiment before applying the result to your own code.
You are going to get the snappiest response by separating the notification pop-up UI from the process of submitting the achievement. This way you don't have to worry about getting wrapped up in the logic for deciding whether the submission is going just to the local db or is doing the full confirmation on an async network transaction.
See the declaration of "showAchievementNotice" in "OFNotification.h". Performing a search in the sample app, you will see that this is the internal API used for displaying the achievement pop-up when an achievement is earned. It does not actually submit the achievement. You can call this method directly as it is called from "OFAchievementService.mm" to directly control when the message appears. You can then use the following article to disable the pop-up from being called when the actual submission occurs:
http://support.openfeint.com/dev/notification-pop-ups-in-ios/
This gives you complete freedom to call the submission at a later time provided you keep track of the need to do so. For example, you could locally serialize a flag to take care of the actual submission either after the level is done or the next time the app starts up. Don't forget that the user could quit out of a game without cleanly finishing a level.
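A minimal sketch of just that bookkeeping, assuming a simple list of pending achievement IDs kept in UserDefaults (the key name and the submit closure are placeholders; the actual OpenFeint submission call is omitted):

import Foundation

// Sketch only: remember which achievements still need a real submission so it
// can be performed later (e.g. at level end or on the next launch).
struct PendingAchievements {
    private static let key = "pendingAchievementIDs"   // arbitrary key name

    static func markPending(_ achievementID: String) {
        var ids = UserDefaults.standard.stringArray(forKey: key) ?? []
        if !ids.contains(achievementID) {
            ids.append(achievementID)
            UserDefaults.standard.set(ids, forKey: key)
        }
    }

    static func submitAll(using submit: (String) -> Void) {
        let ids = UserDefaults.standard.stringArray(forKey: key) ?? []
        ids.forEach(submit)                              // perform the real OpenFeint submission here
        UserDefaults.standard.removeObject(forKey: key)
    }
}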
I am attempting to stream video using Apple's http streaming technology. I am beginning to suspect that either the player on the iPhone or the Apple tools used to segment the videos is buggy.
http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
I am getting really terrible behavior. The app never seems to do a good job of choosing which quality stream to use. It always starts at the lowest quality and often will jump to the highest very suddenly and not be able to keep up. I have tried various ways of altering the bandwidth settings to test it.
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=5000
3/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=10000
4/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=459319
5/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=90268800
I have used very large and very small settings to make certain streams the obvious choice, but it doesn't matter. Obviously, I have also used the default values set by Apple's variantplaylistcreator tool. It always starts at the lowest quality and jumps to seemingly random other qualities.
Anyone know whats going on with this?
Have you tried the sample reference streams provided at the bottom of the page here? Apple tests against these, so if it works there, you know it's on your end.
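If it does turn out to be on your end, one thing worth double-checking is the variant playlist itself. A minimal example with an #EXTM3U header and realistic BANDWIDTH values in bits per second would look roughly like this (the stream paths here are placeholders, not your actual ones):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
low/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=600000
mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1200000
high/prog_index.m3u8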