Is there any way to get iPhone touch coordinates without relying on JavaScript callback events? touchmove generates far too many events to be used in any kind of tight loop (like a game), and without multi-threading, the only option would seem to be non-event-driven input coordinates.
There's just no way to do this, although perhaps one day this will be possible.
I have an NFC tag with integrated environmental sensors (the MLX90129, to be exact). I would like to make an iPhone app that can read the real-time data from the tag multiple times per second and graph it. I'm not looking for background tag reading, and you can assume that the app will be open and the phone will be near the tag at all times.
From what I can see in Apple's documentation and other sources, the Swift support for NFC tags is mostly built for single-session interrogation. Has anyone succeeded in getting continuous, repeated NFC tag reading for this kind of purpose?
As you pointed out, continuous and repeated NFC reading is not the intended functionality.
While I think you could work around that, there's another thing that could be a headache: making multiple readings per second runs directly up against the current implementation of NFC tag reading in iOS.
Every time you start a reading, iOS shows the native sheet informing the user that an NFC reading is in progress. Part of this process is the user interaction, and that part is exactly what imposes a time constraint. Even if no interaction from the user is needed, there is an animation, and that animation has its own lifecycle events (start reading, reading, OK, KO, close...).
As far as I know, you can't bypass that animation, which can easily take a couple of seconds even in the best case.
With that said, if you still want to try, keep a few things in mind:
NFCTagReaderSession can only have one active reading at a time, and when that reading ends (OK/KO), the session should be invalidated. So if you want to make another reading, you'll need to create and configure a new instance (see the sketch below).
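To illustrate that create-and-invalidate cycle, here is a minimal Core NFC sketch. It assumes an ISO 15693 tag and the standard tag-reading entitlement, and it leaves out the MLX90129-specific commands because the question doesn't detail them; treat it as a starting point rather than a working sensor reader. Note that every call to beginReading() will surface the system NFC sheet again.

```swift
import CoreNFC

// Minimal sketch of the "one session per reading" pattern described above.
final class RepeatedTagReader: NSObject, NFCTagReaderSessionDelegate {

    private var session: NFCTagReaderSession?

    /// Start (or restart) a reading. A session cannot be reused after it is
    /// invalidated, so a brand-new instance is created every time.
    func beginReading() {
        session = NFCTagReaderSession(pollingOption: .iso15693, delegate: self, queue: nil)
        session?.alertMessage = "Hold your iPhone near the sensor tag."
        session?.begin()
    }

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {
        // The system NFC sheet is now on screen.
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let tag = tags.first else { return }
        session.connect(to: tag) { error in
            if let error = error {
                session.invalidate(errorMessage: error.localizedDescription)
                return
            }
            // ... send the tag-specific commands and read the sensor values here ...
            session.invalidate() // this reading is finished
        }
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: Error) {
        self.session = nil
        // To take another reading, call beginReading() again; the system
        // NFC sheet (and its animation) will appear again each time.
    }
}
```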
I've been using Unity3D lately, and I soon discovered through multiple topics online that using the OnMouseUp event is much slower than checking for mouse clicks in the Update() function. Can someone explain why that is?
Is that also true for other On... functions like OnTriggerExit2D? As a design pattern, should I abandon the On... functions completely and only catch events in Update()?
I prefer neither of them, but use the 'new' (since 4.6) Unity Event system when possible. In short, you implement the appropriate handler interface, such as IPointerClickHandler.
I recommend having a look at the Events tutorials, for example UI Events and Event Triggers. Note that although the tutorial is focused on 2D, you can use the event system in 3D as well; you just need to add a Raycaster to your camera.
Agreed with the previous answer, but the reason WHY the "On" functions are so much less efficient lies in how they work.
How does Unity know that an "OnMouseUp" event has fired? The mouse object would need to have an event handler attached to it that knows to fire the OnMouseUp event. Beyond that, something would need to listen every single frame for that event and then run the required code. The OnEvent functions also tend to get very dispersed, because you could in theory handle the same event in multiple different functions/classes.
I think the Update() method is in general a more efficient place to check, because there is a lot less overhead involved when you handle these things yourself.
tl;dr: There's more overhead involved in using the "On" events than in checking in Update().
I am working on a radio application where I need to convert speech to text. For that I am using third-party APIs. To get better results I want to run two APIs at the same time and compare their output. This should happen when the user taps the record button.
I know we can do this using GCD, but I don't have a clear idea of how to achieve it.
Any suggestions would be appreciated. Thank you.
The short answer is that you create two GCD queues, one for each speech-to-text task. Within each block, you call one of the two APIs with the same input data. Then you either wait for the result, or have the block invoke a callback method when it completes.
Note that you will need to ensure that the speech engines can safely run on background threads.
This is fairly straightforward if you want to record the audio first, then submit the data to two different engines for processing. But it sounds like you might want to start processing the audio as soon as the user clicks Record? In that case, it very much depends on the APIs as to how you feed them data in real time. You might want to just run them on separate threads explicitly and feed them data as it comes in.
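As a rough sketch of the record-first variant, the dispatch work could look something like this. transcribeWithEngineA and transcribeWithEngineB are hypothetical stand-ins for whatever the two third-party APIs actually expose, and the comparison logic is left to the caller:

```swift
import Foundation

// Hypothetical wrappers around the two third-party speech-to-text APIs;
// replace the bodies with the vendors' real calls.
func transcribeWithEngineA(_ audio: Data, completion: @escaping (String) -> Void) {
    completion("transcript from engine A") // placeholder
}

func transcribeWithEngineB(_ audio: Data, completion: @escaping (String) -> Void) {
    completion("transcript from engine B") // placeholder
}

/// Runs both engines concurrently on background queues and hands back both
/// transcripts on the main queue once the two callbacks have fired.
func transcribeAndCompare(_ audio: Data, completion: @escaping (String, String) -> Void) {
    let group = DispatchGroup()
    var resultA = ""
    var resultB = ""

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        transcribeWithEngineA(audio) { text in
            resultA = text   // in production, serialize access if the callbacks can race
            group.leave()
        }
    }

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        transcribeWithEngineB(audio) { text in
            resultB = text
            group.leave()
        }
    }

    // Fires once both engines have reported back.
    group.notify(queue: .main) {
        completion(resultA, resultB)
    }
}

// Usage, e.g. from the record button's action (recordedAudioData is hypothetical):
// transcribeAndCompare(recordedAudioData) { a, b in
//     print(a == b ? "Engines agree" : "Engines differ")
// }
```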
I have been struggling with this question since I noticed that many functional testing frameworks (like Selenium for the web or UISpec for iOS) actually simulate UI events while testing. My question: couldn't it be sufficient just to check preconditions, e.g. that the target and selector for a button are set correctly, and then fire the selector manually? Why do I need to simulate touches? The downside is that you have to know more about the UI elements you're testing (you have to know what makes them behave correctly), but since I am the one writing the tests, maybe that doesn't matter?
Could anyone shed some light on this?
Simulating touches can be useful for determining crashes caused by obscure or unplanned user behaviour - a particularly common one is having two items pressed simultaneously. It also allows you to create potentially quite esoteric tests: for example, random user input for a sustained period of time to attempt to crash or break your application in ways you wouldn't expect. The level to which you'd do this would depend on your app, and how important it was to you.
Your alternative approach also has some disadvantages when it comes to multi-touch. Whilst it would be fairly straightforward to fire a button selector through some sort of automatic test rather than simulating user input, what happens if you have an app that deals with swiping, pinching, or other multi-touch gestures? In those cases the desired result may not be as black and white as the on/off of a button: you may have many shades of grey and differing output that requires validation.
Simulated UI testing actually has quite a long history - there's an interesting story (well, interesting to me) about the original MacPaint and how a random UI input test was able to assist in reproducing obscure or difficult crashes here: http://www.folklore.org/StoryView.py?story=Monkey_Lives.txt
I have an Ext.List in my Sencha app that I would like to render as quickly as possible and then update asynchronously. In this case the list contains addresses, and I'd like to reserve some space at the right of each list item for the distance from the user, to be calculated using Sencha's location services.
The location calcs could take a few seconds for each address, so I'd like to do that in an asynchronous manner, then update each list entry as the information becomes available. Does anyone have suggestions on how I might go about this? Thanks much.
I don't work with Sencha Touch, but one possible solution I can think of is to use the afterrender event of Ext.List to trigger Ajax requests. That way each request will be asynchronous and will update its distance independently.
The downside is that you may end up making a larger number of requests to the server.