SiteCatalyst streaming video tracking and additional clarifications

We're attempting to track a streaming video with SiteCatalyst. The issue is that this video obviously has no end, so the s.Media module can't know how to set the seconds or milestone segment views. This results in no tracking calls except the starting one. Could a possible solution be the use of s.Media.monitor custom functions? How to use them together with the basic Media module settings is explained here. Maybe a timed deployment of the "sendRequest()" method could help? I'll also take this occasion to ask for a brief how-to example of the Media.monitor methods, because until now I've only been using the basic settings, as below:
s.loadModule("Media");
s.Media.autoTrack = false;
s.Media.trackMilestones = "25,50";
s.Media.segmentByMilestones = true;
...
Thanks a lot

Yeah... I really, really dislike the Media module. Video tracking is getting more and more popular with clients, so it has become the biggest thorn in my side: the nature of video over the internet is a big mess, with all kinds of moving parts that make it extremely difficult to get truly accurate tracking beyond a basic "start" and "stop". (Actually, I take that back... I think mobile/SDK tracking is quickly becoming the thing I shake my angry fist at the most, but that's a different post!)
I think Adobe has made some heroic efforts to automate video tracking, and it more or less works okay if you just have a regular (not Flash) object or HTML5 tag embedded on the page. In practice, though, MOST of the time sites implement their videos through 3rd-party scripts (e.g. jwplayer, vimeo, youtube api), and the Media module automation basically goes down the drain.
I understand that it needs to know how long a video is in order to know when to auto-pop the events, but I swear, 99% of the time in practice the way the Media module expects things to pop in certain orders just doesn't align with how videos work in the real world. Even if you attempt to do it the "manual" way, more often than not it's still buggy; e.g. autoplay and buffering ALWAYS seem to screw up the open+play sequence that MUST happen in that order.
Basically, the Media module desperately needs to be rewritten to better handle streaming videos, and "manual" use in general. Anyway...
Two things I have done in your situation. Overall, neither of these options is a perfect 1:1 with normal videos that have a duration, but then, streaming videos aren't really the same, so it doesn't really make sense to treat them the same.
Option #1: Use an estimated duration for your streaming video. You said it yourself: your streaming videos have no end. Well, as I mentioned, you can't calculate percent viewed unless you have a duration; that's pretty basic math. So, estimate a duration.
I have clients that run streaming webinars and the like, and while it's true that there's technically no duration according to the player, in reality they don't conduct that webinar 24/7 forever. It runs for a set amount of time, like 30 minutes or an hour. So, just specify the duration as that.
Yes, this will require extra custom work on your end to store/associate an estimated duration. And yes, this does have the potential for being misleading (e.g. if a webinar ends early or runs late). This option is generally good for sites that have set windows for the stream to actually be active.
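Something like this, roughly (an untested sketch, not your code: "Live Webinar", "CustomPlayer", and the 3600-second estimate are placeholders, and you should double-check against the Media module docs for your version that milestone hits actually fire between play() and stop() with your player):
s.loadModule("Media");
s.Media.autoTrack = false;
s.Media.trackMilestones = "25,50";
// Placeholder: treat the "endless" stream as a 60-minute (3600-second) video
var estimatedLength = 3600;
var videoName = "Live Webinar"; // placeholder name
var secondsWatched = 0; // keep this updated yourself as the stream plays
s.Media.open(videoName, estimatedLength, "CustomPlayer");
s.Media.play(videoName, 0);
// ...later, when the viewer leaves or the stream window closes:
s.Media.stop(videoName, secondsWatched);
s.Media.close(videoName);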
Option #2: Ditch the notion of % viewed and record time consumed instead. The overall point of the milestones is to know how much of a video was actually watched, yes? Well, who said it has to be measured by % viewed?
How about instead you just record n seconds consumed every n seconds? You can do this with an incrementor eVar and/or a counter event. (Part of the normal video tracking actually does include a counter event, "Video Time", a.k.a. a.media.timePlayed.)
So basically, you'd just pop the events/props/eVars yourself and ignore the milestone/segment reports.
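For example, something along these lines (a rough, untested sketch that deliberately sidesteps the Media module; event5 and eVar10 are just placeholder variable numbers, and it assumes event5 is set up to accept a numeric value, otherwise send it bare and count the pings):
// Placeholders: eVar10 = stream name, event5 = "Seconds Consumed" counter event
var VIDEO_NAME = "Live Stream - Main Channel";
var PING_SECONDS = 60;
var streamPing = setInterval(function () {
  s.linkTrackVars = "eVar10,events";
  s.linkTrackEvents = "event5";
  s.eVar10 = VIDEO_NAME;
  s.events = "event5=" + PING_SECONDS; // credit 60 "seconds consumed" per ping
  s.tl(true, "o", "Stream Heartbeat"); // custom link call, no extra page view
}, PING_SECONDS * 1000);
// Remember to clearInterval(streamPing) when playback pauses or the viewer leaves.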
Note: This option only really works if you are using the older-style video tracking that has events/props/eVars assigned to it. If you are using the newer-style video tracking that does not use events/props/eVars, well, AA does not currently offer an official way to manually pop that stuff directly. It is surely possible to do so unofficially, but I have not yet reverse engineered the latest Media module to figure out how. So, in that case your only option is #1.

Related

Continuous data streaming from NFC to iPhone in Swift?

I have an NFC tag that has integrated environmental sensors inside (MLX90129, to be exact). I would like to make an iPhone app that can read the real-time data from the tag multiple times per second and graph it. I'm not looking for background tag reading, and you can assume that the app will be open and the phone is near the tag at all times.
From what I can see on Apple documentation and other sources, the Swift support for NFC tags is mostly built for single session interrogation. Has anyone succeeded in getting continuous and repeated NFC tag reading for this type of purpose?
As you pointed out, making "continuous and repeated NFC readings" is not the intended functionality.
While I think you can sort this out, there's another thing that could be a headache: making multiple readings per second runs directly up against the current implementation of NFC tag reading in iOS.
Every time you start a reading, iOS shows the native sheet informing the user that an NFC reading is in progress. Part of this process is user interaction, and it is exactly that part that imposes a time constraint. Even when interaction with the user is not needed, there is an animation, and that animation has its own lifecycle events (start reading, reading, OK, KO, close...).
AFAIK you can't bypass that animation, which can easily eat a couple of seconds even in the best case.
With that said, there are a few things to keep in mind if you still want to try:
NFCTagReaderSession can only have one active reading at a time, and when that reading ends (OK/KO), it should be invalidated. So if you want to make another reading, you'll need to create and configure a new instance.

I think I abuse Resources.Load

OK, so I'm a bit confused about Resources.Load. I actually use it quite a lot, and everyone seems to see this feature as pure evil. In this documentation, it's even written: "Don't use it." I searched a lot about this and found this post. It mostly says to use Resources.Load only for rare assets; otherwise, performance could/will be harmed.
I can see why this could be a "bad" thing to use, but honestly, I don't know how not to use this in my situation.
Let's say I have a game with ~10 different races and a couple of units per race. The user chooses their race and starts the game. At that point, it seems normal to me to Resources.Load only the assets related to that specific race, and not the other ones...
Also, let's say you have a combat scene with many possible environments (e.g. winter, forest, desert, etc.). Again, I wouldn't want to load anything other than the one I'm fighting in. So Resources.Load seems like the perfect tool, no? Am I missing something important about Unity?
Thanks a lot
It's true that Unity loads everything it sees connected to things in the Inspector in the scene, and you have no way to stop that loading once you are in the scene. (You can unload later, but by then it has already paid the cost of loading them all.) The performance harm Unity's documentation refers to seems to mean harm while playing: if you connect assets to the scene, everything is loaded at startup and the game plays smoothly from then on, whereas if you load dynamically you risk hitches while playing.
Don't use it.
This strong recommendation is made for several reasons:
Use of the Resources folder makes fine-grained memory management more difficult.
It's difficult but not impossible. If you are careful on your own, you can reap the reward of lower memory consumption.
Improper use of Resources folders will increase application startup time and the length of builds. As the number of Resources folders increases, management of the Assets within those folders becomes very difficult.
That can't be helped, but weighed against the load time you save at scene start, the increased startup time is probably worth it. Most players won't mind the startup time, in my opinion.
The Resources system degrades a project's ability to deliver custom content to specific platforms and eliminates the possibility of incremental content upgrades. AssetBundle Variants are Unity's primary tool for adjusting content on a per-device basis.
Then only put things that work universally in the Resources folder.
A more modern alternative is to compose your game from scenes and use LoadSceneMode.Additive to bring in what you want one piece at a time. That is suitable for big chunks like a combat scene, but for lazy loading of something small in concept (though potentially containing large data like textures), such as characters, I would still use Resources.Load. The only asset type with built-in delayed loading is AudioClip, where you can deselect "Preload Audio Data".
I wrote up the detailed load process and its memory consumption here, if you are interested in reading:
https://gametorrahod.com/unity-texture-memory-loading-unloading-7054819e4ae8

Determining Network Quality on the iPhone

I have looked at several variations on the Reachability example, such as the Donoho change and Erica Sadun's UIApplication extension, but none of these allow you to determine the quality of your 3G connection.
Is there a programmatic way to see signal strength and link quality?
I think you need to decide exactly what you mean by quality and also understand that it constantly changes.
The only really accurate measure is, unfortunately, historical - i.e. you can do a download or upload test and measure the time it took along with any packet loss, jitter, delay, etc., and this will tell you what the quality was when your test was run.
The reason I say it is historical is that this does not guarantee the connection will stay like that for any length of time - for example, you may move between cells (or rooms, in the case of WiFi), or several other users in your area may start using the bandwidth heavily.
It may be that a simple download or upload test is sufficient for your purposes to (I am guessing...) decide whether you want to run your application in a certain way, and then you can build further checks into the application itself to see if you need to adapt to a change in the network (e.g. you could trigger on the time a particular application message transaction takes to complete).

Is HTTP Streaming with the iPhone buggy?

I am attempting to stream video using Apple's HTTP streaming technology. I am beginning to suspect that either the player on the iPhone or the Apple tools used to segment the videos is buggy.
http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
I am getting really terrible behavior. The app never seems to do a good job of choosing which quality stream to use. It always starts at the lowest quality and often jumps to the highest very suddenly and can't keep up. I have tried various ways of altering the bandwidth settings to test it.
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=5000
3/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=10000
4/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=459319
5/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=90268800
I have used very large and very small settings to make certain streams the obvious choice, but it doesn't matter. Obviously I have also used the default values set by Apple's variantplaylistcreator tool. It always starts at the lowest quality and then jumps to seemingly random other qualities.
Anyone know what's going on with this?
Have you tried the sample reference streams provided at the bottom of the page here? Apple tests against these, so if it works there, you know it's on your end.

Ideas for designing a Secure, "Low Cost" method for confirming client-side game results

This is more a system design question/challenge than a coding question.
Basically, I'm thinking of throwing together a Bejeweled-esque game on Facebook using just HTML, CSS, and JavaScript. This is mostly out of a desire to learn all the little caveats of FBJS via a non-trivial project.
So here's the deal. When developing for Facebook, actual API calls are very expensive; not only is there an additional POST to the Facebook servers, there's also the api call limit and throttling to worry about. In a nutshell, the fewer calls to Facebook the better. Combine this with the timing concerns of even this simple puzzle game, and there's good reason to aggressively minimize the number of callbacks in general.
Not being a security expert, here's the design I've come up with:
Embed a random seed in the game page.
Use that seed to create the game board (As well as additional pieces as needed).
Tweak the seed (xor, concatenate and hash, something like that) after each player move, based on time since last move. Edit: I should probably also include the actual move taken in mutating the seed.
Upon game completion, post back the following: game start time, each move taken and when, and the client-side results.
On the server, re-run the game with the given data, sanity-checking the start time and move times, and then confirm that the results match. (A rough sketch of this seed-and-replay idea is below.)
To mitigate denial of service, the game itself will be tweaked to have a win by turn X condition.
To discourage the server being used as an "oracle" of sorts, a user posting back an invalid game will be banned for some constant time X (X being on the order of minutes).
This design requires three Facebook calls per game played: one to store the random seed before the game is played, one to fetch it after the game is finished, and one to update the player's score if the game is valid.
What I'm trying to protect the system against is straight-up score spoofing (http://...?myscore=999999999, or similar). I'd also like to mitigate "look ahead" attacks, wherein the user can tell what pieces are coming to the board next. Denial-of-service attacks on the hosting server (intentional or otherwise) should also be prevented.
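To make this concrete, here's a rough, illustrative sketch of what I have in mind. It's only a stand-in: mulberry32 is just a tiny seedable PRNG playing the role of "use the seed to create the board", the integer mixing in tweakSeed stands in for the real "xor, concatenate and hash" step (it is not a cryptographic hash), and applyMove is a placeholder for the actual game rules, which aren't shown.
// Tiny seedable PRNG so client and server can rebuild the same board from the same 32-bit seed
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    var t = Math.imul(seed ^ (seed >>> 15), seed | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
// Mix the move taken and the time since the previous move into the seed.
// Integer mixing stands in for a real hash here; use a proper one in practice.
function tweakSeed(seed, move, msSinceLastMove) {
  var mixed = (seed ^ Math.imul(move, 2654435761) ^ msSinceLastMove) >>> 0;
  return Math.imul(mixed, 2246822519) >>> 0;
}
// Server-side replay: rebuild the board from the stored seed, apply each reported move,
// sanity-check the timing, and compare the recomputed score with the client's claim.
function verifyGame(storedSeed, moves, claimedScore) {
  var seed = storedSeed;
  var score = 0;
  for (var i = 0; i < moves.length; i++) {
    var m = moves[i]; // { move: <encoded swap>, dt: <ms since previous move> }
    if (m.dt < 50 || m.dt > 30000) return false; // reject implausible timing
    var rand = mulberry32(seed);
    score += applyMove(rand, m.move); // applyMove = your game rules (placeholder, not shown)
    seed = tweakSeed(seed, m.move, m.dt);
  }
  return score === claimedScore;
}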
The actual question: can anyone see a flaw in this design? Equivalently, is there a simpler design that meets my criteria?
Note: I am aware how unnecessary this probably is, but it's an interesting question nonetheless.
I'm going to try to throw some numbers up here to further illustrate my reasoning; these are pretty rough, but I hope helpful.
Assuming a 10x10 game board, there are ~200 potential moves (swapping two adjacent pieces), most of which are invalid. Let's say there are on average 5 valid moves per "turn". If we constrain player actions to a window of 50 to 30,000 milliseconds, there are 149,750 potential new hashes, provided the "tweaking" algorithm doesn't discard bits; I feel confident in saying there are at least 10,000 potential new hashes which must be calculated by an attacker, assuming a cryptographically secure hash is used. If you throw a min-max algorithm at this, the decision tree explodes very quickly. Add a game session expiration, say 30 minutes, and I believe the attack becomes equivalent in complexity to writing a little bot program to play for you, which cannot reasonably be defended against.
If the client code calculates the next piece and you can't hide that algorithm very well, then some bored college student will figure it out. As a result, they will be able to generate a massive score and defeat your intentions.
I tend to say that it is impossible to do. Why? You cannot trust the client - I could just analyse and completely rewrite the client-side code and return whatever values I like. The only way to protect against cheating and all kinds of attacks is to perform the logic on the server - the client just collects user input and displays the server output. But this is completely against your design goal of minimizing the number of server calls.