Is there a way to know the EEPROM index of the chip in onGetResult() when requesting the EEPROM chip's information?

When requesting an EEPROM chip's information with
asyncGet(WB_RES::LOCAL::COMPONENT_EEPROM_EEPROMINDEX_INFO(), AsyncRequestOptions::Empty, eepromIndex);
in the WB_RES::EepromInfo value received in onGetResult(), I couldn't find any reference to which chip (0 or 1) the information belongs to. How can I know that in the application?
I would like to get the info for both EEPROM chips (0 and 1), together with their indices, into the application and then adapt the reading and writing methods to their sizes.

The easiest way is to store the chip index in your class when making the query and use that information in onGetResult(). The Whiteboard does encode the query parameters into the ResourceID, but extracting their values afterwards is quite complex and probably not worth the effort.
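For illustration, here is a minimal sketch of that idea, assuming you query one chip at a time. The class and member names are made up, the constructor and module boilerplate are omitted, and the onGetResult signature follows the Movesense sample services, so adapt it to your own client:

class EepromInfoClient : private whiteboard::ResourceClient
{
private:
    uint8_t mQueriedEepromIndex; // chip index of the query currently in flight

    void requestEepromInfo(uint8_t eepromIndex)
    {
        // Remember which chip we are asking about before issuing the query.
        mQueriedEepromIndex = eepromIndex;
        asyncGet(WB_RES::LOCAL::COMPONENT_EEPROM_EEPROMINDEX_INFO(),
                 AsyncRequestOptions::Empty, eepromIndex);
    }

protected:
    void onGetResult(whiteboard::RequestId requestId, whiteboard::ResourceId resourceId,
                     whiteboard::Result resultCode, const whiteboard::Value& result)
    {
        const WB_RES::EepromInfo& info = result.convertTo<const WB_RES::EepromInfo&>();
        // mQueriedEepromIndex tells you whether this info belongs to chip 0 or chip 1.
        // Once handled, issue the query for the other chip, e.g. requestEepromInfo(1).
    }
};

Chaining the queries (request chip 0, then request chip 1 from the result handler) keeps the bookkeeping to a single member.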
Full disclosure: I work for the Movesense team


Firebase analytics - Unity - time spent on a level

Is there any possibility to get the exact time spent on a certain level in a game via Firebase Analytics? Thank you so much 🙏
I tried to use logEvents.
The best way to do this is to measure the time on the level within your codebase, then have a dedicated event for level completion in which you pass the time spent on the level.
Let's get into the details. I will use Kotlin as an example, but it should be obvious what I'm doing here, and you can see more language examples here.
firebaseAnalytics.setUserProperty("user_id", userId)
firebaseAnalytics.logEvent("level_completed") {
    param("name", levelName)
    param("difficulty", difficulty)
    param("subscription_status", subscriptionStatus)
    param("minutes", minutesSpentOnLevel)
    param("score", score)
}
Now see how I have a bunch of parameters on the event? These parameters are important since they will allow you to conduct a more thorough and robust analysis later on and answer more questions. Like: what is the most difficult level? Do people still have trouble with it when the game difficulty is lower? How many times has this level been rage-quit or lost (for that you'd likely need a level_started event)? What about our paying players, are they having similar trouble on this level as well? How many people have rage-quit the game on this level and never played again? That last one would likely be easier to answer with SQL at this point, taking the latest value of the level name from level_started, grouped by user_id. Or you could also have levelName as a user property as well as an event property; then it would be fairly trivial to answer in the default analytics interface.
Note that you're limited in the number of event parameters you can send per event. The total number of unique parameter names is limited too, as is the number of unique event names you're allowed to have. In our case, the event name would be level_completed. See the limits here.
Because of those limitations, it's important to name your event properties in a somewhat generic way so that you can efficiently reuse them elsewhere. For this reason, I named the parameter minutes and not something like minutes_spent_on_the_level. You could then reuse this property to send the minutes the player spent actively playing, the minutes spent idling, the minutes spent on any info page, the minutes spent choosing upgrades, etc. The same idea applies to having a name property rather than level_name; it could just as well be id.
You need to stuff your events with event properties carefully and thoughtfully. I normally have a wrapper around the Firebase SDK in which I enrich events with dimensions that I always want to be there, like user_id or subscription_status, so I don't have to add them manually every time I send an event. I also usually have some more adequate logging there, since Firebase Analytics' default logging is completely awful. I also do some sanitizing there: lowercasing all values unless I'm passing something case-sensitive like base64 values, making sure I don't have double spaces (replacing \s+ with a single space), and maybe also adding the user's local timestamp as another parameter. The latter is very helpful for spotting time-cheating users, especially if your game is an idler.
Good. We're halfway there :) Bear with me.
Now you need to go to Firebase and register your eps (event parameters) as cds (custom dimensions and metrics). If you don't register your eps, you won't be able to use them in reports; only registered parameters count towards the global cd limit (it's about 50 custom dimensions and 50 custom metrics). You register the cds in the Custom Definitions section of FB.
Now you need to know whether each one is a dimension or a metric, as well as the scope of your dimension. It's much easier than it sounds. The rule of thumb is: if you want to be able to run mathematical aggregation functions on it, it's a metric. Otherwise, it's a dimension. So:
firebaseAnalytics.setUserProperty("user_id", userId) <-- dimension
param("name", levelName) <-- dimension
param("difficulty", difficulty) <-- dimension (or can be a metric, depends)
param("subscription_status", subscriptionStatus) <-- dimension (can be a metric too, but even less likely)
param("minutes", minutesSpentOnLevel) <-- metric
param("score", score) <-- metric
Another important thing to understand is the scope. Because Firebase and GA4 are essentially still in beta and being actively worked on, you only have user or hit scope for dimensions and only hit scope for metrics. The scope basically indicates how the value persists. In my example, we only need user_id as a user-scoped cd. Because user_id is a user-level dimension, it is set separately from the logEvent function, although I suspect you can do it there too. Haven't tried, though.
Now, we're almost there.
Finally, you don't want to use Firebase to look at your data. It's horrible at data presentation. It's good at debugging, though, because that's what it was intended for initially. Because of how horrible it is, it's always advised to link it to GA4, which will let you look at the Firebase values much more efficiently. Note that you will likely need to re-register your custom dimensions from Firebase in GA4, because GA4 can receive multiple data streams, of which Firebase is just one data source. GA4's cd limits are very close to Firebase's, though. OK, let's be frank: GA4's data model is almost exactly copied from Firebase's, but GA4 has much better analytics capabilities.
Good, you've moved to GA4. Now, GA4, like Firebase Analytics, is a very raw, not-officially-beta product. Because of that, it's advised to first change your data retention to 12 months and to only use the Explorer for analysis, pretty much ignoring the pre-generated reports. They are just not very reliable at this point.
Finally, you may find it easier to just use SQL to get your analysis done. For that, you can easily copy your data from GA4 to a sandbox instance of BigQuery. It's very easy to do, and it's the best, most reliable known method of using GA4 at the moment. Advanced analysts do the export into BQ, then ETL the data from BQ into a proper storage like Snowflake, or even S3 or Aurora, or whatever you prefer, and then on top of that use a proper BI tool like Looker, Power BI, Tableau, etc. A lot of people just stay in BQ though, and that's fine. Lots of BI tools have BQ connectors; it's just that BQ gets expensive quickly if you do a lot of analysis.
Whew, I hope you'll enjoy analyzing your game's data. Data-driven decisions rock in games. Well... They rock everywhere, to be honest.

How to identify values from characteristic data for a Walk/Run Stride Bluetooth sensor

I am developing a workout app with sensor connectivity. I am able to read and get data from a heart rate sensor, but for stride sensors (walk/run) I am having problems mapping the values given by the sensor characteristic.
How do I get speed, cadence, steps per minute, and distance?
I searched on Google but didn't find anything about this.
I am pretty sure I am getting the data, but I have difficulty mapping the indices and values for the different parameters.
Check the attached code snapshot for the data output.
Thanks in Advance...!!
This question should not be tagged cadence, unless I'm missing something? That tag is for "A global provider of Electronic Design Automation (EDA) software and engineering services."
PS: I can't comment yet, please resolve this and delete this answer.

How can I limit the number of blocks written in a Write_10 command?

I have a product that is basically a USB flash drive based on an NXP LPC18xx microcontroller. I'm using a library provided by the manufacturer (LPCOpen) that handles the USB MSC and the SD card media (which is where I store data).
Here is the problem: internally, the LPC18xx has a 64 kB buffer (limited by hardware) used to cache reads/writes, which means it can only cache up to 128 blocks (512 B each) of memory. The SCSI WRITE(10) command has a total-blocks field that can be up to 256 blocks (128 kB). When originally testing the product on Windows 7, it never wrote more than 128 blocks at a time, but when tested on Linux it sometimes writes more than 128 blocks, which causes the microcontroller to crash.
Is there a way to tell the host OS not to request more than 128 blocks? I see references [1] to a READ BLOCK LIMITS command (05h), but it doesn't seem to be widely supported. Also, what sense key should I return on the WRITE(10) command to tell Linux the write is too large? I also see references to a Block Limits VPD page in some device spec sheets, but cannot find much documentation about how it is implemented.
[1] https://en.wikipedia.org/wiki/SCSI_command
Let me offer a disclaimer up front: this is what you SHOULD do, but none of it may work. A cursory search of the Linux SCSI driver didn't show me what I wanted to see, so I'm not at all sure that "doing the right thing" will get you the results you want.
Going by the book, you've got to do two things: implement the Block Limits VPD page and handle too-large transfer sizes in READ and WRITE.
First, implement the Block Limits VPD page, which you can find in late revisions of SBC-3 floating around on the Internet (like this one: http://www.13thmonkey.org/documentation/SCSI/sbc3r25.pdf). It's probably worth going to the t10.org site, registering, and then downloading the last revision (http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r36.pdf).
The Block Limits VPD page has a maximum transfer length field that specifies the maximum number of blocks that can be transferred by all the READ and WRITE commands, and basically anything else that reads or writes data. Of course the downside of implementing this page is that you have to make sure that all the other fields you return are correct!
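To make that concrete, here is a rough sketch of such a page limited to 128 blocks. The field offsets are my reading of SBC-3, so verify them against the revision you implement; the fields left at zero report no limit, so check that that is acceptable for your device.

#include <stdint.h>
#include <string.h>

/* Sketch: a Block Limits VPD page (page code B0h) advertising a 128-block
 * maximum transfer length. Offsets per SBC-3 as I read it; double-check. */
static void build_block_limits_vpd(uint8_t *buf) /* buf must hold at least 64 bytes */
{
    const uint32_t max_blocks = 128; /* 128 * 512 B = 64 kB hardware buffer */

    memset(buf, 0, 64);
    buf[0] = 0x00; /* peripheral qualifier / device type: direct-access block device */
    buf[1] = 0xB0; /* page code: Block Limits */
    buf[2] = 0x00;
    buf[3] = 0x3C; /* page length */
    /* MAXIMUM TRANSFER LENGTH, bytes 8..11, big-endian, in logical blocks */
    buf[8]  = (uint8_t)(max_blocks >> 24);
    buf[9]  = (uint8_t)(max_blocks >> 16);
    buf[10] = (uint8_t)(max_blocks >> 8);
    buf[11] = (uint8_t)(max_blocks >> 0);
}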
Second, when handling READ and WRITE, if the command's transfer length exceeds your maximum, respond with an ILLEGAL REQUEST key, and set the additional sense code to INVALID FIELD IN CDB. This behavior is indicated by a table in the section that describes the Block Limits VPD, but only in late revisions of SBC-3 (I'm looking at 35h).
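A sketch of that check for WRITE(10) follows. The sense key and additional sense code values are the standard ones, but how the sense data actually gets returned to the host depends on your LPCOpen MSC glue, so the struct here is only illustrative.

#include <stdbool.h>
#include <stdint.h>

/* The pieces of fixed-format sense data we need to report. How they are handed
 * back to the host depends on the LPCOpen MSC callbacks. */
struct sense_data {
    uint8_t key;  /* sense key */
    uint8_t asc;  /* additional sense code */
    uint8_t ascq; /* additional sense code qualifier */
};

/* Returns true if the WRITE(10) fits the 128-block buffer; otherwise fills
 * *sense for a CHECK CONDITION response and returns false. */
static bool check_write10_length(const uint8_t *cdb, struct sense_data *sense)
{
    /* WRITE(10): TRANSFER LENGTH is a big-endian 16-bit field in CDB bytes 7..8 */
    uint16_t transfer_blocks = (uint16_t)((cdb[7] << 8) | cdb[8]);
    if (transfer_blocks > 128) {
        sense->key  = 0x05; /* ILLEGAL REQUEST */
        sense->asc  = 0x24; /* INVALID FIELD IN CDB */
        sense->ascq = 0x00;
        return false;
    }
    return true;
}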
You might just start with returning INVALID FIELD IN CDB, since it's the easiest course of action. See if that's enough?

Caching Strategy for location requests

I am building REST APIs that return data (let's say events) in a particular area. The REST URL is a simple GET:
/api/v1/events?lat=<lat>&lng=<lng>&radius=<radius>
The parameters are lat, lng, and radius (10 miles by default); the latitude and longitude are what the device or browser APIs return. Needless to say, the lat and lng change continuously as the user moves, and two users can be in the same vicinity with different lat/lng values. What is the best way to cache this kind of request on the server so that I don't have to dip into the business logic every time? The URL is not going to be unique, since lat/lng change.
Thanks
I'm assuming you have some sort of "grid", and when a user requests a specific coordinate, you return the grid tile(s) around the location. So you have an infinite URL space (coordinates) that is mapped to a finite number of tiles. One solution is to redirect every request to the "canonical", cache friendly URL for that tile, e.g.
GET /api/v1/events?lat=123&lng=456
=>
302 Found
Location: /api/v1/events?tile=abc
Or, if you want to retain the lat/long info in the URL, you could use the location of the center of the tile.
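To illustrate the mapping (the tile size and key format here are arbitrary choices for the sketch, not something any particular framework prescribes), the canonicalisation step can be as simple as snapping the coordinates to a grid cell and using that cell as the cache key:

#include <cmath>
#include <string>

// Sketch: snap an arbitrary coordinate to the tile that contains it and use
// the tile id as the cache key / canonical URL parameter.
std::string tileKey(double lat, double lng, double tileSizeDeg = 0.1)
{
    const long latCell = static_cast<long>(std::floor(lat / tileSizeDeg));
    const long lngCell = static_cast<long>(std::floor(lng / tileSizeDeg));
    // Every coordinate inside the same tile maps to the same key, so the
    // cache sees a finite, repeatable set of URLs.
    return std::to_string(latCell) + ":" + std::to_string(lngCell);
}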
I think the best approach is for you to store the results in a cache with the center coordinates as a key, and later query the points within the circle for the new request.
I'm not aware of any cache engines that would allow you to perform spatial queries, so I think you'll have to use a database that allows easy querying and indexing of spatial data. You may use that database for caching your results, or at least store a key to that result in a cache engine somewhere else, and later you can query them with spatial coordinates, asking for all points with a threshold distance to your new request.
There's PostGIS for PostgreSQL, which should be quite straightforward since it has full support for latitude/longitude distance computations. Once you have it set up with proper indexes, it should be as easy as:
SELECT * FROM your_cache_table
WHERE ST_Distance_Sphere(the_geom, ST_MakePoint(new_lon, new_lat)) <= 160.934
MySQL has some support for the OpenGIS extensions, but it doesn't have support for latitude/longitude distance computations. You may need to do some calculations yourself, or maybe a simple Cartesian distance works for you. Check the documentation here; this answer should also help.
I also believe that even MySQL 5.6 supports spatial indexes only in MyISAM tables, but that shouldn't be an issue since you're using them only for caching.
Managing the cache may be a little more complicated than usual. If you need expiration, you should probably store only keys in the database and set an expire parameter on the cache server. When you hit a database point for which there's no longer a valid key, you clean it from the database. You'll probably need a way to invalidate cache when the primary data changes, removing from both the database and the cache server.
I have been data modelling a hobby application that also needs to deal with geolocation data and have had to try to solve similar problems. The solution will of course depend on the constraints that you have and the actual use cases that are crucial to your application's purpose. I will assume that you have design flexibility to change all aspects of your application. i.e. the technology stack.
note:
... so that I don't have to dip into business logic everytime. The URL is not
going to unique since lat/lng change.
The above is an ambiguous statement, since not dipping into business logic can mean many things. I will assume it means you don't want to make any database queries to retrieve new data that might be similar to the data you already have in the cache. Also assuming that you only want to cache on the server, the following approaches come to mind:
Cache the data between the database and the application.
Database ---> Cache --> App ---> User
In this approach, your application processes all the REST API calls and then decides whether the results in the cache can be used or whether another database access is required.
Cache the data between the application and the user.
Database --> App ---> Cache ---> User
This approach is a little tricky, considering that the URL is always changing. So you might need a 'smart caching mechanism' that processes the incoming URL and then decides whether the cached data is relevant. The smart caching can be done in a number of ways depending on how your application is implemented. Something like MongoDB could be used as a JSON cache, and each incoming request could be preprocessed to see if the data can just be returned from this cache or should be redirected to the main application. So the structure might look like this:
Database --> App ---> (Some logic(e.g NodeJS app) + Mongodb) ---> User
conclusion:
Without knowing the architecture of your solution, the critical use cases and the full design constraints, one cannot really suggest a complete solution to this problem. You might have to rethink certain features you are trying to provide and make tough compromises to get things working. Hopefully, the suggestions provided by different people here will be helpful.

Suitable NXP card for MUSCLE applet

I'm checking the JCOP product range for a version suitable for the MUSCLE applet.
The CAP file size of this applet is 14 KB.
Which versions can be used for this applet?
http://www.nxp.com/documents/line_card/75016728.pdf
Which parameter should I check for this? EEPROM or ROM?
This is slightly off topic, and it's probably not such a good idea to refer to a specific product, as products are removed and added over time.
There is the .cap file size, but note that the installed applet will take a bit less. You should, however, consider that the .cap file size does not take into account the memory required for installation and personalization.
You obviously need the asymmetric co-processor but you may not need contactless, so check for that.
Unless you want to pay for creating a ROM mask you should be looking at EEPROM or Flash (where available).
I'll quote Wikipedia:
However, the one-time masking cost is high and there is a long turn-around time from design to product phase. Design errors are costly: if an error in the data or code is found, the mask ROM is useless and must be replaced in order to change the code or data.
Start thinking about ROM after you've sold a few hundred thousand units :)