UE4 Increasing NAME_SIZE limit - unreal-engine4

I am using the Cesium plugin with Unreal Engine 4. I have to load a tileset into a Cesium3DTileset, and the tileset is in a private bucket. To access the objects in the private bucket, I am using a CloudFront signed cookie (since Cesium3DTileset needs to access other helper files). However, since there is no way to send the cookie, I am sending it as a query parameter attached to the URL. This makes the URL very long and triggers the error: FName's 1023 max length exceeded. Got 1405 characters excluding null-terminator.
So I wonder if there is a better or alternative way to achieve my goal. One way I can think of is increasing NAME_SIZE from 1024 to some higher value, but I am not sure how to do that, or whether it is even possible.

I'd consider this a bug in Cesium for Unreal, and I filed an issue for it here:
https://github.com/CesiumGS/cesium-unreal/issues/617

Related

Firebase analytics - Unity - time spent on a level

Is there any possibility to get the exact time spent on a certain level in a game via Firebase Analytics? Thank you so much 🙏
I tried to use logEvent.
The best way to do so would be to measure the time spent on the level within your codebase, then send a dedicated event for level completion in which you pass the time spent on the level.
Let's get into the details. I will use Kotlin as an example, but it should be obvious what I'm doing here, and you can see more language examples here.
firebaseAnalytics.setUserProperty("user_id", userId)
firebaseAnalytics.logEvent("level_completed") {
    param("name", levelName)
    param("difficulty", difficulty)
    param("subscription_status", subscriptionStatus)
    param("minutes", minutesSpentOnLevel)
    param("score", score)
}
Now see how I have a bunch of parameters with the event? These parameters are important since they will allow you to conduct a more thorough and robust analysis later on and answer more questions. Like: Hey, what is the most difficult level? Do people still have trouble on it when the game difficulty is lower? How many times has this level been rage-quit or lost (for that you'd likely need a level_started event)? What about our paid players, are they having similar trouble on this level as well? How many people have rage-quit the game on this level and never played again? That last one would likely be easier to answer with SQL at this point, taking the latest value of the level name for level_started, grouped by user_id. Or you could have levelName as a UserProperty as well as an EventProperty; then it would be somewhat trivial to answer in the default analytics interface.
Note that you're limited in the number of event parameters you can send per event. The total number of unique parameter names is limited too, as is the number of unique event names you're allowed to have. In our case, the event name would be level_completed. See the limits here.
Because of those limitations, it's important to name your event properties in a somewhat generic way so that you can efficiently reuse them elsewhere. For this reason, I named the parameter minutes and not something like minutes_spent_on_the_level. You could then reuse this property to send the minutes the player spent actively playing, the minutes spent idling, the minutes spent on any info page, the minutes spent choosing upgrades, etc. The same idea applies to having a name property rather than level_name; it could just as well be id.
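For example, the same generic minutes parameter could back a completely different event. A quick hedged sketch in the same Kotlin KTX style (the idle_ended event name and the minutesSpentIdling variable are hypothetical examples, not anything Firebase defines; minutes is assumed to be a numeric value):
firebaseAnalytics.logEvent("idle_ended") {
    // Reusing the generic "minutes" parameter instead of inventing "minutes_spent_idling".
    param("minutes", minutesSpentIdling)
}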
You need to carefully and thoughtfully stuff your event with event properties. I normally have a wrapper around the Firebase SDK, in which I enrich events with dimensions that I always want to be there, like user_id or subscription_status, so that I don't have to add them manually every time I send an event. I also usually add some more adequate logging there, since Firebase Analytics' default logging is completely awful. I also do some sanitizing there: lowercasing all values unless I'm passing something case-sensitive like base64 values, making sure I don't have double spaces (so replacing \s+ with " " (a single space)), and maybe also adding the user's local timestamp as another parameter. The latter is very helpful for spotting time-cheating users, especially if your game is an idler.
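As a rough illustration only, here is a minimal sketch of what such a wrapper might look like (the AnalyticsTracker class and its parameter handling are my own assumptions, not part of the Firebase SDK):
import android.os.Bundle
import android.util.Log
import com.google.firebase.analytics.FirebaseAnalytics

class AnalyticsTracker(
    private val firebaseAnalytics: FirebaseAnalytics,
    private val userId: String,
    private val subscriptionStatus: String
) {
    fun logEvent(name: String, params: Map<String, Any>) {
        val bundle = Bundle()
        // Enrich every event with the dimensions we always want to have.
        bundle.putString("user_id", userId)
        bundle.putString("subscription_status", subscriptionStatus)
        // The user's local timestamp helps spot time-cheating users.
        bundle.putLong("client_timestamp", System.currentTimeMillis())
        for ((key, value) in params) {
            when (value) {
                is String -> bundle.putString(key, sanitize(value))
                is Long -> bundle.putLong(key, value)
                is Int -> bundle.putLong(key, value.toLong())
                is Double -> bundle.putDouble(key, value)
            }
        }
        // More readable logging than the SDK default.
        Log.d("Analytics", "event=$name params=$bundle")
        firebaseAnalytics.logEvent(name, bundle)
    }

    private fun sanitize(value: String): String =
        value.lowercase().replace(Regex("\\s+"), " ")
}
Usage would then be something like tracker.logEvent("level_completed", mapOf("name" to levelName, "minutes" to minutesSpentOnLevel)).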
Good. We're halfway there :) Bear with me.
Now you need to go to Firebase and register your eps (event parameters) as cds (custom dimensions and metrics). Unregistered eps don't count towards the global cd limit (it's about 50 custom dimensions and 50 custom metrics), but they also won't be usable in your reports until you register them. You register the cds in the Custom Definitions section of Firebase.
Now you need to know whether each one is a dimension or a metric, as well as the scope of your dimension. It's much easier than it sounds. The rule of thumb is: if you want to be able to run mathematical aggregation functions (sums, averages, etc.) on the value, then it's a metric; otherwise it's a dimension. So:
firebaseAnalytics.setUserProperty("user_id", userId) <-- dimension
param("name", levelName) <-- dimension
param("difficulty", difficulty) <-- dimension (or can be a metric, depends)
param("subscription_status", subscriptionStatus) <-- dimension (can be a metric too, but even less likely)
param("minutes", minutesSpentOnLevel) <-- metric
param("score", score) <-- metric
Now another important thing to understand is the scope. Because Firebase and GA4 are still essentially in beta and being actively worked on, you only have user or hit scope for the dimensions and only hit scope for the metrics. The scope basically indicates how the value persists. In my example, we only need user_id as a user-scoped cd. Because user_id is a user-level dimension, it is set separately from the logEvent function, although I suspect you can do it there too. Haven't tried, though.
Now, we're almost there.
Finally, you don't want to use Firebase to look at your data. It's horrible at data presentation. It's good at debugging though, because that's what it was intended for initially. Because of how horrible it is, it's always advised to link it to GA4. GA4 will allow you to look at the Firebase values much more efficiently. Note that you will likely need to re-register your custom dimensions from Firebase in GA4, because GA4 is capable of receiving multiple data streams, of which Firebase would be just one data source. But GA4's cd limits are very close to Firebase's. OK, let's be frank: GA4's data model is almost exactly copied from Firebase's, but GA4 has much better analytics capabilities.
Good, you've moved to GA4. Now, GA4 is a very raw, not-officially-beta product, just like Firebase Analytics. Because of that, it's advised to first change your data retention to 12 months and to only use the Explorer for analysis, pretty much ignoring the pre-generated reports. They are just not very reliable at this point.
Finally, you may find it easier to just use SQL to get your analysis done. For that, you can easily copy your data from GA4 to a sandbox instance of BigQuery (BQ). It's very easy to do, and this is the best, most reliable known method of using GA4 at the moment. Advanced analysts do the export into BQ, then ETL the data from BQ into proper storage like Snowflake, or even S3, or Aurora, or whatever you prefer, and then on top of that use a proper BI tool like Looker, Power BI, Tableau, etc. A lot of people just stay in BQ though, which is fine. Lots of BI tools have BQ connectors; it's just that BQ gets expensive quickly if you do a lot of analysis.
Whew, I hope you'll enjoy analyzing your game's data. Data-driven decisions rock in games. Well... They rock everywhere, to be honest.

More information on the "Not Enough Spatial Data" problem

After many attempts to make a version of SpectatorView work using UNet, I was able to successfully communicate between client and server, using Azure Spatial Anchors to share anchor information. Many errors occurred along the way, including an unknown error which was apparently resolved by deleting the MRTK 2.2.0 library. That has already been dealt with, but now a problem occurs at random in the function CloudManager.CreateAnchorAsync, which fails with the message: Not enough Neighborhood Spatial Data was available to complete the desired Create operation.
This problem does not happen consistently but at random, which makes it impossible to find a pattern and think of a solution.
Note that the CloudManager.SessionStatus.RecommendedForCreateProgress value is above 1 before the create function is called.
Could you give me more information about this problem, and what I can do in the project to solve it?

MarkLogic "XDMP-FRAGTOOLARGE" error while storing 200MB+ File using REST

When I try to store a 200MB+ XML file in MarkLogic using REST, it gives the following error: "XDMP-FRAGTOOLARGE: Fragment of /testdata/upload/submit.xml too large for in-memory storage".
I have tried the Fragment Roots and Fragment Parents options but still get the same error.
But when I store the file without the '.xml' extension in the URI, it saves the file, but no XQuery operations can be performed on it.
MarkLogic won't be able to derive the MIME type from a URI without an extension. It will then fall back to storing the file as binary.
I think that if you were to use xdmp:document-load from QConsole, you might be able to load it correctly, as that will not try to hold the entire document in memory first. It won't help you much though; you will likely hit the same error elsewhere. The REST API has to pass the document through memory, so that won't work like this.
You could raise the memory settings in the Admin UI, but you are generally better off splitting your input. MarkLogic Content Pump (MLCP) will allow you to do so using the aggregate_record options. That will split the file into smaller pieces based on a particular element and store these as separate documents inside MarkLogic.
HTH!

consistent hashing on Multiple machines

I've read this article about the idea of consistent hashing: http://n00tc0d3r.blogspot.com/ but I'm confused about how the method works across multiple machines.
The basic process is:
Insert
Hash an input long URL into a single integer;
Locate a server on the ring and store the key→longUrl pair on that server;
Compute the short URL using base conversion (from base 10 to base 62) and return it to the user. (How does this step work? On a single machine there is an auto-incremented id to convert into the short URL, but what value is converted into the short URL on multiple machines? There is no auto-incremented id.)
Retrieve
Convert the short URL back to the key using base conversion (from base 62 to base 10);
Locate the server containing that key and return the longUrl. (And how can we locate the server containing the key?)
I don't see any clear answer on that page for how the author intended it. I think this is basically an exercise for the reader. Here are some ideas:
Implement it as described, with hash-table-style collision resolution. That is, when creating the URL, if it already matches something, deal with that in some way. Rehashing or an arithmetic transformation (e.g. add 1) are both possibilities. This means, naively, a theoretical worst case of having to hit a server n times trying to find an available key.
There are a lot of ways to take that basic idea and refine it, e.g. just search for another available key on the same server, say by rehashing iteratively until you find one that maps to the same server.
Allow servers to talk to each other, and coordinate on the autoincrement id.
This is probably not a great solution, but it might work well in some situations: give each server (or set of servers) a separate namespace, e.g. the first 16 bits select a server. On creation, randomly choose one. Then you just need to figure out how you want that namespace to map. The namespaces only really matter for who is allowed to create which IDs, so if you want to add nodes or rebalance later, it is no big deal.
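As for the "locate the server containing the key" step from the question, here is a minimal, hedged sketch of a consistent-hash ring lookup in Kotlin (the node names, the virtual-node count and the MD5-based hash are just illustrative assumptions):
import java.security.MessageDigest
import java.util.TreeMap

class ConsistentHashRing(nodes: List<String>, virtualNodes: Int = 100) {
    // Sorted map of ring position -> node; TreeMap gives us the "next position clockwise" lookup.
    private val ring = TreeMap<Long, String>()

    init {
        for (node in nodes) {
            for (i in 0 until virtualNodes) {
                ring[hash("$node#$i")] = node
            }
        }
    }

    // Walk clockwise from the key's position to the first node on the ring (wrapping around).
    fun locate(key: String): String {
        val h = hash(key)
        return ring.ceilingEntry(h)?.value ?: ring.firstEntry().value
    }

    private fun hash(value: String): Long {
        val digest = MessageDigest.getInstance("MD5").digest(value.toByteArray())
        var result = 0L
        for (i in 0 until 8) {
            result = (result shl 8) or (digest[i].toLong() and 0xffL)
        }
        return result and Long.MAX_VALUE
    }
}

// Usage: the same key always maps to the same server, for both Insert and Retrieve.
// val ring = ConsistentHashRing(listOf("server-a", "server-b", "server-c"))
// val server = ring.locate(longUrlKey)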
Let me know if you want more elaboration. I think there are a lot of ways that this one could go. It is annoying that the author didn't elaborate on this point; my experience with these sorts of algorithms is that collision resolution and similar problems tend to be at the very heart of a practical implementation of a distributed system.

Caching Strategy for location requests

I am building REST APIs that return data (let's say events) in a particular area. The REST URL is a simple GET:
/api/v1/events?lat=<lat>&lng=<lng>&radius=<radius>.
with parameters lat, lng and radius (10 miles by default); the latitude and longitude are what the device or browser APIs return. Needless to say, lat and lng change continuously as the user moves, and two users can be in the same vicinity with different lat/lng values. What is the best way to cache this kind of request on the server so that I don't have to dip into business logic every time? The URL is not going to be unique since lat/lng change.
Thanks
I'm assuming you have some sort of "grid", and when a user requests a specific coordinate, you return the grid tile(s) around the location. So you have an infinite URL space (coordinates) that is mapped to a finite number of tiles. One solution is to redirect every request to the "canonical", cache friendly URL for that tile, e.g.
GET /api/v1/events?lat=123&lng=456
=>
302 Found
Location: /api/v1/events?tile=abc
Or, if you want to retain the lat/long info in the URL, you could use the location of the center of the tile.
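Purely as a hedged illustration of the tile idea, snapping coordinates to a fixed grid could look something like this in Kotlin (the TILE_SIZE_DEG value, the key format and the tileKey function are made-up assumptions, not an existing API):
import kotlin.math.floor

// Snap a coordinate to a fixed-size grid cell so that nearby requests share one canonical key.
// 0.1 degrees is an arbitrary choice; pick a cell size that roughly matches your default radius.
const val TILE_SIZE_DEG = 0.1

fun tileKey(lat: Double, lng: Double): String {
    val latIndex = floor(lat / TILE_SIZE_DEG).toInt()
    val lngIndex = floor(lng / TILE_SIZE_DEG).toInt()
    return "tile_${latIndex}_${lngIndex}"
}

// Every request inside the same grid cell would then redirect to the same canonical URL,
// e.g. /api/v1/events?tile=tile_123_456, and hit the same cache entry.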
I think the best approach is for you to store the results in a cache with the center coordinates as the key, and later query the cached points that fall within the circle of a new request.
I'm not aware of any cache engines that would allow you to perform spatial queries, so I think you'll have to use a database that allows easy querying and indexing of spatial data. You may use that database for caching your results, or at least store a key to each result in a cache engine somewhere else, and later you can query them with spatial coordinates, asking for all points within a threshold distance of your new request.
There's PostGIS for PostgreSQL, which should be quite straightforward since it has full support for latitude/longitude distance computations. Once you have it set up with proper indexes, it should be as easy as:
SELECT * FROM your_cache_table
WHERE ST_Distance_Sphere(the_geom, ST_MakePoint(new_lon, new_lat)) <= 160.934
MySQL has some support for the OpenGIS extensions; however, it doesn't support latitude/longitude distance computations. You may need to do some calculations yourself, or maybe the simple Cartesian distance works for you. Check the documentation here, and this answer should also help.
I also believe that even MySQL 5.6 still supports spatial indexes only in MyISAM tables, but that shouldn't be an issue since you're using them only for caching.
Managing the cache may be a little more complicated than usual. If you need expiration, you should probably store only the keys in the database and set an expire parameter on the cache server. When you hit a database point for which there's no longer a valid key, you clean it from the database. You'll probably also need a way to invalidate the cache when the primary data changes, removing entries from both the database and the cache server.
I have been data modelling a hobby application that also needs to deal with geolocation data and have had to solve similar problems. The solution will of course depend on the constraints that you have and the actual use cases that are crucial to your application's purpose. I will assume that you have the design flexibility to change all aspects of your application, i.e. the technology stack.
note:
... so that I don't have to dip into business logic every time. The URL is not going to be unique since lat/lng change.
The above is an ambiguous statement, since not dipping into business logic can mean many things. I will assume it means you don't want to make any database queries to retrieve new data that might be similar to the data you already have in the cache. Also assuming that you only want to cache on the server, the following approaches come to mind:
Cache the data between the database and the application.
Database ---> Cache --> App ---> User
In this approach, your application processes all the REST API calls and then decides whether the results in the cache can be used or whether another database access is required.
Cache the data between the application and the user.
Database --> App ---> Cache ---> User
This approach is a little tricky considering that the URL is always changing, so you might need a 'smart caching mechanism' that will process the incoming URL and then decide whether the cached data is relevant. The smart caching can be done in a number of ways depending on how your application is implemented. Something like MongoDB could be used as a JSON cache, and then each incoming request can be preprocessed to see if the data can just be returned from this cache or should be forwarded to the main application. So the structure might look like this:
Database --> App ---> (Some logic(e.g NodeJS app) + Mongodb) ---> User
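To illustrate the cache-aside idea only: the answer above mentions Node.js + MongoDB, but here is a hedged Kotlin sketch with an in-memory map standing in for the cache store; the SmartCache class, the 10-minute TTL and the loadFromApp callback are assumptions for illustration, not part of any framework.
import java.time.Duration
import java.time.Instant

data class CachedResult(val json: String, val storedAt: Instant)

class SmartCache(private val ttl: Duration = Duration.ofMinutes(10)) {
    private val cache = HashMap<String, CachedResult>()

    // key: a canonical, cache-friendly key derived from the incoming URL (e.g. a grid/tile id).
    // loadFromApp: falls through to the main application / database on a cache miss.
    fun get(key: String, loadFromApp: () -> String): String {
        val cached = cache[key]
        if (cached != null && cached.storedAt.plus(ttl).isAfter(Instant.now())) {
            return cached.json          // cache hit: no business logic involved
        }
        val fresh = loadFromApp()       // cache miss: delegate to the main application
        cache[key] = CachedResult(fresh, Instant.now())
        return fresh
    }
}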
conclusion:
Without knowing the architecture of your solution, the critical use cases and the full design constraints, one cannot really suggest a complete solution to this problem. You might have to rethink certain features you are trying to provide and make tough compromises to get things working. Hopefully, the suggestions provided by different people here will be helpful.