How to modify the Handle order in the UEFI Handle Database?

In the code, the following boot service call is used to obtain an array of handles, Handles, whose size is returned in HandleCount.
EFI_STATUS  Status;
UINTN       HandleCount;
EFI_HANDLE  *Handles;

Status = gBS->LocateHandleBuffer (
               ByProtocol,
               &gEfiSomeProtocolGuid,
               NULL,
               &HandleCount,
               &Handles
               );
I was expecting the array of handles returned by this boot service call to come back in a particular order. Say the protocol is EFI_GRAPHICS_OUTPUT_PROTOCOL: the system has 3 video cards connected, and I want to ensure the GeForce RTX™ 3090 is always listed before the 3070, and then the 3060. How do I trace where this Handles array comes from so I can modify the handle database? What affects the ordering of the handle database?

Related

Fiware-Orion: Geolocation

Hi, I'm a student and I'm working with the broker for the first time. I understand how entities are created and how they are updated through "update" queries. My question is: can I create an entity that contains attributes (e.g. geolocation) initially defined with a "null" or "zero" value and then initialize them later with the values I am interested in, so as to have dynamic rather than static attributes (i.e. ones that require updates from the user)?
Or do we need interaction with the CEP to do this?
From what I have read in the fiware-orion guide, when I create an entity (e.g. a car with a speed attribute and a position attribute of type geo:point), the values of these 2 attributes must be set in a static way (e.g. speed 100 and position coordinates 40.257, 2.187). If I understand correctly, I can only change the values of these attributes by making an update query. So my question is:
Is it possible to update the value of the attributes that hold the position or the speed of the car in a dynamic way, i.e. without having to type the values in by hand? Or does this require the use of Orion's CEP?
In case I have not explained myself well: more generally, I would like to know whether it is possible to follow the progress of a moving car without me having to enter the values by hand.
Thanks.
Orion Context Broker exposes a REST-based API that (among other things) allows you to create, update and query entities. From Orion's point of view, it doesn't matter who invokes the API: it can be done manually (for instance, using Postman or curl) or by an automated system developed by you or a third party (for instance, software running on a sensor in the car that measures the speed and periodically sends an update over a wireless network).
From a client-server point of view (in case you are familiar with these concepts), Orion plays the role of API server, and whoever updates the speed (either manually or automatically) plays the role of API client.
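As an illustration, here is a minimal TypeScript sketch of such an automated client, pushing the car's speed and position to Orion over its NGSIv2 REST API. The broker URL, the entity id Car1 and the attribute names are assumptions made up for the example:

// Push the car's current state to Orion (NGSIv2). Car1, the attribute names
// and the broker URL are illustrative assumptions, not part of the question.
const ORION = "http://localhost:1026";

async function pushCarState(speed: number, lat: number, lon: number): Promise<void> {
  const res = await fetch(`${ORION}/v2/entities/Car1/attrs`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      speed: { value: speed, type: "Number" },
      position: { value: `${lat}, ${lon}`, type: "geo:point" },
    }),
  });
  if (!res.ok) {
    throw new Error(`Orion update failed: ${res.status}`);
  }
}

// A device in the car could call pushCarState(100, 40.257, 2.187) every few seconds,
// so the entity is updated dynamically without anyone typing values in.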

Storing custom temporary data in Sitecore xDB

I am using Sitecore 8.1 with xDB enabled (MongoDB). I would like to store the user-roles of the visiting users in the xDB, so I can aggregate on these data in my reports. These roles can change over time, so one user could have one set of roles at some point in time and another set of roles at a later time.
I could store these user-roles as custom facets on the Contact entity, but as they may change for a user from visit to visit, I would lose historical data if I updated the facet every time the user logs in (e.g. I would not be able to tell which roles a given user had at a given visit).
Instead, I could create a custom IElement for my facet data, and store the roles along with a timestamp saying when the given roles were registered, but this model may be hard to handle during the reporting phase, where I would need to connect the interaction data with the role-data based on timestamps every time I generate a report.
Is it possible to store these custom data in the xDB in something else than the Contact collection? Can I store custom data in the Interactions collection? There is a property called Tracker.Current.Session.Interaction.CustomValues which sounds like what I need, but if I store data here, will I be able to perform proper aggregation/reporting on the data? Any other approaches I haven't thought about?
CustomValues
Yes, the CustomValues dictionary is what I would use in your case. This dictionary will get serialized to MongoDB as a nested document of every interaction (unless the dictionary is empty).
Also note that, since CustomValues is a member of the base class Sitecore.Analytics.Model.Entity, this dictionary is available in many other data classes of xDB. For example, you can store custom values in PageData and PageEventData objects.
Since CustomValues takes an object of any class, your custom data class needs some extra things for it to be successfully saved to and subsequently loaded from MongoDB:
It has to be marked as [Serializable].
It needs to be registered in the MongoDB driver like this:
using Sitecore.Analytics.Data.DataAccess.MongoDb;
// [...]
MongoDbObjectMapper.Instance.RegisterModelExtension<YourCustomClassName>();
This needs to be done only once per application lifetime - for example, in an initialize pipeline processor.
Your own storage
Of course, you don't have to use Sitecore's API to store your custom data. The alternative would be to manually save the data to a custom MongoDB collection or an SQL table. You can then read that data in your aggregation processor, finding it by the ID of the currently processed interaction.
The benefit of this approach is that you can decide where and how your data is stored. The downside is extra work of implementing and maintaining this data storage.

Is querying MongoDB faster than Redis?

I have some data stored in a database (MongoDB) and in a distributed cache (Redis).
While querying the repository, I am using a lazy-loading approach: first look for the data in the cache; if it is not there, fetch it from the database and update the cache as well, so that the next time it is needed it is found in the cache.
Sample Model Used:
Person ( id, name, age, address (Reference))
Address (id, place)
PersonCacheModel extends Person with addressId.
I am not storing the parent object together with the child object in the cache. That is why I created PersonCacheModel with an addressId and store that object in the cache; when reading the data, the PersonCacheModel is converted back to a Person and a call is made to the address repository / addressCache to fill in the address details of the Person object.
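For reference, the read path described above (look in the cache first, fall back to the database, then refresh the cache) is the classic cache-aside pattern. A minimal TypeScript sketch of it, with assumed connection URLs, key names and collection names (the code in question clearly uses a different stack):

import { createClient } from "redis";
import { MongoClient } from "mongodb";

const redis = createClient({ url: "redis://localhost:6379" });
const mongo = new MongoClient("mongodb://localhost:27017");

async function findPersonByName(name: string) {
  // 1. Try the cache first.
  const cached = await redis.get(`person:${name}`);
  if (cached !== null) {
    return JSON.parse(cached); // deserialize the cached PersonCacheModel
  }
  // 2. Cache miss: query the database ...
  const person = await mongo.db("app").collection("persons").findOne({ name });
  // 3. ... and populate the cache so the next request is served from Redis.
  if (person !== null) {
    await redis.set(`person:${name}`, JSON.stringify(person));
  }
  return person;
}

// redis.connect() and mongo.connect() must have been awaited once at startup.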
As far as I understand:
personRepository.findPersonByName(NAME + randomNumber);
Access Data from Cache = network time + cache access time + deserialize time
Access Data from database = network time + database query time + object mapping time
When I ran the above approach for 1000 rows, accessing the data from the database was faster than accessing it from the cache. I believed the cache access time would be smaller than the MongoDB access time.
Please let me know if there is an issue with the approach, or whether this is the expected scenario.
To have a valid benchmark we need to consider both the hardware side and the data-processing side:
hardware: do we have the same configuration, RAM, CPU count, OS, etc.?
process: how the data is transformed (single thread or multiple threads, per object, per request)
Performing a load test on your data set will give you a good overview of which approach is faster in a particular use-case scenario.
It is hard to judge what the result should be as long as the points mentioned above are unknown to us.
The other thing is to have more than one test scenario and to run it under stress for, say, 10 seconds, a minute, 5 minutes, an hour, so that you get numbers that tell you the truth.
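To make the comparison concrete, here is a rough micro-benchmark sketch (TypeScript on Node.js) that times n sequential calls of whichever read path you want to compare; the function names in the usage comment are assumptions, and the numbers are only meaningful if both paths run on the same hardware against the same data:

import { performance } from "node:perf_hooks";

// Time n sequential calls of an async read path and report the average latency.
async function timeIt(label: string, n: number, read: () => Promise<unknown>): Promise<void> {
  const start = performance.now();
  for (let i = 0; i < n; i++) {
    await read();
  }
  const elapsed = performance.now() - start;
  console.log(`${label}: ${n} calls, ${(elapsed / n).toFixed(3)} ms/call`);
}

// Example usage (names assumed), repeated for different durations and concurrency levels:
// await timeIt("cache path", 1000, () => findPersonByName("Alice"));
// await timeIt("db path", 1000, () => personCollection.findOne({ name: "Alice" }));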

Using visjs manipulation to create workflow dependencies

We are currently using visjs version 3 to map the dependencies of our custom built workflow engine. This has been WONDERFUL because it helps us to visualize the flow and find invalid or missing dependencies. What we want to do next is simplify the process of building the dependencies using the visjs manipulation feature. The idea would be that we would display a large group of nodes and allow the user to order them correctly. We then want to be able to submit that json structure back to the server for processing.
Would this be possible?
Yes, this is possible.
Vis.js dispatches various events that relate to user interactions with the graph (e.g. manipulations or position changes), for which you can add handlers that modify or store the data on change. If you use DataSets to store the nodes and edges of your network, you can always use the DataSets' get() function in your handler to retrieve all elements in JSON format. Then, in your handler, just use an ajax request to transmit the JSON to your server, where you can store the entire graph in your DB or save the JSON as a file.
The opposite applies for loading the graph: simply query the JSON from your server and inject it into the node and edge DataSets using the set method.
You can also store the network's current options using the network's getOptions method, which returns all applied options as JSON.
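A minimal TypeScript sketch of the save path, assuming a nodes/edges DataSet pair and a hypothetical /workflow endpoint on your server:

// Collect the current graph from the DataSets and POST it to the server.
// The /workflow endpoint and the payload shape are assumptions; adapt them to your API.
declare const vis: any; // global from the vis.js script include

const nodes = new vis.DataSet([{ id: 1, label: "Start" }, { id: 2, label: "Approve" }]);
const edges = new vis.DataSet([{ from: 1, to: 2 }]);

function saveGraph(): Promise<Response> {
  const payload = {
    nodes: nodes.get(), // all nodes as a plain array, ready to serialize
    edges: edges.get(), // all edges as a plain array, ready to serialize
  };
  return fetch("/workflow", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Call saveGraph() from a manipulation or drag-end event handler, or from a "Save" button.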

Garbage collection when using Zend_Session_Handler_DbTable

I am trying to use Zend_Session_Handler_DbTable to save my session data to the DB, but as far as I can see, the expired sessions are never deleted from the database.
I can see a cron job running (Ubuntu) which deletes the file-based sessions, but I couldn't find out how garbage collection works for sessions which are saved in the DB.
The Zend_Session_SaveHandler_DbTable class has a garbage collection method called gc which is given to PHP via session_set_save_handler when you call Zend_Session::setSaveHandler().
The gc function should get called periodically based on the php.ini values session.gc_probability and session.gc_divisor. Make sure those values are set to something that would result in garbage collection running at some point.
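For example, with the (illustrative) php.ini values below, roughly 1 in every 100 requests will trigger the gc callback:

; php.ini - session garbage collection runs with probability gc_probability / gc_divisor
session.gc_probability = 1
session.gc_divisor = 100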
Also make sure you specify the modifiedColumn and lifetimeColumn options when creating the DbTable save handler because the default gc function uses those columns to determine which rows in the session table are old and should be deleted.