Hey, pretty new to OptaPlanner. What I have is a process planning entity that has an "availability zone" and a set of computers that also have an assigned availability zone. Since a process can only be put on a computer with the same availability zone, I wanted to use the ValueRangeProvider to narrow down the possible selections to include only those computers (similar to how the example in the documentation narrows down rooms based on the teacher's department). But there is no direct connection from availability zones to the lower-level entities (i.e. computers); the objects I am working with currently only point up.
I was thinking that I could just pass the list of computers to every process and create a value range list based on that, similar to what I did below, but I was hoping for a more elegant solution. I was looking at filters, but I could not figure out how to create a filter that restricts a possible move based on the planning entity and that entity's planning variable.
@PlanningVariable(valueRangeProviderRefs = {"computerRange"},
        strengthComparatorClass = ComputerStrengthComparator.class)
public Computer getComputer() {
    return computer;
}

@ValueRangeProvider(id = "computerRange")
public List<Computer> getPossibleComputers() {
    return computers.stream()
            .filter(computer -> computer.getAvailabilityZone().equals(this.getAvailabilityZone()))
            .collect(Collectors.toList());
}
If anyone knows of something I missed or has any ideas I would much appreciate the help.
That code actually works as far as I can tell. See docs "Value Range Provider from entity" (as opposed to "from solution").
That being said, it does have limitations: some features do not support value ranges "from entity" and will fail fast when combined with them - but most features do support it these days, so I wouldn't worry about it. Furthermore, it prevents Local Search from breaking that hard constraint to escape a local optimum, but that's usually not a problem.
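Also, since the value range provider can be called a lot during solving, you may want to filter once and cache the result instead of re-filtering on every call. A minimal sketch, assuming the zones never change during solving (the possibleComputers cache field is my addition, not part of the question's model):

private List<Computer> possibleComputers; // lazily cached, filtered once

@ValueRangeProvider(id = "computerRange")
public List<Computer> getPossibleComputers() {
    if (possibleComputers == null) {
        // A computer's availability zone never changes during solving, so filter once.
        possibleComputers = computers.stream()
                .filter(c -> c.getAvailabilityZone().equals(this.getAvailabilityZone()))
                .collect(Collectors.toList());
    }
    return possibleComputers;
}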
Is there any possibility to get the exact time spent on a certain level in a game via Firebase Analytics? Thank you so much!
I tried to use logEvent.
The best way to do so would be measuring the time on the level within your codebase, then having a dedicated event for level completion, in which you would pass the time spent on the level.
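For the measuring part, a minimal sketch in Java (the class and method names are my own; SystemClock.elapsedRealtime() is Android's monotonic clock, so it is immune to the user changing the wall clock):

import android.os.SystemClock;

public class LevelTimer {

    private long levelStartMillis;

    // Call when the level starts.
    public void onLevelStarted() {
        levelStartMillis = SystemClock.elapsedRealtime();
    }

    // Call on level completion: whole minutes spent on the level,
    // ready to be passed as the "minutes" event parameter below.
    public long minutesSpentOnLevel() {
        return (SystemClock.elapsedRealtime() - levelStartMillis) / 60_000L;
    }
}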
Let's get to the details. I will use Kotlin for the logging example, but it should be obvious what I'm doing here, and you can see examples in more languages here.
firebaseAnalytics.setUserProperty("user_id", userId)
firebaseAnalytics.logEvent("level_completed") {
    param("name", levelName)
    param("difficulty", difficulty)
    param("subscription_status", subscriptionStatus)
    param("minutes", minutesSpentOnLevel)
    param("score", score)
}
Now see how I have a bunch of parameters with the event? These parameters are important, since they will allow you to conduct a more thorough and robust analysis later on and answer more questions. Like: what is the most difficult level? Do people still have trouble with it when the game difficulty is lower? How many times has this level been rage-quit or lost (for that you'd likely need a level_started event)? What about our paid players, are they having similar trouble on this level as well? How many people have rage-quit the game on this level and never played again? That last one would likely be easier to answer with SQL at this point, taking the latest value of the level name from level_started, grouped by user_id. Or you could also have levelName as a user property as well as an event property; then it would be somewhat trivial to answer in the default analytics interface.
Note that you're limited in the number of event parameters you can send per event. The total number of unique parameter names is limited too, as is the number of unique event names you're allowed to have. In our case, the event name would be level_completed. See the limits here.
Because of those limitations, it's important to name your event properties in a somewhat generic way, so that you are able to efficiently reuse them elsewhere. For this reason, I named the parameter minutes and not something like minutes_spent_on_the_level. You could then reuse this property to send the minutes the player spent actively playing, the minutes the player spent idling, the minutes the player spent on any info page, the minutes they spent choosing their upgrades, etc. Same idea behind having a name property rather than level_name; it could just as well be id.
You need to carefully and thoughtfully stuff your event with event properties. I normally have a wrapper around the Firebase SDK, in which I enrich events with dimensions that I always want to be there, like user_id or subscription_status, so I don't have to add them manually every time I send an event. I also usually have some more adequate logging there, since Firebase Analytics' default logging is completely awful. I also do some sanitizing there: lowercasing all values unless I'm passing something case-sensitive like base64 values, making sure I don't have double spaces (so replacing \s+ with " " (a single space)), and maybe also adding the user's local timestamp as another parameter. The latter is very helpful for spotting time-cheating users, especially if your game is an idler.
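To illustrate, here is a minimal sketch of such a wrapper in Java (the GameAnalytics class and its method names are my own invention, not a Firebase API; only FirebaseAnalytics.logEvent(String, Bundle) is the real SDK call):

import android.os.Bundle;
import com.google.firebase.analytics.FirebaseAnalytics;

public class GameAnalytics {

    private final FirebaseAnalytics firebase;
    private final String userId;
    private final String subscriptionStatus;

    public GameAnalytics(FirebaseAnalytics firebase, String userId, String subscriptionStatus) {
        this.firebase = firebase;
        this.userId = userId;
        this.subscriptionStatus = subscriptionStatus;
    }

    public void track(String eventName, Bundle params) {
        // Enrichment: dimensions we always want, so callers never forget them.
        params.putString("user_id", userId);
        params.putString("subscription_status", subscriptionStatus);
        // Local timestamp, helpful for spotting time-cheating users.
        params.putLong("client_ts", System.currentTimeMillis());
        firebase.logEvent(sanitize(eventName), params);
        // String parameter values would go through sanitize(...) the same way.
    }

    private String sanitize(String value) {
        // Lowercase and collapse runs of whitespace into a single space.
        return value.toLowerCase().replaceAll("\\s+", " ").trim();
    }
}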
Good. We're halfway there :) Bear with me.
Now you need to go to Firebase and register your eps (event parameters) as cds (custom dimensions and metrics). If you don't register your eps, they won't be counted towards the global cd limit (it's about 50 custom dimensions and 50 custom metrics), but they also won't be available in your reports. You register the cds in the Custom Definitions section of Firebase.
Now you need to know whether each one is a dimension or a metric, as well as the scope of your dimension. It's much easier than it sounds. The rule of thumb is: if you want to be able to run mathematical aggregation functions on it, then it's a metric. Otherwise, it's a dimension. So:
firebaseAnalytics.setUserProperty("user_id", userId) <-- dimension
param("name", levelName) <-- dimension
param("difficulty", difficulty) <-- dimension (or can be a metric, depends)
param("subscription_status", subscriptionStatus) <-- dimension (can be a metric too, but even less likely)
param("minutes", minutesSpentOnLevel) <-- metric
param("score", score) <-- metric
Now another important thing to understand is the scope. Because Firebase and GA4 are still essentially in beta and being actively worked on, you only have user or hit scope for dimensions and only hit scope for metrics. The scope basically indicates how the value persists. In my example, we only need user_id as a user-scoped cd. Because user_id is a user-level dimension, it is set separately from the logEvent function. Although I suspect you can do it there too; haven't tried, though.
Now, we're almost there.
Finally, you don't want to use Firebase to look at your data. It's horrible at data presentation. It's good at debugging, though, because that's what it was intended for initially. Because of how horrible it is, it's always advised to link it to GA4. GA4 will allow you to look at the Firebase values much more efficiently. Note that you will likely need to re-register your custom dimensions from Firebase in GA4, because GA4 is capable of receiving multiple data streams, of which Firebase would be just one data source. But GA4's cd limits are very close to Firebase's. OK, let's be frank: GA4's data model is almost exactly copied from Firebase's. But GA4 has much better analytics capabilities.
Good, you've moved to GA4. Now, GA4 is a very raw, not-officially-beta product, just like Firebase Analytics. Because of that, it's advised to first change your data retention to 12 months and to only use the explorer for analysis, pretty much ignoring the pre-generated reports. They are just not very reliable at this point.
Finally, you may find it easier to just use SQL to get your analysis done. For that, you can easily copy your data from GA4 to a sandbox instance of BQ. It's very easy to do, and it is the best, most reliable known method of using GA4 at this moment. I mean, advanced analysts do the export into BQ, then ETL the data from BQ into a proper storage like Snowflake, or even S3, or Aurora, or whatever you prefer, and then, on top of that, use a proper BI tool like Looker, PowerBI, Tableau, etc. A lot of people just stay in BQ, though, and that's fine. Lots of BI tools have BQ connectors; it's just that BQ gets expensive quickly if you do a lot of analysis.
Whew, I hope you'll enjoy analyzing your game's data. Data-driven decisions rock in games. Well... They rock everywhere, to be honest.
Beginner question. I would like to have a value list display only the records in a found set.
For example, in a law firm database that has two tables, Clients and Cases, I can easily create a value list that displays all cases for all clients.
But that is a lot of cases to pick from, and invites user mistakes. I would like the selection from the value list to be restricted to cases matched to a particular client.
I have tried this method https://support.claris.com/s/article/Creating-conditional-Value-Lists-1503692929150?language=en_US and it works up to a point, but it requires too much data entry and too many tables.
It seems like there ought to be a simpler method using the find function. Any help or ideas greatly appreciated.
In CQRS, when we need to create custom-tailored projections for our read models, we usually prefer "denormalized" projections (assume we are talking about projecting onto a DB). It is not uncommon to have the information needed by the application/UI come from different aggregates (possibly from different BCs).
Imagine we need a projected table to contain a customer's information together with her full address, and that Customer and Address are different aggregates in our system (possibly in different BCs). Meaning that addresses are generated and maintained independently of customers. Or, in other words, when a new customer is created, there is no guarantee that there will be an AddressCreatedEvent subsequently produced by the system; this event may have already been processed prior to the creation of the customer. All we have at the time of CreateCustomerCommand is a UUID of an existing address.
We have several solutions here.
Enrich CreateCustomerCommand and the subsequent CustomerCreatedEvent to contain the full address of the customer (looking up this information on the fly from the UI or the controller). This way, the projection handler will just update the table directly upon receiving CustomerCreatedEvent.
Use the addrUuid provided in CustomerCreatedEvent to perform an ad-hoc query in the projection handler to get the missing part of the address information before updating the table.
These are the commonly discussed solutions to this problem. However, as noted by many others, there are problems with each approach. Enriching events can be difficult to justify, as well described by Enrico Massone in this question, for example. Querying other views/projections (a kind of JOIN) will work, but it introduces coupling (see the same link).
I would like to describe another method here which, as I believe, nicely addresses these concerns. I apologize beforehand for not giving proper credit if this is a known technique. Sincerely, I have not seen it described elsewhere (at least not as explicitly).
"A picture speaks a thousand words", as they say:
The idea is that:
We keep CreateCustomerCommand and CustomerCreatedEvent simple with only addrUuid attribute (no enriching).
In the API controller we send two commands to the command handler (aggregates): the first one, as usual, is CreateCustomerCommand, to create the customer and project the customer information together with addrUuid into the table, leaving the other columns (full address, etc.) empty for the time being. (Warning: see the update - we may have a concurrency issue here and need to issue the probe command from a Saga.)
Right after this, and after we have obtained the custUuid of the newly created customer, we issue a special ProbeAddressCommand to the Address aggregate, triggering an AddressProbedEvent which will encapsulate the full state of the address together with a special attribute probeInitiatorUuid, which is, of course, our custUuid from the previous command.
The projection handler will then act upon AddressProbedEvent by simply filling in the missing pieces of information in the table, looking up the required row by matching the provided probeInitiatorUuid (i.e. custUuid) and addrUuid.
So we have two phases: create the Customer and probe for the related Address. They are depicted in the diagram with (1) and (2) respectively.
Obviously, we can send as many such "probe" commands (in parallel) as needed by our projection: ProbeBillingCommand, ProbePreferencesCommand, etc., effectively populating or "filling in" the denormalized projection with the missing data from each handled "probe" event.
The advantage of this method is that we keep the commands/events in the first phase simple (only UUIDs to other aggregates), all the while avoiding synchronous coupling (joining) of the projections. The whole approach has a nice EDA feeling about it.
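To make the two phases concrete, here is a minimal sketch of the projection handler in Java (framework-agnostic; the event and command names are from the description above, while the repository interface and the event accessors are hypothetical):

public class CustomerAddressProjectionHandler {

    private final CustomerAddressViewRepository repository; // hypothetical read-model DAO

    public CustomerAddressProjectionHandler(CustomerAddressViewRepository repository) {
        this.repository = repository;
    }

    // Phase (1): insert the row with only what the event carries;
    // the address columns stay empty until the probe round-trip completes.
    public void on(CustomerCreatedEvent event) {
        repository.insertRow(event.getCustUuid(), event.getName(), event.getAddrUuid());
    }

    // Phase (2): fill in the missing address columns, matching the row
    // by probeInitiatorUuid (i.e. custUuid) and addrUuid.
    public void on(AddressProbedEvent event) {
        repository.updateAddressColumns(event.getProbeInitiatorUuid(), event.getAddrUuid(),
                event.getStreet(), event.getCity(), event.getZip());
    }
}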
My question is then: is this a known technique? It seems like I have not seen it described anywhere... And what can go wrong with this approach?
I would be more than happy to update this question with any references to other sources which describe this method.
UPDATE 1:
There is one significant flaw with this approach that I can see already: the ProbeAddressCommand cannot be issued before the projection handler has had a chance to process CustomerCreatedEvent. But this is impossible to know from the API gateway (or controller).
The solution would probably involve a Saga, say CustomerAddressJoinProjectionSaga, which will start upon receiving CustomerCreatedEvent and only then issue ProbeAddressCommand. The Saga will end upon registering AddressProbedEvent. Or, if many other aggregates are involved in probing, when all such events have been received. A rough sketch follows.
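A minimal sketch of that Saga in Java (again framework-agnostic; the CommandGateway dispatch interface and the saga lifecycle wiring are hypothetical and would come from your CQRS framework):

public class CustomerAddressJoinProjectionSaga {

    private final CommandGateway commandGateway; // hypothetical command dispatch interface

    public CustomerAddressJoinProjectionSaga(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // Starts the saga upon customer creation and only then issues the probe command.
    public void on(CustomerCreatedEvent event) {
        commandGateway.send(new ProbeAddressCommand(event.getAddrUuid(), event.getCustUuid()));
    }

    // Ends the saga: the denormalized row is now complete.
    public void on(AddressProbedEvent event) {
        // end the saga here (framework-specific)
    }
}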
So here is the updated diagram.
UPDATE 2:
As noted by Levi Ramsey (see the answer below), my example is rather convoluted with respect to the choice of aggregates. Indeed, Customer and Address are often conceptualized as belonging together (same aggregate root). So it is a better illustration of the problem to think of something like Student and Course instead, assuming for the sake of simplicity that there is a straightforward relation between the two: a student takes a course. This way it is more obvious that Student and Course are independent aggregates (students and courses can be created and maintained at different times and in different places in the system).
But the question still remains: how can we obtain a projection containing the full information about a student (full name, etc.) and the courses she is registered for (title, credits, the instructor's full name, prerequisites, etc.), all in the same table, if the UI requires it?
A couple of thoughts:
I question why address needs to be a separate aggregate, much less in a different bounded context, in view of the requirement that customers have an address. If in some other bounded context customer addresses are meaningful (e.g. you want to know "which addresses have more customers", etc.), then that context can subscribe to the events from the customer service.
As an alternative, if there's a particularly strong reason to model addresses separately from customers, why not have the read side prospectively listen for events from the address aggregate and store the latest address for a given address UUID, in case a customer ends up with that address? The reliability per unit of effort of that approach is likely to be somewhat greater, I would expect. A sketch of this follows.
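A rough sketch of that alternative in Java (all types here are hypothetical and for illustration only):

public class ProspectiveAddressProjectionHandler {

    // Hypothetical key-value store: addrUuid -> latest known address state.
    private final AddressStore addressStore;
    private final CustomerAddressViewRepository repository;

    public ProspectiveAddressProjectionHandler(AddressStore addressStore,
            CustomerAddressViewRepository repository) {
        this.addressStore = addressStore;
        this.repository = repository;
    }

    // Prospectively remember every address, whether or not a customer uses it yet.
    public void on(AddressCreatedEvent event) {
        addressStore.put(event.getAddrUuid(), event.getAddress());
    }

    // When the customer arrives, the address is (very likely) already known locally.
    public void on(CustomerCreatedEvent event) {
        Address address = addressStore.get(event.getAddrUuid());
        repository.insertRow(event.getCustUuid(), event.getName(), address);
    }
}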
At my company we are currently using OptaPlanner to solve a vehicle routing problem with great results. We built a web app to manage vehicles, clients, locations, and depots, and to show a graphic representation of the solution (including showing the locations on a map). We wrapped the solver in a Spring Java app with a REST interface to receive requests and solve the problem. We are using Google Maps to get distance-time data. Now we need to implement multitrip.
To tackle the multitrip part I am following this approach:
1.- I added readyTime, endOfTrip and dueTime members to the Vehicle class
2.- I created a rule to prevent arrivals at customers after vehicle->dueTime
3.- I modified the ArrivalTimeUpdatingVariableListener to consider the vehicle->readyTime when calculating the departureTime from a vehicle (using Math.max(depot->readyTime, vehicle->readyTime))
4.- At this point I started using the Vehicle class as if it were a vehicle trip instead of a vehicle (I still don't change the name, but that is the idea)
5.- I created a member nextVehicle in Vehicle to represent the next trip
6.- For testing purposes I manually link two vehicles (or vehicle trips) before sending it to solver->solve
7.- In the ArrivalTimeUpdatingVariableListener class I extended the method that updates the arrival times to consider updating nextVehicle->readyTime and, by consequence, the arrival times of the customers that belong to the next trip (and so on when there are more than two trips); see the sketch after this list
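For illustration, a rough sketch of step 7, modeled on the updateArrivalTime loop of the OptaPlanner VRP example (the multitrip accessors getNextVehicle()/setReadyTime() are from my model above, the travel-time arithmetic is simplified, and the other VariableListener callbacks are omitted):

public class ArrivalTimeUpdatingVariableListener implements VariableListener<Customer> {

    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Customer customer) {
        Customer shadowCustomer = customer;
        Long departureTime = departureTimeOfPreviousStandstill(shadowCustomer); // helper, omitted
        // Walk the rest of the chain, recalculating each arrival time.
        while (shadowCustomer != null) {
            Long arrivalTime = (departureTime == null) ? null
                    : departureTime + shadowCustomer.getTravelTimeFromPreviousStandstill();
            scoreDirector.beforeVariableChanged(shadowCustomer, "arrivalTime");
            shadowCustomer.setArrivalTime(arrivalTime);
            scoreDirector.afterVariableChanged(shadowCustomer, "arrivalTime");
            departureTime = shadowCustomer.getDepartureTime();
            if (shadowCustomer.getNextCustomer() == null && departureTime != null) {
                // Multitrip extension (step 7): the departure from the last customer
                // closes this trip, so it becomes the readyTime of the next trip.
                Vehicle trip = shadowCustomer.getVehicle(); // beware: may lag, see below
                if (trip != null && trip.getNextVehicle() != null) {
                    trip.getNextVehicle().setReadyTime(departureTime);
                    // The next trip's customers then need the same recalculation.
                }
            }
            shadowCustomer = shadowCustomer.getNextCustomer();
        }
    }
    // beforeEntityAdded, afterEntityAdded, etc. omitted for brevity.
}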
I am sure this is not the most elegant solution, but I tried other approaches (using a custom shadow variable on Vehicle, for instance) and I couldn't make them work.
The problem I am facing right now is that I don't quite understand the state of the model when ArrivalTimeUpdatingVariableListener is called; maybe someone has faced a similar problem and can help me. What I found (after trial and error) is:
The customer.getVehicle() method does not always return a value (distinct from null); it seems to get updated some time after the previousStandstill change triggers the listener's updateArrivalTime method.
In the construction phase, when a customer gets assigned to a vehicle, the customer.getVehicle() method returns null (it came from "not assigned")
In the construction phase, when a customer "is freed", the customer.getVehicle() method returns the "previous vehicle"
In the local search phase, when a customer gets assigned to a vehicle, the customer.getVehicle() method returns the "previous vehicle" (the original vehicle)
In the local search phase, when a customer goes back to the original vehicle, the customer.getVehicle() method returns the "previously assigned vehicle"
Any thoughts on this? Am I making the right assumptions? (Because originally I treated customer.getVehicle() as the "current" vehicle and the solutions were completely wrong...)
The order of triggering of the previousStandstill change is kind of difficult to understand (for me), I mean when moving customers or swapping them between vehicles... any thoughts or hints on where to find info?
Can I access some variables from the "previous state of the model" when the solver makes a move? Because I am thinking I will need that if I continue with this approach (to update the vehicle->endOfTrip, which is the nextVehicle->readyTime, when the customer is the last one on the chain, for instance).
And finally... am I doing something completely wrong conceptually?
Any comments will be greatly appreciated (and sorry for my grammar, I am a native Spanish speaker).
The chaotic triggering of shadow vars that you describe should not happen any more in optaplanner 6.3.0.Final or higher because it now gives the guarantee shown below.
But older versions (OptaPlanner 6.2 and earlier) do suffer from that chaotic trigger behaviour, which (until fixed by PLANNER-252 - yes, I know that issue number by heart, and no, I am not the only one) could drive a developer (who's working on a complex model with multiple shadow variables) insane and provide a one-way ticket to the asylum.
Fortunately it has been fixed a few months ago, so upgrade to 6.3.0.Final or later and keep your sanity.
I recently found myself using some rather lengthy names for the tables and views involved in a development piece, which got me wondering whether it's possible to create client-, database-, or server-level aliases for objects.
Say for example I have a view named dbo.vAlphaBetaGammaDelta. Is there a way (with or without IntelliSense) to create a reference to it named dbo.vABGD?
If not, would there be any drawbacks to creating a view of a view or of a single table, aside from the maintenance necessary if/when the table schema changes?
I should note that these aliases/views would not be intended for use in other objects, but for the alleviation and prevention of carpal tunnel during day-to-day troubleshooting and delving xD
SQL Server allows for the creation of synonyms. That seems to be what you are looking for: http://msdn.microsoft.com/en-us/library/ms177544.aspx
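For example, using the view name from the question (syntax per the linked page):

CREATE SYNONYM dbo.vABGD FOR dbo.vAlphaBetaGammaDelta;

After that, dbo.vABGD can be used anywhere the full view name would appear.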
However, as @MitchWheat mentioned, this seems to be going in the wrong direction. There are a few quite good SSMS plugins available that provide auto-completion of long object names (e.g. SQL Prompt). Incidentally, those products have trouble with synonyms...
There are many cases where you would like to have synonyms.
Let me state just one for start:
You have a well-defined hypothetical table name: GlobalStatisticalRecord. Hundreds of lines of code and objects (keys, indexes, etc.) in SQL and elsewhere refer to this table name.
After 5 years of usage, the abbreviation GSR has become customary not only among the technical people, but also among the business users. So, to stress it again, GSR is now even more recognizable than GlobalStatisticalRecord. However, for the new people that come into the technical team, it is good to keep the name GlobalStatisticalRecord as the table name, since it nicely describes what the table is all about. Now, when writing a quick ad-hoc query - and that may not be from your tool of choice with all the IntelliSense features you are accustomed to - these aliases really save your time (and your "life" at 2 AM when you are frantically trying to diagnose a production problem).
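In that scenario, a single statement gives everyone the short name while the descriptive one stays intact:

CREATE SYNONYM dbo.GSR FOR dbo.GlobalStatisticalRecord;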
Please, if you have never faced a case where you would need this, just don't assume that there is none.
I stressed the ad-hoc adjective, since I agree that in permanent queries (stored procedures, etc.), for the reasons you pointed out, it is advisable to use the full table names.