Create price rules in WCS using OOB commands

Currently I have a price rule that has only one action element, which fetches a price from a price list.
To build this price rule I'm inserting entries into the required tables, such as
PRICERULE,
PRELEMENT,
PRELEMENTATTR
and other tables.
Now I need to add more conditions and branches to this price rule to fit the requirements.
But I've found that building this price rule by inserting entries directly into the tables (as I did for the simple price rule) is quite complex, because once formed, the price rule has to be updated on a weekly basis. The updates will be things like changing the markup/markdown percentage, or changing the start and end dates of the markup.
So my question is:
Instead of directly updating the tables, is there any IBM WCS OOB functionality to achieve this?
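For context, the kind of weekly change I'm scripting against the tables today looks roughly like this (a minimal TypeScript sketch; the "percentageValue" attribute name, the NAME/VALUE layout, and the DB helper are placeholders rather than verified WCS schema):

```typescript
// Minimal sketch of the weekly update (NOT verified WCS schema: the
// "percentageValue" attribute name and the DB helper below are placeholders;
// PRELEMENTATTR is assumed to store NAME/VALUE pairs per element).

type SqlParam = string | number;

// Stand-in for whatever data access layer is actually used (JDBC, DB2 CLI, ...).
async function executeUpdate(sql: string, params: SqlParam[]): Promise<void> {
  console.log("would execute:", sql, params);
}

// Change the markup percentage on one price rule element.
async function applyWeeklyMarkup(prElementId: number, markupPercent: number) {
  await executeUpdate(
    "UPDATE PRELEMENTATTR SET VALUE = ? WHERE PRELEMENT_ID = ? AND NAME = ?",
    [String(markupPercent), prElementId, "percentageValue"],
  );
}

void applyWeeklyMarkup(10001, 7.5); // e.g. set this week's markup to 7.5%
```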

Related

Best practices for parameterizing load of multiple CSV files in Data Factory

I am experimenting with Azure Data Factory to replace some other data-load solutions we currently have, and I'm struggling to find the best way to organize and parameterize the pipelines to provide the scalability we need.
Our typical pattern is that we build an integration for a particular Platform. This "integration" is essentially the mapping and transformation of fields from their data files (CSVs) into our Stage1 SQL database, and by the time the data lands there, the data types should be set properly and the indexes in place.
Within each Platform, we have Customers. Each Customer has their own set of data files that get processed in that Customer context -- within the scope of a Platform, all Customer files follow the same schema (or close to it), but they all get sent to us separately. If you looked at our incoming file store, it might look like this (simplified; there are 20-30 source datasets per customer, depending on the platform):
Platform
    Customer A
        Employees.csv
        PayPeriods.csv
        etc.
    Customer B
        Employees.csv
        PayPeriods.csv
        etc.
Each customer lands in their own SQL schema. So after processing the above, I should have CustomerA.Employees and CustomerB.Employees tables. (This allows a little bit of schema drift between customers, which does happen on some platforms. We handle it later in our stage 2 ETL process.)
What I'm trying to figure out is:
What is the best way to set up ADF so I can effectively manage one set of mappings per platform, and automatically accommodate any new customers we add to that platform without having to change the pipeline/flow?
My current thinking is to have one pipeline per platform, and one data flow per file per platform. The pipeline has a variable, "schemaname", which is set from the path of the file that triggered it (e.g. "CustomerA"). Then, depending on the file name, a branching conditional fires the right data flow: if it's "employees.csv" it runs one data flow; if it's "payperiods.csv" it runs a different one. They would all use the same generic target sink dataset, with the table name parameterized and those parameters set in the pipeline from the schema variable and the file name from the conditional branch (sketched below).
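To make the intended routing concrete, here is what that logic amounts to, expressed as a plain TypeScript sketch rather than ADF's JSON and pipeline expressions (all names here are hypothetical, not real ADF objects):

```typescript
// Hypothetical sketch of the per-platform routing the pipeline encodes.
// In ADF this would be a trigger parameter, a Set Variable activity,
// and a Switch/If activity.

interface TriggerFile {
  folderPath: string; // e.g. "platform-x/CustomerA"
  fileName: string;   // e.g. "Employees.csv"
}

// One data flow per file per platform.
const dataflowByFile: Record<string, string> = {
  "employees.csv": "LoadEmployees",
  "payperiods.csv": "LoadPayPeriods",
};

function route(file: TriggerFile) {
  // "schemaname" variable: the last folder segment of the triggering path.
  const schemaName = file.folderPath.split("/").pop() ?? "";
  const dataflow = dataflowByFile[file.fileName.toLowerCase()];
  if (!dataflow) return null; // unknown file: alert instead of loading
  // Generic sink: schema from the path, table from the file name.
  const targetTable = `${schemaName}.${file.fileName.replace(/\.csv$/i, "")}`;
  return { schemaName, dataflow, targetTable };
}

// route({ folderPath: "platform-x/CustomerA", fileName: "Employees.csv" })
// => { schemaName: "CustomerA", dataflow: "LoadEmployees",
//      targetTable: "CustomerA.Employees" }
```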
Are there any pitfalls to setting it up this way? Am I thinking about this correctly?
This sounds solid. Just be aware that if you define column-specific mappings with expressions that expect certain columns to be present, you may get data flow execution failures when those columns are missing from a customer's source files.
The way to protect against that in ADF Data Flow is to use column patterns, which let you define mappings that are generic and more flexible.

PowerApps datasource to overcome 500 visible or searchable items limit

For PowerApps, what data sources other than SharePoint lists are accessible via PowerShell?
There are actually two issues that I am dealing with. The first is dynamic updating, and the second is the 500-item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I am currently doing with PowerShell. My data source is not static, and updating records by hand is time-consuming and error-prone. The driving force behind my question is that while the SharePoint list view threshold is 5,000 records, you are limited to 500 visible and searchable records when using SharePoint lists in the gallery view, and my data source contains more than 500 but fewer than 1,000 records. Any items beyond the 500th record that should match the filter criteria will not be found. So SharePoint lists are not an option for me until that limitation is remediated.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question: PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Note that working with these data sources through PowerShell is independent of PowerApps; PowerApps just takes whatever data the connector gives it. So if you have something updating the data in the background (PowerShell, a cron job, etc.), you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds and get a dynamic list of items.
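Conceptually, that Timer-plus-Refresh pattern is just a poll-and-refresh loop; here is a minimal TypeScript sketch of the idea (PowerApps expresses this with a Timer control whose OnTimerEnd calls Refresh on the data source, not with code like this):

```typescript
// Toy TypeScript version of the poll-and-refresh idea; in PowerApps this is
// a Timer control (Duration of roughly 5000-20000 ms, Repeat = true) whose
// OnTimerEnd runs Refresh(YourDataSource). Nothing here is a PowerApps API.

type RefreshFn = () => Promise<void>;

function startPolling(refresh: RefreshFn, intervalMs: number): () => void {
  const id = setInterval(() => { void refresh(); }, intervalMs);
  return () => clearInterval(id); // call the returned function to stop polling
}

// Re-pull the list every 10 seconds (inside the suggested ~5-20 s window).
const stopPolling = startPolling(async () => {
  // re-query the data source here
}, 10_000);
```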
To your second question about SharePoint: there is an article that came out around the time you asked this about working with large lists. I wouldn't say it completely solves your problem, but it seems to state that using the "Filter" function on basic column types may work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you would need to use indexes. I have not fully tested this yet, but it seems this could potentially solve your problem.
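To see why the delegable "Filter" matters for the 500-item behavior described above, here is a toy TypeScript illustration (hypothetical data, not the PowerApps API) of server-side filtering versus filtering only the first 500 downloaded rows:

```typescript
// Toy illustration of why a delegable Filter differs from filtering the
// first 500 downloaded rows.

interface Item { id: number; status: string; }

// 1,000 rows where every match sits past row 500.
const allItems: Item[] = Array.from({ length: 1000 }, (_, i) => ({
  id: i + 1,
  status: i >= 900 ? "open" : "closed",
}));

// Non-delegated: only the top 500 rows ever reach the app, then get filtered.
const clientSide = allItems.slice(0, 500).filter((x) => x.status === "open");

// Delegated: the filter runs at the source, over all rows.
const serverSide = allItems.filter((x) => x.status === "open");

console.log(clientSide.length); // 0   -- matches beyond row 500 are missed
console.log(serverSide.length); // 100
```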

CRM Dynamics trigger workflow before saving

A little background:
I have 2 entities (Product and Case). The Product entity will hold all product records. A section on the Case will have the ability to choose a product and auto-populate all related fields from the product record for that specific product. For example, a Product record has fields like hazardous, range, lot, etc. The same fields appear on the Case record, and they should only be populated based on the product that was selected.
I was able to accomplish the above by creating a 1:N relationship and adding it to my Case form. I then created a workflow to populate the related fields (hazardous, range, lot, etc.). However, these fields only populate when the record is saved. Is there a way to make it update the fields as soon as the product is chosen?
If at all possible, I want to refrain from using any type of JavaScript and strictly use workflows to accomplish this.
Real-time information in your case can only be accomplished by using JavaScript. Field mappings work too, but they have a special behavior (they only apply when the related record is created).
Workflows that fire when the record is created only execute after all core operations are done (native logic, plug-in logic, ...), and you can't fire workflows if the record has not been created yet.
So using workflows is still a good idea, even if you can't see the information until the record is saved.
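For reference, if you do end up allowing a small script, the JavaScript route looks roughly like this (a sketch in TypeScript using the standard Xrm client API with the @types/xrm typings; every entity and field name here is made up):

```typescript
// Sketch only: an OnChange handler for the product lookup that copies fields
// from the selected Product onto the Case. All "new_" names are hypothetical.

function onProductChange(executionContext: Xrm.Events.EventContext): void {
  const formContext = executionContext.getFormContext();
  const lookup = formContext.getAttribute("new_productid")?.getValue() as
    Xrm.LookupValue[] | null;
  if (!lookup || lookup.length === 0) return;

  const productId = lookup[0].id.replace(/[{}]/g, "");
  // Pull the fields we need from the selected Product record.
  void Xrm.WebApi.retrieveRecord(
    "new_product",
    productId,
    "?$select=new_hazardous,new_range,new_lot",
  ).then((product) => {
    formContext.getAttribute("new_hazardous")?.setValue(product.new_hazardous);
    formContext.getAttribute("new_range")?.setValue(product.new_range);
    formContext.getAttribute("new_lot")?.setValue(product.new_lot);
  });
}
```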

Exporting specified records from a portal

Hi, does anyone know if it's possible to export specified records from a portal as XML? Currently, when I filter the portal, it exports all records in the relationship and ignores the portal filter. Is it possible to specify which records to export from a portal without modifying the relationship?
Thanks for any help.
Is it possible to specify which records to export from a portal without modifying the relationship?
Not really. Well, at least in theory, you could go to the related records and perform a Constrain Found Set there to replicate the filter's action. But then you would have to implement the same logic twice, violating the DRY principle.
If you need the filtered results at the data layer (e.g. for export), then it's time to filter the relationship. Portal filtering is meant for display purposes only.
Note: this is assuming that you actually need this filtering for display purposes, too.

Reporting within CQRS

I'm trying to understand CQRS to see if it can help out in a reporting environment.
Problem: a CQRS-designed system is already in production, happily generating commands and events and updating the necessary query views. A new report is required. This report takes a number of parameters: Start Date, End Date, Product Type, and Product Category.
How do I generate the aggregate views, given that:
the query store will initially be empty, and
the report can be passed parameters with very different values?
Do I try to solve this using a CQRS approach, or is there a better alternative?
Thanks
If it is not reasonable to precompute all your report data into a flat view, then just don't do that; you can still join a bunch of tables for your report. It's your decision what can be precomputed and what is not worth it (CPU and storage considerations).
In your particular case (StartDate, EndDate, ...), I can't see any problem with generating a single ViewModel table and querying it directly with those parameters:
1. Figure out which events are required to gather all the report data.
2. Query all those events and republish them to the endpoint that handles updating the new report table(s) (see the sketch below).
3. Wait until all events have been processed.
4. Put indexes on the columns that will serve as the report query criteria.
Done!
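As a rough illustration of steps 1-3, here is a minimal TypeScript sketch of replaying events into a report projection (the event shape, the names, and the day-level bucketing are all assumptions for the example):

```typescript
// Rough sketch: replay historical events into a new report projection.

interface ProductSoldEvent {
  occurredAt: Date;
  productType: string;
  productCategory: string;
  amount: number;
}

interface ReportRow {
  date: string; // "YYYY-MM-DD" bucket
  productType: string;
  productCategory: string;
  total: number;
}

// Fold the relevant events into report rows (step 2's handler, in essence).
function project(events: ProductSoldEvent[]): ReportRow[] {
  const rows = new Map<string, ReportRow>();
  for (const e of events) {
    const date = e.occurredAt.toISOString().slice(0, 10);
    const key = `${date}|${e.productType}|${e.productCategory}`;
    const row = rows.get(key) ?? {
      date,
      productType: e.productType,
      productCategory: e.productCategory,
      total: 0,
    };
    row.total += e.amount;
    rows.set(key, row);
  }
  return [...rows.values()];
}

// The report query is then a plain filter over the projection; in SQL this
// becomes a WHERE clause on the indexed columns from step 4.
function queryReport(rows: ReportRow[], start: string, end: string,
                     productType?: string, productCategory?: string): ReportRow[] {
  return rows.filter((r) =>
    r.date >= start && r.date <= end &&
    (!productType || r.productType === productType) &&
    (!productCategory || r.productCategory === productCategory));
}
```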