Cannot add more products under one category in ATG

When I tried to add more than 5,000 products under a single category, it took more than 8 hours to execute. Are there any alternatives?

I've changed the item-cache-size from 1000 to 40000 (in customCatalog.xml) for the item-descriptor "product", and to 5000 for "category". Now it's loading fast.
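For reference, a minimal sketch of what that change might look like in a GSA repository definition such as customCatalog.xml (only the cache attribute is shown; the rest of each item-descriptor stays as it is):

  <gsa-template>
    <!-- hold up to 40000 product items in the item cache (was 1000) -->
    <item-descriptor name="product" item-cache-size="40000"/>
    <!-- categories are far fewer, so a smaller cache is enough -->
    <item-descriptor name="category" item-cache-size="5000"/>
  </gsa-template>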


Select top x% of records

The question is how to get the top x% of records according to their rating.
For example, I have a table with a few columns, one of which is rating:
rating smallint
The value of rating is always positive.
My goal is to select the top x% of entries according to their rating.
For example, for the top 20%, if the set of selected rows contains ratings like:
1, 3, 4, 4, 5, 2, 7, 10, 9
then the top 20% spans the range 8 to 10 (max 10, minus 20% of 10), i.e. the records with ratings 9 and 10.
I implemented this in Django, but it takes two calls to the DB, and I believe it can easily be achieved with a single SQL query in PostgreSQL.
Any ideas how to implement it?
This assumes the maximum rating present in the column is the base for the percentage calculation.
Try this workaround:
select * from sample
where rating >= (select max(rating) - max(rating) * 20 / 100 from sample)
Demo on fiddle
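Note that this selects by value range, the top 20% of the rating scale, which matches the example above. If you instead want the top 20% of rows by count, a window-function sketch like the following should also work in a single PostgreSQL query (same sample table assumed):

  -- ntile(5) deals the ordered rows into 5 equal buckets;
  -- bucket 1 is the top fifth of rows by rating.
  select *
  from (
    select s.*, ntile(5) over (order by rating desc) as bucket
    from sample s
  ) t
  where bucket = 1;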

Tableau - Related Data Source Filter

I have data split between two different tables, at different levels of detail. The first table has transaction data, in the format:
category   item   spend
a          1      10
a          2      5
a          3      10
b          1      15
b          2      10
The second table is a budget by category, in the format:
category   limit
a          40
b          30
I want to show three BANs: Total Spend, Total Limit, and Total Limit - Spend, and be able to filter by category across the related data sources (the transaction table is related to the budget table by category). However, I can't seem to get the filter/relationship right. That is, if I use category as a filter from the transaction table and set it to filter all using related data sources, it doesn't filter the Total Limit amount. Using Tableau 2018.1, FYI.
Although your data is split across two tables, they can be joined on the category field and made available as a single data source. You would then be able to use category as a quick filter.
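Conceptually, that single-source join looks like this SQL sketch (in Tableau you would set it up in the data source pane; the table names transactions and budget are assumptions):

  -- one row per transaction, with the category's limit attached
  select t.category, t.item, t.spend, b."limit"
  from transactions t
  join budget b on b.category = t.category;

Note that the limit is then repeated on every transaction row, so the Total Limit BAN needs a de-duplicating aggregate (for example, MIN of limit per category) rather than a plain SUM.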

Count Unique Values in Found Set

Firstly, please allow me to explain what I am trying to achieve. I have a found set of about 100 records, and each record has a SKU number (serial). Out of the 100 records, some have the same SKU, so of course there are fewer than 100 unique SKUs.
I want to know how many times a SKU appears within the found set. NOT the total number of unique SKUs, but how many times each SKU appears individually.
So, for example, I could have the SKU 123456, which appears twice in the found set, so the value for it should be 2, as there are 2 instances of that SKU in the found set.
So just to reiterate: I do not want the total number of unique SKUs in the found set; I want to know how many times each individual SKU appears within the found set.
I have tried many things but keep ending up with the total number of unique values, which is absolutely no use to me.
Thanks
Building on michael.hor257's idea, you can use the GetSummary function to achieve your desired result.
Define a summary field that counts SKU, and a calculation field that calls GetSummary on it with SKU as the break field.
After sorting your found set on SKU, each record will then show how many times its SKU appears in the found set.
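A minimal sketch of the field definitions (the field names are illustrative):

  sCountSKU: Summary field, Count of SKU
  cSKUCount: Calculation field, unstored, = GetSummary ( sCountSKU ; SKU )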
Alternatively, use FileMaker's native ExecuteSQL function to achieve this easily. Here is a FileMaker article that gives very good insight:
http://help.filemaker.com/app/answers/detail/a_id/3423/~/counting-the-number-of-unique-values-in-a-field
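A sketch of what that could look like as an unstored calculation (the table name MyTable is a placeholder; note that ExecuteSQL evaluates against the whole table rather than the found set, so carry any find criteria into the WHERE clause):

  ExecuteSQL ( "SELECT COUNT(*) FROM MyTable WHERE SKU = ?" ; "" ; "" ; SKU )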

Consolidating records on Crystal 11?

I am trying to consolidate the records on my report to show only unique lanes (Shipper State and Destination State).
My data is currently grouped by customer group, and then by the customers that fall within each group.
Ideally, I would like my data to fit this format:
LANE      ACCEPTED   Total   Percentage
Customer Group: Fruits
  Customer: Apple
    GA-TN      1        2       50%
    GA-FL      2        3       66%
  Customer: Banana
    GA-TN      2        4       50%
Total          5        9       55%
Currently my data looks like the image attached.
Thank you in advance.
1) Consolidate the lanes by creating the lane in a formula field, then add that formula field as a group.
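For instance, the lane formula could look like this sketch (Crystal formula syntax; the table and field names are assumptions):

  // @Lane: combine shipper and destination state into a single lane key
  {Shipment.ShipperState} & "-" & {Shipment.DestinationState}

Grouping on @Lane then collapses the detail to one line per unique lane, and the ACCEPTED, Total, and Percentage columns can be computed as summaries on that group.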

MongoDB performance issue: Single Huge collection vs Multiple Small Collections

I tested two scenarios, a single huge collection vs. multiple small collections, and found a huge difference in query performance. Here is what I did.
Case 1: I created a product collection containing 10 million records across 10 different product types, with exactly 1 million records per type, and I created an index on ProductType. A sample query with the condition ProductType=1 and ProductPrice>100 and limit(10), returning 10 records of ProductType=1 whose price is greater than 100, took about 35 milliseconds when the collection has a lot of products whose price is more than 100. The same query took about 8000 milliseconds (8 seconds) when there are very few products in ProductType=1 whose price is greater than 100.
Case 2: I created 10 different product collections, one per ProductType, each containing 1 million records. In collection 1, which holds the records for ProductType 1, the same sample query with the condition ProductPrice>100 and limit(10), returning 10 records whose price is greater than 100, took about 2.5 milliseconds when the collection has a lot of products whose price is more than 100. The same query took about 1500 milliseconds (1.5 seconds) when there are very few products whose price is greater than 100.
So why is there so much difference? The only difference between case one and case two is one huge collection vs. multiple smaller collections, but I created an index on ProductType in the single huge collection. I guess the performance difference is caused by that index, yet I need it in the first case; otherwise performance would be even worse. I expected some slowdown in the first case due to the index, but I didn't expect a difference this huge, about 10 times slower.
So, 8000 milliseconds vs. 1500 milliseconds, one huge collection vs. multiple small collections. Why?
Separating the collections gives you a free index without any real overhead. There is overhead for an index scan, especially if the index is not really helping you cut down on the number of results it has to scan (if you have a million entries in the index but you have to scan and inspect them all, it's not going to help you much).
In short, separating them out is a valid optimization, but you should make your indexes better for your queries before you actually decide to take that route, which I consider a drastic measure. An index on product price might help you more in this case.
Using explain() can help you understand how queries work. Some basics: ideally, you want a low nscanned-to-n ratio. You don't want scanAndOrder = true, and you usually don't want BasicCursor (it means you're not using an index at all).
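As a sketch in the mongo shell (the collection name products follows the question; on the older servers where BasicCursor and nscanned appear in explain() output, ensureIndex is the equivalent of createIndex):

  // Compound index: the query can seek straight to ProductType = 1 and then
  // walk only the entries with ProductPrice > 100, instead of inspecting
  // every one of the 1M ProductType = 1 index entries.
  db.products.createIndex({ ProductType: 1, ProductPrice: 1 })

  // Check the plan: nscanned should now stay close to n, the number returned.
  db.products.find({ ProductType: 1, ProductPrice: { $gt: 100 } }).limit(10).explain()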