I have a SOQL query in an Apex trigger that fetches all the records of the test object.
The query returns more than 50,000 records, so whenever I update the records I hit this governor limit error.
Please let me know how to solve this error.
List<test__c> ocrInformation = new List<test__c>();
Map<String, String> Opporgcode = new Map<String, String>();
ocrInformation = [SELECT Id, Team__c, Org__c FROM test__c]; // facing the error here
for (test__c oct : ocrInformation) {
    Opporgcode.put(oct.Org__c, oct.Team__c);
}
It's a standard Salesforce limitation:
Total number of records retrieved by SOQL queries: 50,000
Do you really need to select all test__c records? If possible, reduce the amount of retrieved data with a WHERE clause or a LIMIT. If not, you can try Batch Apex; the 50,000-row limit then applies per batch execution (see the sketch below).
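A minimal Batch Apex sketch, assuming the object and fields from your snippet (the class name and everything else are illustrative):

global class TestObjectBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can iterate over up to 50 million records
        return Database.getQueryLocator('SELECT Id, Team__c, Org__c FROM test__c');
    }
    global void execute(Database.BatchableContext bc, List<test__c> scope) {
        // 'scope' holds only the current chunk, so per-transaction limits apply per chunk
        Map<String, String> oppOrgCode = new Map<String, String>();
        for (test__c oct : scope) {
            oppOrgCode.put(oct.Org__c, oct.Team__c);
        }
        // update the records of this chunk here
    }
    global void finish(Database.BatchableContext bc) {}
}

Kick it off with, for example, Database.executeBatch(new TestObjectBatch(), 200); so each execute() call receives 200 records.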
I'm using Grafana to visualize some data stored in CrateDB in different panels.
Some of my dashboards work correctly, but there are 3 specific dashboards (created by someone from my work team) that at certain times of the day stop showing data (No Data) and display the following error:
db query error: pq: [parent] Data too large, data for [fetch-1] would be [512323840/488.5mb], which is larger than the limit of [510027366/486.3mb], usages [request=0/0b, in_flight_requests=0/0b, query=150023700/143mb, jobs_log=19146608/18.2mb, operations_log=10503056/10mb]
Honestly, I would like to understand what it means and how I can fix it.
Any help you can give me is deeply appreciated.
What I tried
17 SQL statements of the form:
SELECT
time_index AS "time",
entity_id AS metric,
v1_ps
FROM etsm
WHERE
entity_id = 'SM_B3_RECT'
ORDER BY 1,2
for 17 different entities.
What I expect
I expect to receive the data corresponding to each of the SQL statements for its respective graph.
The result
As a result, some of the statements receive no data and show the warning message I shared above.
As an additional detail, the panel is configured to refresh every 15 minutes, but no matter how many times I refresh it manually, the set of statements that receive data changes each time.
Example: I refresh the panel and SQL statements A, B and C get data while the others don't; I refresh again and statements D, H and J receive data and the others don't (in a random pattern).
Other additional information:
I have access to the database that Grafana queries, and the data is there.
You don't have a time condition, so the query selects/processes all records every time, and you are hitting limits of your DB (e.g. the size of processed data). Add a time condition so that only a fraction of all records is returned.
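For example, assuming the dashboards use Grafana's PostgreSQL-compatible data source (the pq: prefix in the error suggests so), the $__timeFilter() macro expands into a condition on the dashboard's current time range:

SELECT
  time_index AS "time",
  entity_id AS metric,
  v1_ps
FROM etsm
WHERE
  entity_id = 'SM_B3_RECT'
  AND $__timeFilter(time_index)
ORDER BY 1, 2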
I have a table "items" in PostgreSQL with columns like id, name, desc, config, etc.
It contains 1.6 million records.
I need to run a query that gets all results, like "select id, name, description from items".
What is the proper pattern for iterating over large result sets?
I used EntityListIterator:
EntityListIterator iterator = EntityQuery.use(delegator)
        .select("id", "name", "description")
        .from("items")
        .cursorScrollInsensitive()
        .queryIterator();
// total row count (the cursor has to scroll to the last row to compute this)
int total = iterator.getResultsSizeAfterPartialList();
// one page of rows; getPartialList() uses a 1-based start index
List<GenericValue> items = iterator.getPartialList(start + 1, length);
iterator.close();
The start here is 0 and the length is 10.
I implemented this so I can do pagination with DataTables.
The problem is that the table has millions of records, and this takes about 20 seconds to complete.
What can I do to improve the performance?
If you are implementing pagination, you shouldn't load all 1.6 million records into memory at once. Use ORDER BY id in your query, and page through by id ranges (0 to 10, 10 to 20, etc.) in the WHERE clause. Keep a counter that records up to which id you have traversed (see the keyset-pagination sketch below).
If you really want to pull all records into memory, then load only the first few pages (e.g. from id=1 to id=100), return them to the client, and use something like CompletableFuture to asynchronously retrieve the rest of the records in the background.
Another approach is to run multiple small queries in separate threads, depending on how many parallel reads your database supports, and then merge the results.
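A plain-SQL sketch of that keyset-pagination idea (column names are from the question; :lastSeenId is an assumed parameter holding the id of the last row of the previous page):

-- first page: lastSeenId = 0
SELECT id, name, description
FROM items
WHERE id > :lastSeenId
ORDER BY id
LIMIT 10;
-- next page: pass the id of the last row returned as :lastSeenId

This stays fast however deep you page, because the index on id is used to seek directly to the starting row instead of scanning and discarding all earlier rows.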
What about CopyManager? You could fetch your data as a text/CSV output stream; it might be faster to retrieve it that way.
// stream the whole result set to a CSV file via the PostgreSQL COPY protocol
CopyManager cm = new CopyManager((BaseConnection) conn);
String sql = "COPY (SELECT id, name, description FROM items) TO STDOUT WITH DELIMITER ';'";
cm.copyOut(sql, new BufferedWriter(new FileWriter("C:/export_transaction.csv")));
I am trying this query:
List<Account> onlyRRCustomer = [SELECT
ac.rr_First_Name__c,
ac.rr_Last_Name__c,
ac.rr_National_Insurance_Number__c,
ac.id,
ac.rr_Date_of_Birth__c
FROM
Account ac
WHERE
ac.rr_National_Insurance_Number__c IN :uniqueNiInputSet
AND RecordTypeId = :recordTypeId];
It gives me an error:
SELECT ac.rr_First_Name__c, ac.rr_Last_Name__c,
ac.rr_National_Insurance_Number__c, ac.id, ac.rr_Date_of_Birth__c FROM
Account ac WHERE (ac.rr_National_Insurance_Number__c = :tmpVar1 AND
RecordTypeId = :tmpVar2) 10:12:05.0
(11489528)|EXCEPTION_THROWN|[49]|System.QueryException: Non-selective
query against large object type (more than 200000 rows). Consider an
indexed filter or contact salesforce.com about custom indexing.
I understand that uniqueNiInputSet.size() is ~50, so that is not the issue, but the record type filter might match many more records.
So, if I change the order, will that work? That is, put the record type first and then the NI set in the WHERE clause. Is there an order in which WHERE conditions are evaluated in Salesforce, so that it would first narrow the result down to the ~50 members and then search within those for the particular record type?
That just means that the script is taking too long to execute. You may need to move this to a @future method or execute it using Database.Batchable.
I don't think the order matters in SOQL; I think it's just trying to return too many records.
A non-selective query means you are performing a query against a table that has a large number of records and your query is not specific enough. You can work with Salesforce support to try to resolve this, either through the creation of additional backend indexes or by making the query more selective.
To be honest, your query looks very selective already, you're not using LIKE or IN. You should also put your most selective conditions first (resulting in a more focused query against your records).
I know it shouldn't matter, but I would also move your conditions out of the parentheses.
If there are any other fields you can filter on, that may help. Sometimes, you have to actually create new fields and populate them just to help make your queries more selective.
Also, if rr_National_Insurance_Number__c is a formula field, you will want to change it to a text field and populate it via workflow or Apex instead. Formula fields require additional time on the servers to calculate.
SELECT rr_First_Name__c, rr_Last_Name__c, rr_National_Insurance_Number__c, id, rr_Date_of_Birth__c
FROM Account
WHERE new_custom_field__c = TRUE
AND rr_National_Insurance_Number__c = :tmpVar1
AND RecordTypeId = :tmpVar2
Your query is non-selective. For a standard index, the selectivity threshold is 30% of the first million records and 15% of the records beyond the first million, up to a total of 1 million records. For an "AND" query, each individual WHERE criterion must itself be selective (see the quick-reference cheat sheet). In general, try making
rr_National_Insurance_Number__c
an external ID, which makes it indexed by Salesforce by default, and retry your query. Record types are already indexed by default. If the result is still non-selective because of the number of rows returned, try limiting the scope of the query with a field like CreatedDate.
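For example, a sketch of the scoped query, assuming a date-range restriction is acceptable for your use case (the LAST_N_DAYS window is illustrative):

SELECT rr_First_Name__c, rr_Last_Name__c, rr_National_Insurance_Number__c, Id, rr_Date_of_Birth__c
FROM Account
WHERE rr_National_Insurance_Number__c IN :uniqueNiInputSet
AND RecordTypeId = :recordTypeId
AND CreatedDate = LAST_N_DAYS:365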
When the RowBounds class in the MyBatis API gets data from the DB, does it do a full scan and then cut out the rows defined by the limit and offset parameters, or does it fetch only the data within the bounds?
If the SQL query itself contains OFFSET and LIMIT (or FETCH FIRST n ROWS ONLY), then the result set contains only the data within the bounds; the bounds are applied on the DB side. OFFSET 10000 LIMIT 20 produces a result set of (at most) 20 records.
This is likely what you need.
RowBounds does not alter the SQL query and operates independently: MyBatis works with the whole ResultSet returned by the DB.
E.g. RowBounds(10000, 20) will skip the first 10,000 records of the ResultSet, then fetch 20 records and stop, but the underlying result set itself may be arbitrarily large (up to Integer.MAX_VALUE rows).
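For reference, a sketch of how RowBounds is typically passed (the statement id "selectItems" and the result type are hypothetical):

// skips the first 10,000 rows of the JDBC ResultSet on the client side, then reads 20 rows
List<Map<String, Object>> page =
        sqlSession.selectList("selectItems", null, new RowBounds(10000, 20));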
It does retrieve the full data from the database, but only the requested number of records is returned to the program. So there is no risk of OutOfMemory, but the query will take long on the database side.
Hibernate and EclipseLink, on the other hand, pass the given limit on to the database and retrieve only the required number of records from it. Hibernate achieves this by using database-vendor-specific constructs in its generated SQL, e.g. the LIMIT clause for MySQL or ROWNUM for Oracle.
If you want to achieve the same in MyBatis, you need to use these constructs yourself.
It is easy, and you can use MyBatis conditions to make the SQL specific to any database.
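A hypothetical mapper entry illustrating that, assuming a databaseIdProvider is configured so the _databaseId variable is set (the statement id, aliases and parameters are assumptions, not from the original post):

<select id="selectItemsPage" resultType="map">
  SELECT id, name, description FROM items
  ORDER BY id
  <if test="_databaseId == 'postgresql'">
    LIMIT #{limit} OFFSET #{offset}
  </if>
  <if test="_databaseId == 'oracle'">
    OFFSET #{offset} ROWS FETCH NEXT #{limit} ROWS ONLY
  </if>
</select>

With a statement like this, the paging is done by the database, and only #{limit} rows ever cross the wire, unlike with RowBounds.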
I am using an Azure SQL database and I'm wondering if there is a limit on how many records I can insert at once. I have a case where I need to insert 7000+ records.
If there is a limit, does anyone know a good way of inserting the records in batches?
foreach (var record in records)
{
    db.Records.Add(record);
}
db.SaveChanges();
There is no straight answer to this question. Of course there is a limit on how many records you can insert. It depends on many things: 32/64-bit, memory, DB size, transaction size, how you add entities to EF, the number of records in one action, transaction locks on your DB in a production situation, etc. You can write a test that checks how big the chunks should be (see fastest-way-of-inserting-in-entity-framework), or you can use AddRange (EF bulk insert):
db.AddRange(records);
db.SaveChanges();
You have to decide whether you need to insert the records in one transaction, or whether you can use a number of smaller transactions to keep your table locks shorter (as in the sketch below).
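A minimal sketch of the chunked variant, assuming the db context and Records set from your snippet and that records is an in-memory list (the chunk size of 1000 is an arbitrary starting point; measure what works best for you):

// each SaveChanges() call runs as its own, smaller transaction
const int chunkSize = 1000;
for (int i = 0; i < records.Count; i += chunkSize)
{
    var chunk = records.Skip(i).Take(chunkSize).ToList();
    db.Records.AddRange(chunk);
    db.SaveChanges();
}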