I am inserting some 350 Person Account records through the Apex Data Loader. I have a trigger on Account that uses the logged-in user's details. While inserting the records I am getting this error:
"LeadAssignment: execution of AfterInsert caused by: System.QueryException: List has no rows for assignment to SObject Trigger.LeadAssignment: line 20, column 1"
Can anyone tell me how to get the logged-in user's details while using the Apex Data Loader?
Thank you.
It would be worth looking at https://developer.salesforce.com/forums/ForumsMain?id=906F00000008yJvIAI, which deals with a similar situation (albeit some time ago), to see if the problem is caused by NULL values.
If so, I would suggest using the check-for-NULL approach they originally tried in the trigger, and only adding the try-catch workaround if it is still required.
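For reference, here is the shape of that defensive check, sketched in TypeScript purely to illustrate the pattern; the real fix would live in the Apex trigger, and all names here are hypothetical:

```typescript
// Illustration only: the real fix belongs in the Apex trigger, where a
// single-row query like [SELECT Name FROM UserRole WHERE Id = :roleId]
// throws "List has no rows for assignment to SObject" when nothing
// matches. The portable pattern: guard the NULL input and the empty
// result instead of assuming exactly one row.
interface UserRole {
  id: string;
  name: string;
}

function findRoleName(roleId: string | null, roles: UserRole[]): string | null {
  // Guard 1: the logged-in user may have no role at all (NULL id).
  if (roleId === null) return null;

  // Guard 2: the lookup may match zero rows; return null instead of throwing.
  const matches = roles.filter((r) => r.id === roleId);
  return matches.length > 0 ? matches[0].name : null;
}
```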
I am developing an authentication system using Express, so I have a unique email field in the database.
Should I check the email first and, if it exists, throw a custom error, or should I let the database throw the error?
I want to know which is better.
Consumers of your API don't and shouldn't know what kind of database you use.
The error that makes it back to them should encapsulate all of it and specifically tell them what is wrong in some standard format with a good HTTP status code.
Database-specific errors leaking to the user should usually be considered a bug.
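For instance, a minimal Express sketch of that translation layer; the route, the stub insert, and the error codes checked (11000 for MongoDB, "23505" for Postgres) are my assumptions, not from the question:

```typescript
import express, { NextFunction, Request, Response } from "express";

const app = express();
app.use(express.json());

// Stand-in for the real INSERT; real drivers signal a unique-index
// violation with a code (e.g. 11000 in MongoDB, "23505" in Postgres).
const seen = new Set<string>();
async function insertUser(email: string): Promise<void> {
  if (seen.has(email)) {
    const err = new Error("duplicate key") as Error & { code: number };
    err.code = 11000;
    throw err;
  }
  seen.add(email);
}

app.post("/users", async (req: Request, res: Response, next: NextFunction) => {
  try {
    await insertUser(req.body.email);
    res.status(201).json({ status: "created" });
  } catch (err) {
    next(err); // defer to the central error translator below
  }
});

// Central error handler: map driver-specific errors onto a standard
// HTTP shape so no database detail leaks to API consumers.
app.use((err: any, _req: Request, res: Response, _next: NextFunction) => {
  if (err?.code === 11000 || err?.code === "23505") {
    res.status(409).json({ error: "email already registered" });
    return;
  }
  res.status(500).json({ error: "internal server error" });
});

app.listen(3000);
```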
Both.
You should write code to check that the email exists before you attempt the insert.
But if that check finds no email, you might still get an error, because of a race condition. For example, in the brief moment between checking for the email and then proceeding to insert the row, some other concurrent session may insert its own row using that email. So your insert will get a duplicate key error in that case, even though you had checked and found the email not present.
Then why bother checking? Because if you use a table with an auto_increment primary key, a failed insert generates and then discards an auto-increment value.
This might seem like a rare and insignificant amount of waste, and normally we don't care whether auto-increment ids are consecutive.
But I did help fix an application for a customer where they had a problem that new users were trying 1500 times to create unique accounts before succeeding. So they were "losing" thousands of auto-increment id's for every account. After a couple of months, they exhausted the range of the signed integer.
The fix I recommended was to first check that the email doesn't exist, to avoid attempting the insert if the email is found. But you still have to handle the race condition just in case.
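Here is a sketch of that combination in TypeScript with node-postgres, assuming a users table with an auto-increment id and a unique index on email (the schema and names are mine, for illustration):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

// Assumed schema:
//   CREATE TABLE users (id SERIAL PRIMARY KEY, email TEXT UNIQUE NOT NULL);
async function registerUser(email: string): Promise<"created" | "exists"> {
  // Cheap pre-check: avoids burning an auto-increment value in the
  // common case where the email is already taken.
  const found = await pool.query("SELECT 1 FROM users WHERE email = $1", [email]);
  if ((found.rowCount ?? 0) > 0) return "exists";

  try {
    await pool.query("INSERT INTO users (email) VALUES ($1)", [email]);
    return "created";
  } catch (err: any) {
    // The race: another session inserted the same email between the
    // check and the insert. 23505 = Postgres unique_violation.
    if (err.code === "23505") return "exists";
    throw err;
  }
}
```

Note that a failed INSERT still consumes a SERIAL sequence value in Postgres, just as a failed insert discards an auto_increment value in MySQL, which is exactly why the pre-check is worth keeping.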
I am trying to query one of my BigQuery tables but I get this error:
Errors:
Something went wrong with the table you queried. Contact the table owner for assistance. (error code: tableUnavailable)
I used to be able to run the query until a couple of hours ago without a problem, and I can still query another table in a different dataset.
I was also running a Python application to stream data into this table, but now:
table.insert_data(rows)
causes this error:
google.cloud.exceptions.ServiceUnavailable:
503 Error encountered during execution. Retrying may solve the problem.
(POST https://www.googleapis.com/bigquery/v2/projects/#myProjectId#/datasets/#myDatasetName#/tables/#myTableName#/insertAll)
I have read similar questions here but can't find any solution to this problem.
Here is the latest error jobId
"jobId": "bquijob_6491e032_15846c06c18"
This has been going on for a while now; can anybody help?
This was a hopefully rare BigQuery incident:
https://status.cloud.google.com/incident/bigquery/18022
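While an incident like that is Google's to fix, the error's own hint ("Retrying may solve the problem") is worth automating for transient 503s. A sketch with the Node BigQuery client, using exponential backoff; the dataset/table names are placeholders for the question's #myDatasetName#/#myTableName#:

```typescript
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery();

// Retry a streaming insert with exponential backoff; a 503 from a
// service incident is transient, so backing off and retrying usually
// rides it out.
async function insertWithRetry(rows: object[], maxAttempts = 5): Promise<void> {
  for (let attempt = 1; ; attempt++) {
    try {
      await bigquery.dataset("my_dataset").table("my_table").insert(rows);
      return;
    } catch (err: any) {
      if (attempt >= maxAttempts || err?.code !== 503) throw err;
      const delayMs = 2 ** attempt * 1000; // 2s, 4s, 8s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```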
When a user registers on the website, an e-mail address needs to be provided, which must be unique. I've made a unique index on the schema's email attribute, so if I try to save the document to the database, an error with code 11000 will be returned. My question is: with regard to the business layer and the data layer, should I just pass the document to the database and catch/check the error codes it returns, or should I check whether a user with that e-mail already exists beforehand?

I've been told that data integrity should be checked by the business layer before passing anything to the database, but I don't see why I should do that, since I believe Mongo would be much faster raising the exception itself, given that the index is already there. The only disadvantage I see in error-code checking is that the error codes might change (though I could abstract them) and the syntax might change.
There is the practical matter of speed and the fragility of "check-then-set" systems. If you check whether an email exists before you write the document keyed on email, there is a chance that between the time you check and the time you write, the conditions of the unique index are met and your write fails anyhow. This is a classic race condition. Further, it takes two queries to do check-then-set but only one query to do the insert and handle the failure. In my application I am having success with just letting the failure occur and reacting to the result.
As @JamesWahlin says, it is the difference between doing this all in one query versus risking mixed results (along with the index check) from potential race conditions by adding the extra client read.
Definitely rely on the response of the insert alone from MongoDB here.
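A sketch of that insert-and-react approach with the current Node MongoDB driver; the database and collection names are illustrative:

```typescript
import { MongoClient, MongoServerError } from "mongodb";

// Assumes a unique index already exists:
//   db.users.createIndex({ email: 1 }, { unique: true })
async function registerUser(
  client: MongoClient,
  email: string
): Promise<"created" | "exists"> {
  try {
    // One round trip: let the unique index enforce integrity.
    await client.db("app").collection("users").insertOne({ email });
    return "created";
  } catch (err) {
    // 11000 = duplicate key: the expected failure mode, not a bug.
    if (err instanceof MongoServerError && err.code === 11000) return "exists";
    throw err;
  }
}
```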
Whenever I try to apply a filter to an attribute that has ValueSelection=Dropdown, the dropdown is not populated and the error message "The requested list could not be retrieved because the query is not valid or a connection could not be made to the data source" is shown instead.
If I set ValueSelection=List, I get a different error message:
An attempt has been made to use a semantic query extension associated with the data extension 'SQL' that is not registered for this report server.
(Microsoft.ReportingServices.SemanticQueryEngine)
This happens within the BIDS environment and was observed in both SQL 2005 and SQL 2008.
I've already studied articles discussing similar problems, but none of them applied to my case. The user account in the data source has all the necessary rights, and data can be retrieved without any problem (for example, if I try "Explore data" in the data source view). The SQL Profiler shows that no query is sent to SQL Server when there is an attempt to populate the dropdown. So nothing is wrong with the query; it is simply never executed.
Your connection is not working. Test your connection by trying a simple table and query output.
This will enable you to test the connection before trying anything advanced.
I got this problem, and in my case it was caused by a wrong connection string in the Data Source: instead of just having a SQL Server name like "SOMESQLSERVER_MACHINE", I had for some reason "SOMESQLSERVER_MACHINE.our.corp.domain". It was supposed to point to the same server, but the domain was wrong; after removing it, everything worked like a charm again. That said, it's always a good idea to start with detailed checks of your basic settings.
Otherwise this could be a problem with permissions to the folders on Report Manager.
Say that I have a User table in my read database (SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added with the same email address. So if I try to add a user with an email address that already exists in the table for a different user, SQL Server will throw an exception back.

In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me. I will return "OK" to the UI, and the user will think he has been added to the database, when in fact he will never be added to the read database.

I can do a search in the read database, checking whether a user with that email address already exists, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, and send back that it's okay in both cases. Both commands go on my queue, and later one of them will fail (by hitting the unique index).

Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if a user already has this email address? That sounds a bit crazy to me...

How have you people solved it?

The only way I can see is to not use an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for "CQRS set-based validation" will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
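One pattern that comes out of those discussions is to treat the set of registered emails as its own consistency boundary: the command side reserves the email synchronously against a small unique-keyed store, and only then emits the event, while the read models stay eventually consistent. A minimal in-memory sketch of the idea; all names are hypothetical, and a real implementation would back the reservation with a unique constraint rather than a Set:

```typescript
// Command-side reservation store. In production this would be a small
// table with a unique constraint, written synchronously; only the read
// models stay eventually consistent.
class EmailReservations {
  private reserved = new Set<string>();

  // The single place where uniqueness is enforced, before the command
  // is accepted and anything goes onto the async queue.
  tryReserve(email: string): boolean {
    const key = email.trim().toLowerCase();
    if (this.reserved.has(key)) return false;
    this.reserved.add(key);
    return true;
  }
}

interface UserRegistered {
  type: "UserRegistered";
  email: string;
}

function handleRegisterUser(
  reservations: EmailReservations,
  email: string
): UserRegistered {
  if (!reservations.tryReserve(email)) {
    // Fail fast at command time: the UI gets a real error instead of a
    // false "OK" followed by a silent failure downstream.
    throw new Error("email already registered");
  }
  return { type: "UserRegistered", email };
}
```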