ImpEx import/export: "Error saving batch in bulk mode" - ambiguous unique keys on import

When I export data from one environment and import it into another, I am seeing an ambiguous unique keys error. I checked for ambiguity but did not find anything that would cause this violation.
I get the following error (there are several identical errors, but I am only posting one):
Error Begin
insert_update ABClCMSParagraphComponent;&Item;catalogVersion(catalog(id),version)[unique=true,allownull=true];content[lang=en];creationtime[forceWrite=true,dateformat=dd.MM.yyyy hh:mm:ss];modifiedtime[dateformat=dd.MM.yyyy hh:mm:ss];name;owner(&Item)[allownull=true,forceWrite=true];uid[unique=true,allownull=true]
ABClCMSParagraphComponent,8796158592060,,,Error saving batch in bulk mode [reason: unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=DMparaleftdescrip} for model ABClCMSParagraphComponentModel (8796158657596#1) - found 2 item(s) using the same keys]. Will try line-by-line mode., unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=comp_000003UX} for model ABClCMSParagraphComponentModel (8796158592060#1) - found 2 item(s) using the same keys
;Item111;abcContentCatalog:Staged;"<p>Hello <a href="">world</a></p>";12.09.2017 07:04:12;18.09.2017 09:38:39;Feed Article - Makeup;;comp_000003UX
Error End
What could be the reason for this ambiguous unique keys error?

The logs didn't show the error clearly. When I deleted these two columns, owner(&Item) and creationtime, the script imported successfully.

Often, when no specific error is shown, there was an error saving the item. In your case it might be the initial attributes "owner" and "creationtime". If the item already exists, initial attributes cannot be changed.
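For illustration, the same header with those two initial attributes removed (a sketch derived from the header in the error above, matching what you observed) should import cleanly:

insert_update ABClCMSParagraphComponent;&Item;catalogVersion(catalog(id),version)[unique=true,allownull=true];content[lang=en];modifiedtime[dateformat=dd.MM.yyyy hh:mm:ss];name;uid[unique=true,allownull=true]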

Related

QuickFIX/J not reading all the repeating groups in FIX message

We are receiving FIX messages from the WebICE exchange in a text file, and our application reads and parses them line by line using QuickFIX/J. We noticed that in some messages the repeating group fields are not being parsed, and upon validating with the data dictionary we get this error:
quickfix.FieldException: Out of order repeating group members, field=326
For example, in the sample file data-test.csv the first two rows parse successfully, but the third one fails with the above error message.
Upon investigation I found that in the first two rows tag 326 comes after tag 9133, but in the third row it comes before it and hence fails validation. If I adjust the data dictionary to match the third row, it succeeds, but of course the first one then starts failing.
This happens only for a few messages; most of the other FIX messages are validated and parsed quite fine. This is part of a migration project from an existing C# application using QuickFIX/n to our Scala application using QuickFIX/J, and it has been working fine at the source end (with QuickFIX/n). Is there any difference between the two libraries, QuickFIX/J and QuickFIX/n, in how they deal with group fields?
To help recreate the issue, I have shared the data file containing the 3 FIX messages explained above.
Data file: data-test.csv
Data dictionary: ICE-FIX42.xml
Here is the test code snippet:
import java.io.File
import scala.io.Source
import quickfix.{DataDictionary, Group}

val dd: DataDictionary = new DataDictionary("ICE-FIX42.xml")
val mfile = new File("data-test.csv")
for (line <- Source.fromFile(mfile).getLines) {
  val message = new quickfix.Message(line, dd)
  dd.setCheckUnorderedGroupFields(true)
  dd.validate(message)
  // tag 711 (NoUnderlyings) holds the number of entries in the repeating group
  val noOfUnderlyings = message.getInt(711)
  println("Number of Underlyings " + noOfUnderlyings)
  for (i <- 1 to noOfUnderlyings) {
    // fetch the i-th group entry for tag 711 and read tag 311 (UnderlyingSecurityID)
    val fixGroup: Group = message.getGroup(i, 711)
    println("UnderlyingSecurityID : " + fixGroup.getString(311))
  }
}
Request to fellow SO users: can you help me with this?
Many thanks.
You should use setCheckUnorderedGroupFields(false) to disable the validation of the ordering in repeating groups. However, this is only a workaround.
I would suggest approaching your counterparty about this, because especially in repeating groups the field order is required to follow the message definition, i.e. the order in the data dictionary.
FIX TagValue encoding spec
Field sequence within a repeating group
...
Fields within repeating groups must be specified in the order that the fields are specified in the message definition.
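As a sketch of the workaround applied to the snippet above (the flag must be set on the dictionary before the message is parsed and validated):

// Workaround only: relax the repeating-group order check
dd.setCheckUnorderedGroupFields(false)
val message = new quickfix.Message(line, dd)
dd.validate(message)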

Add a missing key to JSON in a Postgres table via Rails

I'm trying to use update_all to update any records that are missing a key in a JSON stored in a table cell. ids contains the ids of those records, and I've tried the below...
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true'"
  )
The error returned is...
Caused by PG::SyntaxError: ERROR: syntax error at or near "WHERE"
LINE 1: ...onb_set(preferences, '{some_key}', 'true' WHERE "user...
The key takes a string value, so I'm not sure why the query is failing.
UPDATE:
Based on what was mentioned, I added the parentheses and also added/modified the last two arguments...
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true'::jsonb, true)"
  )
I'm still running into issues, and this time it seems related to the key I'm passing:
I know this key doesn't currently exist for the set of ids.
I added true for create_missing so that the first point isn't an issue.
I get this error now...
Caused by PG::UndefinedFunction: ERROR: function jsonb_set(hstore, unknown, jsonb, boolean) does not exist
some_key should be a key in preferences
You're passing in raw SQL, so you are 100% responsible for ensuring that it is actually valid SQL. What you have there isn't. Check your parentheses:
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true')"
  )
If you look more closely at the error message, it was telling you there was a problem precisely at the introduction of the WHERE clause, right after ...true', so that was a good place to look for problems.
Syntax errors like this can be really annoying, but don't forget that your database will usually do its best to pin down the place where the problem occurs.
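Note also that the follow-up error (function jsonb_set(hstore, unknown, jsonb, boolean) does not exist) indicates the preferences column is an hstore, not jsonb, so jsonb_set cannot apply to it directly. A sketch of an hstore-based equivalent, assuming the value should be the string 'true':

User.where(id: ids).
  update_all(
    "preferences = preferences || hstore('some_key', 'true')"
  )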

Talend: time dimension configuration error in tOracleOutput

I still have this problem
Exception in component tOracleOutput_1
java.sql.SQLSyntaxErrorException: ORA-00904: : invalid identifier
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
There is some corrupt code in your job. What you can do first is check whether any code was generated for this job at all. If not, try removing or disabling each component in turn, run the job, and see whether the error persists.
I have had this as well. What usually helps is restarting Talend or restarting the computer.
If that doesn't help, there is something wrong with the job. Then I check every schema, every connection, every tMap, every item in the job for an error which Talend doesn't show me.
To check if the code generation system works, you can always click on the Code tab and see if something comes up.
EDIT
The error ORA-00904 comes up. This suggests that a column is named wrongly, as explained here: https://dba.stackexchange.com/questions/129641/ora-00904-error-while-querying-the-oracle-database-table
To avoid ORA-00904, column names must
- begin with a letter,
- consist only of alphanumeric characters and the special characters $, _, # (other characters need double quotation marks around them),
- be thirty characters or fewer.
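For example (hypothetical column names for illustration), an identifier that violates these rules only works when double-quoted:

-- fine: starts with a letter, only alphanumerics and $_#
SELECT order_id FROM orders;
-- a hyphen is not allowed in an unquoted identifier
SELECT "order-date" FROM orders;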

Why does this error keep occurring?

I have inherited some old code. It gets a list of categories from a MySQL database table. I'm tasked with adding multilevel support to them. I've almost got it done, but for some reason it just errors out when I try the app in action.
The error is (you can also see it at http://detyams.ru/?cat=1):
Can't use an undefined value as an ARRAY reference at /usr/local/lib/perl5/site_perl/mach/5.18/DBI.pm line 2074, line 2231.
sub catlist
{
    my $self = shift;
    # NOTE: the WHERE clause is misplaced after GROUP BY here (see the resolution below)
    state $sth = $self->db->prepare(q/SELECT c.cat_id, c.cat_name, COUNT(pn.p_id) AS cnt FROM category c
        LEFT JOIN price_new pn ON (pn.cat_id = c.cat_id) GROUP BY pn.cat_id WHERE c.parent_id = ?/);
    $sth->execute(0);
    my @catlist = $sth->fetchall_arrayref({}); # <- this call leads to the failure deep inside the DBI code
    foreach my $item (@catlist)
    {
        $sth->execute($item->{cat_id});
        $item->{children} = $sth->fetchall_arrayref({});
    }
    return @catlist;
}
I've looked up some examples of using the DBI methods (like http://www.perlmonks.org/?node_id=284436#loh); they all appear to be in accord with my code.
Oh, it just turned out that in case of a query syntax error (the WHERE clause was placed in the wrong position) fetchall_arrayref() reports cryptic errors instead of the underlying problem. I found this out by checking the server logs.
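For reference, a sketch of the corrected query with the WHERE clause moved before GROUP BY (also grouping by the selected columns, which stricter SQL modes require):

SELECT c.cat_id, c.cat_name, COUNT(pn.p_id) AS cnt
FROM category c
LEFT JOIN price_new pn ON (pn.cat_id = c.cat_id)
WHERE c.parent_id = ?
GROUP BY c.cat_id, c.cat_name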

mongodb looping collection + save, objects returned several times

I'm writing a pretty big migration and had this code (coffeescript):
db.users.find().forEach (user) ->
  try
    # some code changing the user depending on the old state
    db.users.save(user)
    print "user_ok: #{user._id}"
  catch error
    print "user_error: #{user._id}, error was: #{error}"
Some errors occurred, but they occurred on already processed users:
user_ok: user_1234
#many logs
user_error: user_1234 ...
How come the loop picks up already processed objects?
I ended up doing:
backup = { users: [] }
db.users.find().forEach (user) ->
  try
    # some code changing the user depending on the old state
    backup.users.push user
    print "user_ok: #{user._id}"
  catch error
    print "user_error: #{user._id}, error was #{error}"
# loop over backup and save
It works nicely now, but it seems really weird. What's the reason behind all that?
When you modify an object, it might be moved by the database, so the database needs to take additional care to remember which objects have already been visited. This feature is called snapshotting; you can ask for a snapshotted query using
db.collection.find().snapshot()
However, even this doesn't make guarantees about objects that were inserted or deleted during the cursor iteration. A few more caveats are explained in the linked documentation.
Another option is to perform an $orderby on an invariable unique index. Ideally, that index is also monotonic, so if you are using ObjectIds as primary keys the _id field comes in pretty handy:
db.collection.find().sort({"_id": 1});
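Applied to the original migration loop, a sketch (snapshot() existed in the MongoDB shell of that era; it has since been deprecated and removed in newer versions):

db.users.find().snapshot().forEach (user) ->
  try
    # some code changing the user depending on the old state
    db.users.save(user)
    print "user_ok: #{user._id}"
  catch error
    print "user_error: #{user._id}, error was: #{error}"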