Duplicate primary key in return table created by SELECT UNION - T-SQL

I have the following query, called searchit:
SELECT 2 AS sourceID, BLOG_COMMENTS.bID, BLOG_TOPICS.Topic_Title,
BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
BLOG_COMMENTS.Comment_Narrative
FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
ON BLOG_COMMENTS.bID = BLOG_TOPICS.bID
WHERE (BLOG_COMMENTS.Comment_Narrative LIKE #Phrase)
This query executes AND returns the correct results in the query builder!
HOWEVER, the query needs to run in code-behind, so I have the following line:
DataTable blogcomments = btad.searchit(aphrase);
There are no null fields in any row of any column in EITHER of the tables. The tables are small enough that I can easily detect null data. Note that bID is the key for BLOG_TOPICS and cID is the key for BLOG_COMMENTS.
In any case, when I run this I get the following error:
Failed to enable constraints. One or more rows contain values
violating non-null, unique, or foreign-key constraints.
The tables have a 1 x N relationship, many comments for each blog entry. IF I run the query with DISTINCT and remove Comment_Narrative from the return fields, it returns data correctly (but I need that field!). However, when I return it, I get the above error!
I think this tells me that there is a constraint on the return table that I did not put there, so it must somehow be inheriting that constraint from the query call itself, because one of the tables happens to have a primary key defined (which it MUST have). But why does the query work fine in the query builder? The query builder does not care that bID is duplicated in the result (and it should not), but the code-behind DOES care.
Addendum:
Just as tests,
I removed the bID from the return list and I still get the error.
I removed the primary key from blog_topics.bID and I get the same error.
This kinda tells me that it's not the fact that my bID is duped that is causing the problem.
Another test:
I went into the designer code (I know it's nasty, I'm just desperate).
I added the following:
// zzz
try
{
this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
}
Oddly enough, when I run it, I get the same error as before AND it doesn't show the changes I've made in the error message:
Line 13909: }
Line 13910: BPLL_Dataset.BLOG_TOPICSDataTable dataTable = new BPLL_Dataset.BLOG_TOPICSDataTable();
Line 13911: this.Adapter.Fill(dataTable);
Line 13912: return dataTable;
Line 13913: }
I'm stumped.... Unless maybe it sees I'm not doing anything in the try/catch and is optimizing it away.
Another addendum:
Suspecting that it was ignoring the test code I added to the designer, I added something to the catch. It produces the SAME error and acts like it does not see this code. (Well, okay, it DOES NOT see this code, because it prints out the same as before into the browser.)
// zzz
try
{
this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
System.Web.HttpContext.Current.Response.Redirect("errorpage.aspx");
}
The thing is, when I made the original post, I was ALREADY trying to do a workaround. I'm not sure how far I can afford to go down the rabbit hole. Maybe I read the whole mess into C# and do all the joins and crap myself. I really hate to do that, because I've only recently gotten out of the habit, but I perceive I'm making a good faith effort to use the tool the way God and Microsoft intended. From wit's end, tff.

You don't really show how you're running this query from C#... but I'm assuming it's either straight text in a SqlCommand or it's being done by some ORM... Have you attempted writing this query as a stored procedure and calling it that way? The stored procedure would be easier to test and run by itself with sample data.
Given the fact that the error is mentioning null values I would presume that, if it is a problem with the query and not some other element of your code, then it'd have to be on one of the following fields:
BLOG_COMMENTS.bID
BLOG_TOPICS.bID
BLOG_COMMENTS.Comment_Narrative
If any of those fields are Nullable then you should be doing a COALESCE or an ISNULL on them before using them in any comparison or Join. It's situations like these which explain why most DBAs prefer to have as few nullable columns in tables as possible - they cause overhead and are prone to errors.
If that still doesn't fix your problem, then COALESCE/ISNULL all fields that are nullable and are being returned by this query. Take all null values out of the equation and just get the thing working and then, if you really need the null values to be null, go back through and remove the COALESCE/ISNULLs one at a time until you find the culprit.
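As a sketch of that approach, applied to the query from the question (hedged: @Phrase stands in for however the parameter is actually bound, and the ISNULL sentinel values -1 and '' are illustrative and must not collide with real data):

```sql
-- Null-proof both the join and the filter before comparing (T-SQL ISNULL)
SELECT 2 AS sourceID, c.bID, t.Topic_Title, t.LFD, t.LC, c.Comment_Narrative
FROM BLOG_COMMENTS c
INNER JOIN BLOG_TOPICS t
    ON ISNULL(c.bID, -1) = ISNULL(t.bID, -1)
WHERE ISNULL(c.Comment_Narrative, '') LIKE @Phrase
```

Once it works, remove the ISNULLs one at a time to find which column was the culprit.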

My problem came from ignorance and a bit of dullness. I did not realize that just because a field is a key in the SQL table doesn't mean it has to be a key in the TableAdapter. If one has a key field defined in the SQL table and then creates a table adapter, the corresponding field in the adapter will also be a key. All I had to do was unset the key field in the TableAdapter and it worked.
Solution:
Select the key field in the adapter.
Right click
Select "Delete Key" (keeps the field, but removes the "key" icon)
That's it.
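If editing the typed dataset isn't an option, another workaround (a sketch, not from the original post; connString and query are assumed names) is to bypass the TableAdapter and fill a plain DataTable, which carries no key or unique constraints copied from the schema:

```csharp
using System.Data;              // DataTable
using System.Data.SqlClient;    // SqlConnection, SqlDataAdapter

// A plain DataTable has no inherited constraints, so duplicate bID
// values in the joined result won't trip anything.
using (var conn = new SqlConnection(connString))
using (var da = new SqlDataAdapter(query, conn))
{
    var table = new DataTable();
    da.Fill(table);
    // ... bind or iterate over table as before
}
```

The trade-off is that you lose the typed column properties the designer generates.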

Related

Why Doesn't Knex Use Table Column Names on the Resulting Row Objects?

I'm making a query with Knex (PostgreSQL):
knex('foos')
.select(columnsThatAreAMixOfFooAndBarColumns)
.leftJoin('bars', 'foos.bar_id', 'bars.id')
.where({ 'foos.id': id })
When I get back results, I don't get back [{foo_id: 1, bar_id: 2, .... Instead I get back [{ id: 1, ....
What's so bizarre is that Knex is building the right query: if I log it I'll see SELECT "foos"."id", "bars"."id", .... But for some strange reason I can't figure out, Knex decides to merge all the resulting columns together, so I lose bars.id entirely and foos.id gets renamed to id.
Can anyone explain what's going on, and how I can fix it? I've tried adding returning, using columns instead of select, and otherwise changing how I make the query, but it doesn't help, and again the query Knex builds isn't the problem: it's the results.
EDIT: I re-ran the same exact query using knex.raw(query) instead ... and got the exact same (id, not foos_id) result. So it's definitely not how the query is made that's the problem: I'm somehow doing something to make Knex build the results in a way that ignores the qualified column names.
Well, as far as I can tell (and it's difficult because the documentation is almost exclusively about query building, with almost nothing about the results) the answer is "this is just how Knex works". It seems like it would make more sense to have the library return the qualified names, but for whatever reason it doesn't.
Instead, if you want to JOIN tables in Knex, it seems that you have to alias (i.e. foos.id AS foos_id) every column that shares a name :(
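As far as the mechanism goes, the collapse is easy to reproduce in plain JavaScript (a sketch of what the driver plausibly does, not Knex's actual internals): rows come back as plain objects keyed by the bare column name, so a later id simply overwrites an earlier one:

```javascript
// Two SELECTed columns are both named "id" (foos.id and bars.id):
const columns = ['id', 'name', 'id'];
const values = [1, 'alpha', 2];

// Building the row object keyed by bare column name...
const row = {};
columns.forEach((col, i) => { row[col] = values[i]; });

console.log(row); // { id: 2, name: 'alpha' } — foos.id is gone
```

Aliasing works because it makes every key in that object unique before the overwrite can happen.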

Power Query - Appending two tables but the other table might be empty depending on the situation - throws an error in that case

I am working on a solution that involves merging two queries in Power Query to retrieve a single data table back to Excel. The first query is always populated but the other query comes from an ERP and might be empty (empty table) from time to time.
Appending the two queries involves making the header names the same in the two queries before the appending takes place. As the second query sometimes results in an empty table, the error arises in the steps when Power Query is modifying the header names in the second table (it cannot modify the header names as there are no headers).
Error message: "Expression.Error: The column 'PartMtl_Company' of the table wasn't found. Details: PartMtl_Company", where PartMtl_Company is the leftmost column in my table.
I am kind of thinking that I would need to evaluate whether the second table is empty and skip the renaming steps if that is the case. I assume appending an empty table to the populated first table would cause no problem and would only result in the first table. I have tried to look around for suitable M code but have not come across any.
I'm thinking you might be able to use Table.RowCount to solve this. Something along the lines of:
= if Table.RowCount(Table2) > 0 then...
You would modify the headers only if there is data in the second table. Same goes for the appending of the tables: you would only append if there is data in the second table, since you won't have renamed any headers otherwise.
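Put together, the step might look something like this (an M sketch, assuming the query names Table1/Table2; the target column name "Company" is purely illustrative):

```m
let
    // Only rename headers when the ERP query actually returned rows
    Renamed2 =
        if Table.RowCount(Table2) > 0
        then Table.RenameColumns(Table2, {{"PartMtl_Company", "Company"}})
        else Table2,
    // Appending an empty table is harmless: the result is just Table1
    Combined = Table.Combine({Table1, Renamed2})
in
    Combined
```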
Thank you Marc! That did the trick.
In the end, I wrote something along the lines of
= if Table.RowCount(Table2) > 0 then... (code that works on a non-empty table) ...else Table2
which returns the empty table if it is empty to begin with. Appending the second table onto the first did not throw an error and returned only the first table, as planned.

Fetching only one field using sorm framework

Is it possible to fetch only one field from the database using the SORM Framework?
What I want in plain SQL would be:
SELECT node_id FROM messages
I can't seem to be able to reproduce this in SORM. I know this might be against how SORM is supposed to work, but right now I have two huge tables with different kinds of messages, and I was asked to get all the unique node_ids from both tables.
I know I could just query both tables using sorm and parse through all the data but I would like to put the database to work. Obviously, this would be even better if one can get only unique node_ids in a single db call.
Right now with just querying everything and parsing it, it takes way too long.
There doesn't seem to be ORM support for what you want to do, unless node_id happens to be the primary key of your Message object:
val messages = Db.query[Message].fetchIds()
In this case you shouldn't need to worry about it being UNIQUE, since primary keys are by definition unique. Alternatively, you can run a custom SQL query:
Db.fetchWithSql[Message]("SELECT DISTINCT node_id FROM messages")
Note that this latter call might be typed wrong: you'd have to try it against your database. You might need fetchWithSql[Int], or some other variation: it is unclear what SORM does when the primary key hasn't been queried.

Getting "SQL0802 - Data conversion or data mapping error" exception when adding ORDER BY

I just came across a scenario where occasionally (not for all sets of data) I get an "Error: SQL0802 - Data conversion or data mapping error." exception when adding ORDER BY to a simple query. For example, this works:
SELECT
market,
locationCode,
locationName
FROM locations
and the following is failing miserably:
SELECT
market,
locationCode,
locationName
FROM locations
ORDER BY locationName
I'm getting: Error: SQL0802 - Data conversion or data mapping error. (State:S1000, Native Code: FFFFFCDE)
I get the same error if I try to sort by name, or population, or anything really... but only sometimes, meaning, when it errors on name or code, it errors when sorted by any field in that locations subset. If it works for a particular subset of locations, then it works for any sort order.
There are no null values in any of the fields, code and name fields are character fields.
Initially, I got this error when I added ROW_NUMBER column:
ROW_NUMBER() OVER(PARTITION BY market ORDER BY locationCode) as rowNumber
Since then, I've narrowed it down to the failing ORDER BY case. I don't know which direction to go with it. Any thoughts?
Update: there are no blank values for the location name field. And even if I remove all fields from this subset, leave only the 7-digit numeric id, and sort by that field, I still get the same error:
WITH locs as (
SELECT id
FROM locations
)
SELECT *
FROM locs
ORDER BY id
I get this error when I SELECT DISTINCT any field from the subset too.
I had/have the exact same situation as described. The error seemed to be random, but would always appear when sorting was added. Although I can't precisely describe the technical details, what I think was occurring is that the "randomness" was actually due to the size of the tables and the size of the cached chunks of returned rows from the query.
The actual cause of the problem is junk values and/or blanks in the key fields used by the join. If there was no sorting, and the first batch of cached results didn't hit the records with the bad fields, the error wouldn't occur at first... but eventually it always did.
And the two things that ALWAYS drew out the error IMMEDIATELY were sorting or paging through the results. That's because in order to sort, it has to hit every one of those key fields, and then cache the complete results. I think. Like I said, I don't know the complete technobabble, but I'm pretty sure that's close in laygeek terms.
I was able to solve the error by force-casting the key columns to integer. I changed the join from this...
FROM DAILYV INNER JOIN BXV ON DAILYV.DAITEM=BXV.BXPACK
...to this...
FROM DAILYV INNER JOIN BXV ON CAST(DAILYV.DAITEM AS INT)=CAST(BXV.BXPACK AS INT)
...and I didn't have to make any corrections to the tables. This is a database that's very old, very messy, and has lots of junk in it. Corrections have been made, but it's a work in progress.
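If you'd rather find the offending rows than just cast around them, something along these lines can flag values that won't cast cleanly (a sketch only; table and column names come from the answer above, and the digit check is illustrative DB2-style SQL you'd want to verify on your system):

```sql
-- Rows whose join key contains anything besides digits (or is all blank):
-- TRANSLATE maps every digit to a space; anything left after TRIM is junk.
SELECT DAITEM
FROM DAILYV
WHERE TRIM(TRANSLATE(DAITEM, ' ', '0123456789')) <> ''
   OR TRIM(DAITEM) = ''
```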

Getting exception when inserting an entity into a EF table

I get the following ArgumentNullException when I insert a rather simple entity into a table.
I don't think it matters, but the database is an SQL Server Compact .sdf file.
Value cannot be null.
Parameter name: source
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
at System.Data.Entity.Internal.InternalContext.WrapUpdateException(UpdateException updateException)
at System.Data.Entity.Internal.InternalContext.SaveChanges()
at System.Data.Entity.Internal.LazyInternalContext.SaveChanges()
at System.Data.Entity.DbContext.SaveChanges()
{{ the code below }}
Here is the code that I'm using:
var newMaterial = _localJobs.DbMaterials.Create();
newMaterial.JobID = a_job.ID;
newMaterial.MaterialName = material.Name;
newMaterial.UseType = material.UseType;
newMaterial.Length = material.Length;
newMaterial.Width = material.Width;
newMaterial.Thickness = material.Thickness;
newMaterial.DefaultLength = material.DefaultLength;
newMaterial.DefaultWidth = material.DefaultWidth;
newMaterial.DefaultThickness = material.DefaultThickness;
_localJobs.DbMaterials.Add(newMaterial);
_localJobs.SaveChanges(); // <- The exception occurs here.
I am properly populating every field with valid data. The only key herein is JobID. It is a foreign key GUID with an explicit relation with a table called Job. The proper record already exists in the database.
Here is my table schema.
It's a very old thread, but I'm still going to post an answer to it because I had a similar issue and could not find an answer. Maybe it will help someone.
The answer for me was very silly: my ID column was not a primary key of the table.
This error occurs when you have a field that:
Does not allow nulls
Does not have a default value
That you do not supply a value for in the insert
The reason for the error could be:
you are setting one of the fields to null
SaveChanges is also trying to save a row that was added earlier which is not showing up in the code you posted.
Try using SQL Profiler; then you will see which insert statement is causing the error.
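If SQL Profiler isn't available (e.g. against a Compact .sdf file), EF6 can log its generated SQL directly via the real DbContext.Database.Log property. A sketch, reusing _localJobs and newMaterial from the question:

```csharp
// Log every SQL statement EF issues to the debug output window,
// so the failing INSERT (and its parameter values) is visible.
_localJobs.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);

_localJobs.DbMaterials.Add(newMaterial);
_localJobs.SaveChanges();   // the statement that throws is the last one logged
```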
Given that you will have checked all of the sensible suggestions already provided, another possibility is that your model and DB don't match. Do you have a migration outstanding? Double-check. I have seen this message before ;-)