Entity Framework: View exclusion without primary key

I am using SQL Server, where I have designed a view to sum the results of two tables, and I want the output to be a single table with the results. Simplified, my query is something like:
SELECT SUM(col1), col2, col3
FROM Table1
GROUP BY col2, col3
This gives me the data I want, but when updating my EDM the view is excluded because "a primary key cannot be inferred".
With a little research, I modified the query to spoof an id column, as follows:
SELECT ROW_NUMBER() OVER (ORDER BY col2) AS ID, SUM(col1) AS Total, col2, col3
FROM Table1
GROUP BY col2, col3
This kind of query gives me a nice increasing set of ids. However, when I attempt to update my model, it still excludes my view because it cannot infer a primary key. How can we use views that aggregate records and connect to them with LINQ to Entities?

As already discussed in the comments, you can try adding MAX(id) as the id to the view. Based on your feedback, this would become:
SELECT ISNULL(MAX(id), 0) AS ID,
       SUM(col1) AS Total,
       col2,
       col3
FROM Table1
GROUP BY col2, col3
The ISNULL(..., 0) wrapper is the important part: it makes SQL Server report the ID column as non-nullable, which Entity Framework requires before it will infer a key.
Another option is to try creating an index on the view:
CREATE UNIQUE CLUSTERED INDEX idx_view1 ON dbo.View1(id)
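One caveat on the indexed-view route: SQL Server only allows a unique clustered index on a view created WITH SCHEMABINDING, an aggregated view must also expose COUNT_BIG(*), and ranking functions such as ROW_NUMBER() are not allowed in an indexed view at all, so the index has to go on the GROUP BY key rather than a spoofed id. A sketch of what an indexable version might look like, reusing the question's View1/Table1 names:
-- Schema binding requires two-part table names. If col1 is nullable,
-- SQL Server also demands SUM(ISNULL(col1, 0)) instead of SUM(col1).
CREATE VIEW dbo.View1
WITH SCHEMABINDING
AS
SELECT col2,
       col3,
       SUM(col1)    AS Total,
       COUNT_BIG(*) AS RowCnt
FROM dbo.Table1
GROUP BY col2, col3;
GO
-- The unique clustered index goes on the GROUP BY key, which is what
-- uniquely identifies each aggregated row.
CREATE UNIQUE CLUSTERED INDEX idx_view1 ON dbo.View1 (col2, col3);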

In the view definition I use:
ISNULL(ROW_NUMBER() OVER (ORDER BY ActionDate DESC), -1) AS RowID
I use this clause in views that query multiple related tables. ROW_NUMBER() never actually returns NULL, so the -1 fallback is never used; the ISNULL wrapper is only there so the column is reported as non-nullable and can be inferred as the key.

This is all I needed to add in order to import my view into EF6:
SELECT ISNULL(1, 1) AS keyField
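One caution with the constant-key trick: Entity Framework uses the key for identity resolution, so if the view returns more than one row, every row carries the same key value and can be materialized as N copies of the first row. A safer variant (a sketch, reusing the question's Table1 columns) wraps an expression that is genuinely unique:
-- Same ISNULL trick, but over a unique expression, so EF both infers
-- a key and keeps the aggregated rows distinct.
SELECT ISNULL(ROW_NUMBER() OVER (ORDER BY col2, col3), 0) AS keyField,
       SUM(col1) AS Total,
       col2,
       col3
FROM Table1
GROUP BY col2, col3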

Related

Can the Custom SQL Query in a Tableau Dashboard accept a list of values in a Parameter?

I have a Tableau dashboard drawing data from a Vertica Database via a Custom SQL Query.
The database table contains more than 100 million rows, with a column COL1 indicated as primary key. Each COL1 value corresponds to exactly one row of data. Therefore COL1 is unique for all rows.
The Custom SQL Query below refreshes the dashboard whenever the parameter is updated.
SELECT COL1, COL2, COL3, COL4, COL5 FROM TABLE WHERE COL1=<Parameters.Col1Param>
Can dashboard users input more than one value to get more than one row of data?
I have tried using the IN condition as below:
SELECT COL1, COL2, COL3, COL4, COL5 FROM TABLE WHERE COL1 IN (<Parameters.Col1Param>)
However, I can't seem to make this work with parameter values like Param1;Param2;Param3 or Param1,Param2,Param3.
I also tried including all values of COL1 and letting the user filter on the fly, but the database table is too large (over 100M rows) for the dashboard to load into memory.
As always, minutes after posting a question on StackOverflow, I find a reasonable answer to my question.
The answer to this can be found here: Convert comma separated string to a list
SELECT COL1, COL2, COL3, COL4, COL5
FROM TABLE
WHERE COL1 IN (
    SELECT SPLIT_PART(<Parameters.Col1Param>, ';', row_num) AS params
    FROM (SELECT ROW_NUMBER() OVER () AS row_num FROM tables) row_nums
    WHERE SPLIT_PART(<Parameters.Col1Param>, ';', row_num) <> ''
)
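How this works: the inner subquery only borrows an arbitrary table (here named tables, which needs at least as many rows as there are values in the parameter) to generate a sequence of row numbers, and SPLIT_PART pulls out the n-th semicolon-separated token. A quick illustration of the built-in's behavior, assuming a Vertica session:
-- SPLIT_PART(str, delimiter, n) returns the n-th field, and an empty
-- string once n runs past the last field -- hence the <> '' filter.
SELECT SPLIT_PART('A;B;C', ';', 1);  -- A
SELECT SPLIT_PART('A;B;C', ';', 4);  -- (empty string)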

duplicate multi column entries postgresql

I have a bunch of data in a PostgreSQL database. I think that two keys should form a unique pair,
so I want to enforce that in the database. I try:
create unique index key1_key2_idx on table(key1,key2)
but that fails, telling me that I have duplicate entries.
How do I find these duplicate entries so I can delete them?
select key1,key2,count(*)
from table
group by key1,key2
having count(*) > 1
order by 3 desc;
The critical part of the query to determine the duplicates is having count(*) > 1.
There are a whole bunch of neat tricks at the following link, including some examples of removing duplicates: http://postgres.cz/wiki/PostgreSQL_SQL_Tricks
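As one concrete sketch (assuming PostgreSQL, with tblname standing in for the real table name, since table is a reserved word): keep one arbitrary row per (key1, key2) pair and delete the rest, using the physical row id ctid as the tiebreaker.
-- Deletes every row that has a lower-ctid twin with the same key pair;
-- exactly one row per (key1, key2) survives.
DELETE FROM tblname a
USING tblname b
WHERE a.key1 = b.key1
  AND a.key2 = b.key2
  AND a.ctid > b.ctid;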
Assuming you only want to delete the duplicates and keep the original, the accepted answer is inaccurate -- it'll delete your originals as well and only keep records that had a single entry from the start. This works on 9.x:
DELETE FROM tblname WHERE ctid IN
    (SELECT ctid FROM
        (SELECT ctid, ROW_NUMBER() OVER
            (PARTITION BY col1, col2, col3 ORDER BY ctid) AS rnum
         FROM tblname) t
     WHERE t.rnum > 1);
https://wiki.postgresql.org/wiki/Deleting_duplicates
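Once the duplicates are gone, the unique index from the question should build cleanly:
CREATE UNIQUE INDEX key1_key2_idx ON tblname (key1, key2);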

PostgreSQL - INSERT INTO statement

What I'm trying to do is select various rows from a certain table and insert them right back into the same table. My problem is that I keep running into the whole "duplicate PK" error - is there a way to skip the PK field when executing an INSERT INTO statement in PostgreSQL?
For example:
INSERT INTO reviews SELECT * FROM reviews WHERE rev_id=14;
the rev_id in the preceding SQL is the primary key, which I somehow need to skip. (To clarify: I am using * in the SELECT statement because the number of table columns can increase dynamically.)
So finally, is there any way to skip the PK field?
Thanks in advance.
You can insert only the columns you want, so your PK will get auto-incremented:
insert into reviews (col1, col2, col3) select col1, col2, col3 from reviews where rev_id=14
Please do not retrieve/insert the id-column
insert into reviews (col0, col1, ...) select col0, col1, ... from reviews where rev_id=14;
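If the column list really does change over time, one way to avoid hand-maintaining it (a sketch assuming PostgreSQL 9.1+, with the reviews table and rev_id PK from the question) is to build the list from the catalog and execute the generated INSERT:
DO $$
DECLARE
    cols text;
BEGIN
    -- Collect every column except the PK, in table order.
    SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
      INTO cols
      FROM information_schema.columns
     WHERE table_name = 'reviews'
       AND column_name <> 'rev_id';

    -- Duplicate row 14 into a fresh row with an auto-generated rev_id.
    EXECUTE format(
        'INSERT INTO reviews (%s) SELECT %s FROM reviews WHERE rev_id = 14',
        cols, cols);
END $$;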

select where not exists excluding identity column

I am inserting only new records, i.e. ones that do not exist in a live table, from a "dump" table. My issue is that there is an identity column that I don't want to insert into the live table; I want the live table's identity column to take care of incrementing the value. But I am getting an insert error: "Insert Error: Column name or number of supplied values does not match table definition." Is there a way around this, or is the only fix to remove the identity column altogether?
Thanks,
Sam
You need to list all the needed columns in your query, excluding the identity column.
One more reason why you should never use SELECT *.
INSERT liveTable
(col1, col2, col3)
SELECT col1, col2, col3
FROM dumpTable dt
WHERE NOT EXISTS
(
SELECT 1
FROM liveTable lt
WHERE lt.Id = dt.Id
)
Pro tip: You can also achieve the above by using an OUTER JOIN between the dump and live tables and a WHERE liveTable.Id IS NULL filter (you will probably need to qualify the column names in the SELECT with the dump table's alias).
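Spelled out, the join version might look like this (a sketch using the same liveTable/dumpTable names; lt.Id comes back NULL exactly when the dump row has no match in the live table):
INSERT liveTable
    (col1, col2, col3)
SELECT dt.col1, dt.col2, dt.col3
FROM dumpTable dt
LEFT JOIN liveTable lt
    ON lt.Id = dt.Id
WHERE lt.Id IS NULL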
I figured out the issue: my live table didn't have the ID field set as an identity; somehow, when I created it, that field wasn't set up correctly.
You can leave that column out of your INSERT statement, like this:
insert into destination (col2, col3, col4)
select col2, col3, col4 from source
Don't do just
insert into destination
select * from source

sql: sort from two tables and order by date

I ran into a problem in my iPhone app. I created two tables with sqlite3:
create table A (Name varchar(50), Added datetime);
create table B (UserID varchar(50), Username varchar(50), Created datetime);
I need to get all the values of the two tables ordered by time, which is like:
Alen 2011-06-25 17:56:00
12 Fire 2011-06-26 17:56:00
Bale 2011-07-01 17:56:00
As you can see, there is no relationship between the tables, and I have no idea how to do this.
The app is already in progress, and it's difficult to redesign the DB.
I'd like to know a solution based on the current DB schema (this is also my boss's requirement).
SELECT NULL AS Col1, Name AS Col2, Added AS Col3
FROM A
UNION ALL
SELECT UserID AS Col1, Username AS Col2, Created AS Col3
FROM B
ORDER BY 3
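For what it's worth: UNION ALL only requires the two SELECT lists to have the same number of columns, so the NULL in the first branch pads out table A's missing UserID; ORDER BY 3 then sorts the combined rows by the third column, the datetime, which produces exactly the interleaved output shown in the question.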