I want to remove duplicate rows returned from a SELECT query in Postgres.
I have the following query:
SELECT DISTINCT name FROM names ORDER BY name
But this somehow does not eliminate the duplicate rows. Why not?
PostgreSQL string comparisons are case sensitive; that might be the problem here.
DISTINCT ON can be used for a case-insensitive DISTINCT (tested on 7.4):
SELECT DISTINCT ON (upper(name)) name FROM names ORDER BY upper(name);
It could also be something with same-looking but different characters (like LATIN 'a' vs. CYRILLIC 'а').
Don't forget to add a trim() on that too, or else 'Record' and 'Record ' will be treated as separate entities. That ended up hurting me at first; I had to update my query to:
SELECT DISTINCT ON (upper(trim(name))) name FROM names ORDER BY upper(trim(name));
In Postgres 9.2 and greater you can cast the column to the CITEXT type, or even make the column that type so you don't have to cast in the select.
SELECT DISTINCT name::citext FROM names ORDER BY name::citext
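If you do make the column itself citext, a minimal sketch might look like this (assuming a simple names table; citext ships as a contrib extension, so it has to be enabled first):
CREATE EXTENSION IF NOT EXISTS citext;
ALTER TABLE names ALTER COLUMN name TYPE citext;
-- from here on a plain DISTINCT compares case-insensitively, no cast needed
SELECT DISTINCT name FROM names ORDER BY name;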
Related
I'm trying to write a query in SQL to exclude a keyword:
It's a list of cities written out (e.g. AnnArbor-MI). In the list there are duplicates because some have the word 'badsetup' after the city and these need to be discarded. How would I write something to exclude any city with 'badsetup' after it?
Your question title and content appear to be asking for two different things ...
Query cities while excluding the trailing 'badsetup':
SELECT regexp_matches(citycolumn, '(.*)badsetup')
FROM mytable;
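Note that regexp_matches() returns a text array, so if you just want the captured city text you can take the first element (a sketch of the same query):
SELECT (regexp_matches(citycolumn, '(.*)badsetup'))[1] AS city
FROM mytable;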
Query cities that don't have the trailing 'badsetup':
SELECT citycolumn
FROM mytable
WHERE citycolumn NOT LIKE '%badsetup';
In psql, to select rows excluding those with the word 'badsetup' you can use the following:
SELECT * FROM table_name WHERE column NOT LIKE '%badsetup%';
Here the '%' is a wildcard that matches any sequence of characters, of any length (including none). So the pattern matches 'badsetup' anywhere in your column, regardless of the characters before or after it, and NOT LIKE filters those rows out.
You can find more information in section 9.7.1 here: https://www.postgresql.org/docs/8.3/static/functions-matching.html
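As a quick illustration with made-up values:
SELECT 'AnnArbor-MI' NOT LIKE '%badsetup%';          -- true: the row would be kept
SELECT 'AnnArbor-MI badsetup' NOT LIKE '%badsetup%'; -- false: the row would be filtered out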
I am getting this error in production with PostgreSQL, but it's working fine in development with SQLite3.
ActiveRecord::StatementInvalid in ManagementController#index
PG::Error: ERROR: column "estates.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "estates".* FROM "estates" WHERE "estates"."Mgmt" = ...
^
: SELECT "estates".* FROM "estates" WHERE "estates"."Mgmt" = 'Mazzey' GROUP BY user_id
@myestate = Estate.where(:Mgmt => current_user.Company).group(:user_id).all
If user_id is the PRIMARY KEY then you need to upgrade PostgreSQL; newer versions will correctly handle grouping by the primary key.
If user_id is neither unique nor the primary key for the 'estates' relation in question, then this query doesn't make much sense, since PostgreSQL has no way to know which value to return for each column of estates where multiple rows share the same user_id. You must use an aggregate function that expresses what you want (min, max, avg, string_agg, array_agg, etc.), or add the column(s) of interest to the GROUP BY.
Alternatively, you can rephrase the query to use DISTINCT ON and an ORDER BY if you really do want to pick a somewhat arbitrary row, though I really doubt it's possible to express that via ActiveRecord.
Some databases - including SQLite and MySQL - will just pick an arbitrary row. This is considered incorrect and unsafe by the PostgreSQL team, so PostgreSQL follows the SQL standard and considers such queries to be errors.
If you have:
col1 col2
fred 42
bob 9
fred 44
fred 99
and you do:
SELECT col1, col2 FROM mytable GROUP BY col1;
then it's obvious that you should get the row:
bob 9
but what about the result for fred? There is no single correct answer to pick, so the database will refuse to execute such unsafe queries. If you wanted the greatest col2 for any col1 you'd use the max aggregate:
SELECT col1, max(col2) AS max_col2 FROM mytable GROUP BY col1;
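If you wanted the whole row rather than just an aggregate, the DISTINCT ON alternative mentioned above would look roughly like this (picking the row with the greatest col2 for each col1):
SELECT DISTINCT ON (col1) col1, col2
FROM mytable
ORDER BY col1, col2 DESC;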
I recently moved from MySQL to PostgreSQL and encountered the same issue. Just for reference, the best approach I've found is to use DISTINCT ON as suggested in this SO answer:
Elegant PostgreSQL Group by for Ruby on Rails / ActiveRecord
This will let you get one record for each unique value in your chosen column that matches the other query conditions:
MyModel.where(:some_col => value).select("DISTINCT ON (unique_col) *")
I prefer DISTINCT ON because I can still get all the other column values in the row. DISTINCT alone will only return the value of that specific column.
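The underlying SQL is roughly the following (my_models, some_col and unique_col are placeholders, and created_at is just an illustrative tie-breaker); adding an ORDER BY that starts with the DISTINCT ON expression controls which row is kept for each value:
SELECT DISTINCT ON (unique_col) *
FROM my_models
WHERE some_col = 'value'
ORDER BY unique_col, created_at DESC;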
After receiving the error myself a number of times, I realised that Rails (I am using Rails 4) automatically adds an 'ORDER BY id' at the end of your grouping query. This often results in the error above. So make sure you append your own .order(:group_by_column) at the end of your Rails query. You will end up with something like this:
@problems = Problem.select('problems.username, sum(problems.weight) as weight_sum').group('problems.username').order('problems.username')
@myestate1 = Estate.where(:Mgmt => current_user.Company)
@myestate = @myestate1.select("DISTINCT(user_id)")
This is what I did.
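For reference, the SQL this roughly generates (assuming the estates table and the 'Mazzey' value from the question) is something like:
SELECT DISTINCT(user_id) FROM estates WHERE "Mgmt" = 'Mazzey';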
I am using MySQL Workbench (SQL Editor). I need to copy the list of columns for each query, as existed in MySQL Query Browser.
For example
Select * From tb
I want to get the list of fields, like:
id,title,keyno,......
You mean you want to be able to get one or more columns for a specified table?
1st way
Do SHOW COLUMNS FROM your_table_name and, depending on what you want, add some basic filtering by specifying that you only want columns whose data type is int, whose default value is NULL, etc., e.g. SHOW COLUMNS FROM your_table_name WHERE Type='mediumint(8)' AND `Null`='yes'
2nd way
This way is a bit more flexible and powerful, as you can combine many tables and other properties kept in MySQL's internal INFORMATION_SCHEMA database, which has records of all columns, tables and so on. Use the query below as it is, setting TABLE_NAME to the table you want to find the columns for:
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='your_table_name';
To limit the matched columns to a specific database, add AND TABLE_SCHEMA='your_db_name' at the end of the query.
Also, to have the column names appear not as multiple rows but as a single comma-separated list, you can use GROUP_CONCAT(COLUMN_NAME) instead of only COLUMN_NAME (its default separator is already a comma).
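Putting those pieces together, a sketch of the combined query (your_table_name and your_db_name are placeholders) might be:
SELECT GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION) AS column_list
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'your_table_name'
  AND TABLE_SCHEMA = 'your_db_name';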
To select all columns in a select statement, go to the SCHEMAS panel, right-click the table whose column names you want, then select "Copy to Clipboard > Select All statement".
The accepted solution is fine, but it is limited to field names in tables. One way to handle arbitrary queries is to standardize your select clause so that a regex can strip out just the column aliases. I format my select clause as "1 row per element", so
Select 1 + 1 as Col1, 1 + 2 Col2 From Table
becomes
Select 1 + 1 as Col1
, 1 + 2 Col2
From Table
Then I use a simple regex on the "1 row per select element" version to replace "^.* " (excluding quotes) with nothing. The regex matches everything up to the final space in the line, so it assumes your column aliases don't contain spaces (replace spaces with underscores). Or, if you don't like "1 row per element", always use the "as" keyword to give the regex a handle to grasp.
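For example, applying that find-and-replace to just the select-list lines of the reformatted query above leaves only the aliases:
Col1
Col2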
I have a problem with T-SQL. I have a number of tables, and each table contains a different number of fields with different names.
I need to dynamically take all these tables, read all the records, and turn each record into a string where the values are separated by commas, and then do something with this string.
I think I need to use CURSORS, but I can't FETCH them without knowing the exact number of fields and their names and types. Maybe I can create a table variable with a dynamic number of fields?
Thanks a lot!
Makarov Artem.
I would repurpose one of the many T-SQL scripts written to generate INSERT statements. They do exactly what you require. Namely:
Reverse engineer a given table to determine columns names and types
Generate a delimited string of values
The most complete example I've found is here
But just a simple Google search for "INSERT STATEMENT GENERATOR" will yield several examples that you can repurpose to fit your needs.
Best of luck!
SELECT
ORDINAL_POSITION
,COLUMN_NAME
,DATA_TYPE
,CHARACTER_MAXIMUM_LENGTH
,IS_NULLABLE
,COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MYTABLE'
ORDER BY
ORDINAL_POSITION ASC;
from http://weblogs.sqlteam.com/joew/archive/2008/04/27/60574.aspx
Perhaps you can do something with this.
select T2.X.query('for $i in *
return concat(data($i), ",")'
).value('.', 'nvarchar(max)') as C
from (
select *
from YourTable
for xml path('Row'),elements xsinil, type
) as T1(X)
cross apply T1.X.nodes('/Row') T2(X)
It will give you one row for each row in YourTable with each value in YourTable separated by a comma in the column C.
This builds an XML for the entire table and then parses that XML. Might get you into trouble if you have tables with a lot of rows.
BTW: I saw from a comment that you can "use only pure SQL". I really don't think this qualifies as "pure SQL" :).
I have a SQL query, like so:
SELECT DISTINCT ID, Name FROM Table
This brings up all the distinct IDs (1...13), but among those 13 IDs one name is repeated (it comes up twice). The column order of the query (ID, Name) has to be kept the same, as the app using this query is coded with that assumption.
Is there a way to ensure there are no duplicates?
Thanks
You can try:
select id, name from table group by id,name
But it seems like distinct should work. Perhaps there are trailing spaces at the end of your name fields?
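If trailing spaces do turn out to be the cause, trimming the name in the query (a sketch using the question's placeholder names; RTRIM is available in most engines) would fold those rows together:
SELECT DISTINCT ID, RTRIM(Name) AS Name FROM Table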
Instead of using DISTINCT, use GROUP BY
SELECT ID, Name FROM Table GROUP BY ID, Name