I am working with SQL Server 2014. I have dynamically generated SQL which looks like this (simplified for brevity):
with CTE as
(
select field as [field], field as [Field]
from myTable
)
select [field], [Field]
from CTE
The above however results in this error:
The column 'Field' was specified multiple times for 'CTE'.
I would expect/want this to work because the two columns are in fact unique, accounting for case. Is there any way we can ask SQL Server (via some SET option, maybe) to treat them as unique?
My dynamically generated SQL is very complex and it's very difficult to identify such 'duplicates' and 'combine' them.
From a theoretical perspective, you could change the collation of your database to a case-sensitive option. Case-sensitive database/server collations also take the case of aliases into account; table column collations, and collating a column in a SELECT, do not.
Changing database/server collation will change a whole lot else though. It would be a very extreme change to fix an alias issue and I doubt it is a viable solution for you.
That said, if your dynamic SQL is already able to see that the alias field exists and switch to the capitalized alias Field for the next instance of the same column, I would think you could simply adjust that logic to generate field1, field2, etc. instead. You can always re-alias those to whatever you want in your outer/final select; they only need to be unique within the CTE query.
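A minimal sketch of that approach, using the simplified query from the question (the numbered aliases are made up; only the names inside the CTE need to be unique, and the outer select can hand back the original names):
with CTE as
(
    select field as [field1], field as [field2]
    from myTable
)
select [field1] as [field], [field2] as [Field]
from CTE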
I am trying to build a storage manager where users can store their lab samples/data. Unfortunately, this means that the tables will end up being quite dynamic, as each sample might have different data associated with it. I will still require users to define a schema so I can display the data properly; however, I think this schema will have to be represented as a JSON field in the underlying database.
I was wondering: in Prisma, is there a way to fuzzy-search through collections? Could I type something like help and then return all rows that match this expression ANYWHERE in their columns (including the JSON fields)? Could I do something like this at all with PostgreSQL? Or with MongoDB?
Thank you.
You can easily do that with jsonb in PostgreSQL.
If you have a table defined like
CREATE TABLE userdata (
id bigint PRIMARY KEY,
important_col1 text,
important_col2 integer,
other_cols jsonb
);
You can create an index like this
CREATE INDEX ON userdata USING gin (other_cols);
and search efficiently with
SELECT id FROM userdata WHERE other_cols @> '{"attribute": "value"}';
Here, @> is the jsonb containment operator in PostgreSQL; it is one of the operators the GIN index created above can support.
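For instance, with a hypothetical row like the following, the containment query matches it regardless of which other keys happen to be stored alongside:
INSERT INTO userdata (id, important_col1, important_col2, other_cols)
VALUES (1, 'sample A', 42, '{"attribute": "value", "note": "help wanted"}');

SELECT id FROM userdata WHERE other_cols @> '{"attribute": "value"}';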
Yes, in PostgreSQL you surely can do this. It's quite straightforward. Here is an example.
Let your table be called the_table, aliased as tht. Cast an entire table row to text with tht::text and use the case-insensitive regular expression match operator ~* to find rows that contain help anywhere in that text. You can use more elaborate and powerful regular expressions for searching too.
Please note that since the ~* operator will defeat any index, this query will result in a sequential scan.
select * -- or whatever list of expressions you need
from the_table as tht
where tht::text ~* 'help';
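For instance, with the userdata table from the answer above, a row whose other_cols jsonb contains "help" anywhere will also match, because the jsonb value is part of the row's text representation:
select *
from userdata as u
where u::text ~* 'help';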
I have a query like this.
SELECT companies.id, companies.code, MAX(disclosures.filed_at) disclosure_filed_at
FROM "companies" INNER JOIN "disclosures" ON "disclosures"."company_id" = "companies"."id"
GROUP BY companies.id
This query works in PostgreSQL 9.5.2, but it fails in version 8.4.20 with an error.
PG::GroupingError: ERROR: column "companies.code" must appear in the GROUP BY clause or be used in an aggregate function
If I add companies.code to the GROUP BY clause, then it works. But when I select companies.*, I can't write GROUP BY companies.*.
Do I have to list every column explicitly in the GROUP BY clause in version 8.4 in order to select with *?
The Postgres behavior is allowed by the ANSI standard. The reason is that id not only identifies each row in companies uniquely, it is declared to do so (using a primary key or unique constraint, although I'm not sure if Postgres recognizes this for a unique constraint).
Hence, the database knows that it can safely refer to any other column from the same row. This is called "functional dependency".
This feature has now been added to MySQL as well (see the MySQL documentation on GROUP BY handling). You might find that documentation easier to follow than the Postgres description:
When GROUP BY is present, or any aggregate functions are present, it is not valid for the SELECT list expressions to refer to ungrouped columns except within aggregate functions or when the ungrouped column is functionally dependent on the grouped columns, since there would otherwise be more than one possible value to return for an ungrouped column. A functional dependency exists if the grouped columns (or a subset thereof) are the primary key of the table containing the ungrouped column.
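On 8.4, which lacks this functional-dependency recognition, a sketch of the workaround is simply to list every non-aggregated column from the SELECT list in the GROUP BY clause:
SELECT companies.id, companies.code, MAX(disclosures.filed_at) AS disclosure_filed_at
FROM companies
INNER JOIN disclosures ON disclosures.company_id = companies.id
GROUP BY companies.id, companies.code;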
I am a novice at Firebird SQL.
Can anyone advise how I specify the order of field names in Firebird? If I change the name of a field's header, it places that field at the end.
In another situation the field names appear in a different order in Excel than they do in the preview. (I can't see any particular pattern, not even alphabetical, or alphabetical per table.)
Any hints are appreciated.
I understand your question as wanting to change the field order in a table permanently.
For that, Firebird FAQ gives this DDL-statement:
ALTER TABLE table_name ALTER field_name POSITION new_position;
Positions are numbered from one. If you wish to exchange two fields, make sure you run the statement for both of them.
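For example, a sketch that swaps two hypothetical columns col_a and col_b currently at positions 2 and 3, following the advice above to run the statement for both of them:
ALTER TABLE my_table ALTER col_a POSITION 3;
ALTER TABLE my_table ALTER col_b POSITION 2;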
If you only want to change the field order temporarily, e.g. in a query, you can of course just list the fields in the desired order in the SELECT statement, but that is one of the first things an SQL novice learns.
Via SQL:
ALTER TABLE TABLE_NAME ALTER COLUMN FIELD_NAME POSITION x;
http://firebirdsql.org/refdocs/langrefupd20-alter-table.html#langrefupd20-at-position
As far as I know, there is no option to automatically arrange the fields, but you can use your preferred tools such as FlameRobin or IBExpert to do this manually; they have this functionality built in (FlameRobin, for example, exposes it in its GUI).
I have some T-SQL (SQL Server 2008) that I inherited and am trying to find out why some of the queries are running really slow. In the Actual Execution Plan I have three clustered index scans which are costing me 19%, 21% and 26%, so this seems to be the source of my problem.
The contents of the fields are usually numeric (but some job numbers have an alpha prefix)
The database design (vendor supplied) is pretty poor. The max length of a job number in their application is 12 chars, but in the tables that are joined it is defined as varchar(50) in some places and varchar(15) in others. My parameter is a varchar(12), but I get the same thing if I change it to a varchar(50).
The node contains this:
Predicate: [Live_Costing].[dbo].[TSTrans].[JobNo] as [sts1].[JobNo]=CONVERT_IMPLICIT(varchar(50),[#JobNo],0)
sts1 is a derived table, but the underlying column it pulls JobNo from is a varchar(50).
I don't understand why it's doing an implicit conversion between 2 varchars. Is it just because they are different lengths?
I'm fairly new to execution plans.
Is there an easy way to figure out which node in the exec plan relates to which part of the query?
Is the predicate the join clause?
Regards
Mark
Some variables can have a collation, too.
Regardless, you need to verify your collations, which can be specified at the server, database, table, and column level.
First, check the collation of tempdb against the vendor supplied database. They should match; if they don't, comparisons between temp objects and the vendor tables will tend to produce implicit conversions.
Assuming you cannot modify the vendor supplied code base, one or more of the following should help you:
1) Predefine your temp tables and specify the same collation for the key field as in the database in use, rather than relying on tempdb's default (see the sketch after this list).
2) Provide explicit collations when doing string comparisons.
3) Specify the collation for key values if using SELECT INTO with a temp table.
4) Make sure the collations on your tables and columns match your database collation (VERY important if you imported only specific tables from a vendor into an existing database).
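A minimal sketch of points 1 and 2, assuming a hypothetical #JobNos temp table holding the job numbers to match; COLLATE DATABASE_DEFAULT simply forces the current database's collation onto the column and the comparison:
CREATE TABLE #JobNos
(
    JobNo varchar(50) COLLATE DATABASE_DEFAULT NOT NULL
);

SELECT t.JobNo
FROM dbo.TSTrans AS t
INNER JOIN #JobNos AS j
    ON t.JobNo = j.JobNo COLLATE DATABASE_DEFAULT;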
If you can change the vendor supplied code base, I would suggest reviewing the cost of making all of your char keys the same length and NOT varchar. Varchar carries a small per-value storage overhead (2 bytes per variable-length column in SQL Server). The caveat is that a fixed-length character field declared NOT NULL will be right-padded with spaces (unavoidable).
Ideally, you would have int keys, and only use varchar fields for user interaction/lookup:
create table Products
(
    ProductID int not null identity(1,1) primary key clustered,
    ProductNumber varchar(50) not null
);

alter table Products add constraint uckProducts_ProductNumber unique(ProductNumber);
Then do all joins on ProductID rather than ProductNumber, and just filter on ProductNumber; that would be perfectly fine.
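For example, a sketch with a hypothetical Orders table, joining on the surrogate key and filtering on the natural one (the product number 'AB123' is made up):
create table Orders
(
    OrderID int not null identity(1,1) primary key clustered,
    ProductID int not null references Products (ProductID),
    Qty int not null
);

select o.OrderID, o.Qty
from Orders as o
inner join Products as p on p.ProductID = o.ProductID
where p.ProductNumber = 'AB123';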
I have been migrating a MySQL db to Pg (9.1), and have been emulating MySQL ENUM data types by creating a new data type in Pg and then using that as the column definition. My question: could I, and would it be better to, use a CHECK CONSTRAINT instead? The MySQL ENUM types are implemented to enforce specific value entries in the rows. Could that be done with a CHECK CONSTRAINT? And, if yes, would it be better (or worse)?
Based on the comments and answers here, and some rudimentary research, I have the following summary to offer for comments from the Postgres-erati. Will really appreciate your input.
There are three ways to restrict entries in a Postgres database table column. Consider a table to store "colors" where you want only 'red', 'green', or 'blue' to be valid entries.
Enumerated data type
CREATE TYPE valid_colors AS ENUM ('red', 'green', 'blue');
CREATE TABLE t (
color VALID_COLORS
);
Advantages are that the type can be defined once and then reused in as many tables as needed. A standard query can list all the values for an ENUM type, and can be used to make application form widgets.
SELECT n.nspname AS enum_schema,
t.typname AS enum_name,
e.enumlabel AS enum_value
FROM pg_type t JOIN
pg_enum e ON t.oid = e.enumtypid JOIN
pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE t.typname = 'valid_colors'
enum_schema | enum_name | enum_value
-------------+---------------+------------
public | valid_colors | red
public | valid_colors | green
public | valid_colors | blue
Disadvantages are that the ENUM type is stored in the system catalogs, so a query like the one above is required to view its definition; these values are not apparent when viewing the table definition. And, since an ENUM type is a data type separate from the built-in NUMERIC and TEXT data types, the regular numeric and string operators and functions don't work on it. So one can't do a query like
SELECT * FROM t WHERE color LIKE 'bl%';
Check constraints
CREATE TABLE t (
colors TEXT CHECK (colors IN ('red', 'green', 'blue'))
);
Two advantages are that, one, "what you see is what you get," that is, the valid values for the column are recorded right in the table definition, and two, all native string or numeric operators work.
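A quick illustration: the LIKE query that fails on the enum column works directly here, because colors is plain TEXT:
SELECT * FROM t WHERE colors LIKE 'bl%';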
Foreign keys
CREATE TABLE valid_colors (
id SERIAL PRIMARY KEY NOT NULL,
color TEXT
);
INSERT INTO valid_colors (color) VALUES
('red'),
('green'),
('blue');
CREATE TABLE t (
color_id INTEGER REFERENCES valid_colors (id)
);
Essentially the same as creating an ENUM type, except that the native numeric or string operators work, and one doesn't have to query the system catalogs to discover the valid values. However, a join is required to link color_id to the desired text value.
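A sketch of that lookup join, using the tables defined above:
SELECT t.color_id, vc.color
FROM t
JOIN valid_colors AS vc ON vc.id = t.color_id;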
As other answers point out, check constraints have flexibility issues, but setting a foreign key on an integer id requires joining during lookups. Why not just use the allowed values as natural keys in the reference table?
To adapt the schema from punkish's answer:
CREATE TABLE valid_colors (
color TEXT PRIMARY KEY
);
INSERT INTO valid_colors (color) VALUES
('red'),
('green'),
('blue');
CREATE TABLE t (
color TEXT REFERENCES valid_colors (color) ON UPDATE CASCADE
);
Values are stored inline as in the check constraint case, so there are no joins, but new valid values can be added easily, and existing values can be updated in place via ON UPDATE CASCADE (e.g. if it's decided "red" should actually be "Red", update valid_colors accordingly and the change propagates automatically).
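A sketch of that rename, assuming the schema above; the foreign key's ON UPDATE CASCADE rewrites matching rows in t automatically:
UPDATE valid_colors SET color = 'Red' WHERE color = 'red';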
One of the big disadvantages of Foreign keys vs Check constraints is that any reporting or UI displays will have to perform a join to resolve the id to the text.
In a small system this is not a big deal but if you are working on a HR or similar system with very many small lookup tables then this can be a very big deal with lots of joins taking place just to get the text.
My recommendation would be that if the values are few and rarely changing, then use a constraint on a text field otherwise use a lookup table against an integer id field.
PostgreSQL has enum types, and they work as they should. I don't know if an enum is "better" than a constraint; they both just work.
From my point of view, given the same set of values:
An enum is a better solution if you will use it on multiple columns.
If you want to limit the values of only one column in your application, a check constraint is a better solution.
Of course, there is a whole lot of other parameters which could creep into your decision process (typically, the fact that built-in operators are not available), but I think these two are the most prevalent ones.
I'm hoping somebody will chime in with a good answer from the PostgreSQL database side as to why one might be preferable to the other.
From a software developer point of view, I have a slight preference for using check constraints, since PostgreSQL enums can require a cast in your SQL to do an update/insert, such as:
INSERT INTO table1 (colA, colB) VALUES ('foo', 'bar'::myenum);
where "myenum" is the enum type you specified in PostgreSQL.
This certainly makes the SQL non-portable (which may not be a big deal for most people), but also is just yet another thing you have to deal with while developing applications, so I prefer having VARCHARs (or other typical primitives) with check constraints.
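By contrast, with a VARCHAR column guarded by a check constraint, the same statement needs no cast and stays portable (assuming 'bar' is one of the allowed values):
INSERT INTO table1 (colA, colB) VALUES ('foo', 'bar');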
As a side note, I've noticed that MySQL enums do not require this type of cast, so this is something particular to PostgreSQL in my experience.