MySQL offers PROCEDURE ANALYSE. Given a query like
SELECT `field` FROM `table` PROCEDURE ANALYSE();
The result includes a suggested field type:
Optimal_fieldtype: TINYINT(2) UNSIGNED NOT NULL
Does PostgreSQL offer a similar functionality? I've looked at pg_statistic/pg_stats. Seems like I can use this information to infer what data type might be appropriate, but it would be handy if Postgres could recommend an actual data type itself.
There is no equivalent in Postgres for MySQL's PROCEDURE ANALYSE that I know of. (And I think I would know.)
For the given example, obviously about an integer-type column, I would:
SELECT min(field), max(field) FROM tbl;
And check against the range of possible numeric data types in the manual.
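As a rough sketch of what such a recommendation could look like (assuming an integer-like column; the thresholds below are simply the documented ranges of the standard Postgres integer types), something like this reports the observed range together with the smallest integer type that would hold it:
-- observed range plus the smallest standard integer type that can hold it
SELECT min(field) AS min_val,
       max(field) AS max_val,
       CASE
          WHEN min(field) >= -32768      AND max(field) <= 32767      THEN 'smallint'
          WHEN min(field) >= -2147483648 AND max(field) <= 2147483647 THEN 'integer'
          ELSE 'bigint'
       END AS suggested_type
FROM   tbl;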
Or for the distribution of distinct values:
SELECT field, count(*) AS ct FROM tbl GROUP BY 1 ORDER BY 2 DESC;
Actually, any DBA with a minimum of experience should know the essential characteristics of basic data types. The best type for each use case depends on a lot more than the current range and distribution of values. MySQL's PROCEDURE ANALYSE() would basically be assistance to newcomers - who would easily misinterpret results.
I am fairly new to DB2 (and SQL in general) and I am having trouble finding an efficient method to DECODE columns
Currently, the database has a number of tables, most of which store a significant number of their columns as numbers; these numbers correspond to a lookup table with the real values. We are talking about 9,500 different values (e.g. '502 = Yes' or '1413 = Graduate Student').
Ordinarily, I would just add a WHERE clause and show the rows where the values are equal, but since there are 20-30 columns that need to be decoded per table, I can't really do this (that I know of).
Is there a way to effectively just display the corresponding value from the other table?
Example:
SELECT TEST_ID, DECODE(TEST_STATUS, 5111, 'Approved', 5112, 'In Progress') TEST_STATUS
FROM TEST_TABLE
The above works fine, but I have to manually look up the numbers and review them to build the statements. As I mentioned, some tables have 20-30 columns that would need this, and some need DECODE statements with 12-15 conditions.
Is there anything that would allow me to do something simpler like:
SELECT TEST_ID, DECODE(TEST_STATUS = *TableWithCodeValues*) TEST_STATUS
FROM TEST_TABLE
EDIT: Also, to be more clear, I know I can do a ton of INNER JOINS, but I wasn't sure if there was a more efficient way than that.
From a logical point of view, I would consider splitting the lookup table into several domain/dimension tables. I'm not sure whether that is possible in your case, so I'll leave that part aside.
As mentioned in my comment, I would stay away from using DECODE as described in your post. I would start by writing it as ordinary joins:
SELECT a.TEST_STATUS
, b.TEST_STATUS_DESCRIPTION
, a.ANOTHER_STATUS
, c.ANOTHER_STATUS_DESCRIPTION
, ...
FROM TEST_TABLE as a
JOIN TEST_STATUS_TABLE as b
ON a.TEST_STATUS = b.TEST_STATUS
JOIN ANOTHER_STATUS_TABLE as c
ON a.ANOTHER_STATUS = c.ANOTHER_STATUS
JOIN ...
If things are too slow there are a couple of things you can try:
Create a statistical view that can help determine cardinalities from the joins (this may help the optimizer create a better plan):
https://www.ibm.com/support/knowledgecenter/sl/SSEPGG_9.7.0/com.ibm.db2.luw.admin.perf.doc/doc/c0021713.html
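As a rough sketch of what that could look like (DB2 LUW syntax; the view and schema names here are made up for illustration, and RUNSTATS is run from the command line processor): create a view over one of the join pairs, mark it for optimization, and collect distribution statistics on it:
-- Hypothetical statistical view over one of the join pairs
CREATE VIEW MYSCHEMA.SV_TEST_STATUS AS
    SELECT a.TEST_STATUS, b.TEST_STATUS_DESCRIPTION
    FROM TEST_TABLE a
    JOIN TEST_STATUS_TABLE b
        ON a.TEST_STATUS = b.TEST_STATUS;

-- Mark it as a statistical view and collect statistics on it
ALTER VIEW MYSCHEMA.SV_TEST_STATUS ENABLE QUERY OPTIMIZATION;
RUNSTATS ON TABLE MYSCHEMA.SV_TEST_STATUS WITH DISTRIBUTION;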
If your license permits, you can experiment with Materialized Query Tables (MQT). Note that there is a penalty for modifications of the base tables, so if you have more of an OLTP workload, this is probably not a good idea:
https://www.ibm.com/developerworks/data/library/techarticle/dm-0509melnyk/index.html
A third option, if your lookup table is fairly static, is to cache the lookup table in the application. Read TEST_TABLE from the database and look up the descriptions in the application. A further improvement would be to add triggers that invalidate the cache when the lookup table is modified.
If you don't want to do all these joins, you could create your own LOOKUP function.
create or replace function lookup(IN_ID INTEGER)
returns varchar(32)
deterministic reads sql data
begin atomic
declare OUT_TEXT varchar(32);--
set OUT_TEXT=(select text from test.lookup where id=IN_ID);--
return OUT_TEXT;--
end;
With a table TEST.LOOKUP like
create table test.lookup(id integer, text varchar(32))
containing some id/text pairs, this will return the text value corresponding to an id, or NULL if the id is not found.
With your mentioned 10k id/text pairs and an index on the ID field, this shouldn't be a performance issue, as that amount of data should easily be cached in the corresponding bufferpool.
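With such a function in place, the query from the question could then be written roughly as follows (column and table names are taken from the question; each decoded column simply gets its own lookup() call):
-- decode a status column through the user-defined lookup() function above
SELECT TEST_ID,
       lookup(TEST_STATUS) AS TEST_STATUS
FROM TEST_TABLE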
I'm sorry for my title, but it describes the situation exactly. I don't know PostgreSQL well, but I have to take over another person's application. I know SQL, so it's usually no problem to take over an application based on MSSQL, Oracle, MySQL, PostgreSQL, ... but this query uses a PostgreSQL-specific construct. Could anyone please explain this query to me?
select company_generate_course_template_fc
((select company_id from company order by 1 desc limit 1)::int)
The query calls the function company_generate_course_template_fc(), passing the result of the query select company_id from company order by 1 desc limit 1 as an argument. That result is cast to an integer using ::int (see the manual for details).
Apart from the ::int part (and the different ways of limiting the result to a single row), this wouldn't be much different in other databases.
The ANSI SQL equivalent of ::int would be cast(... as integer)
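So the whole call could equivalently be written with the standard cast syntax (the same query as above, just without the Postgres-specific operator):
select company_generate_course_template_fc(
    cast((select company_id from company order by 1 desc limit 1) as integer)
);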
If you are asking about ::int, it is a type cast of the value (company_id) to an integer.
I think the question is pretty self-explanatory. I've been learning about the Postgres TSRange column type, but I've not been able to find an example of how you can order by it. How do you order by a TSRange, or specifically by its lower bound?
There are range functions described in the documentation.
SELECT * FROM my_table ORDER BY lower(range_column);
I am trying to remodel a SQL database in Cassandra such that I can find Cassandra equivalents for my SQL queries. I use CQL 3 and Cassandra v1.2. I modeled the schema in Cassandra so that it supports the ORDER BY clauses, and denormalized the tables to support the join operations. However, I am at sea when it comes to DISTINCT, SUM() and GROUP BY equivalents:
SELECT a1, MAX(b1) FROM demo1 GROUP BY a1
SELECT DISTINCT (a2) FROM demo2 where b2='sea'
SELECT sum(a3), sum(b3) from demo3 where c3='water' and d3='ocean'
This has been a showstopper for my work for the past couple of days. Is there a way in Cassandra that I can model the schema to support queries of this kind? I can't think of any way in Cassandra. How are such queries implemented using Cassandra?
I read that a Hive layer over Cassandra can possibly make these queries work. I am just wondering if that is the only way such queries can be supported in Cassandra. Please advise on any other possible methods.
With Cassandra you solve these kinds of problems by doing more work when you insert your data -- which sounds like it would be slow, but Cassandra is designed for fast writes, and you're probably going to read the data many more times than you write it so it makes sense when you consider the whole system.
I can't tell you exactly how to create your tables to model your problem because it will depend a lot on the details. You need to figure a schema that lets you get the data without performing any on-the-fly aggregations. Think about how you would create views for the queries in an RDBMS, and then try to think how you would insert data directly into those views, not into the underlying tables. That's kind of how you model things in Cassandra.
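As a small sketch of that idea in CQL (the table and column names here are just made up from the question's demo2 example, not anything prescribed): to serve SELECT DISTINCT (a2) FROM demo2 WHERE b2='sea', you could maintain a table whose primary key makes the distinct values fall out of the storage layout:
-- One row per (b2, a2) pair; duplicate inserts collapse into the same row
CREATE TABLE demo2_by_b2 (
    b2 text,
    a2 text,
    PRIMARY KEY (b2, a2)
);

-- On every write to demo2, also write the pair here
INSERT INTO demo2_by_b2 (b2, a2) VALUES ('sea', 'foo');

-- The DISTINCT query becomes a plain partition read
SELECT a2 FROM demo2_by_b2 WHERE b2 = 'sea';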
Although this is an old question, it appears in Google search results pretty high. So I wanted to give an update.
Cassandra 2.2+ supports user-defined functions and user-defined aggregates. WARNING: this does not mean that you don't have to do data modeling anymore (as Theo pointed out); rather, it just allows you to process your data slightly upon retrieval.
SELECT DISTINCT (a2) FROM demo2 where b2='sea'
To implement DISTINCT, you should define a function and an aggregate. I'll call both the function and the aggregate uniq rather than distinct to emphasize the fact that they are user-defined.
CREATE OR REPLACE FUNCTION uniq(state set<text>, val text)
CALLED ON NULL INPUT RETURNS set<text> LANGUAGE java
AS 'state.add(val); return state;';
CREATE OR REPLACE AGGREGATE uniq(text)
SFUNC uniq STYPE set<text> INITCOND {};
Then you use it as follows:
SELECT uniq(a2) FROM demo2 where b2='sea';
SELECT sum(a3), sum(b3) from demo3 where c3='water' and d3='ocean'
SUM is provided out of the box and works as you would expect. See system.sum.
SELECT a1,MAX(b1) FROM demo1 group by a1
GROUP BY is a tricky one. Actually, there is no way to group result rows by some column. But what you can do is to create a map<text, int> and to group them manually in the map. Based on an example from Christopher Batey's blog, group-by and max:
CREATE OR REPLACE FUNCTION state_group_and_max(state map<text, int>, type text, amount int)
CALLED ON NULL INPUT
RETURNS map<text, int>
LANGUAGE java AS '
Integer val = (Integer) state.get(type);
if (val == null) val = amount; else val = Math.max(val, amount);
state.put(type, val);
return state;
' ;
CREATE OR REPLACE AGGREGATE state_group_and_max(text, int)
SFUNC state_group_and_max
STYPE map<text, int>
INITCOND {};
Then you use it as follows:
SELECT state_group_and_max(a1, b1) FROM demo1;
Notes
As mentioned above, you still have to invest some time in data modeling; don't overuse these features.
You have to set enable_user_defined_functions=true in your cassandra.yaml to enable the features
You can overload the functions to support grouping by columns of different types.
References:
Great UDF and UDA examples by Christopher Batey and few more
Datastax docs on UDF and UDA
User Defined Functions in Cassandra 3.0 (Planet Cassandra Blog)
Cassandra 3.10 now supports GROUP BY on partition key and clustering key columns. You can refer to this link for more detail.
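For example (assuming a1 is demo1's partition key; the grouped column must be part of the primary key for this to be allowed):
-- valid from Cassandra 3.10 on, when a1 is the partition key of demo1
SELECT a1, max(b1) FROM demo1 GROUP BY a1;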
Cassandra doesn't support operations like this. You can use something like Hive on top or there's a (non-free) product from Acunu that may do what you need.
The other solution is to do the work yourself. For example, you can sum things by reading in all the data from certain rows and summing. Or maintain a Cassandra counter to increment on the fly.
I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).
-- Sybase optimizes and uses the multi-column index... fast!
SELECT ID,status,dateTime
FROM myTable
WHERE status in ('NEW','SENT')
ORDER BY ID
-- Sybase does not use the index and does a very slow table scan
SELECT ID,status,dateTime
FROM myTable
WHERE status in (select status from allowableStatusValues)
ORDER BY ID
The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc.
I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when choosing from another table. Someone please give me a clue, and thank you in advance.
1. The issue here is poor coding. In your release, poor code and poor table design are the main reasons (98%) the optimiser makes incorrect decisions (the two go hand in hand; I have not figured out the proportion of each). Both:
WHERE status IN ('NEW','SENT')
and
WHERE status IN (SELECT status FROM allowableStatusValues)
are substandard, because in both cases they cause ASE to create a worktable for the contents between the brackets, which can easily be avoided (and all consequential issues avoided with it). There is no possibility of statistics on a worktable, and since the statistics on either t.status or s.status are missing (AdamH is correct on that point), the optimiser correctly chooses a table scan.
Subqueries have their place, but never as a substitute for a pure join (the tables are related). The corrections are:
WHERE status = "NEW" OR status = "SENT"
and
FROM myTable t,
allowableStatusValues s
WHERE t.status = s.status
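Putting those corrections together against the simplified query from the question, the rewrite would look roughly like:
SELECT t.ID, t.status, t.dateTime
FROM   myTable t,
       allowableStatusValues s
WHERE  t.status = s.status
ORDER  BY t.ID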
2. The statement
"Now you don't have to add an index to get statistics on a column, but it's probably the best way."
is incorrect. Never create Indices that you will not use. If you want statistics updated on a column, simply
UPDATE STATISTICS myTable (status)
3. It is important to ensure that you have current statistics on (a) all indexed columns and (b) all join columns.
4. Yes, there is no substitute for SHOWPLAN on every code segment that is intended for release, doubly so for any code with questionable performance. You can also SET NOEXEC ON to avoid execution, e.g. for large result sets.
An index hint will work around it, but is probably not the solution.
Firstly, I'd like to know if there is an index on allowableStatusValues.status. If there is, then Sybase will have stats on it and will have a good idea of the number of values in there.
If not, then the optimiser probably won't have a good idea how many different values status may take. It then has to assume that you're going to be extracting almost all of the rows from myTable, and the best way of doing that is a table scan (if there is no covering index).
Now you don't have to add an index to get statistics on a column, but it's probably the best way.
If you do have an index on allowableStatusValues.status, then I'd wonder how good your stats are. Get yourself a copy of sp__optdiag. You probably also need to tune the values of "histogram tuning factor" and "number of histogram steps"; increasing these slightly from the defaults will give you more detailed statistics, which always helps the optimiser.
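For illustration (the parameter names are the ones mentioned above; the values are only placeholders to be tuned for your system, and availability depends on your ASE version), those settings are changed with sp_configure, followed by refreshed statistics:
-- placeholder values; tune for your own system and ASE version
sp_configure "number of histogram steps", 100
go
sp_configure "histogram tuning factor", 20
go
update statistics myTable (status)
go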
Does it still do a table scan if you replace the subquery with a join:
SELECT m.ID,m.status,m.dateTime
FROM myTable m
JOIN allowableStatusValues a on m.status = a.status
ORDER BY ID
Rather than relying on experimental observations of how long a query takes to run, I would highly recommend getting Sybase to show you the execution plans for each query, for example:
SET showplan ON
GO
-- query/procedure call goes here
SELECT id, status, datetime
FROM myTable
WHERE status IN('NEW','SENT')
ORDER BY id
GO
SET showplan OFF
GO
With SET showplan ON, Sybase generates execution plans for every statement it executes. These can be invaluable in helping to identify where queries are not making use of appropriate indexes. For stored procedures in Sybase, the execution plan for the entire procedure is generated when the stored procedure is first executed after being compiled.
If you post the plans for each of your queries we might be able to shed more light on the problem.
Amazingly, using an index hint resolves the issue; see the (index myIndexName) line in the rewritten/simplified code below:
-- using INDEX HINT
SELECT ID,status,dateTime
FROM myTable (index myIndexName)
WHERE status in (select status from allowableStatusValues)
ORDER BY ID
Weird that I have to use this technique to avoid a table scan, but there ya go.
Garrett, by showing only the simplified code, you have likely stripped out exactly the information that would illuminate the source of the problem.
My first guess would be a type mismatch between allowableStatusValues.status and myTable.status. However, that is not the only possibility. As ninesided stated, the complete query plans (using the showplan and fmtonly flags), as well as the actual table definitions and stored procedure source, are much more likely to produce a useful answer.