I have a table named conditions in a schema named test. I created a hypertable and inserted hundreds of rows.
When I run SELECT show_chunks(), it works and displays the chunks, but I cannot pass the table name as a parameter as suggested in the manual. This does not work:
SELECT show_chunks("test"."conditions");
How can I fix this?
PS: I also want to query a chunk itself by its name. How can I do this?
show_chunks expects a regclass, which means that, depending on your current search_path, you need to schema-qualify the table name.
The following should work:
SELECT public.show_chunks('test.conditions');
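To illustrate how the resolution depends on search_path, here is a minimal sketch (assuming the hypertable test.conditions from the question):
SHOW search_path;                    -- e.g. "$user", public
SELECT 'test.conditions'::regclass;  -- resolves, because the name is schema-qualified
SET search_path TO test, public;
SELECT show_chunks('conditions');    -- now resolves without qualification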
The double quotes are only necessary if your table is a delimited identifier, for example if your table name contains a space. In that case you need the double quotes around the identifier, but you still need to wrap the whole thing in single quotes:
SELECT public.show_chunks('test."equipment conditions"');
SELECT public.show_chunks('"test schema"."equipment conditions"');
For more information about identifier quoting:
https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
Edit: Addressing the PS:
I want to query a chunk itself by its name. How can I do this?
feike=# SELECT public.show_chunks('test.conditions');
show_chunks
--------------------------------------------
_timescaledb_internal._hyper_28_1176_chunk
_timescaledb_internal._hyper_28_1177_chunk
[...]
SELECT * FROM _timescaledb_internal._hyper_28_1176_chunk;
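If instead you want to find the chunks covering a particular time range rather than addressing them by name, show_chunks also accepts older_than/newer_than arguments (a sketch; the exact signature depends on your TimescaleDB version):
SELECT public.show_chunks('test.conditions', older_than => INTERVAL '3 days');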
I'm using PostgreSQL with a Go driver. Sometimes I need to query columns that may not exist, just to check whether something exists in the DB. Before querying, I can't tell whether a given column exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column might exist, and then I could get some results.
Is it possible to handle such cases to return what is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be simple like the previous example and can be much more complex like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, then remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name from information_schema.columns where table_name = 'table_name';
and then build the query based on the result.
Why don't you first get the list of columns that exist in the table? Like this:
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which can be obtained by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting, so you get an error as soon as a non-existing column is referenced.
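A minimal sketch of that construction in PL/pgSQL (the table name items and the candidate predicates are made up; in practice you would build the string the same way on the Go side):
DO $$
DECLARE
    clause text := '';
    cnt    bigint;
BEGIN
    -- keep only the predicates whose column actually exists
    IF EXISTS (SELECT 1 FROM information_schema.columns
               WHERE table_name = 'items' AND column_name = 'size') THEN
        clause := 'size = 10';
    END IF;
    IF EXISTS (SELECT 1 FROM information_schema.columns
               WHERE table_name = 'items' AND column_name = 'length') THEN
        clause := concat_ws(' OR ', nullif(clause, ''), 'length = 10');
    END IF;
    IF clause = '' THEN
        clause := 'false';   -- no usable predicate at all
    END IF;
    EXECUTE 'SELECT count(*) FROM items WHERE ' || clause INTO cnt;
    RAISE NOTICE 'matching rows: %', cnt;
END $$;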
Query:
Select x classifier from tabx x
returns rows with one column named classifier containing the list of values from each row.
Does anybody know where such features are documented? Is this the only usage of the keyword? I tried to google it, but I only found lists of PostgreSQL reserved keywords without any explanation.
Example
It works like string_agg with a ',' delimiter, but for a row.
It's called a column alias.
For columns it's documented at https://www.postgresql.org/docs/current/queries-select-lists.html and for tables at https://www.postgresql.org/docs/current/queries-table-expressions.html
The name of the alias is irrelevant and bears no special meaning (including the word "classifier"). It just has to be a valid SQL identifier. Using an alias won't change anything in the resulting data.
If you use the (optional) AS keyword, it might be a bit more obvious:
select some_column as something_else
from some_table as another_name;
The reference x is the table alias specified in the FROM clause and refers to the complete row/record. It's not the column alias ("classifier") that does this; it's the reference to the table alias.
This behaviour is documented in the manual under "Using Composite Types in Queries": https://www.postgresql.org/docs/current/rowtypes.html
Edit after a new query was shown.
select a classifier, a.classifier
from t_attribute a;
The part a classifier still gives an alias to the record column (as explained above). a.classifier simply accesses a column from the table t_attribute.
It could also have been written as:
select a as the_complete_row, a.classifier as classifier
from t_attribute a;
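A self-contained demonstration (the table and values are made up):
CREATE TEMP TABLE t_attribute (id int, classifier text);
INSERT INTO t_attribute VALUES (1, 'red'), (2, 'blue');

SELECT a AS whole_row,   -- the complete record, e.g. (1,red)
       a.classifier      -- just the one column, e.g. red
FROM t_attribute AS a;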
We are seeing an issue with table values that are populated from DB2 (source) into Postgres (target).
I have included all the job details for each component here.
Once the data has been populated using the above approach, we run the queries below in the Postgres DB:
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where cust_gssn_cd='XY03666699' ;
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where cust_cntry_cd='847' ;
No records are returned. However, when we run the same queries with trim as below, they work:
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where trim(cust_gssn_cd)='XY03666699' ;
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where trim(cust_cntry_cd)='847' ;
Below are the ways we have tried to overcome this, but with no luck:
Used tmap between source and target component.
Used trim in source component under Advanced setting.
Changed the datatype of cust_cntry_cd in the Postgres DB from char(5) to character varying, which allows values without any length restriction.
Please suggest what is missing, as we have this issue in almost all the tables that have character/varchar columns.
We are using TOS (Talend Open Studio).
The data type is probably character(5) in DB2.
That means that the trailing spaces are part of the column and will be migrated. You have to compare with the value padded to five characters:
cust_cntry_cd = '847  '
or cast the right argument to character(5):
cust_cntry_cd = CAST ('847' AS character(5))
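A quick way to reproduce and verify this (the table name is made up; the column is varchar, as after your datatype change):
CREATE TEMP TABLE demo (cust_cntry_cd varchar(5));
INSERT INTO demo VALUES ('847  ');   -- padded value, as migrated from a DB2 char(5)

SELECT count(*) FROM demo WHERE cust_cntry_cd = '847';                        -- 0
SELECT count(*) FROM demo WHERE cust_cntry_cd = '847  ';                      -- 1
SELECT count(*) FROM demo WHERE cust_cntry_cd = CAST('847' AS character(5));  -- 1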
Maybe you could delete all trailing spaces in the Advanced settings of the tDB2Input component.
Trying to be lazy when looking at an example:
SELECT realestate.address, realestate.parcel, s.sale_year, s.sale_amount
FROM realestate INNER JOIN
dblink('dbname=somedb port=5432 host=someserver
user=someuser password=somepwd',
'SELECT parcel_id, sale_year,
sale_amount FROM parcel_sales')
AS s(parcel_id char(10),sale_year int, sale_amount int)
Is there a way of getting the AS section filled in from the table?
I'm copying data from tables of the same name and structure on different servers.
If I can get the structure copied from the existing table, it will save me a lot of time.
Thanks
Bruce
The answer is: No. See the doc:
Since dblink can be used with any query, it is declared to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect.
http://www.postgresql.org/docs/9.1/static/contrib-dblink-function.html
Edit: by the way, for a table or a view, you can get the field names and types in a first query:
select column_name, data_type
from information_schema.columns
where table_name = 'your_table_or_view';
You could then use it to fill the fields declaration.
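For example, a sketch that builds the field declaration for parcel_sales directly from the catalog (run it against the remote database, or any database with the same structure; adapt the formatting as needed):
select string_agg(column_name || ' ' || data_type ||
                  coalesce('(' || character_maximum_length || ')', ''),
                  ', ' order by ordinal_position) as field_list
from information_schema.columns
where table_name = 'parcel_sales';
-- e.g.: parcel_id character(10), sale_year integer, sale_amount integer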
Alexis
I have a text column in a table. We store XML in this column. Now I want to search for tags and values.
Example data:
<bank>
<name>Citi Bank</name>
.....
.....
</bank>
I would like to run the following query:
select * from xxxx where to_tsvector('english', xml_column) @@ to_tsquery('<name>Citi Bank</name>')
This works, but it also matches tags like name1, or no tag at all.
How do I have to set up my search so that I get an exact match for the tag and value?
You could use the xpath function like this:
select *
from xxx
where (xpath('/bank/name/text()', xml_column::xml))[1]::text = 'Citi Bank';
BUT it won't use the full-text search index. You could use a subquery to find probable matches and avoid full scans, plus the xpath expression to get correct answers, or create a functional index if the queries are always going to be the same.
You might want to reconsider storing XML in the database; instead, you could insert the data into related tables, since XML is a poor replacement for a relational store. Even if you keep XML in the database, use the xml type, not text, and create an index like this (yes, basically you'd need an index per xpath expression):
CREATE INDEX my_funcidx ON my_table USING GIN ( CAST(xpath('/bank/name/text()', xmlfield) AS TEXT[]) );
then, query it like this:
SELECT * FROM my_table WHERE CAST(xpath('/bank/name/text()', xmlfield) AS TEXT[]) @> '{Citi Bank}'::TEXT[];
and this will use the index, as EXPLAIN will indicate.
The important part is the CASTing to TEXT[], as XML[], which the xpath function returns, isn't indexable by default.
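A self-contained sketch to verify this (the table and the single row are made up):
CREATE TABLE my_table (id serial PRIMARY KEY, xmlfield xml);
INSERT INTO my_table (xmlfield) VALUES ('<bank><name>Citi Bank</name></bank>');

CREATE INDEX my_funcidx ON my_table
    USING GIN ( CAST(xpath('/bank/name/text()', xmlfield) AS TEXT[]) );

EXPLAIN SELECT * FROM my_table
WHERE CAST(xpath('/bank/name/text()', xmlfield) AS TEXT[]) @> '{Citi Bank}'::TEXT[];
-- with enough rows the plan shows a Bitmap Index Scan on my_funcidx;
-- on a near-empty table the planner may still choose a sequential scan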