How to check whether a column exists in a table in InterSystems Cache SQL? - intersystems-cache

I've noticed that while I can use %dictionary.compiledclass to get a table with schema names and table names, which allows querying for their existence, I cannot do the same for columns. I have yet to find the command that allows verifying whether a column exists, or retrieving a number of column names using LIKE "prefix%".
Is there even such a thing? Or an alternative?

You can use the %Dictionary.CompiledProperty table and its SqlFieldName column.
For example, to find the tables that have a column 'ColumnName', you can use this query:
select parent->SqlTableName
from %dictionary.compiledproperty
where SqlFieldName='ColumnName'
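And for the LIKE "prefix%" case from the question, a sketch along the same lines (assuming the arrow syntax is also valid in the WHERE clause; 'TableName' and 'prefix' are placeholders):
select SqlFieldName
from %dictionary.compiledproperty
where parent->SqlTableName='TableName'
and SqlFieldName like 'prefix%'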

Execute this query:
select * from %dictionary.compiledproperty where parent='TableName' and SqlFieldName='ColumnName'
Check the row count: if it is 0, the column does not exist.
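For instance, a count-based version of the same check (the table and column names are placeholders):
select count(*) from %dictionary.compiledproperty
where parent='TableName' and SqlFieldName='ColumnName'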

Related

Rename and add column in same query in PostgreSQL

Let's say I have a table Student with just 2 columns: id bigint and name varchar(50).
I'm trying to rename a column and add a column in the same single statement in PostgreSQL version 11.16, like below:
ALTER TABLE student
RENAME COLUMN name TO fullname,
ADD COLUMN roll_no varchar(30);
But I got this error:
Syntax error at or near "ADD"
Is it possible? If not, why?
Actually, that is not possible, as I found from a StackOverflow answer. RENAME can only appear on its own: an ALTER TABLE statement can contain a single RENAME and nothing else, so combining it with other actions is not allowed, and multiple RENAME commands in one statement are not possible either. Here is the postgres manual.
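A minimal workaround sketch, using the table and column names from the question: run the two changes as separate statements, optionally inside one transaction so they apply together:
BEGIN;
ALTER TABLE student RENAME COLUMN name TO fullname;
ALTER TABLE student ADD COLUMN roll_no varchar(30);
COMMIT;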

pg_largeobject huge, but no tables have OID column type

PostgreSQL noob, PG 9.4.x, no access to application code, developers, or anyone knowledgeable about it.
User database CT has a 427 GB pg_largeobject (PGLOB) table; the next largest table is roughly 500 MB.
Per this post (Does PostgreSQL use PGLOB internally?), a very reputable member said PostgreSQL does not use PGLOB internally.
I have reviewed the schema of all user tables in the database, and none of them are of type OID (or lo), which is the value used by PGLOB rows to tie the collection of blob chunks back to a referencing table row. I think this means I cannot use vacuumlo to delete orphaned PGLOB rows, because that utility searches user tables for those two data types.
I HAVE identified a table with an integer field type whose int values match LOID values in PGLOB. This seems to indicate that the developers somehow got their blobs into PGLOB using the integer value stored in a user table row.
QUESTION: Is that last statement possible?
A) If it is not, what could be adding all this data to PGLOB table?
B) If it is possible, is there a way I can programmatically search ALL tables for integer values that might represent rows in PGLOB?
NEED: I DESPERATELY need to reduce the size of the PGLOB table, as we are running out of disk space. And no, we cannot add space to existing disk per admin. So I somehow need to determine if there are LOID values in PGLOB that do NOT exist in ANY user tables as integer-type fields and then run lo_unlink to remove the rows. This could get me more usable 8K pages in the table.
BTW, I have also run pg_freespace on PGLOB, and it identified that most of the pages in PGLOB did not contain enough space in which to insert another blob chunk.
THANKS FOR THE ASSISTANCE!
Not really an answer, but thinking out loud:
As you found, all large objects are stored in a single table. The oid field you refer to is something you add to a table so you can have a pointer to a particular LO oid in pg_largeobject. That being said, there is nothing compelling you to store that info in a table; you can just create LOs in pg_largeobject. From the looks of it, and this is just a guess, the developers stored the oids as integers with the intent of doing integer::oid to get a particular LO back as needed. I would look at what other information is stored in that table to see if it helps determine what the LOs are for.
Also, you might join the integer::oid values to the loid column of pg_catalog.pg_largeobject to see whether that table accounts for all of them.
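A sketch of that suggested check, with a hypothetical user table and integer column standing in for whatever the schema actually uses:
-- loids with no matching integer value in the candidate user table (names are hypothetical)
SELECT count(DISTINCT l.loid) AS unmatched_loids
FROM pg_largeobject l
LEFT JOIN some_user_table t ON t.blob_ref::oid = l.loid
WHERE t.blob_ref IS NULL;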
I was able to do a detailed analysis of all user tables in the database, find all columns that contained numeric data with no decimals, and then run a query against pg_largeobject with a NOT EXISTS clause for every such table, matching pg_largeobject.loid against the appropriate field(s) in the user tables.
I found 25794 LOIDs that could be DELETEd from the PGLOB table, totaling 3.4M rows.
select distinct loid
into OrphanedBLOBs
from pg_largeobject l
where NOT exists (select * from tbl1 cn where cn.noteid = l.loid)
and not exists (select * from tbl1 d where d.document = l.loid)
and not exists (select * from tbl1 d where d.reportid = l.loid)
I used that table to execute lo_unlink(loid) for each of the LOIDs.
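The final cleanup step could look something like this sketch (assuming the OrphanedBLOBs table built above; the VACUUM only marks the freed 8K pages as reusable, it does not shrink the file):
-- unlink each orphaned large object, then let VACUUM make the freed pages available again
SELECT lo_unlink(loid) FROM OrphanedBLOBs;
VACUUM pg_largeobject;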

select all columns except two in q kdb historical database

In the output I want to select all columns except two from a table in a q/kdb historical database.
I tried running the below query, but it does not work on the HDB:
delete colid,coltime from table where date=.z.d-1
It fails with the below error:
ERROR: 'par
(trying to update a physically partitioned table)
I referred to https://code.kx.com/wiki/Cookbook/ProgrammingIdioms#How_do_I_select_all_the_columns_of_a_table_except_one.3F but it did not help.
How can we display all columns except two in a kdb historical database?
The reason you are getting the 'par error is that it is a partitioned table.
The error is documented here
trying to update a partitioned table
You cannot directly update or delete anything on a partitioned table (there is a separate DB maintenance script for that).
The query you have used as a fix basically selects the data into memory first (temporarily) and then deletes the columns, which is why it works:
delete colid,coltime from select from table where date=.z.d-1
You can try the following functional form:
c:cols[t] except `p
?[t;enlist(=;`date;2015.01.01);0b;c!c]
You could try a functional select:
?[table;enlist(=;`date;.z.d);0b;{x!x}cols[table]except`colid`coltime]
Here the last argument is a dictionary mapping output column names to column expressions, which tells the query what to extract. Instead of deleting the two columns you specified, this selects all but those two, which is more or less the same query.
To see what the functional form of a query is you can run something like:
parse"select colid,coltime from table where date=.z.d"
And it will output the arguments to the functional select.
You can read more on functional selects at code.kx.com.
Only select queries work on partitioned tables, which you worked around by structuring your query so that you first select the data into memory and then delete the columns you do not want.
If you have a large number of columns and don't want to create a bulky select query you could use a functional select.
?[table;();0b;{x!x}((cols table) except `colid`coltime)]
This shows all columns except the specified subset. The column clause expects a dictionary, hence the function {x!x} to convert the list of column names into a dictionary. See more information here:
https://code.kx.com/q/ref/funsql/
As nyi mentioned, if you want to permanently delete columns from a historical database you can use the deleteCol function in the dbmaint tools: https://github.com/KxSystems/kdb/blob/master/utils/dbmaint.md

If a Postgres DB has unique IDs across its tables, how do you find a row using its ID without knowing its table?

Following the blog of Rob Conery, I have a set of unique IDs across the tables of my Postgres DB.
Now, using these unique IDs, is there a way to query a row in the DB without knowing which table it is in? Or can those tables be indexed such that if the row is not in the current table, I just increase the index and query the next table?
In short: if you did not prepare for that, then no. You can prepare for it by generating your own UUIDs. Please look here. For instance, PG has UUIDs that preserve order, and UUID v5 has something like namespaces, so you can build a hierarchy. However, that is done by hashing the namespace, and I don't know of a tool to do the opposite inside PG.
If you know all possible tables in advance, you could prepare a query that simply UNIONs a search with a tagged type over all tables. In the case of two tables named comments and news, you could do something like:
PREPARE type_of_id(uuid) AS
SELECT id, 'comments' AS type
FROM comments
WHERE id = $1
UNION
SELECT id, 'news' AS type
FROM news
WHERE id = $1;
EXECUTE type_of_id('8ecf6bb1-02d1-4c04-8875-f1da62b7f720');
Automatically generating this could probably be done by querying pg_catalog.pg_tables and generating the relevant query on the fly.
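A sketch of that generation step, assuming every table lives in the public schema and exposes its UUID as a column named id:
-- build the UNION query text from the catalog; run the generated statement separately
SELECT string_agg(
         format('SELECT id, %L AS type FROM %I WHERE id = $1', tablename, tablename),
         ' UNION ALL ')
FROM pg_catalog.pg_tables
WHERE schemaname = 'public';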

Is there a way to quickly duplicate record in T-SQL?

I need to duplicate selected rows with all the fields exactly the same except the ID (an identity int), which is assigned automatically by SQL Server.
What is the best way to duplicate/clone a record or records (up to 50)?
Is there any T-SQL functionality for this in MS SQL 2008, or do I need to write SELECT/INSERT logic in stored procedures?
The only way to accomplish what you want is to use INSERT statements that enumerate every column except the identity column.
You can of course select multiple rows to be duplicated by using a SELECT statement inside your INSERT statement. However, I would assume that this will violate your business key (the other unique constraint on the table besides the surrogate key, which you do have, right?) and require some other column to be altered as well.
INSERT INTO MyTable ( ... )
SELECT ...
FROM MyTable
WHERE ...
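For example, with a hypothetical MyTable(Id identity, Name, Price), duplicating a chosen set of rows could look like this; Id is omitted from the column list so SQL Server assigns new identity values:
INSERT INTO MyTable (Name, Price)
SELECT Name, Price
FROM MyTable
WHERE Id IN (1, 2, 3); -- the rows to duplicate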
If it is a pure copy (minus the ID field) then the following will work (replace 'NameOfExistingTable' with the table you want to duplicate the rows from and optionally use the Where clause to limit the data that you wish to duplicate):
SELECT *
INTO #TempImportRowsTable
FROM (
SELECT *
FROM [NameOfExistingTable]
-- WHERE ID = 1
) AS createTable
-- If needed make other alterations to the temp table here
ALTER TABLE #TempImportRowsTable DROP COLUMN Id
INSERT INTO [NameOfExistingTable]
SELECT * FROM #TempImportRowsTable
DROP TABLE #TempImportRowsTable
If you're able to check the duplication condition as rows are inserted, you could put an INSERT trigger on the table. This would allow you to check the columns as they are inserted instead of having to select over the entire table.