I'm using a platform called CKAN, which stores datasets. When a dataset is added, CKAN creates a table with a (seemingly) random name. There are certain datasets whose data I want to use, so I want to keep a mapping table that relates each dataset to the name of the table that holds its data.
I would like to use this mapped value (the table name) in a SELECT query, as the FROM target:
SELECT * FROM (SELECT tablename FROM mappingtable WHERE id=1)
How do I do this?
Edit: What data type should I use to store the table name?
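A table name cannot come from a subquery in plain SQL; the FROM target must be fixed when the query is parsed, so this needs dynamic SQL. Since CKAN's datastore is backed by PostgreSQL, here is a minimal PL/pgSQL sketch (the function name select_mapped and the mappingtable columns are assumptions based on the question):

CREATE OR REPLACE FUNCTION select_mapped(mapping_id int)
RETURNS SETOF record AS $$
DECLARE
    tbl text;
BEGIN
    -- look up the stored table name
    SELECT tablename INTO tbl FROM mappingtable WHERE id = mapping_id;
    -- build and run the query; %I quotes the identifier safely
    RETURN QUERY EXECUTE format('SELECT * FROM %I', tbl);
END;
$$ LANGUAGE plpgsql;

Because it returns SETOF record, the caller has to supply a column list, e.g. SELECT * FROM select_mapped(1) AS t(col1 int, col2 text);. As for the edit: store the table name as TEXT or VARCHAR (PostgreSQL's regclass type is another option when the table is guaranteed to exist).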
I have successfully declared a temporary table:
DECLARE GLOBAL TEMPORARY TABLE SESSION.MY_TEMP_TABLE
LIKE MYTABLES.AN_EXISTING_TABLE
INCLUDING IDENTITY
ON COMMIT PRESERVE ROWS
WITH REPLACE;
I then use the following to merge two tables and output this into my temporary table:
INSERT INTO SESSION.MY_TEMP_TABLE
SELECT a.*
FROM (SELECT * FROM MYTABLES.TABLE_A) as a
LEFT JOIN
(SELECT * FROM MYTABLES.TABLE_B) as b
ON a.KEY=b.KEY;
All of the above works.
ISSUE: I now want to merge in two new variables from a further table (MYTABLES.TABLE_C), but it will not let me, because I declared the temporary table with a fixed set of columns and I am trying to add further columns. From some googling it seems ALTER TABLE will not work with declared temporary tables. Any help, please?
Session tables (DGTTs) need to be declared with all the required columns, as you cannot use ALTER TABLE to add additional columns to a session table.
A way around this limitation is to use session tables in a different manner, specifically to create a new session table on demand with whatever additional columns you need (possibly also including the data from other tables). This can be very fast when you use the NOT LOGGED option. It also works well if your session table uses DISTRIBUTE BY HASH on environments that support that feature.
Here is an example that shows 3 session tables, the third of which has all columns from the first two tables:
declare global temporary table session.m1 like emp including identity on commit preserve rows with replace not logged;
declare global temporary table session.m2 like org including identity on commit preserve rows with replace not logged;
declare global temporary table session.m3 as (select * from session.m1, session.m2) with data with replace not logged;
If you do not want to populate the session table at declaration time, you can use DEFINITION ONLY instead of WITH DATA (or use WITH NO DATA) and populate the table later via INSERT or MERGE.
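Applied to the question as asked: a sketch that declares a fresh session table containing TABLE_A's columns plus the two new variables from TABLE_C (newvar1 and newvar2 are placeholders for whatever the columns are actually called):

declare global temporary table session.my_temp_table2
    as (select a.*, c.newvar1, c.newvar2
        from mytables.table_a a
        left join mytables.table_c c on a.key = c.key)
    with data with replace not logged;

Swapping WITH DATA for DEFINITION ONLY would create the same table empty, to be filled later with an INSERT.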
I need to convert text data in one table to large-object data in another table. The table structures are:
Employee ->
id (character varying(130)), name (character varying(130)), description (text)
EmployeeDetailed ->
detailed_id (character varying(130)), desc_lob (oid)
What query can I run to transfer all rows from the Employee table to the EmployeeDetailed table, so that detailed_id is populated from Employee's id column and description is converted to a large object whose oid is stored in desc_lob?
Can I use lo_import()? Would it help here?
lo_import() is a client-interface command. You can instead use an INSERT statement that takes its rows from a SELECT, calling lo_from_bytea() inside that SELECT list:
INSERT INTO EmployeeDetailed (detailed_id, desc_lob)
SELECT id, lo_from_bytea(0, convert_to(description, 'LATIN1'))
FROM Employee
Change LATIN1 to whatever encoding you need (see this answer).
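To sanity-check the result, you can read the large objects back and compare them with the original text (lo_get() requires PostgreSQL 9.4 or later, and the encoding must match the one used above):

SELECT detailed_id,
       convert_from(lo_get(desc_lob), 'LATIN1') AS description_back
FROM EmployeeDetailed;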
I'm a real beginner when it comes to SQL, and I'm currently trying to build a database using Postgres. I have a lot of data in JSON files that I want to put into my database, but I have trouble converting it into tables. The JSON is nested and contains many variables, but the behavior of jsonb_populate_record allows me to ignore the structure I don't want to deal with right now. So far I have:
CREATE TABLE raw (records JSONB);
COPY raw from 'home/myuser/mydocuments/mydata/data.txt';
create type jsonb_type as (time text, id numeric);
create table test as (
select jsonb_populate_record(null::jsonb_type, raw.records) from raw
);
When running only the SELECT statement (without the CREATE TABLE), the data looks great in the GUI I use (DBeaver). However, it does not seem to be an actual table, as I cannot run select statements like
select time from test;
or similar. The column in my table 'test' is also called 'jsonb_populate_record(jsonb_type)' in the GUI, so something seems to be going wrong there. I do not know how to fix it. I've read about people using lateral joins with json_populate_record, but due to my limited SQL knowledge I can't understand or replicate what they are doing.
jsonb_populate_record() returns a single column (which is a record).
If you want to get multiple columns, you need to expand the record:
create table test
as
select (jsonb_populate_record(null::jsonb_type, raw.records)).*
from raw;
A "record" is a a data type (that's why you need create type to create one) but one that can contain multiple fields. So if you have a column in a table (or a result) that column in turn contains the fields of that record type. The * then expands the fields in that record.
I was trying to insert values from one table to another, across two different databases.
My issue is that I have two tables with a relation, and the first table has an identity column.
e.g. table first(id, name), table second(id, address)
Both tables exist with values in one DB, and I am trying to copy the values from this DB to another DB.
When I insert the values into the target DB, the first table generates values for the id column by itself, so I then have to link those new ids to the second table.
How can I do that?
UPDATE: using MS SQL Server 2000
You can use SCOPE_IDENTITY() immediately after your insert in SQL Server 2000, which will give you the last identity value generated within the current scope, but I'm not sure how that would work with bulk inserting of data.
http://msdn.microsoft.com/en-us/library/ms190315.aspx
If this were SQL Server 2005 or later, I would suggest using the OUTPUT clause in your INSERT statement to retrieve the ids just inserted, but that was not available in SQL Server 2000.
If your data contains some column or series of columns which is unique other than the identity column, then you can query your first table based on that series of columns to get the ids and use that to populate your second table.
If the target tables were empty, you could use SET IDENTITY_INSERT <table> ON - this allows inserting the original values into identity columns, so you would not have to update the referenced ids. Of course, if any existing ids could overlap the inserted ids, this is not the solution.
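A sketch of that approach (the database, schema, and column names are assumptions, not from the question; note that IDENTITY_INSERT requires an explicit column list in the INSERT):

SET IDENTITY_INSERT targetdb.dbo.first ON

-- keep the original ids, so the references stay valid
INSERT INTO targetdb.dbo.first (id, name)
SELECT id, name FROM sourcedb.dbo.first

SET IDENTITY_INSERT targetdb.dbo.first OFF

-- second carries the same ids, so it copies as-is
INSERT INTO targetdb.dbo.second (id, address)
SELECT id, address FROM sourcedb.dbo.second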
If the names in the first table are unique, you could build a mapping between new and old ids and perform an update, something like this:
UPDATE S
SET S.id = F.id
FROM second S
INNER JOIN first_original FO ON FO.id = S.id
INNER JOIN first F ON F.name = FO.name
If the names are not unique, then the original ids should be saved in "first" in order to provide the mapping between old and new ids. This can be a temporary new column that is deleted after the ids in "second" have been updated.
Or, as Rich Andrews said, you could use SCOPE_IDENTITY(), but in this case you would have to perform the inserts one by one: declare a cursor on the source table, insert each record, get its new id, and insert it into the "second" table.
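A sketch of that cursor approach, compatible with SQL Server 2000 (again, the database, schema, and column names are assumptions):

DECLARE @old_id int, @name varchar(100), @new_id int

DECLARE c CURSOR FOR
    SELECT id, name FROM sourcedb.dbo.first

OPEN c
FETCH NEXT FROM c INTO @old_id, @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- insert the row; the target table generates a fresh identity value
    INSERT INTO targetdb.dbo.first (name) VALUES (@name)
    SET @new_id = SCOPE_IDENTITY()

    -- re-key the dependent rows as they are copied
    INSERT INTO targetdb.dbo.second (id, address)
    SELECT @new_id, address
    FROM sourcedb.dbo.second
    WHERE id = @old_id

    FETCH NEXT FROM c INTO @old_id, @name
END
CLOSE c
DEALLOCATE c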
I am new to iPhone development, and I am building an application with a database. I have created a database with two tables, and I want to insert the values from one table into the other. I do not understand how to do this.
If anybody knows, please help me with the code.
Nested query: INSERT INTO target_table SELECT ... FROM source_table (note that the VALUES keyword is not used when inserting from a SELECT).
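A minimal sketch, assuming an SQLite database (the usual choice on iPhone) and two tables with compatible columns (all names here are placeholders):

-- copy every row from one table into the other
INSERT INTO target_table (id, name)
SELECT id, name
FROM source_table;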