How to copy all column names and data types from a table in DBeaver? - postgresql

So I have this table with a LOT of columns, and I am trying to set up a PXF connection from another database to where this table is. Is there any way I can copy or export all of the column name / data type pairs, so I won't have to type them out one by one while creating the external table?

You could generate the DDL from that table and use it as you need: right-click the table and choose Generate SQL --> DDL from the context menu.
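If you'd rather pull the name/type pairs out as plain rows instead of full DDL, querying the standard information_schema also works. A minimal sketch, assuming the table is public.my_table (substitute your own schema and table names):

SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'my_table'
ORDER BY ordinal_position;  -- keep the table's original column order

You can then copy the result grid out of DBeaver and paste it into your external table definition.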

Related

Change Databricks delta table type to external

I have a MANAGED table in delta format in Databricks and I wanted to change it to EXTERNAL to make sure dropping the table would not affect the data. However, the following code did not change the table TYPE and just added a new table property. How can I correctly convert my managed table to an external table?
%sql
alter table db_delta.table1 SET TBLPROPERTIES('EXTERNAL'='TRUE')
Describe Table:
# Detailed Table Information
Name              db_delta.table1
Location          dbfs:/user/hive/warehouse/db_delta.db/table1
Provider          delta
Type              MANAGED
Table Properties  [EXTERNAL=TRUE,overwriteSchema=true]
I found the following workaround for the above scenario.
1. Copy the managed table's data to an external location:
dbutils.fs.cp('dbfs:/user/hive/warehouse/amazon_data_agg', 'abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/', True)
2. Drop the managed table:
drop table amazon_data_agg;
3. Re-create the table as an external table over that location, using the same schema as the original table; if there is a schema mismatch you will get an error (see the sketch after these steps).
4. Now you can append and perform all other operations:
df_agg.write.format('delta').mode('append').option('path', 'abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/').saveAsTable('amazon_data_agg')
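For step 3, a minimal sketch of that re-creation in Spark SQL, assuming the copied delta files are already at the external location used above:

-- define an EXTERNAL table over the copied data;
-- the schema is read from the delta log at that location
CREATE TABLE amazon_data_agg
USING DELTA
LOCATION 'abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/';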

Dynamic table selection

Is it possible to dynamically select a table by name?
For example I have a table, and every time records are uploaded to it a backup is created first with the date appended to the table name.
table_20191108
table_20191109
table_20191110
table_20191111
What I would like to do is basically write some kind of dynamic SQL that always does
select * from table_MAXDATE
I would like to do this so I can compare table to the most recent backup (e.g. table_20191111) in order to see what changed between the two tables.
I haven't tried anything specific yet.
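A minimal sketch of one way to do this in PostgreSQL, assuming all backups follow the table_YYYYMMDD pattern shown above and the base table is literally named table (both names are placeholders for your real ones):

DO $$
DECLARE
    latest text;
BEGIN
    -- the zero-padded date suffix sorts correctly as text,
    -- so max() finds the most recent backup
    SELECT max(table_name) INTO latest
    FROM information_schema.tables
    WHERE table_name ~ '^table_\d{8}$';

    -- materialize it under a fixed name so plain SQL can reference it
    EXECUTE format('CREATE TEMP TABLE latest_backup AS SELECT * FROM %I', latest);
END $$;

-- rows that differ between the current table and the latest backup
SELECT * FROM "table" EXCEPT SELECT * FROM latest_backup;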

Handling the output of jsonb_populate_record

I'm a real beginner when it comes to SQL and I'm currently trying to build a database using Postgres. I have a lot of data in JSON files that I want to put into my database, but I have trouble converting it into tables. The JSON is nested and contains many variables, but the behavior of jsonb_populate_record allows me to ignore the structure I don't want to deal with right now. So far I have:
CREATE TABLE raw (records JSONB);
COPY raw FROM 'home/myuser/mydocuments/mydata/data.txt';
create type jsonb_type as (time text, id numeric);
create table test as
select jsonb_populate_record(null::jsonb_type, raw.records)
from raw;
When running only the select statement (without the create table), the data looks great in the GUI I use (DBeaver). However, it does not seem to be an actual table, as I cannot run select statements like
select time from test;
or similar. The column in my table 'test' is also called 'jsonb_populate_record(jsonb_type)' in the GUI, so something seems to be going wrong there. I do not know how to fix it. I've read about people using lateral joins with json_populate_record, but due to my limited SQL knowledge I can't understand or replicate what they are doing.
jsonb_populate_record() returns a single column (which is a record).
If you want to get multiple columns, you need to expand the record:
create table test
as
select (jsonb_populate_record(null::jsonb_type, raw.records)).*
from raw;
A "record" is a data type (that's why you need create type to create one), but one that can contain multiple fields. So if you have a column of a record type in a table (or a result), that column in turn contains the fields of the record type. The (...).* then expands the fields of that record into separate columns.
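A quick illustration of the difference, reusing the type and table from the question:

-- one column, whose type is the composite jsonb_type
select jsonb_populate_record(null::jsonb_type, raw.records) from raw;

-- two columns, time and id, expanded out of the record
select (jsonb_populate_record(null::jsonb_type, raw.records)).* from raw;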

Preconfigure column types when using DataGrip to import from CSV

I'm using DataGrip 2016.3 to connect to a PostgreSQL server.
When I right-click and choose Import From File, DataGrip currently makes assumptions about the type associated with each column (see the image for what DataGrip defaults to in the import dialog). I'd like to specify that column A is VARCHAR(50), column B is INT, column C is DATE, and so on. I will be uploading similar files multiple times, and I'd like to avoid having to specify my types each time I import. Is there a way to save and select a configuration for columns A, B, and C's types?
The best way that I've found to accomplish this is to first create the destination table. Then right-click on the table in the database tool window to launch the "Import Data from File..." wizard.
This is also great if you want the destination table to contain additional columns, such as an auto-incrementing id column.
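A minimal sketch of that first step, using the types from the question (my_import and the column names are placeholders):

-- pre-create the destination table so the import wizard
-- adopts these types instead of guessing its own
CREATE TABLE my_import (
    id serial PRIMARY KEY,  -- the extra auto-incrementing column mentioned above
    a  varchar(50),
    b  int,
    c  date
);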

How to insert a row in PostgreSQL pgAdmin?

I am new to PostgreSQL. Is there any way to insert a row in pgAdmin without using the SQL editor (an SQL query)?
The accepted answer relates to pgAdmin 3, which is outdated and no longer supported.
For pgAdmin 4 and above, the application runs in the browser.
After you create your table, you have to make sure that it has a primary key, otherwise you won't be able to edit the data, as mentioned in the official documentation:
To modify the content of a table, each row in the table must be
uniquely identifiable. If the table definition does not include an OID
or a primary key, the displayed data is read only. Note that views
cannot be edited; updatable views (using rules) are not supported.
1- Add primary key
Expand your table's properties by clicking on it in the pgAdmin 4 browser tree. Right-click on 'Constraints', select 'Create' --> 'Primary Key' to define a primary key column.
2- View the data in an Excel-like format
In the browser tree, right-click on your table --> select View/Edit Data --> All Rows
3- Add new row / Edit data
On the Data Output tab, below the last row of the table, there will be an empty row where you can enter new data in an Excel-like manner. If you want to make updates, you can also double-click on any cell and change its value.
4- Save the changes
Click on the 'Save' button on the menu bar near the top of the data window.
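If the table from step 1 is still missing a primary key, it can also be added with a single statement before opening the grid (the table and column names here are placeholders):

-- make rows uniquely identifiable so the View/Edit Data grid becomes writable
ALTER TABLE public.mytable ADD PRIMARY KEY (id);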
I think some answers don't actually address the original question: they insert records, but with SQL statements, and the OP clearly said WITHOUT. So I post the right answer, step by step.
Alternatively you can use the query tool:
INSERT INTO public.table01 (name, age)
VALUES (?, ?);
use the lightning icon to execute.
You can do that without the SQL editor, although it's generally better to do it with queries. In pgAdmin, there is an option you can click to get an Excel-like window where you can add and update data in a table without using SQL. First select the table you want to add a row to, then click on the icon shown below.
Editing table data without a primary key is forbidden.
If your tables don't have a primary key or OIDs, you can only view the data.
Inserting new rows and changing existing rows isn't possible with the Edit Data tool without a primary key.
Use INSERT:
INSERT INTO tablename (field1, field2) VALUES ('value1', 2);
In pgAdmin 4, right-click on the table and use the menu item shown below; you can also use the script it generates in the background.
Finally, to view the inserted data, use the item shown below; again, the corresponding script runs in the background.
All the above are correct answers. I just want to add that when you create a table, make sure you have at least one column as a PRIMARY KEY. Then just follow the GUI: View/Edit Data. You can add a row as the last row of the table.
As an update, the icon for the save button is different in pgAdmin 4.
This is how the menu should look after right-clicking on the table you want to insert into and hovering over "View/Edit Data".
After adding rows, either press F6 (on Ubuntu) or click the icon that looks like a stack of discs (database icon) with a lock on it.