How can I refresh an imported CSV table in PGADMIN III - postgresql

I imported a CSV file into an SQL table, but this CSV file will change on a regular basis. Is there a way to refresh the table based on changes to the CSV file without dropping the table, creating it again, and using the 'import' function in pgAdmin?
If possible, would such a solution also exist for the entire schema, consisting of tables based on imported csv files?
Thank you in advance!

Edit to add: this assumes you have decent access to the Postgres server, so it is not a pure pgAdmin solution.
You can do this with file_fdw, a Foreign Data Wrapper (FDW):
https://www.postgresql.org/docs/9.5/file-fdw.html (or the page for your version).
For example, I have an FDW set up to look at the Postgres logfile from within SQL rather than having to open an ssh session to the server.
The file appears as a table in the schema, and each time you query it the data is read fresh from the file.
The code I used for the logfile is as follows; obviously the file needs to be on the database server's local filesystem.
-- the file_fdw extension and a foreign server must exist before the foreign table can be created
create extension if not exists file_fdw;
create server pglog foreign data wrapper file_fdw;

create foreign table pglog
(
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text
)
server pglog
options (filename '/var/db/postgres/data11/log/postgresql.csv', format 'csv');
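To answer the original question: the same approach should work for your own CSV file. Point a foreign table at it and every SELECT will pick up the file's current contents, with no re-import needed. A minimal sketch, assuming a hypothetical file /path/to/data.csv with a header row and two example columns (adjust names, types and options to match your actual file):
create extension if not exists file_fdw;
create server csv_files foreign data wrapper file_fdw;

-- the foreign table re-reads the CSV file every time it is queried
create foreign table my_csv_table
(
id integer,
description text
)
server csv_files
options (filename '/path/to/data.csv', format 'csv', header 'true');

select * from my_csv_table;
The trade-off compared with a one-off import is that the file has to live on the database server and is re-parsed on every query, so for very large files you may still prefer a real table refreshed with TRUNCATE plus COPY.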

Related

Columns not auto populating when importing CSV data on PgAdmin for PostgreSQL (On Windows)

I am on Windows. I created a database, went to the Schemas tab in the database dropdown, and created a table, entering all of the table's details. However, when I import a CSV file into the table, the table's columns do not show up on the Columns tab, so I cannot import the CSV data. It will not even allow me to enter the column names myself.
This is the code I used to create a table:
CREATE TABLE table_name(column1 int, column2 varchar, column3 varchar, column4 int);
select * from table_name;
Afterwards, I clicked on the table I had made in order to import the CSV data. However, the columns do not auto-populate when I finish importing the CSV data with a header and UTF-8 encoding.
I reinstalled pgAdmin, but it did not solve anything, and I cannot enter the column names myself either. The column names of the table should be auto-generated, but this is not working.
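As a first debugging step (a suggestion, not from the original post), it can help to confirm that the table and its columns actually exist in the database pgAdmin is connected to; if this query returns no rows, the CREATE TABLE ran against a different database, schema, or server:
SELECT table_schema, column_name, data_type
FROM information_schema.columns
WHERE table_name = 'table_name'
ORDER BY ordinal_position;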

How to create an OLAP table in PostgreSQL

This question may sound very rudimentary.
To my surprise, after several hours of searching and referring to many books, I could not figure out how to CREATE (or verify) that the table below, created in PostgreSQL, is STORED in a columnar format and is effective for analytical queries. We have 500 million records in this table, and we never update it.
The question is: how can I tell whether it is an ANALYTICAL/COLUMNAR table or a transactional table?
CREATE TABLE dwh.access_log
(
ts timestamp with time zone,
remote_address text,
remote_user text,
url text,
domain text,
status_code int,
body_size int,
http_referer text,
http_user_agent text
);
Thanks
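For what it's worth (an aside based on stock PostgreSQL, not from the original post): a plain CREATE TABLE in community PostgreSQL always uses the row-oriented heap storage, so the table above is a transactional-style table; columnar storage requires an extension such as Citus columnar or cstore_fdw. On PostgreSQL 12 and later you can check which table access method a table uses:
-- 'heap' here means the default row-oriented storage
SELECT c.relname, am.amname
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE c.relname = 'access_log';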

Dumping PostgreSQL logs into a table

I am dumping a PostgreSQL .csv log file into a table but am getting the error below:
COPY postgres_log FROM 'C:\Program Files\PostgreSQL\13\data\log\postgresql-2021-06-15_191640.csv' WITH csv;
ERROR: extra data after last expected column
CONTEXT: COPY postgres_log, line 1: "2021-06-15 19:16:40.261 IST,,,5532,,60c8af3f.159c,1,,2021-06-15 19:16:39 IST,,0,LOG,00000,"ending lo..."
SQL state: 22P04
I have followed the PostgreSQL documentation below:
https://www.postgresql.org/docs/10/runtime-config-logging.html
Please suggest how to load the log into a table.
You're following instructions from Postgres 10, in which the CSV log format has one fewer column than Postgres 13, which is the version you're actually running. That's why you get that error message. To fix the problem, update your postgres_log table definition as per the PG 13 docs:
https://www.postgresql.org/docs/13/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG
CREATE TABLE postgres_log
(
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text,
backend_type text,
PRIMARY KEY (session_id, session_line_num)
);
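With the table definition matching your server's log format, the COPY command from the question should then work as-is:
COPY postgres_log FROM 'C:\Program Files\PostgreSQL\13\data\log\postgresql-2021-06-15_191640.csv' WITH csv;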

pgAdmin doesn't show the user's tables from YugabyteDB

I have installed YugabyteDB and created a local cluster using this command:
./bin/yugabyted start
The database is up and running. I then created the keyspaces and tables by running the following command:
cqlsh -f resources/IoTData.cql
IoTData.cql contains the following:
// Create keyspace
CREATE KEYSPACE IF NOT EXISTS TrafficKeySpace;
// Create tables
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Origin_Table (vehicleId text, routeId text, vehicleType text, longitude text, latitude text, timeStamp timestamp, speed double, fuelLevel double, PRIMARY KEY ((vehicleId), timeStamp)) WITH default_time_to_live = 3600;
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Total_Traffic (routeId text, vehicleType text, totalCount bigint, timeStamp timestamp, recordDate text, PRIMARY KEY (routeId, recordDate, vehicleType));
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Window_Traffic (routeId text, vehicleType text, totalCount bigint, timeStamp timestamp, recordDate text, PRIMARY KEY (routeId, recordDate, vehicleType));
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Poi_Traffic(vehicleid text, vehicletype text, distance bigint, timeStamp timestamp, PRIMARY KEY (vehicleid));
// Select from the tables
SELECT count(*) FROM TrafficKeySpace.Origin_Table;
SELECT count(*) FROM TrafficKeySpace.Total_Traffic;
SELECT count(*) FROM TrafficKeySpace.Window_Traffic;
SELECT count(*) FROM TrafficKeySpace.Poi_Traffic;
// Truncate the tables
TRUNCATE TABLE TrafficKeySpace.Origin_Table;
TRUNCATE TABLE TrafficKeySpace.Total_Traffic;
TRUNCATE TABLE TrafficKeySpace.Window_Traffic;
TRUNCATE TABLE TrafficKeySpace.Poi_Traffic;
The YB-Master Admin UI shows me that the tables were created, but when I use the pgAdmin client to browse data from that database, it doesn't show me those tables.
In order to connect to YugabyteDB I used these properties:
database : yugabyte
user : yugabyte
password : yugabyte
host : localhost
port : 5433
Why doesn't the client show the tables I have created?
The reason is that the two API layers can't interact with each other: YSQL data/tables cannot be read from YCQL clients and vice versa.
This is also explained in the faq:
The YugabyteDB APIs are currently isolated and independent from one another. Data inserted or managed by one API cannot be queried by the other API. Additionally, Yugabyte does not provide a way to access the data across the APIs.
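In other words, pgAdmin talks to the YSQL layer on port 5433, while the tables above were created through YCQL, so they have to be inspected with a YCQL client such as cqlsh rather than pgAdmin. A minimal sketch (assuming the default YCQL port 9042 on localhost):
cqlsh localhost 9042
DESCRIBE KEYSPACE TrafficKeySpace;
SELECT count(*) FROM TrafficKeySpace.Origin_Table;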

SQLite character encoding via shell select

I have the following problem. I'm using SQLite3 to store some code table information.
There is a text file that contains all the rows I need. I've trimmed it down to one to make things easier.
The codetbls.txt file contains the one row I want to insert into the table codetbls.
Using notepad++ to view the file contents shows the following:
codetbls.txt (Encoding: UTF-8)
1A|Frequency|Fréquence
I've created the following table:
create table codetbls (
id char(2) COLLATE NOCASE PRIMARY KEY NOT NULL,
name_eng varchar(50) COLLATE NOCASE,
name_fr varchar(50) COLLATE NOCASE
);
I then execute the following:
.read codetbls.txt codetbls
Now, when I run a select, I see the following:
select * from codetbls;
id name_eng name_fr
--+---------+----------
1A|Frequency|Fr├®quence
I don't understand why it doesn't display properly.
If I execute an insert statement with 'é' at the shell prompt, it shows up correctly, but going through the .read command doesn't seem to work.
Based on other suggestions, I have tried the following:
- changed the datatype to 'text'
- changed the character encoding to UTF-8 without BOM
I still don't know why it doesn't display correctly. Any help?
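A hedged suggestion, since .read executes a file as SQL statements rather than importing delimited data: the usual approach is .import with the separator set to '|', and on Windows the '├®' rendering of 'é' typically means the console is still on a legacy code page instead of UTF-8 (chcp 65001 switches it). A sketch, where mydb.sqlite is a placeholder database file:
chcp 65001
sqlite3 mydb.sqlite
.separator "|"
.import codetbls.txt codetbls
select * from codetbls;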