I have the following problem. I'm using SQLite3 to store some code table information.
There is a text file that contains all the rows I need. I've trimmed it down to one to make things easier.
The codetbls.txt file contains the one row I want to insert into the table codetbls.
Using notepad++ to view the file contents shows the following:
codetbls.txt (Encoding: UTF-8)
1A|Frequency|Fréquence
I've created the following table:
create table codetbls (
id char(2) COLLATE NOCASE PRIMARY KEY NOT NULL,
name_eng varchar(50) COLLATE NOCASE,
name_fr varchar(50) COLLATE NOCASE
);
I then execute the following:
.read codetbls.txt codetbls
Now, when I run a select, I see the following:
select * from codetbls;
id name_eng name_fr
--+---------+----------
1A|Frequency|Fr├®quence
I don't understand why it doesn't show properly.
If I execute an insert statement with 'é' at the shell prompt, it shows up correctly. However, using the .read command doesn't seem to work.
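For example, an insert along these lines (the statement itself is only an illustration, using the row from the file above) shows the accented character correctly when typed directly at the prompt:
insert into codetbls (id, name_eng, name_fr) values ('1A', 'Frequency', 'Fréquence');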
Based on other suggestions, I have tried the following:
- changed datatype to 'text'
- changed character encoding to UTF-8 without BOM
I still don't know why it doesn't display properly. Any help?
I have a JSON file as follows:
[xyz#innolx20122 ~]$ cat test_cgs.json
{"technology":"AAA","vendor":"XXX","name":"RBNI","temporal_unit":"hour","regional_unit":"cell","dataset_metadata":"{\"name\": \"RBNI\", \"intervals_epoch_seconds\": [[1609941600, 1609945200]], \"identifier_column_names\": [\"CELLID\", \"CELLNAME\", \"NETWORK\"], \"vendor\": \"XXX\", \"timestamp_column_name\": \"COLLECTTIME\", \"regional_unit\": \"cell\"}","rk":1}
which I am trying to load into the following table in Postgres:
CREATE TABLE temp_test_table
(
technology character varying(255),
vendor character varying(255),
name character varying(255),
temporal_unit character varying(255),
regional_unit character varying(255),
dataset_metadata json,
rk character varying(255)
);
and here is my copy command
db-state=> \copy temp_test_table(technology,vendor,name,temporal_unit,regional_unit,dataset_metadata,rk) FROM '/home/eksinvi/test_cgs.json' WITH CSV DELIMITER ',' quote E'\b' ESCAPE '\';
ERROR: extra data after last expected column
CONTEXT: COPY temp_test_table, line 1: "{"technology":"AAA","vendor":"XXX","name":"RBNI","temporal_unit":"hour","regional_unit":"cell","data..."
I even tried loading this file into a BigQuery table, but no luck:
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON --allow_quoted_newlines --allow_jagged_rows --ignore_unknown_values test-project:vikrant_test_dataset.cg_test_table "gs://test-bucket-01/test/test_cgs.json"
Either solution would work for me. I want to load this JSON into either a Postgres table or a BigQuery table.
I had similar problems. In my case, it was related to NULL columns and the encoding of the file. I also had to specify a custom delimiter because my columns sometimes included the default delimiter, which would make the copy fail.
\copy mytable FROM 'filePath.dat' (DELIMITER E'\t', FORMAT CSV, NULL '', ENCODING 'UTF8');
In my case, I was exporting data to a CSV file from SQL Server and importing it into Postgres. In SQL Server, we had Unicode characters that would show up as "blanks" but would break the copy command. I had to search the SQL table for those characters with regex queries and eliminate the invalid characters. It's an edge case, but that was part of the problem in my case.
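A hedged variation of the same idea for the JSON-per-line file in the question: load each whole line into a single jsonb staging column, using a quote and delimiter that cannot occur in the data, and then split it out with JSON operators. The staging table name is made up for illustration, and this assumes the control characters \x01 and \x02 never appear in the file:
CREATE TABLE staging_json (doc jsonb);
\copy staging_json(doc) FROM '/home/eksinvi/test_cgs.json' WITH (FORMAT csv, QUOTE E'\x01', DELIMITER E'\x02')
INSERT INTO temp_test_table (technology, vendor, name, temporal_unit, regional_unit, dataset_metadata, rk)
SELECT doc->>'technology',
       doc->>'vendor',
       doc->>'name',
       doc->>'temporal_unit',
       doc->>'regional_unit',
       (doc->>'dataset_metadata')::json, -- the metadata arrives as an escaped JSON string, so unwrap it and cast
       doc->>'rk'
FROM staging_json;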
I am using pgAdmin 4 with PostgreSQL 12. Below is a simple table definition, vocabulary, and a view, vocabulary_input. When I run the SELECT statement from vocabulary_input in the Query Tool, I can update and add rows. However, when I choose "View/Edit Data -- All Rows" on vocabulary_input, the view is locked. Why is that?
CREATE TABLE public.vocabulary
(
entry text COLLATE pg_catalog."default" NOT NULL,
description text COLLATE pg_catalog."default",
reference text COLLATE pg_catalog."default" NOT NULL,
CONSTRAINT vocabulary_pkey PRIMARY KEY (entry, reference)
);
CREATE OR REPLACE VIEW public.vocabulary_input
AS
SELECT vocabulary.entry,
vocabulary.description,
vocabulary.reference
FROM vocabulary
ORDER BY vocabulary.entry, vocabulary.reference;
I posted this question at the pgAdmin developers' site, asking whether this is a bug or expected behavior.
This is the answer:
"I am rejecting this RM, as in the future we will get rid of View/Edit data. We will enhance the query tool further, given that we can edit in place in the query tool."
So my conclusion is that it's an interface issue and has nothing to do with SQL.
See: https://redmine.postgresql.org/issues/5532
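For what it's worth, the view above is simple enough that PostgreSQL itself treats it as automatically updatable, so plain SQL inserts and updates through it work regardless of the pgAdmin screen; a quick sketch with made-up values:
INSERT INTO public.vocabulary_input (entry, description, reference)
VALUES ('sample entry', 'sample description', 'sample reference');
UPDATE public.vocabulary_input
SET description = 'revised description'
WHERE entry = 'sample entry' AND reference = 'sample reference';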
I imported a CSV file into an SQL table, but this CSV file will change on a regular basis. Is there a way to refresh the table based on the changes in the CSV file without removing the table, creating it again, and using the 'import' function in pgAdmin?
If possible, would such a solution also exist for an entire schema consisting of tables based on imported CSV files?
Thank you in advance!
Edit to add: this assumes you have decent access to the Postgres server, so it is not a purely pgAdmin solution.
You can do this with a foreign data wrapper (FDW), specifically file_fdw.
See https://www.postgresql.org/docs/9.5/file-fdw.html, or the page for your version.
For example, I have an FDW set up to look at the Postgres log file from within SQL rather than having to open an ssh session to the server.
The file appears as a table in the schema, and each time you access it the data is re-read from the file.
The code I used for the log file is as follows; obviously the file needs to be on the database server's local filesystem.
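Before the foreign table below can be created, the file_fdw extension and a server object are needed; a minimal sketch of that one-time setup (assuming suitable privileges), with the server name matching the definition that follows:
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;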
create foreign table pglog
(
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text
)
server pglog
options (filename '/var/db/postgres/data11/log/postgresql.csv', format 'csv');
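Applied to the original question, a hedged sketch for a hypothetical CSV file; the server name, table name, column list, and path are all made up and would need to match your actual file:
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE my_csv_import
(
  id integer,
  label text,
  amount numeric
)
SERVER csv_files
OPTIONS (filename '/path/on/the/db/server/data.csv', format 'csv', header 'true');
-- every query re-reads the file, so there is nothing to refresh
SELECT * FROM my_csv_import;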
I want to migrate from MySQL to PostgreSQL. My CREATE TABLE query looks like this:
CREATE TABLE IF NOT EXISTS conftype
(
CType char(1) NOT NULL,
RegEx varchar(300) default NULL,
ErrStr varchar(300) default NULL,
Min integer default NULL,
Max integer default NULL,
PRIMARY KEY (CType)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;
What is the converted form of this query? I am confused by the DEFAULT CHARSET=latin1 COLLATE=latin1_bin part. How can I convert it?
That clause means the table uses only the Latin-1 (ISO 8859-1) character set and Latin-1 binary sort order. In PostgreSQL the character set is database-wide; there is no option to set it at the table level.
You could create a mostly compatible database with:
CREATE DATABASE databasenamegoeshere WITH ENCODING 'LATIN1' LC_COLLATE='C'
LC_CTYPE='C' TEMPLATE=template0;
However, I would personally consider a MySQL->PostgreSQL port a good occasion to switch to UTF-8/Unicode as well.
The character set is defined when you create the database, you can't overwrite that per table in Postgres.
A non-standard collation can be defined only on column level in Postgres, not on table level. I think(!) that the equivalent to latin1_bin in MySQL would be the "C" collation in Postgres.
So if you do need a different collation, you need something like this:
RegEx varchar(300) default NULL collate "C",
ErrStr varchar(300) default NULL collate "C",
min and max are reserved words in SQL and you shouldn't use them as column names (although using them as column names will work, I strongly suggest you find different names to avoid problems in the future).
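Putting both answers together, a hedged sketch of one possible converted statement (with the min and max columns renamed to illustrative names, as suggested above):
CREATE TABLE IF NOT EXISTS conftype
(
  ctype char(1) NOT NULL,
  regex varchar(300) COLLATE "C" DEFAULT NULL,
  errstr varchar(300) COLLATE "C" DEFAULT NULL,
  min_value integer DEFAULT NULL,
  max_value integer DEFAULT NULL,
  PRIMARY KEY (ctype)
);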
For example, there is a table named 'testtable' that has the following columns: testint (integer) and testtext (varchar(30)).
What I want to do is pretty much something like this:
INSERT INTO testtable VALUES(15, CONTENT_OF_FILE('file'));
While reading the PostgreSQL documentation, all I could find is the COPY TO/FROM command, but that applies to whole tables, not single columns.
So, what shall I do?
If this SQL code is executed dynamically from your programming language, use the means of that language to read the file, and execute a plain INSERT statement.
However, if this SQL code is meant to be executed via the psql command line tool, you can use the following construct:
\set content `cat file`
INSERT INTO testtable VALUES(15, :'content');
Note that this syntax is specific to psql and makes use of the cat shell command.
It is explained in detail in the PostgreSQL manual:
psql / SQL Interpolation
psql / Meta-Commands
If I understand your question correctly, you could read the single string(s) into a temp table and use that for insert:
DROP SCHEMA str CASCADE;
CREATE SCHEMA str;
SET search_path='str';
CREATE TABLE strings
( string_id INTEGER PRIMARY KEY
, the_string varchar
);
CREATE TEMP TABLE string_only
( the_string varchar
);
COPY string_only(the_string)
FROM '/tmp/string'
;
INSERT INTO strings(string_id,the_string)
SELECT 5, t.the_string
FROM string_only t
;
SELECT * FROM strings;
Result:
NOTICE: drop cascades to table str.strings
DROP SCHEMA
CREATE SCHEMA
SET
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "strings_pkey" for table "strings"
CREATE TABLE
CREATE TABLE
COPY 1
INSERT 0 1
string_id | the_string
-----------+---------------------
5 | this is the content
(1 row)
Please note that the file is "seen" by the server as the server sees the filesystem. The "current directory" from that point of view is probably $PGDATA, but you should assume nothing and specify the complete pathname, which must be reachable and readable by the server. That is why I used '/tmp', which is unsafe (but an excellent rendezvous point ;-)