postgres - inserting a column value as the literal text 'NULL'

Using pgAdmin, I am trying to import data into a table from a CSV file, as it's a lot faster than running SQL scripts.
I am having a problem when one of the column values is 'NULL'. Postgres treats it as a SQL NULL, whereas it should really be the text 'NULL'. How can I avoid this?
In other words, the CSV import below fails:
my_column
A1B2
NULL
Postgres fails on the last row because it thinks I am inserting a null into a non-nullable column, but it is meant to be a text value.
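For reference, COPY's CSV mode has a FORCE_NOT_NULL option that tells Postgres never to match the listed columns against the null string, so the literal text survives the import. A minimal sketch using psql's \copy, assuming a table named my_table and a hypothetical file path:

\copy my_table FROM 'data.csv' WITH (FORMAT csv, HEADER, FORCE_NOT_NULL (my_column))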

Related

Ora2PG REPLACE_AS_BOOLEAN property: excluding one column from the replacement

I'm using ora2pg to import an Oracle db schema into a PostgreSQL db schema. I configured everything correctly and I'm able to dump the Oracle db into the PostgreSQL db.
In the schema that I'm converting I have some number(1,0) columns that I need to convert to boolean in the pg schema.
At first I used this configuration:
REPLACE_AS_BOOLEAN NUMBER:1
so every column of this type is converted to boolean in the pg db.
The problem is that one column in the Oracle schema, defined as number(1,0), has to remain numeric and keep the same type in the pg schema, so it must not be converted to boolean.
That means I changed the property in this manner:
REPLACE_AS_BOOLEAN TABLE1:COLUMN1 TABLE2:COLUMN2 TABLE3:COLUMN3
I have a lot of columns that have to be converted to boolean, and the definition of this property becomes very long.
Is there a way to define the REPLACE_AS_BOOLEAN property so that it replaces every column of type number(1,0), with an exception for one or some of them?
In the end I had to write the property with the list of all the table and column names.

Postgresql: JSONB Data Type column not null error

I am trying to alter a column of JSONB data type in postgresql:
ALTER TABLE my_schema_name."my_table_name" ALTER COLUMN "my_column_name" SET NOT NULL;
It gives me the error below.
SQL Error [23502]: ERROR: column "my_column_name" contains null values
Note: the columns were newly added and contain no data; adding a column without a default leaves NULL in every existing row, which is why SET NOT NULL fails.
I resolved this by updating the table with a default value first and then altering the column to NOT NULL.
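A sketch of that workaround, assuming the names from the error message and an empty JSON object as the (hypothetical) default:

-- backfill existing rows so no NULLs remain, then add the constraint
UPDATE my_schema_name."my_table_name" SET "my_column_name" = '{}'::jsonb WHERE "my_column_name" IS NULL;
ALTER TABLE my_schema_name."my_table_name" ALTER COLUMN "my_column_name" SET DEFAULT '{}'::jsonb;
ALTER TABLE my_schema_name."my_table_name" ALTER COLUMN "my_column_name" SET NOT NULL;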

pyspark hive - Insert NULL as DB null through text file

While inserting a text file from the pyspark shell into a Hive table, the NULL values are treated as strings in the table.
If I query the Hive table, those records can only be retrieved with the filter condition = 'NULL' rather than IS NULL.
Can anyone suggest how to insert the data as real DB NULLs?
Check whether your Spark dataframe holds null or None.
While writing to the Hive table, set the nullValue option:
df.write.option('nullValue', None).saveAsTable(table_name)
This should solve your issue.
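If the literal strings have already landed in a text-backed table, another option (not from the answer above; the table name is hypothetical) is to tell Hive's text SerDe which string in the files represents NULL:

-- make the text SerDe read the literal string NULL back as a real NULL
ALTER TABLE my_table SET SERDEPROPERTIES ('serialization.null.format' = 'NULL');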

Typecasting a Dataframe returns 'null' for empty fields

I have raw data loaded into my hive tables with all the columns as strings by default. Now I need to change the datatypes of the hive tables to export to SQLServer.
When typecasting the hive columns, the empty fields return 'NULL'. I tried loading the hive tables into a dataframe and typecasting the columns, but the dataframe also returns 'null' for empty fields, and SQLServer can't recognize such values.
Can anyone suggest a solution to avoid the 'null' values when I get the data from hive or dataframes?
If you want to change the data type only because you want that particular format in the exported data, consider writing to a directory as per your requirement and then exporting with sqoop or any other tool.
INSERT OVERWRITE DIRECTORY '<HDFS path>'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '<delimiter>'
SELECT a, b
FROM table_name
WHERE <condition>;
While exporting, if you have null values, consider using these arguments in your sqoop export command (export uses the --input-null-* variants of the null-handling flags):
--input-null-string "\\N" --input-null-non-string "\\N"
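A fuller invocation might look like this (a sketch; the connection string, table name, and directory are hypothetical):

sqoop export \
  --connect 'jdbc:sqlserver://<host>:1433;databaseName=<db>' \
  --table target_table \
  --export-dir '<HDFS path>' \
  --input-fields-terminated-by '<delimiter>' \
  --input-null-string '\\N' \
  --input-null-non-string '\\N'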
Hope this helps you

Importing .csv to postgres. What to do with time?

I would like to import some .csv data into postgres and I'm having an issue with a data type:
one of my attributes is a birthday:
1968-06-24 00:00:00
Therefore I use timestamp, as suggested by pgModeler. However, I always get this message:
postgres=# \connect new_database
You are now connected to database "new_database" as user "postgres".
new_database=# \copy players FROM '/Users/Desktop/Rdaten/Data/players.csv' DELIMITER ';' CSV HEADER
Error:
ERROR: invalid input syntax for type timestamp: "NULL"
CONTEXT: COPY players, line 267, column birthday: "NULL"
new_database=#
What can I do about that?
Not sure if this is the same issue, but I got this error trying to import a CSV where null values were listed as 'NULL', and fixed it by adding null 'NULL' to the copy command; by default Postgres expects NULL values in CSV input to be empty strings.
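Applied to the \copy from the question, that looks like this (a sketch reusing the question's file path):

\copy players FROM '/Users/Desktop/Rdaten/Data/players.csv' WITH (FORMAT csv, HEADER, DELIMITER ';', NULL 'NULL')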