How can I get Technical Names of my Fields Using COPY Command - postgresql

My code retrieves all the data from my Postgres database, but the technical names of the fields (the column names) stored in the database do not appear in my output file.
Below is my code:
import psycopg2

conn = psycopg2.connect("dbname=cush2 user=tryton50 password=admin host=localhost")
cur = conn.cursor()
sql = "COPY (SELECT * FROM cushing_syndrome) TO STDOUT WITH CSV DELIMITER ','"
with open("/home/cf/Desktop/result_1.xlsx", "w") as file:
    cur.copy_expert(sql, file)
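For the column names, COPY's CSV mode has a HEADER option that writes the field names as the first row. Note also that COPY produces plain CSV, so writing it to a file named .xlsx does not make a real Excel workbook; use a .csv extension instead. A sketch of building the statement (the table name is taken from the question):

```python
def copy_with_headers_sql(table):
    # HEADER makes COPY emit the column (field) names as the first CSV row
    return "COPY (SELECT * FROM {}) TO STDOUT WITH CSV HEADER DELIMITER ','".format(table)

sql = copy_with_headers_sql("cushing_syndrome")
# pass it to copy_expert exactly as in the question, but write a .csv file:
# with open("/home/cf/Desktop/result_1.csv", "w") as f:
#     cur.copy_expert(sql, f)
```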

I am unable to COPY my CSV to postgres using psycopg2/copy_expert

edit:
In postgresql.conf, log_statement is set to:
#log_statement = 'none' # none, ddl, mod, all
My objective is to COPY a .csv file containing ~300k records into Postgres.
I run the script below and nothing happens: no error or warning, but the CSV is still not uploaded.
Any thoughts?
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect(database="<db>", user="<user>", password="<pwd>", host="<host>", port="<port>")
    print("Database Connected....")
except Exception as e:
    # print the real error instead of swallowing it with a bare except
    print("Unable to Connect....", e)

cur = conn.cursor()
try:
    sqlstr = "COPY \"HISTORICALS\".\"HISTORICAL_DAILY_MASTER\" FROM STDIN DELIMITER ',' CSV"
    with open('/Users/kevin/Dropbox/Stonks/HISTORICALS/dump.csv') as f:
        cur.copy_expert(sqlstr, f)
    conn.commit()
    print("COPY pass")
except Exception as e:
    # a bare except here hides exactly the error you are looking for
    print("Unable to COPY...", e)

# Close communication with the database
cur.close()
conn.close()
This is what my .csv looks like
Thanks!
Kevin
I suggest you first load your file into a pandas DataFrame:
import io
import pandas as pd
import psycopg2

conn = psycopg2.connect(database="<db>", user="<user>", password="<pwd>", host="<host>", port="<port>")
cur = conn.cursor()
df = pd.read_csv('data.csv')
# copy_from expects a file-like object, not a DataFrame,
# so write the frame to an in-memory CSV buffer first
buf = io.StringIO()
df.to_csv(buf, index=False, header=False)
buf.seek(0)
cur.copy_from(buf, '<your_table>', sep=',', null='', columns=tuple(df.columns))
conn.commit()
For the columns=... part I forget whether it wants a tuple or a list, but it should work with a conversion, and you should read Pandas dataframe to PostgreSQL table using psycopg2 without SQLAlchemy?, which could help you.
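If you'd rather not pull in pandas just for the load, the same buffering idea works with the standard library alone: psycopg2's copy functions want a file-like object, so serialize the rows into an in-memory CSV buffer first. A sketch, where the table name and rows are placeholders:

```python
import csv
import io

def rows_to_buffer(rows):
    # build an in-memory CSV "file" that psycopg2 can read from
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    buf.seek(0)
    return buf

def load_rows(conn, rows, table):
    buf = rows_to_buffer(rows)
    with conn.cursor() as cur:
        # quoting the table name preserves an upper-case spelling
        cur.copy_expert('COPY "{}" FROM STDIN WITH CSV'.format(table), buf)
    conn.commit()
```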

The SQL works fine, but with Python it doesn't insert values into the table

I'm trying to use Python to run stored procedures in SQL. I have tested my SQL code and it works fine, but when I execute it via Python the values are not inserted into my table.
Note: I get no errors when executing.
My code below:
import psycopg2

con = psycopg2.connect(dbname='dbname',
                       host='host',
                       port='5439', user='username', password='password')

def executeScriptsfromFile(filename):
    # Open and read the file as a single buffer
    cur = con.cursor()
    with open(filename, 'r') as fd:
        sqlFile = fd.read()
    # all SQL commands (split on ';')
    sqlCommands = filter(None, sqlFile.split(';'))
    # Execute every command from the input file
    for command in sqlCommands:
        # This will skip and report errors
        # For example, if the tables do not yet exist, this will skip over
        # the DROP TABLE commands
        try:
            cur.execute(command)
            con.commit()
        except Exception as inst:
            print("Command skipped:", inst)
    cur.close()

executeScriptsfromFile('filepath.sql')
Insert statement in the .sql file:
INSERT INTO schema.users
SELECT
UserId
,Country
,InstallDate
,LastConnectDate
FROM #Source;
Note: As I said, the SQL works perfectly fine when I tested it directly.
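One thing worth checking is the naive split on ';': it also breaks statements at semicolons inside string literals or comments, and any resulting error is printed as "Command skipped" rather than raised, so a mangled statement can fail quietly. A small sketch of just the splitting step (the sample statements are illustrative):

```python
def split_sql(script):
    # naive split on ';' -- this breaks on semicolons inside string
    # literals or '--' comments, which can silently mangle a statement
    return [c.strip() for c in script.split(';') if c.strip()]

commands = split_sql("DROP TABLE IF EXISTS users; INSERT INTO users SELECT 1;")
```

Printing each command before executing it is a quick way to see exactly what the server receives.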

How to get data out of a postgres bytea column into a python variable using sqlalchemy?

I am working with the script below.
If I change the script to avoid the bytea datatype, I can easily copy data from my Postgres table into a Python variable.
But if the data is in a bytea column, I encounter a strange object called memory which confuses me.
Here is the script, which I run against Anaconda Python 3.5.2:
# bytea.py
import sqlalchemy

# I should create a conn
db_s = 'postgres://dan:dan@127.0.0.1/dan'
conn = sqlalchemy.create_engine(db_s).connect()

sql_s = "drop table if exists dropme"
conn.execute(sql_s)
sql_s = "create table dropme(c1 bytea)"
conn.execute(sql_s)
sql_s = "insert into dropme(c1) values( cast('hello' AS bytea) );"
conn.execute(sql_s)
sql_s = "select c1 from dropme limit 1"
result = conn.execute(sql_s)
print(result)
# <sqlalchemy.engine.result.ResultProxy object at 0x7fcbccdade80>
for row in result:
    print(row['c1'])
    # <memory at 0x7f4c125a6c48>
How do I get the data that is inside of <memory at 0x7f4c125a6c48>?
You can convert it with Python's bytes():
for row in result:
    print(bytes(row['c1']))
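The `<memory at ...>` object is an ordinary Python memoryview over the bytea buffer, and bytes() copies it out into a regular bytes value; this can be illustrated without a database (the memoryview below stands in for row['c1']):

```python
# psycopg2 hands bytea values back as memoryview objects;
# a memoryview over b'hello' stands in for row['c1'] here
raw = memoryview(b'hello')
decoded = bytes(raw)   # copies the buffer into an ordinary bytes object
print(decoded)         # b'hello'
```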

Psycopg2 copy_from throws DataError: invalid input syntax for integer

I have a table with some integer columns, and I am using psycopg2's copy_from:
conn = psycopg2.connect(database=the_database,
                        user="postgres",
                        password=PASSWORD,
                        host="",
                        port="")
print "Putting data in the table: Opened database successfully"
cur = conn.cursor()
with open(the_file, 'r') as f:
    cur.copy_from(file=f, table=the_table, sep=the_delimiter)
    conn.commit()
print "Successfully copied all data to the database!"
conn.close()
The error says that it expects the 8th column to be an integer, not a string. But Python's write method can only write strings to a file. So how do you import a file full of string representations of numbers into a Postgres table whose columns expect integers, when the file can only contain character representations of the integers (e.g. str(your_number))?
Either you have to write the numbers in an integer format to the file (which Python's write method doesn't allow), or psycopg2 should be smart enough to do the conversion as part of the copy_from procedure, which it apparently is not. Any ideas are appreciated.
I ended up using the copy_expert command. Note that on Windows, you have to set the permissions of the file. This post is very useful for setting the permissions.
with open(the_file, 'r') as f:
    sql_copy_statement = "COPY {table} FROM '{from_file}' DELIMITER '{deli}' {file_type} HEADER;".format(table=the_table,
                                                                                                         from_file=the_file,
                                                                                                         deli=the_delimiter,
                                                                                                         file_type=the_file_type)
    print sql_copy_statement
    cur.copy_expert(sql_copy_statement, f)
    conn.commit()
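A client-side variant avoids the Windows file-permission issue entirely: with COPY ... FROM STDIN, psycopg2 streams the file itself, so the server never has to read it from disk. The HEADER option also makes COPY skip the first row, which is a common cause of "invalid input syntax for integer" when the file starts with column names. A sketch of building that statement (parameter names mirror the answer above):

```python
def copy_stdin_sql(table, deli, file_type='CSV', header=True):
    # FROM STDIN lets the client stream the file, so no server-side
    # file permissions are involved; HEADER skips the header row
    sql = "COPY {} FROM STDIN DELIMITER '{}' {}".format(table, deli, file_type)
    if header:
        sql += " HEADER"
    return sql

# usage: cur.copy_expert(copy_stdin_sql(the_table, the_delimiter), f)
```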

Import from CSV into SQL Server 2005

On the first day of each month I import information from a test.csv file into my SQL Server 2005 database. The information in test.csv is in one column:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1003680708|704487347252000033|3|2014-02-02T19:00:00|005375|IX Fortas|01|95 Miles|3.574|109.82|474.88|-35.78|392.46|82.42|510.66|LT|LTL||510.66|65059
1003680708|704487347252000034|3|2014-02-02T19:00:00|005375|IX Fortas|24|Cola|2.893|1.00|3.50|0|2.89|.61|3.50|LT|LTL||3.50|65059
Every value is separated by a | symbol.
How can I get it into SQL Server with a query?
I have used SSIS, and I have tried converting to Excel and changing the regional settings on the computer, but I could not get this result into SQL Server 2005...
Use BULK INSERT, like below:
BULK INSERT <your_table>
FROM '<your_path>\test.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n',
ERRORFILE = 'C:\import_error.csv',
TABLOCK
)