copy (...) from program (gzip '..../data.csv.gz') fails in PostgreSQL on Windows

My data.csv is very large and I don't have enough storage on my system to keep it uncompressed.
Because of this, I want to import data.csv.gz directly into a table of the database in PostgreSQL.
I tried the following command to import that file:
copy origin (...) from program 'gzip D:\Download\data.csv.gz' (format CSV);
But I've got this error:
ERROR: program "gzip D:\Download\origin_visit.csv.gz" failed
DETAIL: child process exited with exit code 1
Origin is my table in PostgreSQL.
My OS is Windows 10.
I installed gzip on my system and added its directory to the PATH environment variable.
How can I make this command run in PostgreSQL? Is there any way to import data.csv.gz into PostgreSQL?
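The likely cause is that gzip without any flags tries to compress the named file and writes nothing to standard output, while COPY ... FROM PROGRAM reads the program's stdout; decompressing to stdout, e.g. from program 'gzip -dc D:\Download\data.csv.gz', is the usual fix. Alternatively, here is a minimal client-side sketch (assuming psycopg2 is installed; the connection string is a placeholder) that streams the decompressed CSV through COPY ... FROM STDIN, so no program has to run on the server:
import gzip
import psycopg2  # assumption: psycopg2 is installed

conn = psycopg2.connect("dbname=mydb user=postgres password=secret")  # placeholder DSN
with conn:
    with conn.cursor() as cur:
        # gzip.open in text mode decompresses on the fly; COPY reads the stream as CSV
        with gzip.open(r"D:\Download\data.csv.gz", "rt", encoding="utf-8") as f:
            # add your column list to the COPY statement as in the original command
            cur.copy_expert("COPY origin FROM STDIN WITH (FORMAT csv)", f)
conn.close()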


VS Code and pytest: Where is the default junit-xml output path defined and why is there one?

I am trying to run and debug the tests of my Python project inside the VS Code interface.
I followed the instructions from VS Code's website for pytest, but when trying to run a test the output fails:
============================= test session starts =============================
platform win32 -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: c:\Users\myUserName\Projects\myProjectName
plugins: localserver-0.5.0
collected 1 item
myProjectName\tests\test_static_analysis.py . [100%]
- generated xml file: C:\Users\MYUSERNAME\AppData\Local\Temp\tmp-26696NTXW7ChqfMEN.xml
-
============================== 1 passed in 0.18s ==============================
Error: Error: cannot open file:///c%3A/Users/myUserName/Projects/myProjectName/C. Detail:
Unable to read file 'c:\Users\myUserName\Projects\myProjectName\C' (Error: Unable to resolve
non-existing file 'c:\Users\myUserName\Projects\myProjectName\C')
Maybe VS Code doesn't have authorization to write to AppData.
What concerns me is: why is VS Code launching pytest with a junit-xml output option at all?
The command actually executed by VS Code is:
c:; cd 'c:\Users\myUserName\Projects\myProjectName'; & 'C:\ProgramData\Anaconda3\envs\venv-myProjectName\python.exe' 'c:\Users\myUserName\.vscode\extensions\ms-python.python-2021.8.1105858891\pythonFiles\lib\python\debugpy\launcher' '53774' '--' 'c:\Users\myUserName\.vscode\extensions\ms-python.python-2021.8.1105858891\pythonFiles\testlauncher.py' 'c:\Users\myUserName\Projects\myProjectName' 'pytest' '--override-ini' 'junit_family=xunit1' '--rootdir' 'c:\Users\myUserName\Projects\myProjectName' '--junit-xml=C:\Users\MYUSERNAME\AppData\Local\Temp\tmp-26696pYh1w3pF2cXx.xml' './myProjectName/tests/test_static_analysis.py::TestStaticAnalysisVesselForceOnLateralCenter::test_pos_surge_on_vessel'
Settings > python.testing.pytestArgs is empty.
Where is this output path defined?
How can I change it to be in the local working directory?
Do I need to have a junit-xml output? Is it mandatory for VS Code UI to work?
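For reference, the junit-xml flags can be exercised outside VS Code; a minimal sketch (the flags are copied from the logged command above, the paths are placeholders) showing that pytest writes the XML wherever it is pointed:
import pytest  # assumption: pytest is installed in the active environment

# Same junit options VS Code passes, but with the XML in the working directory.
exit_code = pytest.main([
    "--override-ini", "junit_family=xunit1",
    "--junit-xml=./test-results.xml",
    "./myProjectName/tests/test_static_analysis.py",
])
print("pytest exit code:", exit_code)
This only confirms pytest's side of things; since python.testing.pytestArgs is empty, the temp-file path in the logged command must come from the VS Code Python extension itself rather than from pytest settings.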

Error while trying to query a table on DB2 using Python (SQL0332N)

I'm connecting to a DB2-LUW database using Python 3.7, and some queries fail with the error: "SQL0332N Character conversion from the source code page "1252" to the target code page "874" is not supported.***".
First, to test the connection from Python to the database on DB2, I create a new table.
I insert one record and read it back. When I read the inserted row, I get the error.
Results in interactive python:
import ibm_db_dbi as dbi
print(dbi.__version__)
3.0.2
conn = dbi.connect("DATABASE=<db>;HOSTNAME=<hostname>;PORT=<port>;PROTOCOL=TCPIP;UID=<user>;PWD=<pwd>;", "", "")
c = conn.cursor()
c.execute('create table ibm_db_tst (col1 int)')
Out[5]: True
c.execute('insert into ibm_db_tst values(2)')
Out[6]: True
c.execute('select col1 from ibm_db_tst')
Out[7]: True
print(c.fetchone())
Traceback (most recent call last):
File "C:\Users\2400566\Anaconda3\lib\site-packages\ibm_db_dbi.py",
line 1449, in _fetch_helper
row = ibm_db.fetch_tuple(self.stmt_handler)
Column information cannot be retrieved: [IBM][CLI
Driver][DB2/NT64] SQL0332N Character conversion from the source code
page "1252" to the target code page "874" is not supported. SQLCODE=-332
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "", line 1, in
print(c.fetchone())
File "C:\Users\2400566\Anaconda3\lib\site-packages\ibm_db_dbi.py", line 1475, in fetchone
row_list = self._fetch_helper(1)
File "C:\Users\2400566\Anaconda3\lib\site-packages\ibm_db_dbi.py", line 1456, in _fetch_helper
raise self.messages[len(self.messages) - 1]
ibm_db_dbi::Error: [IBM][CLI Driver][DB2/NT64] SQL0332N Character conversion from the source code page "1252" to the target code page "874" is not supported. SQLCODE=-332
I'm not sure what's wrong. Any advice?
My Python version is 3.7.7, running on a Windows 10 x64 PC.
DB2 is on Windows Server 2012 x64.
DB2 version is DB2 v11.1.0.1527.
Database territory : GB
Database code page : 1252
Database code set : 1252
Database country/region code : 44
Thanks in advance.
In case your Python script uses db2dsdriver to connect to the database on the server, try setting the DisableUnicode keyword to 0 to enforce the Unicode code page (i.e. 1208) on Windows.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.swg.im.dbclient.config.doc/doc/r0054636.html
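For what it's worth, a hedged sketch of the corresponding db2dsdriver.cfg entry (alias, host, and port are placeholders; see the linked page for the authoritative layout):
<configuration>
   <dsncollection>
      <dsn alias="MYDB" name="MYDB" host="myhost.example.com" port="50000"/>
   </dsncollection>
   <databases>
      <database name="MYDB" host="myhost.example.com" port="50000">
         <!-- assumption per the linked doc: 0 keeps the Unicode code page 1208 -->
         <parameter name="DisableUnicode" value="0"/>
      </database>
   </databases>
</configuration>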

How to set set_stream_blob_threshold in the Firebird fdb Python library?

I'm trying to migrate data from a Firebird DB to MS SQL Server using fdb (2.0.1) and pyodbc. Since there are blobs in the Firebird database which are over 64K, they are returned as BlobReader objects, and I would rather not deal with the bytes myself before writing them with pyodbc. The docs say that you can turn off the 64K threshold by passing -1 to cursor.set_stream_blob_treshold (note the spelling in the library). However, that doesn't seem to work, since fdb.fbcore.ProgrammingError is thrown...
https://fdb.readthedocs.io/en/v2.0/reference.html#fdb.Cursor.set_stream_blob_treshold
Here is how i call the function:
import fdb

class Firebird:
    def __init__(self, db_name: str):
        self.__fb_conn = fdb.connect(database=db_name, user='someuser', password='somepass', charset='ISO8859_1')
        self.__fb_cursor = self.__fb_conn.cursor()
        # change the blob safety threshold to unlimited for troubleshooting
        self.__fb_cursor.set_stream_blob_treshold(-1)  # doesn't work :(
Here is a stack trace for the error:
(.venv) >python3.8.exe -i
Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:57:54) [MSC v.1924 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from commonlibs import Firebird
>>>
>>> fb = Firebird('somedb.fdb')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\user1\dev\commonlibs\Firebird.py", line 13, in __init__
self.__fb_cursor.set_stream_blob_treshold(int(-1)) #doesn't work :(
File "C:\Users\user1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\fdb\fbcore.py", line 3930, in set_stream_blob_treshold
raise ProgrammingError
fdb.fbcore.ProgrammingError
Per Mark's comment:
I don't know much about the data source or what sort of blobs they are. It was one of those situations where the other team's guy said: "Hey, here is some data from this partner, let's see what's inside."
However, when passing the obj.read() value to pyodbc for the BlobReader objects, it did insert some of the blobs. For a lot of them, though, pyodbc reported this error:
pyodbc.Error: ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed. (0) (SQLPutData); [HY000] [Microsoft][ODBC SQL Server Driver][SQL Server]The text, ntext, or image pointer value conflicts with the column name specified. (7125)')
I was kind of hoping I could avoid all this pyodbc and .read() stuff by setting that threshold, but I wonder if the pyodbc error would show up regardless...
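In case it helps, a hedged workaround sketch under the assumption above (that oversized blobs come back as BlobReader objects, which fdb defines in fdb.fbcore); materialize_blobs is a hypothetical helper, not part of either library:
import fdb

def materialize_blobs(row):
    # BlobReader is file-like; .read() pulls the whole blob into memory,
    # so pyodbc only ever sees plain bytes/str values.
    return tuple(
        col.read() if isinstance(col, fdb.fbcore.BlobReader) else col
        for col in row
    )
Rows would then be passed through this helper (e.g. materialize_blobs(r) for each fetched r) before the pyodbc insert; whether that avoids the SQLPutData error above is a separate question.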

Why am I getting "Unhandled exception: local variable 'pwd' referenced before assignment"?

I'm trying to transfer a schema from my personal machine to RDS via Workbench. I've exported an SQL dump file and am trying to import it into RDS. However, I get the following error:
Unhandled exception: local variable 'pwd' referenced before assignment
Check the log for more details.
The log file has this:
14:05:01 [WRN][wb_admin_export.py:process_db:277]: Task exited with code 1
14:05:01 [ERR][ pymforms]: Unhandled exception in Python code:
Traceback (most recent call last):
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 1334, in _update_progress
r = self.update_progress()
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 913, in update_progress
self.start()
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 1323, in start
password = self.get_mysql_password(self.bad_password_detected)
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 963, in get_mysql_password
if pwd is None:
UnboundLocalError: local variable 'pwd' referenced before assignment
An earlier attempt yielded a little more detail:
14:00:24 [ERR][wb_admin_export.py:process_db:251]: Error from task: ERROR 1045 (28000): Access denied for user 'admin'@'<some_numbers_I_probably_shouldn't_share!>.skybroadband.com' (using password: YES)
14:00:24 [WRN][wb_admin_export.py:process_db:277]: Task exited with code 1
14:00:24 [ERR][ pymforms]: Unhandled exception in Python code:
Traceback (most recent call last):
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 1334, in _update_progress
r = self.update_progress()
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 913, in update_progress
self.start()
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 1323, in start
password = self.get_mysql_password(self.bad_password_detected)
File "C:\Program Files\MySQL\MySQL Workbench 8.0 CE\modules\wb_admin_export.py", line 963, in get_mysql_password
if pwd is None:
UnboundLocalError: local variable 'pwd' referenced before assignment
14:00:43 [ERR][wb_admin_utils.py:page_activated:329]: Exception activating the page - 'Label' object has no attribute 'remove_from_parent'Error from task: ERROR 1045 (28000): Access denied for user 'admin'@'<some_numbers_I_probably_shouldn't_share!>' (using password: YES)
This has confused me somewhat, as I'm not using Python to transfer anything - I'm using Workbench. Clearly I have a password issue, but what is it exactly and how do I fix it? I'm logged into RDS and can add or remove schemas/tables etc. manually, so Workbench knows the correct passwords...
For me, the error was about a database privilege:
mysqldump: Got error: 1044: Access denied for user 'myuser'@'%' to database 'mydb' when doing LOCK TABLES
You have to uncheck the lock-tables option under "Advanced Options" in the top right of the data exporter in MySQL Workbench.
If you're exporting from the command line, add the --lock-tables=FALSE flag instead, as in the example below.
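A hedged command-line example (host, user, and database name are placeholders; --single-transaction keeps an InnoDB dump consistent without needing LOCK TABLES):
mysqldump --host=myhost.example.com --user=myuser -p --single-transaction --lock-tables=FALSE mydb > mydb_dump.sql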
If you only want to migrate your DB structure:
Open Workbench.
Select the connection of your local DB.
In the 'Navigator' panel on the left, choose 'Administration'.
Open 'Data Export'.
Select the schema you want to export.
On the right side of the window you should find a select box; switch it from 'Dump Structure and Data' to 'Dump Structure Only'.
In the same way, select 'Dump Structure Only' when you import it!
Honestly, I don't know exactly how it solved the error
'UnboundLocalError: local variable 'pwd' referenced before assignment',
but just moving the structure without the data worked for me.
Go to C:\Users\User_Name\AppData\Roaming\MySQL\Workbench\sql_workspaces and delete the workspace of the server where you are getting the error; or, more simply, delete all the folders in sql_workspaces.

How to solve the docker:layers_calculator error when computing the Merkle tree on a private Tangle?

I want to set up a private Tangle on my own virtual machine with Ubuntu 18.04, 4 GB RAM and 20 GB of disk space.
I have followed these instructions: https://docs.iota.org/docs/compass/0.1/how-to-guides/set-up-a-private-tangle. Every command works fine until I reach this one: bazel run //docker:layers_calculator.
It shows an error as follows:
Starting local Bazel server and connecting to it...
ERROR: /home/istabraq/compass/third-party/maven_deps.bzl:3:5: Traceback (most recent call last):
File "/home/istabraq/compass/WORKSPACE", line 42
maven_jars()
File "/home/istabraq/compass/third-party/maven_deps.bzl", line 3, in maven_jars
native.maven_jar(<4 more arguments>)
type 'struct' has no method maven_jar()
ERROR: error loading package '': Encountered error while reading extension file 'protobuf_deps.bzl': no such package '@com_google_protobuf_deps//': error loading package 'external': Could not load //external package
INFO: Elapsed time: 4.743s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
How can I solve this problem? What have I missed?
Read carefully the message given after running the Bazel installer:
Make sure you have "/home/yourusername/bin" in your path. You can also activate bash completion by adding the following line to your ~/.bashrc:
source /home/yourusername/.bazel/bin/bazel-complete.bash
You can check with "bazel info" or "bazel version".
Unfortunately, there are further errors:
https://github.com/iotaledger/compass/issues/142
I have solved this issue by using these commands:
Step 3: Set up your environment
If you ran the Bazel installer with the --user flag as above, the Bazel executable is installed in your $HOME/bin directory. It's a good idea to add this directory to your default paths, as follows:
export PATH="$PATH:$HOME/bin"
You can also add this line to your ~/.bashrc or ~/.zshrc file to make it permanent.
reference:
https://docs.bazel.build/versions/master/install-ubuntu.html
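A hedged aside beyond the answers above: the "type 'struct' has no method maven_jar()" error generally indicates a Bazel release newer than the one the project expects, since the native maven_jar rule was removed from recent Bazel versions; pinning an older release, e.g. via bazelisk and a .bazelversion file, is one way out (the version below is a placeholder, check the compass documentation or the linked issue):
# bazelisk reads .bazelversion and downloads/runs the pinned release
echo "0.18.0" > .bazelversion
bazel version   # should now report the pinned version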