Error trying to send message to server (Ubuntu 18.04 LTS)

When I try to send a quoted string from client to server, it works. However, when I try to send a variable storing the user's input to the server, it does not. Does anybody know why?
server_file
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind( ("0.0.0.0", 1234) )
buff, addr = s.recvfrom(100)
print buff, addr
client_file
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nume_user = input()
# s.sendto( nume_user, ("127.0.0.1", 1234) ) # this does not work
s.sendto("john", ("127.0.0.1", 1234) ) # this works
buff, addr = s.recvfrom(100)
print buff
This is the error that I am getting (Ubuntu 18.04 LTS):
Traceback (most recent call last):
File "c4-1.py", line 5, in <module>
nume_user = input()
File "<string>", line 1, in <module>
NameError: name 'ionut' is not defined

From the documentation:
input([prompt])
Equivalent to eval(raw_input(prompt)).
Thus it will read the string you enter (in this case ionut) and then eval it. Since ionut is not a declared variable or otherwise a valid Python expression, it throws the error shown.
Also from the documentation:
Consider using the raw_input() function for general input from users.
This is what you should use instead; then you don't get the error.
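A minimal sketch of that fix in the client (Python 2, keeping the same socket setup as above):

nume_user = raw_input()  # reads the line as a plain string, no eval()
s.sendto(nume_user, ("127.0.0.1", 1234))  # works: str is already a byte string in Python 2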
Apart from that, consider using Python 3 instead of Python 2, which you are currently using. Python 2 is end-of-life, and the input function in Python 3 behaves the way you seem to expect - see this documentation.
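For reference, a sketch of the client in Python 3 (note that input() there returns a str without eval'ing it, and sendto() requires bytes, so the string must be encoded):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nume_user = input()  # Python 3: reads a plain string, no eval()
s.sendto(nume_user.encode(), ("127.0.0.1", 1234))  # sendto() needs bytes in Python 3
buff, addr = s.recvfrom(100)
print(buff)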

Related

Pyodbc connection with amazon rds postgres database produces error when executing SQL commands (syntax error)

I have set up a connection between pyodbc and AWS RDS (a PostgreSQL database) and have installed psqlODBC (which is what the PostgreSQL Unicode(x64) ODBC driver is). Everything looks fine until I run a SQL query. It returns a syntax error, but there is nothing wrong with my syntax. I'm not exactly sure what the issue could be.
This is Python 3.7 by the way.
import pyodbc
mypw = 'skjhaf234234dkjhkjx'
string = 'Driver={PostgreSQL Unicode(x64)};Server=myfakeserveraddress.rds.amazonaws.com;Database=mydb;UID=myusername;PWD='+mypw+';'
connection = pyodbc.connect(string)
c = connection.cursor()
c.execute("SELECT * FROM schema_table.test_table;")
Error Message:
Traceback (most recent call last):
File "", line 1, in
pyodbc.ProgrammingError: ('42601', '[42601] ERROR: syntax error at or near "'schema_table.test_table'";\nError while executing the query (1) (SQLExecDirectW)')
Without the single quotation marks ' surrounding the table name, I get this error instead:
c.execute("SELECT * from schema_table.test_table")
Traceback (most recent call last):
File "", line 1, in
pyodbc.ProgrammingError: ('25P02', '[25P02] ERROR: current transaction is aborted, commands ignored until end of transaction block;\nError while executing the query (1) (SQLExecDirectW)')
PS: My company has disabled pip installs, so I cannot upgrade my packages and am limited to using only a few packages (including this one).
How can I execute my commands without errors?
It seems I have figured it out: I added autocommit=False to the connection initialization and it seems fine now. Perhaps it has something to do with the underlying parsing of the SQL commands. Keeping the question up in case it helps someone.
import pyodbc
mypw = 'skjhaf234234dkjhkjx'
string = 'Driver={PostgreSQL Unicode(x64)};Server=myfakeserveraddress.rds.amazonaws.com;Database=mydb;UID=myusername;PWD='+mypw+';'
connection = pyodbc.connect(string, autocommit=False)
c = connection.cursor()
c.execute("SELECT * FROM schema_table.test_table;")

How to use MicroPython classes in separate files

Getting started with MicroPython and having problems with classes in separate files:
In main.py:
import clientBase
import time

if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass
    cb.process()
In clientBase.py:
class ClientBaseClass:
    def __init__(self):
        print("init")

    def process(self):
        print("Process")
It compiles and copies to the Pico without errors but does not run. Putty output below. (No idea how to run Putty, or another port monitor, without it blocking the port!)
MPY: soft reboot
Traceback (most recent call last):
Thanks
Python Console:
"C:\Users\jluca\OneDrive\Apps\Analytical Engine\Python\Client\venv\Scripts\python.exe" "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2.4\plugins\python-ce\helpers\pydev\pydevconsole.py" --mode=client --port=59708
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['C:\Users\jluca\OneDrive\Apps\Analytical Engine\Python\Client', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\stdlib', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\micropython', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\rpi_pico', 'C:/Users/jluca/OneDrive/Apps/Analytical Engine/Python/Client'])
PyDev console: starting.
Python 3.10.3 (tags/v3.10.3:a342a49, Mar 16 2022, 13:07:40) [MSC v.1929 64 bit (AMD64)] on win32
The first problem I see here is that you're not properly instantiating the ClientBaseClass object. You're missing parentheses here:
if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass  # <-- THIS IS INCORRECT
    cb.process()
This sets the variable cb to the class ClientBaseClass itself, rather than creating a new instance of that class.
You need:
if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass()
    cb.process()
I don't know if that's your only problem or not; seeing your traceback would shed more light on the issue.
If I fix that one problem, it all seems to work. I'm using ampy to transfer files to my Pico board (I've also repeated the same process using the Thonny editor, which provides a menu-driven interface for working with MicroPython boards):
$ ampy -p /dev/usbserial/3/1.4.2 put main.py
$ ampy -p /dev/usbserial/3/1.4.2 put clientBase.py
$ picocom -b 115200 /dev/usbserial/3/1.4.2
I press return to get the MicroPython REPL prompt:
<CR>
>>>
And then type CTRL-D to reset the board:
>>> <CTRL-D>
MPY: soft reboot
And then the board comes up and the code executes as expected:
<pause for 15 seconds>
Starting
Going to class
init
Process
MicroPython v1.18 on 2022-01-17; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
(Note that if you replace MicroPython with CircuitPython, the Pico will show up as a drive and you can just drag and drop files onto it.)
I tried MicroPython and CircuitPython with PyCharm, Thonny, and Visual Studio Code. The only thing that reliably works is CircuitPython with the Mu editor. I think it's all about the way the .py files are copied to the Pico board, and life's too short to do more diagnostics. Mu is pretty basic but it works! Thanks for the help.

Error from Google Authentication in Cloud SQL connection

[LATEST UPDATE] Thanks to Jack's enormous help!!! I managed to connect to the Cloud SQL postgres DB and read/write my dataframes to the database. However, I am still experiencing the same error that I experienced previously, which is...
struct.error: 'h' format requires -32768 <= number <= 32767
This error doesn't happen when the dataframes are small and compact and the columns do not have too many NaN values. However, when there are many NaN values in the columns, the program throws the error below.
Separately, I have tried using df = df.fillna(0) to fill the NaN values with 0, but that did not work either; the same error surfaced. Please help!
Traceback (most recent call last):
File "...\falcon_vbackup\STEP5_SavetoDB_and_SendEmail.py", line 81, in <module>
main_SavetoDB_and_SendEmail(
File "...\falcon_vbackup\STEP5_SavetoDB_and_SendEmail.py", line 37, in main_SavetoDB_and_SendEmail
Write_Dataframe_to_SQLTable(
File "...\falcon_vbackup\APPENDIX_Database_ReadWrite_v2.py", line 143, in Write_Dataframe_to_SQLTable
df_Output.to_sql(sql_tablename, con=conn, schema='public', index=False, if_exists=if_exists, method='multi', chunksize=1000)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\core\generic.py", line 2963, in to_sql
return sql.to_sql(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 697, in to_sql
return pandas_sql.to_sql(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 1739, in to_sql
total_inserted = sql_engine.insert_records(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 1322, in insert_records
return table.insert(chunksize=chunksize, method=method)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 950, in insert
num_inserted = exec_insert(conn, keys, chunk_iter)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 873, in _execute_insert_multi
result = conn.execute(stmt)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1289, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\sql\elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1481, in _execute_clauseelement
ret = self._execute_context(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1845, in _execute_context
self._handle_dbapi_exception(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 2030, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1802, in _execute_context
self.dialect.do_execute(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\dbapi.py", line 455, in execute
self._context = self._c.execute_unnamed(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\core.py", line 627, in execute_unnamed
self.send_PARSE(NULL_BYTE, statement, oids)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\core.py", line 601, in send_PARSE
val.extend(h_pack(len(oids)))
struct.error: 'h' format requires -32768 <= number <= 32767
Exception ignored in: <function Connector.__del__ at 0x00000213190D8700>
Traceback (most recent call last):
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\google\cloud\sql\connector\connector.py", line 167, in __del__
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\concurrent\futures\_base.py", line 447, in result
concurrent.futures._base.TimeoutError:
I have set up a PostgreSQL instance in GCP's Cloud SQL. I am trying to connect to it using google.cloud.sql.connector. I have created a Service Account from the GCP Console and downloaded the JSON keys.
I want to use the service account credentials/keys (in the form of a .json file placed in the same directory as my main.py code) to authenticate access to Cloud SQL.
I am trying to authenticate, but I keep getting an error saying that the service account JSON file was not found.
Can anyone help me figure out how to fix this error? Thank you!
import os
import pandas as pd
import sqlalchemy
from google.cloud.sql.connector import connector
# configure Cloud SQL Python Connector properties
def getconn():
    conn = connector.connect(
        os.environ['LL_DB_INSTANCE_CONNECTION_NAME'],
        "pg8000",
        user=os.environ['LL_DB_USER'],
        password=os.environ['LL_DB_PASSWORD'],
        db=os.environ['LL_DB_NAME'])
    return conn
# Show existing SQL tables within the database
def Show_SQLTables_in_Database(conn):
    if conn is not None:
        # Show what tables remain in the database
        results = conn.execute("""SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'public'""").fetchall()
        for table in results:
            print(table)
if __name__ == "__main__":
    # Set the Google Application Credentials as an environment variable
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.getcwd(), "Google-Credentials-LL-tech2.json")
    # create connection pool to re-use connections
    pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
    with pool.connect() as db_conn:
        # Show what tables remain in the database
        results = db_conn.execute("""SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'public'""").fetchall()
        for table in results:
            print(table)
The error you are seeing means that the .json file is not being found. This is most likely caused by os.getcwd(), which gets the path of the current working directory from which main.py is being called. This leads to errors if you are calling the file from anywhere other than its parent directory.
Working case: python main.py
Error case: python folder/main.py
Change the line where you set credentials to the following:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.path.dirname(os.path.abspath(__file__)),"Google-Credentials-LL-tech2.json")
This will allow the credentials path to be set properly no matter where your main.py is called from.
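To see the difference concretely (a toy illustration, not from the question):

import os

# If you run `python folder/main.py` from /home/user:
print(os.getcwd())                                 # /home/user -- wherever python was invoked
print(os.path.dirname(os.path.abspath(__file__)))  # /home/user/folder -- always the script's own directory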
Responding to your latest update of the error.
First, make sure that your service account has the Cloud SQL Client role applied to it.
Secondly, try executing the following basic script prior to your custom configuration; this will help isolate the error to either the Python Connector or the service account/implementation.
The following should just connect to your database and print the time.
from google.cloud.sql.connector import connector
import sqlalchemy
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.path.dirname(os.path.abspath(__file__)), "GSheet-Credentials-LL-tech2.json")

# build connection for db using Python Connector
def getconn():
    conn = connector.connect(
        os.environ['LL_DB_INSTANCE_CONNECTION_NAME'],
        "pg8000",
        user=os.environ['LL_DB_USER'],
        password=os.environ['LL_DB_PASSWORD'],
        db=os.environ['LL_DB_NAME'],
    )
    return conn

# create connection pool
pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

def db_connect():
    with pool.connect() as conn:
        current_time = conn.execute(
            "SELECT NOW()").fetchone()
        print(f"Time: {str(current_time[0])}")

db_connect()
If that still gives the error, please provide the full stacktrace of the error so that I can try and debug it further with more info.
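As for the struct.error itself, a likely explanation (an educated guess from the traceback, not something confirmed in this thread): pg8000 packs the number of bind parameters of a statement as a signed 16-bit integer (the 'h' format in h_pack), so a single statement can carry at most 32767 parameters. With to_sql(..., method='multi', chunksize=1000), pandas builds one INSERT containing chunksize times the number of columns parameters, which can exceed that limit for wide dataframes. A minimal sketch of capping the chunk size, reusing the df_Output and sql_tablename names from your traceback:

# assumption: keep rows-per-INSERT small enough that rows * columns stays under pg8000's 32767 limit
max_params = 32767
safe_chunksize = max(1, max_params // max(1, len(df_Output.columns)))
df_Output.to_sql(sql_tablename, con=conn, schema='public', index=False,
                 if_exists=if_exists, method='multi', chunksize=safe_chunksize)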

pySerial running command to list ports

I am using pySerial, and I am running this command in CMD to list the available COM ports; it displays a COM port number when found:
python -m serial.tools.list_ports
I know that the command line will import the serial module when I use the python -m flag, and I can access the objects inside it, so it should show the output. However, the same thing does not work when run from the IDLE shell:
import serial
print(serial.tools.list_ports_common)
This returns the error AttributeError: module 'serial' has no attribute 'tools'.
Why is it not working in IDLE?
You need to import the submodule explicitly first:
from serial.tools import list_ports
list_ports.main() # Same result as python -m serial.tools.list_ports
You can check out the source here
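If you want the detected ports programmatically rather than just printed, the same module also exposes comports(), which returns one object per detected port (a small sketch):

from serial.tools import list_ports

for port in list_ports.comports():  # one ListPortInfo object per detected serial port
    print(port.device, "-", port.description)  # e.g. COM7 - USB Serial Device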
You can simply try connecting to each possible port (COM0...COM255). Then add the ports with successful connections to a list. Here is my example:
import serial

def connectedCOMports():
    allPorts = []  # list of all possible COM ports
    for i in range(256):
        allPorts.append("COM" + str(i))
    ports = []  # a list of COM ports with devices connected
    for port in allPorts:
        try:
            s = serial.Serial(port)  # attempt to connect to the device
            s.close()
            ports.append(port)  # if it can connect, add it to the list
        except serial.SerialException:
            pass  # if it can't connect, don't add it to the list
    return ports

print(connectedCOMports())
When I ran this program, it printed ['COM7'] to the console. This represents the ESP32 microcontroller that I connected to my USB port.

Why does socket.connect() stop working when used within a loop in Python 3.4?

I'm just starting to learn Python programming and have been following a tutorial for creating a simple port scanner in order to learn about socket programming. I'm able to make a successful connection to localhost when I manually enter all the code for a single iteration; however, if I take the same code and apply it within a for loop using try/except, I immediately get exceptions for every port in the range, even when I know that some of the ports are open. I believe I've isolated the problem to socket.connect(), because I've entered code below it that I know never gets executed.
I can enter the following code, and get a successful return:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(10)
port = 22
s.connect(('127.0.0.1', port))
s.send(b'test')
banner = s.recv(1024)
print(banner)
s.close()
returns:
b'SSH-2.0-OpenSSH_6.2\r\n'
Process finished with exit code 0
However, as soon as I take that code and move it into a for loop with the port number as the iterator, it stops working.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(10)
for port in range(1, 26):
    print("[+]Attempting to connect to : " + str(port))
    try:
        s.connect(('127.0.0.1', port))
        s.send(b'test')
        banner = s.recv(1024)
        s.close()
        if banner:
            print("Port " + port + "is Open: " + banner)
    except:
        print("[+]Port " + str(port) + " is closed")
returns:
[+]Attempting to connect to : 1
[+]Port 1 is closed
[+]Attempting to connect to : 2
[+]Port 2 is closed
[+]Attempting to connect to : 3
[+]Port 3 is closed
....ETC....ETC....ETC....
[+]Attempting to connect to : 24
[+]Port 24 is closed
[+]Attempting to connect to : 25
[+]Port 25 is closed
Even though I KNOW port 22 is open and listening on localhost (i.e. I am able to ssh to 127.0.0.1 without issue). I have tried everything I can think of, to no avail, including manually converting port to an int with the built-in int() function; I've also tried socket.connect_ex, etc. And I've put code right below the socket.connect statement just to see if it shows up, which it never does.
The Zen of Python states:
Errors should never pass silently.
Unless explicitly silenced.
But you have not explicitly silenced the error; you have just replaced it with a message that does not describe what actually happened:
>>> "Port" + 1
Traceback (most recent call last):
File "<pyshell#15>", line 1, in <module>
"Port "+1
TypeError: Can't convert 'int' object to str implicitly
This is what you will get if opening port 1 had worked. But there is a second problem: after you close a socket, you can't connect to anything else with it:
>>> a = socket.socket()
>>> a.close()
>>> a.connect(("www.python.com",80))
Traceback (most recent call last):
File "<pyshell#18>", line 1, in <module>
a.connect(("www.python.com",80))
OSError: [Errno 9] Bad file descriptor
So you need to create a new socket inside the loop for it to work properly. But most importantly, you need to limit the errors you catch:
try:
    # if this is the only line you expect to fail, then it is the only line in the try
    s.connect(('127.0.0.1', port))
except ConnectionError:
    # if a ConnectionError is the only one you expect, it is the only one you catch
    print("[+]Port " + str(port) + " is closed")
else:  # if there was no error
    s.send(b'test')
    banner = s.recv(1024)
    s.close()
    if banner:
        print("Port " + str(port) + " is Open: " + str(banner))  # str() avoids the TypeError shown above
Then you will see the actual errors you are getting, instead of guessing what went wrong, which is also against the Zen of Python:
In the face of ambiguity, refuse the temptation to guess.
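Putting it all together, a minimal sketch of the corrected scanner (same host and port range as above, with a fresh socket created for each port and only the expected errors caught; socket.timeout is included because of the settimeout() call):

import socket

for port in range(1, 26):
    print("[+]Attempting to connect to : " + str(port))
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # new socket per port; a closed socket cannot reconnect
    s.settimeout(10)
    try:
        s.connect(('127.0.0.1', port))
    except (ConnectionError, socket.timeout):
        print("[+]Port " + str(port) + " is closed")
    else:
        s.send(b'test')
        banner = s.recv(1024)
        if banner:
            print("Port " + str(port) + " is Open: " + str(banner))
    finally:
        s.close()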