I am using pySerial, and I run this command from CMD to list the available COM ports; it displays a COM port number when one is found:
python -m serial.tools.list_ports
I know that the command line imports the serial module when I use the python -m flag, and that I can access the objects inside it, so it should show the output. However, the same thing does not work when run from the IDLE shell:
import serial
print(serial.tools.list_ports_common)
This returns the error AttributeError: module 'serial' has no attribute 'tools'.
Why is it not working in IDLE?
You need to import it first:
from serial.tools import list_ports
list_ports.main() # Same result as python -m serial.tools.list_ports
You can check out the source here
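If you want the port list as Python objects inside IDLE rather than as printed output, here is a minimal sketch using pySerial's comports() helper (the fields printed are standard ListPortInfo attributes):
from serial.tools import list_ports

# comports() returns a list of ListPortInfo objects
for port in list_ports.comports():
    print(port.device, port.description)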
You can simply try connecting to each possible port (COM0...COM255). Then add the ports with successful connections to a list. Here is my example:
import serial
def connectedCOMports():
    allPorts = []  # list of all possible COM ports
    for i in range(256):
        allPorts.append("COM" + str(i))

    ports = []  # a list of COM ports with devices connected
    for port in allPorts:
        try:
            s = serial.Serial(port)  # attempt to connect to the device
            s.close()
            ports.append(port)  # if it can connect, add it to the list
        except serial.SerialException:
            pass  # if it can't connect, don't add it to the list

    return ports

print(connectedCOMports())
When I ran this program, it printed ['COM7'] to the console. This represents the ESP32 microcontroller that I connected to my USB port.
I'm getting started with MicroPython and am having problems with classes in separate files.
In main.py:
import clientBase
import time
if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass
    cb.process()
In clientBase.py:
class ClientBaseClass:
    def __init__(self):
        print("init")

    def process(self):
        print("Process")
It compiles and copies to the Pico without errors but does not run. (No idea how to run Putty, or another port monitor, without blocking the port!) Putty output:
MPY: soft reboot
Traceback (most recent call last):
Thanks
Python Console:
"C:\Users\jluca\OneDrive\Apps\Analytical Engine\Python\Client\venv\Scripts\python.exe" "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2.4\plugins\python-ce\helpers\pydev\pydevconsole.py" --mode=client --port=59708
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['C:\Users\jluca\OneDrive\Apps\Analytical Engine\Python\Client', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\stdlib', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\micropython', 'C:\Users\jluca\AppData\Roaming\JetBrains\PyCharmCE2021.2\plugins\intellij-micropython\typehints\rpi_pico', 'C:/Users/jluca/OneDrive/Apps/Analytical Engine/Python/Client'])
PyDev console: starting.
Python 3.10.3 (tags/v3.10.3:a342a49, Mar 16 2022, 13:07:40) [MSC v.1929 64 bit (AMD64)] on win32
The first problem I see here is that you're not properly instantiating the ClientBaseClass object. You're missing parentheses here:
if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass  # <-- THIS IS INCORRECT
    cb.process()
This sets the variable cb to the class ClientBaseClass itself, rather than creating a new instance of that class.
You need:
if __name__ == "__main__":
    time.sleep(15)  # Delay to open Putty
    print("Starting")
    print("Going to class")
    cb = clientBase.ClientBaseClass()
    cb.process()
I don't know if that's your only problem or not; seeing your full traceback would shed more light on the problem.
If I fix that one problem, it all seems to work. I'm using ampy to transfer files to my Pico board (I've also repeated the same process using the Thonny editor, which provides a menu-driven interface for working with MicroPython boards):
$ ampy -p /dev/usbserial/3/1.4.2 put main.py
$ ampy -p /dev/usbserial/3/1.4.2 put clientBase.py
$ picocom -b 115200 /dev/usbserial/3/1.4.2
I press return to get the MicroPython REPL prompt:
<CR>
>>>
And then type CTRL-D to reset the board:
>>> <CTRL-D>
MPY: soft reboot
And then the board comes up and the code executes as expected:
<pause for 15 seconds>
Starting
Going to class
init
Process
MicroPython v1.18 on 2022-01-17; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
(Note that if you replace MicroPython with CircuitPython, the Pico will show up as a drive and you can just drag-and-drop files onto it.)
I tried MicroPython and CircuitPython with PyCharm, Thonny, and Visual Studio Code. The only thing that reliably works is CircuitPython with the Mu editor. I think it's all about the way the .py files are copied to the Pico board, and life's too short to do more diagnostics. Mu is pretty basic, but it works! Thanks for the help.
Until Python 3.4 you were able to determine a target's operating system with Python as follows:
import nmap
nm = nmap.PortScanner()
scanner = nm.scan(IP, port, arguments='-O')
print(scanner['scan'][IP]['osmatch'])
I'm using Python 3.6 and osmatch returns nothing.
Is there a way to go about this?
I've tested your script with Python 3.7.6:
import nmap
nm = nmap.PortScanner()
scanner = nm.scan(IP, port, arguments='-O')
print(scanner['scan'][IP]['osmatch'])
and it works well. The problem you have is that, for some reason, the scan didn't retrieve any result, so the result object is empty; if you try again on a different host it should work.
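For reference, here is a more defensive sketch of the same check (the IP and port values are placeholders) that reports an empty result instead of printing nothing; note that OS detection with -O generally requires root privileges:
import nmap

IP = "192.168.1.1"   # placeholder target
port = "22-443"      # placeholder port range

nm = nmap.PortScanner()
result = nm.scan(IP, port, arguments='-O')

host = result['scan'].get(IP, {})
osmatch = host.get('osmatch', [])
if osmatch:
    for match in osmatch:
        # each match carries a name and an accuracy percentage
        print(match['name'], match['accuracy'])
else:
    print("No osmatch returned - host down, filtered, or scan lacked privileges")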
My operating system is Manjaro 17.1.12, the Python version is 3.7.0, and the Supervisor version is 3.3.4.
I have a Python script that just shows a notification. The code is:
import os
os.system('notify-send hello')
The supervisor config is:
[program:test_notify]
directory=/home/zz
command=python -u test_notify.py
stdout_logfile = /home/zz/supervisord.d/log/test_notify.log
stderr_logfile = /home/zz/supervisord.d/log/test_notify.log
But when I run the Python script under Supervisor, it doesn't show the notification.
The proper environment variables need to be set (DISPLAY and DBUS_SESSION_BUS_ADDRESS). You can do this in many different ways, depending on your needs, e.g.:
a) per subprocess
import os
os.system('DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus notify-send hello')
b) in script globally
import os
os.environ['DISPLAY'] = ':0'
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/1000/bus'
os.system('notify-send hello')
c) in supervisor config per program
[program:test_notify]
;
; your variables
;
user=john
environment=DISPLAY=":0",DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
The above examples make a couple of assumptions (you may want to change these settings accordingly):
the script is run as user john
the UID of user john is 1000
notifications appear on display :0
To run script as root and show notification for regular user, use sudo as described on Arch wiki Desktop_notifications.
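As a variation on a), here is a sketch that sets the environment through subprocess instead of embedding the variables in the shell command (the :0 display and the /run/user/1000/bus path are the same assumptions as above):
import os
import subprocess

# copy the current environment and add the two variables the notification needs
env = dict(os.environ,
           DISPLAY=':0',
           DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/1000/bus')
subprocess.run(['notify-send', 'hello'], env=env, check=True)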
I'm running an application which builds and sends ICMP ECHO requests to a few different IP addresses. The application is written in Crystal. When attempting to open a socket from within the Crystal Docker container, Crystal raises an exception: Permission Denied.
From within the container, I have no problem running ping 8.8.8.8.
Running the application on macOS, I have no problem.
After reading the https://docs.docker.com/engine/security/apparmor/ and https://docs.docker.com/engine/security/seccomp/ pages on AppArmor and seccomp I was sure I'd found the solution, but the problem remains unresolved, even when running with docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
Update/edit: After digging into capabilities(7), I added the following line to my Dockerfile: RUN setcap cap_net_raw+ep bin/ping, trying to let the socket be opened, but nothing changed.
Thanks!
Relevant Crystal socket code (full working code sample below):
# send request
address = Socket::IPAddress.new host, 0
socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
socket.send slice, to: address
Dockerfile:
FROM crystallang/crystal:0.23.1
WORKDIR /opt
COPY src/ping.cr src/
RUN mkdir bin
RUN crystal -v
RUN crystal build -o bin/ping src/ping.cr
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/opt/bin/ping"]
Running the code, first natively, then via Docker:
#!/bin/bash
crystal run src/ping.cr
docker build -t socket_permission .
docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
And finally, a 50-line Crystal script which fails to open a socket in Docker:
require "socket"
TYPE = 8_u16
IP_HEADER_SIZE_8 = 20
PACKET_LENGTH_8 = 16
PACKET_LENGTH_16 = 8
MESSAGE = " ICMP"
def ping
  sequence = 0_u16
  sender_id = 0_u16
  host = "8.8.8.8"

  # initialize packet with MESSAGE
  packet = Array(UInt16).new PACKET_LENGTH_16 do |i|
    MESSAGE[i % MESSAGE.size].ord.to_u16
  end

  # build out ICMP header
  packet[0] = (TYPE.to_u16 << 8)
  packet[1] = 0_u16
  packet[2] = sender_id
  packet[3] = sequence

  # calculate checksum
  checksum = 0_u32
  packet.each do |byte|
    checksum += byte
  end
  checksum += checksum >> 16
  checksum = checksum ^ 0xffff_ffff_u32
  packet[1] = checksum.to_u16

  # convert packet to 8 bit words
  slice = Bytes.new(PACKET_LENGTH_8)
  eight_bit_packet = packet.map do |word|
    [(word >> 8), (word & 0xff)]
  end.flatten.map(&.to_u8)
  eight_bit_packet.each_with_index do |chr, i|
    slice[i] = chr
  end

  # send request
  address = Socket::IPAddress.new host, 0
  socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
  socket.send slice, to: address

  # receive response
  buffer = Bytes.new(PACKET_LENGTH_8 + IP_HEADER_SIZE_8)
  count, address = socket.receive buffer
  length = buffer.size
  icmp_data = buffer[IP_HEADER_SIZE_8, length - IP_HEADER_SIZE_8]
end

ping
It turns out the answer is that Linux (and by extension Docker) does not give the same permissions that macOS does for DGRAM sockets. Changing the socket declaration to socket = IPSocket.new Socket::Family::INET, Socket::Type::RAW, Socket::Protocol::ICMP allows the socket to connect under Docker.
A little more is still required to run the program in a non-root context. Because raw sockets are restricted to root, the binary must also be granted the capability for access to a raw socket, CAP_NET_RAW (in Docker, however, this isn't necessary). I was able to get the program to run outside of a super-user context by running sudo setcap cap_net_raw+ep bin/ping. This is a decent primer on capabilities and the setcap command.
macOS doesn't use the same system of permissions, so setcap is just an unrecognized command. As a result, to get the above code to compile and run successfully on macOS without a super-user context, I changed the socket creation code to:
socket_type = Socket::Type::RAW
{% if flag?(:darwin) %}
socket_type = Socket::Type::DGRAM
{% end %}
socket = IPSocket.new Socket::Family::INET, socket_type, Socket::Protocol::ICMP
Applying the CAP_NET_RAW capability for use in linux happens elsewhere in the build process if needed.
With those changes, I'm not seeing any requirement for changes to seccomp or apparmor from the default shipped with Docker in order to run the program.
I have a VxWorks embedded OS and I want to check netstat.
This is what I tried:
-> inetstatShow
And the output is:
C interp: unknown symbol name 'inetstatShow'.
How can I get the netstat command on this system?
inetstatShow is provided by the netShow library - you need to be sure that your OS configuration includes netShow, or you can dynamically load it using ld.
The lkup function can be used to list symbols that are available to the shell; for example, try lkup "Show" to list all symbols that include the substring "Show".
VxWorks supports the netstat command:
-> netstat "-n -a" /* state of sockets */
-> netstat "-n -r" /* routing table */