Until Python 3.4 you could determine a target's operating system with Python as follows:
import nmap
nm = nmap.PortScanner()
scanner = nm.scan(IP, port, arguments='-O')
print(scanner['scan'][IP]['osmatch'])
I'm using Python 3.6 and osmatch returns nothing.
Is there a way to go about this?
I've tested your script with Python 3.7.6:
import nmap
nm = nmap.PortScanner()
scanner = nm.scan(IP, port, arguments='-O')
print(scanner['scan'][IP]['osmatch'])
and it works well. The problem you have is that, for some reason, the scan didn't retrieve any results, so the result object is empty; if you try again on a different host it should work.
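If you want to guard against that case, here is a minimal sketch along those lines (the IP and port values are placeholders; also keep in mind that nmap's -O option requires root/administrator privileges, which is a common cause of empty OS results):
import nmap
IP = "192.168.1.1"   # placeholder target, replace with your own
port = "22-443"      # placeholder port range
nm = nmap.PortScanner()
scanner = nm.scan(IP, port, arguments='-O')
# only read 'osmatch' if the host actually shows up in the results
host = scanner['scan'].get(IP, {})
print(host.get('osmatch', 'no OS match returned'))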
I am using pySerial, and I run this command in CMD to list available COM ports; it displays a COM port number when found:
python -m serial.tools.list_ports
I know that the command line imports the serial module when I use the -m flag, and I can access the objects inside it, so it should show the output. However, the same approach does not work when run in the IDLE shell:
import serial
print(serial.tools.list_ports_common)
This returns an error AttributeError: module 'serial' has no attribute 'tools'
Why is it not working in IDLE?
You need to import it first:
from serial.tools import list_ports
list_ports.main() # Same result as python -m serial.tools.list_ports
You can check out the source here
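If you just want to enumerate the ports from your own script instead of printing the module's command-line output, a short sketch using comports() from the same serial.tools.list_ports module:
from serial.tools import list_ports
for port in list_ports.comports():  # returns ListPortInfo objects
    print(port.device, port.description)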
You can simply try connecting to each possible port (COM0...COM255). Then add the ports with successful connections to a list. Here is my example:
import serial

def connectedCOMports():
    allPorts = []  # list of all possible COM ports
    for i in range(256):
        allPorts.append("COM" + str(i))
    ports = []  # a list of COM ports with devices connected
    for port in allPorts:
        try:
            s = serial.Serial(port)  # attempt to connect to the device
            s.close()
            ports.append(port)  # if it can connect, add it to the list
        except serial.SerialException:
            pass  # if it can't connect, don't add it to the list
    return ports

print(connectedCOMports())
When I ran this program, it printed ['COM7'] to the console. This represents the ESP32 microcontroller that I connected to my USB port.
Is there a way to start mitmproxy v.7.0.2 programmatically in the background?
ProxyConfig and ProxyServer have been removed since version 7.0.0, and the code below isn't working.
from mitmproxy.options import Options
from mitmproxy.proxy.config import ProxyConfig
from mitmproxy.proxy.server import ProxyServer
from mitmproxy.tools.dump import DumpMaster
import threading
import asyncio
import time

class Addon(object):
    def __init__(self):
        self.num = 1

    def request(self, flow):
        flow.request.headers["count"] = str(self.num)

    def response(self, flow):
        self.num = self.num + 1
        flow.response.headers["count"] = str(self.num)
        print(self.num)

# see source mitmproxy/master.py for details
def loop_in_thread(loop, m):
    asyncio.set_event_loop(loop)  # This is the key.
    m.run_loop(loop.run_forever)

if __name__ == "__main__":
    options = Options(listen_host='0.0.0.0', listen_port=8080, http2=True)
    m = DumpMaster(options, with_termlog=False, with_dumper=False)
    config = ProxyConfig(options)
    m.server = ProxyServer(config)
    m.addons.add(Addon())
    # run mitmproxy in the background, especially when integrated with another server
    loop = asyncio.get_event_loop()
    t = threading.Thread(target=loop_in_thread, args=(loop, m))
    t.start()
    # Other servers, such as a web server, might be started then.
    time.sleep(20)
    print('going to shutdown mitmproxy')
    m.shutdown()
from BigSully's gist
You can put your Addon class into your_script.py and then run mitmdump -s your_script.py. mitmdump comes without the console interface and can run in the background.
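For reference, a minimal addon script of that shape might look like this (the class name and counter logic are just illustrative; mitmdump loads whatever you list in addons):
# your_script.py -- run with: mitmdump -s your_script.py
class Counter:
    def __init__(self):
        self.num = 0
    def response(self, flow):
        self.num += 1
        flow.response.headers["count"] = str(self.num)
addons = [Counter()]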
We (mitmproxy devs) officially don't support manual instantiation from Python anymore because that creates a massive amount of support burden for us. If you have some Python experience you can probably find your way around.
What if my addon has additional dependencies?
Approach 1: pip install mitmproxy is still perfectly supported and gets you the same functionality as the standalone binaries. Bonus tip: You can run venv/bin/mitmproxy or venv/Scripts/mitmproxy.exe to invoke mitmproxy in your virtualenv without having your virtualenv activated.
Approach 2: You can install mitmproxy with pipx and then run pipx inject mitmproxy <your dependency name>. See https://docs.mitmproxy.org/stable/overview-installation/#installation-from-the-python-package-index-pypi for details.
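For example, keeping the placeholder dependency name from above:
pipx install mitmproxy
pipx inject mitmproxy <your dependency name>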
How can I debug mitmproxy itself?
If you are debugging from the command line (be it print statements or pdb), the easiest approach is to run mitmdump instead of mitmproxy, which provides the same functionality minus the console interface. Alternatively, you can use PyCharm's remote debug functionality, which also works while the console interface is active (https://github.com/mitmproxy/mitmproxy/blob/main/examples/contrib/remote-debug.py).
The example below should work fine with mitmproxy v7:
from mitmproxy.tools import main
from mitmproxy.tools.dump import DumpMaster
options = main.options.Options(listen_host='0.0.0.0', listen_port=8080)
m = DumpMaster(options=options)
# the rest is the same as in previous versions
from mitmproxy.addons.proxyserver import Proxyserver
from mitmproxy.options import Options
from mitmproxy.tools.dump import DumpMaster
options = Options(listen_host='127.0.0.1', listen_port=8080, http2=True)
m = DumpMaster(options, with_termlog=True, with_dumper=False)
m.server = Proxyserver()
m.addons.add(
    # addons here
)
m.run()
I think that should do it.
I am following https://docs.platformio.org/en/latest/boards/nordicnrf52/nrf52840_dk.html, but I don't actually have a DK; I have an nRF52840 "Dongle". Does anybody know whether that can work directly with PlatformIO? It has a built-in bootloader, but I don't think it emulates the right kind of programmer. I have nrfutil installed, but it wants a package (.zip) while PlatformIO produces .elf/.hex files, so I'm not sure how to connect these tools.
platformio.ini configuration:
[env:nrf52840_dongle]
platform = nordicnrf52
board = nrf52840_dk
framework = zephyr
board_build.zephyr.variant = nrf52840dongle_nrf52840
extra_scripts = dfu_upload.py
upload_protocol = custom
Add the following dfu_upload.py script to the project root:
import sys
import os
from os.path import basename

Import("env")

platform = env.PioPlatform()

def dfu_upload(source, target, env):
    firmware_path = str(source[0])
    firmware_name = basename(firmware_path)
    genpkg = "".join(["nrfutil pkg generate --hw-version 52 --sd-req=0x00 --application ", firmware_path, " --application-version 1 firmware.zip"])
    dfupkg = "nrfutil dfu serial -pkg firmware.zip -p COM14 -b 115200"
    print(genpkg)
    os.system(genpkg)
    os.system(dfupkg)
    print("Uploading done.")

# Custom upload command and program name
env.Replace(PROGNAME="firmware", UPLOADCMD=dfu_upload)
Add the nrfutil location to your system PATH variable.
Before uploading the firmware, switch the dongle to DFU mode (reset button).
Set the dongle's COM port number in the line dfupkg = "nrfutil dfu serial -pkg firmware.zip -p COM14 -b 115200" in dfu_upload.py.
You can find lots of examples here: Zephyr GitHub
You can use nrfutil pkg generate to convert the hex files into a package:
https://infocenter.nordicsemi.com/topic/ug_nrfutil/UG/nrfutil/nrfutil_pkg.html
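For example, the two commands could be run by hand roughly like this (the firmware path is an assumption based on PlatformIO's default build directory for the nrf52840_dongle environment above, and COM14 is the dongle's serial port from the upload script):
nrfutil pkg generate --hw-version 52 --sd-req 0x00 --application .pio/build/nrf52840_dongle/firmware.hex --application-version 1 firmware.zip
nrfutil dfu serial -pkg firmware.zip -p COM14 -b 115200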
FYI, you might not get much benefit from using PlatformIO since you don't have a debugging interface. Depending on the framework you're using, there might be other options, like this documentation for Zephyr:
https://docs.zephyrproject.org/latest/boards/arm/nrf52840dongle_nrf52840/doc/index.html
My operating system is Manjaro 17.1.12, the Python version is 3.7.0, and the Supervisor version is 3.3.4.
I have a Python script that just shows a notification. The code is:
import os
os.system('notify-send hello')
The supervisor config is:
[program:test_notify]
directory=/home/zz
command=python -u test_notify.py
stdout_logfile = /home/zz/supervisord.d/log/test_notify.log
stderr_logfile = /home/zz/supervisord.d/log/test_notify.log
But when I run the Python script under Supervisor, it doesn't show the notification.
The proper environment variables need to be set (DISPLAY and DBUS_SESSION_BUS_ADDRESS). You can do this in several different ways, depending on your needs, e.g.:
a) per subprocess
import os
os.system('DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus notify-send hello')
b) in script globally
import os
os.environ['DISPLAY'] = ':0'
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/1000/bus'
os.system('notify-send hello')
c) in supervisor config per program
[program:test_notify]
;
; your variables
;
user=john
environment=DISPLAY=":0",DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
The above examples have a couple of assumptions (you may want to change these settings accordingly):
the script is run as user john
the UID of user john is 1000
notifications appear on display :0
To run the script as root and show a notification for a regular user, use sudo as described in the Arch wiki article Desktop_notifications.
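For example, a sketch of that sudo approach from a root-owned script, reusing the same user/UID/display assumptions as above:
import os
# run as root, but deliver the notification to user "john" (UID 1000) on display :0
os.system('sudo -u john DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus notify-send hello')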
I designed a GUI application using wxPython that communicates with a local database (MongoDB) located in the same folder. My main application uses a relative path to the database daemon to start it every time the GUI is launched.
This is the main.py:
import wx
import mongodb

class EVA(wx.App):
    # wxPython GUI here
    pass

if __name__ == "__main__":
    myMongodb = mongodb.Mongodb()
    myMongodb.start()
    myMongodb.connect()
    app = EVA(0)
    app.MainLoop()
This is the mongodb.py module:
from pymongo import Connection
import subprocess, os, signal

class Mongodb():
    pid = 0

    def start(self):
        path = "/mongodb-osx-x86_64-1.6.5/bin/mongod"
        data = "/data/db/"
        cmd = path + " --dbpath " + data
        MyCMD = subprocess.Popen([cmd], shell=True)
        self.pid = MyCMD.pid

    def connect(self):
        try:
            connection = Connection(host="localhost", port=27017)
            db = connection['Example_db']
            return db
        except Exception as inst:
            print "Database connection error: ", inst

    def stop(self):
        os.kill(self.pid, signal.SIGTERM)
Everything works fine from the terminal. However, when I used py2app to make a standalone version of my program on Mac OS (OS v10.6.5, Python v2.7), I was able to launch the GUI but couldn't start the database. It seems py2app changed the location of the MongoDB executable folder and broke my code.
I use the following parameters with py2app:
$ py2applet --make-setup main.py
$ rm -rf build dist
$ python setup.py py2app --iconfile /icons/main_icon.icns -r /mongodb-osx-x86_64-1.6.5
How can I force py2app to leave my application structure intact?
Thanks.
Py2app changes the current working directory to the foo.app/Contents/Resources folder within the app bundle when it starts up. It doesn't seem to be the case from the code you show above, but if you have any paths that depend on the CWD (including relative pathnames), you'll have to deal with that somehow. One common way is to copy the other things you need into that folder within the application bundle, so it is truly a standalone bundle that doesn't depend on its location in the filesystem and, hopefully, doesn't depend on the machine it is running on either.
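One hedged way to handle it (the names here are illustrative, not your actual layout) is to resolve paths against the bundle's Resources directory instead of the CWD; py2app's bootstrap normally exposes that directory through the RESOURCEPATH environment variable and sets sys.frozen:
import os, sys
# prefer py2app's RESOURCEPATH when running frozen; otherwise use the script's own folder
if getattr(sys, 'frozen', False):
    resources_dir = os.environ['RESOURCEPATH']
else:
    resources_dir = os.path.dirname(os.path.abspath(__file__))
mongod_path = os.path.join(resources_dir, 'mongodb-osx-x86_64-1.6.5', 'bin', 'mongod')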