Dump packets collected with mitmproxy

I was using mitmproxy to watch the traffic. Now I want to store all of those packets in a file. Is that possible, or should I use mitmdump next time?

It does not matter if you use mitmproxy or mitmdump.
Both programs support nearly the same functionality, except that mitmdump is a command-line tool that does not show a GUI.
To save the captured web traffic, start mitmproxy or mitmdump with the -w command-line option:
mitmproxy -w outfile
mitmdump -w outfile
https://docs.mitmproxy.org/stable/tools-mitmdump/
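The saved flows can later be read back into either tool with the -r option:
mitmproxy -r outfile
mitmdump -r outfile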

Related

How to do netcat with gawk networking?

I'm on a machine that has a very old netcat version which doesn't support the HTTP proxy option (-X), and I need to run netcat through an HTTP proxy on this machine (to proxify ssh with ProxyCommand).
I've tried to send CONNECT <host>:<port> with the old netcat before passing stdin/stdout to ssh, but without success.
I've found that gawk has networking support according to its documentation, and I've tried to make it work like netcat, but without success.
How can I send the CONNECT command first, read the proxy's response, and then enter a two-way read/write loop between stdin/stdout and the connection?
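For reference, gawk's networking support works through special /inet/tcp/LOCAL-PORT/HOST/PORT filenames used with the |& coprocess operator. A minimal sketch of just the CONNECT handshake might look like this (the proxy and destination names are placeholders); note that gawk's record-oriented, blocking I/O is a poor fit for the raw two-way relay ssh needs afterwards, which is likely why this approach keeps failing:
#!/usr/bin/gawk -f
BEGIN {
    # Placeholder proxy and destination; local port 0 means "pick any"
    svc = "/inet/tcp/0/proxy.example.com/3128"
    # HTTP CONNECT handshake: request line plus blank line, CRLF-terminated
    # (print appends ORS = "\n", completing the final "\r\n")
    print "CONNECT ssh.example.com:22 HTTP/1.0\r\n\r" |& svc
    # Read the proxy's status line, e.g. "HTTP/1.0 200 Connection established"
    svc |& getline status
    print "proxy replied: " status > "/dev/stderr"
    # A real relay would now have to interleave reads from stdin with
    # reads from svc, but getline blocks and gawk splits input into
    # newline-delimited records, so binary ssh traffic cannot pass
    # through cleanly.
    close(svc)
}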

How to capture OpenFlow packets using tshark

I have a system with Arch Linux running OVS, and a controller running on the same box. I have the following setup:
ovs-vsctl set-controller br-int tcp:192.168.1.201:6633
I was hoping to use tshark (version 2.2.8) to capture the OpenFlow traffic using the following command:
sudo tshark -i br-int -d tcp.port==6633,openflow -O openflow_v4
It dumps all the flows flowing through the system, but no OpenFlow packetIn messages. I did confirm a packetIn message was received by the controller (pasting the last few lines of its log):
EVENT ofp_event->EventOFPPacketIn
packet in 1237689849893337 b8:27:xx:xx:yy:yy:zz ff:ff:ff:ff:ff:ff:3
I also understand from the tshark documentation that by default it uses port 6653 for OpenFlow:
tshark -G decodes | grep -i openflow
tcp.port 6653 openflow
However, I was under the impression that I could still look for OpenFlow traffic by using the following capture command:
https://wiki.wireshark.org/OpenFlow
tshark tcp port 6633
This also doesn't work, as no events are captured, though I can see the controller receiving lots of events.
I would greatly appreciate any help here.
My guess would be that you're not listening on the correct interface. Try the following:
sudo tshark -i any -d tcp.port==6633,openflow -O openflow_v4
If that doesn't work, it's possible your controller and switch are not communicating using OpenFlow 1.3. To make sure you see everything, try:
sudo tshark -i any -d tcp.port==6633
Details
Unless there's something particular in your setup, packets from Open vSwitch to the controller and back do not go through the bridge. Since both ends of the communication are on the same host, the packets are probably going through the loopback interface:
sudo tshark -i lo -d tcp.port==6633
I was able to reproduce your setup and issue to confirm my answer with Open vSwitch 2.5.2 and Floodlight (master branch). I can see packets passing through on the loopback interface with both tcpdump and tshark.

Running Snort as Service

I am running Snort on Windows to sniff a single interface. I wanted to sniff two interfaces with Snort, and I learned I have to fire the same command twice for different interfaces.
Now I want to run it as a service, and I used this command:
c:\snort\bin\snort.exe /SERVICE /INSTALL -i 1 -l c:\snort\log -c c:\snort\etc\snort.conf
This creates a service for Snort.
So, how do I run Snort as a service for multiple interfaces?
Any help will be appreciated.
I'm not all that familiar with Snort on Windows, but if you're able to do it, it should work similarly to Linux. You would have to bridge the interfaces (Windows 7 steps) and use the bridge with -i. If you bridge your two interfaces and then run snort -W and see the bridge show up, you should just be able to use that to sniff on both interfaces. I have never tested this, but in theory it should work.
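For illustration, assuming the bridge shows up in snort -W with interface index 3 (the index here is hypothetical; use whatever snort -W actually reports), the install command from the question becomes:
c:\snort\bin\snort.exe -W
c:\snort\bin\snort.exe /SERVICE /INSTALL -i 3 -l c:\snort\log -c c:\snort\etc\snort.conf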

SFTP inline put without interaction

I am trying to automate an application deployment, and as part of this I need to upload a file to a server. I have created a minimal user and configured chroot for the SFTP server, but I can't work out how to upload a file non-interactively.
At present I am doing scp myfile buildUser@myserver.com:newBuilds/
I tried sftp buildUser@myserver.com myfile (newBuilds is the chroot dir), but this didn't upload anything, though it did connect.
The reason for favouring this approach and NOT using scp is that it's a lot more difficult to restrict scp access (from the information I have learned).
If you are using OpenSSH server, chrooting works for both SCP and SFTP.
For instructions see:
https://www.techrepublic.com/article/chroot-users-with-openssh-an-easier-way-to-confine-users-to-their-home-directories/
So I believe your question is irrelevant.
Anyway, sftp (assuming OpenSSH) is not really designed for command-line-only upload. You typically use the -b switch to specify a batch file with a put command (note that -b must come before the destination):
sftp -b batchfile buildUser@myserver.com
With batchfile containing:
put /local/path /remote/path
If you really need command-line-only upload, see:
Single line sftp from terminal or
Using sftp like scp
So basically, you can use various forms of input redirection like:
sftp buildUser@myserver.com <<< 'put /local/path /remote/path'
Or simply use scp instead of sftp. Most servers support both. And actually, OpenSSH scp has supported the SFTP protocol since 8.7.
Since OpenSSH 9.0, it even uses SFTP by default. In 8.7 through 8.9, SFTP has to be selected via the -s parameter. See my answer to the already mentioned Single line sftp from terminal.
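So with an 8.7 through 8.9 client, the scp command from the question can be sent over SFTP like this (a sketch based on the flags above; on 9.0+ the -s is unnecessary):
scp -s myfile buildUser@myserver.com:newBuilds/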
You can pass inline commands to SFTP like this:
sftp -o PasswordAuthentication=no user@host <<END
lcd /path/to/local/dir
cd /path/to/remote/dir
put file
END
I resolved this issue by approaching it from a different side. I tried configuring chroot for SFTP but could not get it to work. My solution was to use rssh and only allow scp. This works for me because the user I am trying to restrict is a known and authenticated user.

What is a faster alternative to Python's http.server (or SimpleHTTPServer)?

Python's http.server (or SimpleHTTPServer for Python 2) is a great way of serving the contents of the current directory from the command line:
python -m http.server
However, as far as web servers go, it's very slooooow...
It behaves as though it's single-threaded, and occasionally causes timeout errors when loading JavaScript AMD modules using RequireJS. It can take five to ten seconds to load a simple page with no images.
What's a faster alternative that is just as convenient?
http-server for node.js is very convenient, and is a lot faster than Python's SimpleHTTPServer. This is primarily because it uses asynchronous IO for concurrent handling of requests, instead of serialising requests.
Installation
Install node.js if you haven't already. Then use the node package manager (npm) to install the package, using the -g option to install globally. If you're on Windows you'll need a prompt with administrator permissions, and on Linux/OSX you'll want to sudo the command:
npm install http-server -g
This will download any required dependencies and install http-server.
Use
Now, from any directory, you can type:
http-server [path] [options]
Path is optional, defaulting to ./public if it exists, otherwise ./.
Options are [defaults]:
-p The port number to listen on [8080]
-a The host address to bind to [localhost]
-i Display directory index pages [True]
-s or --silent Silent mode won't log to the console
-h or --help Displays help message and exits
So to serve the current directory on port 8000, type:
http-server -p 8000
I recommend Twisted (http://twistedmatrix.com):
an event-driven networking engine written in Python and licensed under the open source MIT license.
It's cross-platform and was preinstalled on OS X 10.5 to 10.12. Amongst other things you can start up a simple web server in the current directory with:
twistd -no web --path=.
Details
Explanation of Options (see twistd --help for more):
-n, --nodaemon don't daemonize, don't use default umask of 0077
-o, --no_save do not save state on shutdown
"web" is a Command that runs a simple web server on top of the Twisted async engine. It also accepts command line options (after the "web" command - see twistd web --help for more):
--path=<path>  is either a specific file or a directory to be
               set as the root of the web server. Use this if you
               have a directory full of HTML, cgi, php3, epy, or rpy
               files or any other files that you want to be served up
               raw.
There are also a bunch of other commands such as:
conch A Conch SSH service.
dns A domain name server.
ftp An FTP server.
inetd An inetd(8) replacement.
mail An email service
... etc
Installation
Ubuntu
sudo apt-get install python-twisted-web (or python-twisted for the full engine)
Mac OS-X (comes preinstalled on 10.5 - 10.12, or is available in MacPorts and through Pip)
sudo port install py-twisted
Windows
installer available for download at http://twistedmatrix.com/
HTTPS
Twisted can also utilise security certificates to encrypt the connection. Use this with your existing --path and --port (for plain HTTP) options.
twistd -no web -c cert.pem -k privkey.pem --https=4433
Go 1.0 includes an HTTP server and utilities for serving files with a few lines of code.
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    fmt.Println("Serving files in the current directory on port 8080")
    http.Handle("/", http.FileServer(http.Dir(".")))
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Run this source using go run myserver.go, or build an executable with go build myserver.go.
Try webfs; it's tiny and doesn't depend on having a platform like node.js or Python installed.
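A minimal invocation, assuming the usual package whose daemon binary is webfsd (flags worth verifying against the webfsd man page), would serve the current directory on port 8000:
webfsd -p 8000 -r .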
If you use Mercurial, you can use the built in HTTP server. In the folder you wish to serve up:
hg serve
From the docs:
export the repository via HTTP
Start a local HTTP repository browser and pull server.
By default, the server logs accesses to stdout and errors to
stderr. Use the "-A" and "-E" options to log to files.
options:
-A --accesslog name of access log file to write to
-d --daemon run server in background
--daemon-pipefds used internally by daemon mode
-E --errorlog name of error log file to write to
-p --port port to listen on (default: 8000)
-a --address address to listen on (default: all interfaces)
--prefix prefix path to serve from (default: server root)
-n --name name to show in web pages (default: working dir)
--webdir-conf name of the webdir config file (serve more than one repo)
--pid-file name of file to write process ID to
--stdio for remote clients
-t --templates web templates to use
--style template style to use
-6 --ipv6 use IPv6 in addition to IPv4
--certificate SSL certificate file
use "hg -v help serve" to show global options
Here's another: a Chrome extension.
Once installed, you can run it by creating a new tab in Chrome and clicking the Apps button near the top left.
It has a simple GUI. Click "Choose Folder", then click the http://127.0.0.1:8887 link.
https://www.youtube.com/watch?v=AK6swHiPtew
I found python -m http.server unreliable; some responses would take seconds.
Now I use a server called Ran https://github.com/m3ng9i/ran
Ran: a simple static web server written in Go
Also consider devd, a small webserver written in Go. Binaries for many platforms are available here.
devd -ol path/to/files/to/serve
It's small, fast, and provides some interesting optional features like live-reloading when your files change.
If you have PHP installed, you could use the built-in server:
php -S 0:8080
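Here 0 is shorthand for 0.0.0.0, so the server listens on all interfaces; to bind only locally, use:
php -S 127.0.0.1:8080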
give polpetta a try ...
npm install -g polpetta
then you can
polpetta ~/folder
and you are ready to go :-)
Using Servez as a server
Download Servez
Install it, run it
Choose the folder to serve
Pick "Start"
Go to http://localhost:8080 or pick "Launch Browser"
Note: I threw this together because Web Server for Chrome is going away (Chrome is removing support for apps), and because I support art students who have zero experience with the command line.
Yet another Node-based simple command-line server
https://github.com/greggman/servez-cli
Written partly in response to http-server having issues, particularly on Windows.
Installation
Install node.js then
npm install -g servez
Usage
servez [options] [path]
With no path it serves the current folder.
By default it serves index.html for folder paths if it exists. It serves a directory listing for folders otherwise. It also serves CORS headers. You can optionally turn on basic authentication with --username=somename --password=somepass and you can serve https.
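For example, serving the current folder with the basic-auth options mentioned above (credentials are placeholders):
servez --username=build --password=secret .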
I like live-server. It is fast and has a nice live-reload feature, which is very convenient during development.
Usage is very simple:
cd ~/Sites/
live-server
By default it creates a server with IP 127.0.0.1 and port 8080.
http://127.0.0.1:8080/
If port 8080 is not free, it uses another port:
http://127.0.0.1:52749/
http://127.0.0.1:52858/
If you need to see the web server on other machines in your local network, you can check what is your IP and use:
live-server --host=192.168.1.121
And here is a script that automatically grabs the IP address of the default interface. It works on macOS only.
If you put it in .bash_profile, the live-server command will automatically launch the server with the correct IP.
# **
# Get IP address of default interface
# **
function getIPofDefaultInterface()
{
    local __resultvar=$1
    # Get the default route interface
    local iface=$(route -n get 0.0.0.0 2>/dev/null | awk '/interface: / {print $2}')
    if [ -n "$iface" ]; then
        # Get the IP of the default route interface
        local __IP=$(ipconfig getifaddr "$iface")
        eval $__resultvar="'$__IP'"
    else
        # No default route found
        eval $__resultvar="'0.0.0.0'"
    fi
}
alias getIP='getIPofDefaultInterface IP; echo $IP'

# **
# live-server
# https://www.npmjs.com/package/live-server
# **
alias live-server='getIPofDefaultInterface IP && live-server --host=$IP'
I've been using filebrowser for the past couple of years and it is the best alternative I have found.
Features I love about it:
Cross-platform: it supports Linux, macOS, and Windows. It also supports Docker.
Downloading stuff is a breeze. It can automatically convert a folder into zip, tar.gz, etc. for transferring folders.
You can grant file or folder access to every user.
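A minimal invocation, assuming the current filebrowser CLI (flags worth double-checking against filebrowser --help), would serve a directory on port 8080:
filebrowser -r /path/to/serve -p 8080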