I have been trying to use tcpdump to capture the SAML request to the server.
I am interested in the SAMLResponse so I can decode it and get the XML, but tcpdump seems to truncate the output so I miss a lot of data:
tcpdump -A -nnSs 0 'tcp port 8080 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
This should capture the full HTTP request/response bodies, which it does, but the SAMLResponse is truncated:
SAMLResponse=PHNhbWxwOlJlc3BvbnNlIElEPSJfMDAyMDg3MmQtZTlmMi00ZGU5LTkxMGYtM2NiNDc1MjVkNTk2IiBWZXJzaW9uPSIyLjAiIElzc3VlSW5zdGFudD0iMjAxOS0xMS0xM1QyMTo0ODo0Mi42ODlaIiBEZXN0aW5hdG
If I decode that I get:
samlp:Response ID="_0020872d-e9f2-4de9-910f-3cb47525d596" Version="2.0" IssueInstant="2019-11-13T21:48:42.689Z" Destinat
An incomplete output. If I add -w /tmp/out.pcap I am able to see the entire SAMLResponse in Wireshark, so what am I missing here?
I am on Linux and would like to work with this from the command line. What I don't understand is why I sometimes get more characters than others.
I am not sure if the rest of the data arrives in a separate packet; if it does, how do I join them in tcpdump?
thanks
An alternative is to use tcpflow. tcpdump prints packet by packet and does not reassemble TCP streams, so a response that spans several segments appears truncated; tcpflow reconstructs the stream for you:
tcpflow -c 'port 8080'
Extract from man tcpflow:
DESCRIPTION
tcpflow is a program that captures data transmitted as part of TCP
connections (flows), and stores the data in a way that is
convenient for protocol analysis or debugging. Rather than showing
packet-by-packet information, tcpflow reconstructs the actual data
streams and stores each flow in a separate file for later analysis.
Or you can use tshark.
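For example, a hedged tshark one-liner that prints reassembled HTTP bodies; the interface name eth0 is an assumption to adapt to your setup:

tshark -i eth0 -f 'tcp port 8080' -Y 'http.file_data' -T fields -e http.file_data

Since tshark reassembles the TCP stream before decoding HTTP, the SAMLResponse parameter should come out in one piece instead of being cut at packet boundaries.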
Related
I am trying to capture live packets from a network interface using Pycapa from Metron, but when I try to consume the messages from the topic, I receive the following strange characters:
!i�f�U�_� ��mP�pO���62.��a#;�k��o��0�?
I am not using the Confluent platform. In this case, can someone guide me to a solution?
Thank you
Based on the docs it looks like pycapa stores the raw network packet data, which is probably what you're seeing here.
If you look at the examples you'll see there's one for consuming this raw data and piping it into something like tshark, which can read the raw data and render it readable:
pycapa --consumer \
    --kafka-broker localhost:9092 \
    --kafka-topic ciscotopic1 \
    --max-packets 10 \
    | tshark -i -
According to the documentation there are two ways to send log information to the SwisscomDev ELK service.
Standard way via STDOUT: Every output to stdout is sent to Logstash
Directly send to Logstash
I'm asking about way 2. How is this achieved, and in particular, what input format is expected?
We're using Monolog in our PHP buildpack based application and using its stdout_handler is working fine.
I tried the GelfHandler (connection refused) and the SyslogUdpHandler (no error, but no result), both configured to use logstashHost and logstashPort from VCAP_SERVICES as the endpoint to send logs to.
Binding works and the env variables are set, but I have no idea how to send log information from our application in a format compatible with the SwisscomDev ELK service's Logstash endpoint.
Logstash is configured with a tcp input, which is reachable via logstashHost:logstashPort. The tcp input is configured with its default codec, which is the line codec (source code; not the plain codec as stated in the documentation).
The payload of the log event should be encoded in JSON so that the fields are automatically recognized by Elasticsearch. If this is the case, the whole log event is forwarded without further processing to Elasticsearch.
If the payload is not JSON, the whole log line will end up in the field message.
For your use case with Monolog, I suggest using the SocketHandler (pointed at logstashHost:logstashPort) in combination with the LogstashFormatter, which takes care of the JSON encoding and emits line-delimited log events.
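As a minimal sketch in PHP (assuming Monolog 1.x; logstashHost and logstashPort stand in for the actual values from your VCAP_SERVICES binding):

<?php
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\SocketHandler;
use Monolog\Formatter\LogstashFormatter;

// Open a TCP connection to the Logstash endpoint from the service binding.
$handler = new SocketHandler('tcp://logstashHost:logstashPort');

// LogstashFormatter emits one JSON document per line, which matches
// the line codec on the Logstash tcp input.
$handler->setFormatter(new LogstashFormatter('myapp'));

$logger = new Logger('app');
$logger->pushHandler($handler);
$logger->info('Log event shipped directly to Logstash');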
My rsyslog setup logs correctly locally, but I also want to receive the logs remotely, so I added the rule:
*.* @@myIP:5141
to the end of my rsyslog.conf
To receive the output, I'm running logstash with the configuration
input { tcp { port => 5141 } }
output { stdout {} }
Logstash expects UTF-8 encoding; however, I get the error:
Received an event that has a different character encoding than you configured
The messages themselves seem to be garbled, or a mix of encodings, for example:
\u0016\u0003\u0002\u0000V\u0001\u0000\u0000R\u0003\u0002S\xB1R\xAB5K\xF6\\\xB9\xB2\xB4\xB1\xAE0\t\u007F\xDF`5\xF6\u0015\xC8)H\xD7H\xCF+&\xD5T5\u0000\u0000$\u00003\u0000E\u00009\u0000\x88\u0000\u0016\u00002\u0000D\u00008\u0000\x87\u0000\u0013\u0000f\u0000/\u0000A\u00005\u0000\x84\u0000
Note that some entries are \u00 escapes while others are \x escapes, and there are even multiple backslashes.
I was wondering if I messed up the settings somehow, or if there is something between me and the server that is mangling the messages?
I have also tried the syslog Logstash input, which gives the same result.
Another example:
\u0016\u0003\u0002\u0000V\u0001\u0000\u0000R\u0003\u0002S\xB1RiZ^\xC3\xD9\u001Cj\a\xD4\xE0\xECr\x8E\xAC\xF5\u001A\xB9+\u07B9\xE5\xF9\xA3''z\u0018}9\u0000\u0000$\u00003\u0000E\u00009\u0000\x88\u0000\u0016\u00002\u0000D\u00008\u0000\x87\u0000\u0013\u0000f\u0000/\u0000A\u00005\u0000\x84\u0000
EDIT:
I found the source of my problem, and it was encryption related. Unfortunately I can't disclose what I did to fix it; suffice it to say that John Petrone's answer below is a good start for similar problems that future readers may experience.
So that magic string you're getting back that looks like broken encoding is actually an SSL handshake request (the leading \u0016\u0003\u0002 bytes are the record header of a TLS handshake).
I suspect what you've done is (like I just did) misconfigure the tcp input in Logstash. Specifically, I forgot to add ssl_enable => true, so it was listening for plain TCP, got an SSL handshake, and dutifully recorded it as garbage.
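For reference, a minimal sketch of the corrected input; the certificate paths are placeholders for whatever your own setup uses:

input {
  tcp {
    port => 5141
    ssl_enable => true
    ssl_cert => "/path/to/server.crt"
    ssl_key => "/path/to/server.key"
  }
}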
The problem is that one of the syslog sources you are ingesting is sending data in a non-UTF-8 format, which causes problems for Logstash, as UTF-8 is what it expects. You've basically got 3 courses of action:
1. Have Rsyslog correct this for you: use the Rsyslog mmutf8fix module to fix invalid UTF-8 sequences: http://www.rsyslog.com/doc/mmutf8fix.html
2. Change Logstash to use a more appropriate charset: you can change the default charset for the plain codec: http://logstash.net/docs/1.4.2/codecs/plain . You will need to experiment a bit; I'd check here for a starting point: https://logstash.jira.com/browse/LOGSTASH-1047
3. Change your source to output UTF-8: not knowing the sources being collected by Rsyslog, I can't comment on what it would take to make this change.
I'd start with option 1 and if that does not work move to option 2.
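For option 1, a minimal rsyslog snippet (assuming an rsyslog version that ships the module) would be:

module(load="mmutf8fix")
action(type="mmutf8fix")

For option 2, the charset can be overridden on the input's codec; ISO-8859-1 below is only an example, to be adapted to whatever your sources actually emit:

input {
  tcp {
    port => 5141
    codec => plain { charset => "ISO-8859-1" }
  }
}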
Based on @docwhat's answer.
nano logstash/pipeline/logstash.conf
# Or
nano /path/to/logstash.conf
input {
  beats {
    port => 5000
    ssl => false
  }
  #tcp {
  #  port => 5000
  #}
}
My end goal is to see the plist being sent between my iPhone and my Mac (I know it's a plist because I can see bplist00 in the hexdump).
I have an app sending data from my iPhone to my Mac via a Bonjour service.
I use tcpdump to capture the traffic, then try to transform the payload hexdump into binary so I can convert it into a plist text file.
Here are my steps:
Make sure the iPhone and Mac are connected, and have the command ready to send.
Run tcpdump: sudo tcpdump -vSs 0 -A -i en1 -w Dump.pcap 'tcp port 57097' on my wireless network (I used Bonjour Browser to find which port the service is registered on), then hit the send command on the phone.
Convert the pcap file to a text file: tshark -V -r Dump.pcap > Dump.txt (the end result is this)
Manually remove the headers and other info from the text file so that I am just left with the payload (we now have this in the file)
Do a reverse hex dump to convert the file into binary: xxd -r Dump.txt Dump1.txt
Convert the binary plist to a text file: plutil -convert xml1 Dump1.txt
However, step 6 is where things fail: Dump1.txt: Property List error: Conversion of string failed. The string is empty. / JSON error: JSON text did not start with array or object and option to allow fragments not set. (although it could have been a mistake in an earlier step). And I'm not sure why it reports JSON errors when I asked for an XML conversion?
This kind of low-level network capturing is not something I am normally accustomed to (I'm usually higher up the stack with Fiddler or Charles, but since this isn't over HTTP I need to go lower down the stack).
Can someone please tell me if what I am doing is correct, or whether there is an easier way to do this?
How can I go about capturing the plist being sent to my Mac?
My guess is your issue is somewhere around step 4, where you manually edit the capture. I was just trying something similar, using Charles rather than tcpdump, and got the exact same error with a payload that I knew contained a plist. I'm not sure why we got the JSON error message either.
I was able to resolve it by saving the binary-encoded plist request body from Charles directly to a file (Charles has a "Save Request" menu option) and then running plutil -convert xml1 FILENAME -o - on it, which worked just fine.
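If you want to stay with tcpdump/tshark and skip the error-prone manual editing, a hedged alternative (assuming the conversation of interest is TCP stream 0, and that the banner line counts match your tshark version) is to let tshark reassemble the stream and dump it as raw hex:

# Dump the reassembled stream as hex, strip tshark's banner/trailer lines,
# then convert the hex back to binary and the binary plist to XML.
tshark -r Dump.pcap -q -z follow,tcp,raw,0 | sed -e '1,6d' -e '$d' | xxd -r -p > payload.bin
plutil -convert xml1 payload.bin -o payload.xml

Note that the reassembled stream may still contain application framing around the bplist00 payload, so some trimming of payload.bin could still be needed.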
I am looking for pointers on the best approach to processing incoming emails to a certain vhost and calling an external script with the email data as parameters; basically, to allow email to be sent to a certain "private" email address at a host, which then auto-inserts something into that site's database. I currently have Exim set up as the mail handler.
You have to follow Exim's single-file configuration structure. In the routers section, write a custom router that hands the email to your desired PHP script; in the transports section, write a custom transport that delivers to the script using curl. Add the following configuration to your /etc/exim.conf file:
############ROUTERS
runscript:
driver = accept
transport = run_script
unseen
no_expn
no_verify
############TRANSPORT
run_script:
debug_print = "T: run_script for $local_part#$domain"
driver = pipe
command = /home/bin/curl http://my.domain.com/mailTest.php --data-urlencode $original_local_part@$original_domain
Where mailTest.php is your target script.
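As a hedged sketch of the receiving side (this mailTest.php is hypothetical; with the transport above, curl sends the urlencoded recipient address as the POST body):

<?php
// Hypothetical mailTest.php: read the urlencoded address that the
// Exim transport posts via curl, then act on it.
$recipient = urldecode(file_get_contents('php://input'));
// e.g. insert a row into the site's database here
error_log("Exim routed mail for: " . $recipient);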
Procmail is a good generic answer. If your needs are very specific, you could hook in your own script directly from your .forward (or Exim's corresponding construct -- I can't remember exactly how it differs), but oftentimes wrapping your own script inside a simple .procmailrc helps you avoid a bunch of iffy details of email delivery and lets you concentrate on the actual processing.
:0
* ^Subject: secretpassword adduser \/[A-Z]+
| echo "INSERT INTO users VALUES ('$MATCH');" | mysql -D users