Sniffing Bonjour traffic between iPhone and Mac

My end goal is to see the plist being sent between my iPhone and my Mac (I know it's a plist because I can see bplist00 in the hexdump).
I have an app sending data from my iPhone to my Mac via a Bonjour service.
I use tcpdump to capture the traffic, then try to transform the payload hexdump back into binary so I can convert it into a plist text file.
Here are my steps:
1. Make sure the iPhone and Mac are connected, and have the command ready to send.
2. Run tcpdump on my wireless network: sudo tcpdump -vSs 0 -A -i en1 -w Dump.pcap 'tcp port 57097' (I used Bonjour Browser to find which port the service is registered on), then hit the send command on the phone.
3. Convert the pcap file to a text file: tshark -V -r Dump.pcap > Dump.txt
4. Manually remove the headers and other info from the text file, so that I am left with just the payload hexdump.
5. Do a reverse hex dump to convert the file into binary: xxd -r Dump.txt Dump1.txt
6. Convert the binary plist to a text file: plutil -convert xml1 Dump1.txt
However, step 6 is where things fail: Dump1.txt: Property List error: Conversion of string failed. The string is empty. / JSON error: JSON text did not start with array or object and option to allow fragments not set. (although the mistake could have been made at an earlier step). And I'm not sure why it reports a JSON error when I asked for an XML conversion?
This low-level network capturing is not something I am normally accustomed to (I normally work higher up the stack with Fiddler or Charles, but since this traffic isn't HTTP I need to go lower down the stack).
Can someone please tell me if what I am doing is correct, or whether there is an easier way to do this?
How can I go about capturing the plist being sent to my mac?

My guess is that your issue is somewhere around step 4, where you manually edit the capture. I was just trying something similar, using Charles rather than tcpdump, and got the exact same error with a payload that I knew contained a plist. I'm not sure why we got the JSON error message; it appears plutil falls back to attempting a JSON parse after the plist parse fails, and reports both.
I was able to resolve it by saving the binary-encoded plist request body directly from Charles to a file (Charles has a "Save Request" menu option), then running plutil -convert xml1 FILENAME -o - on it, and it worked just fine.
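If you want to stick with the tcpdump capture, a way to avoid the error-prone manual trimming in step 4 is to have tshark extract only the raw TCP payload bytes. A rough sketch, assuming the capture holds a single one-way transfer (older tshark versions emit colon-separated hex, which the tr removes):
tshark -r Dump.pcap -T fields -e tcp.payload | tr -d ':\n' | xxd -r -p > payload.bin
plutil -convert xml1 payload.bin -o -
Note this naively concatenates the payload of every captured packet, so response traffic mixed into the capture would corrupt the result.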

Related

IBM Aspera get size of file before download

I am using Aspera Connect on Mac to download files from a server. It works fine in the terminal, but I was wondering if, before downloading a file, I could read its size first and then decide whether to download it or not. I found the flag
'--precalculate-job-size'
but it only does that right before the download, and there's no way to stop the download from there.
The current command I use is this:
/Applications/Aspera\ Connect.app/Contents/Resources/ascp -QT -l 200M -P33001 -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" emp_ext3@fasp.ebi.ac.uk:/{asp_path} {local_path}
The resources for the flags are here:
https://download.asperasoft.com/download/docs/ascp/2.7/html/index.html
To answer your question, without going too much into the details:
If you want to display the size of elements on an Aspera server to which you have access, you can use the command-line tool "Amelia" (mlia), see:
https://www.rubydoc.info/gems/asperalm
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem br /10002/data/100_movie_gc.mrcs
There are plenty of options, like --format=csv --fields=size; see the example below.
Note that this displays individual file sizes, but not recursive folder sizes.
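For instance, a hedged sketch of the same br call restricted to a CSV size column (same hypothetical path and key as above):
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem --format=csv --fields=size br /10002/data/100_movie_gc.mrcs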
A few other things:
You are not exactly using "Connect", but rather the "ascp" command line. Connect refers to the browser extension and lightweight app, while ascp is the implementation of the Aspera FASP transfer protocol, found in basically all Aspera products.
The latest ascp documentation can be found here: https://www.ibm.com/support/knowledgecenter/SSL85S_3.9.6/hsts_admin_linux/dita/hsts_admin_linux_ascp_usage.html
Did you know you can also use the free client?
https://downloads.asperasoft.com/en/downloads/2
It includes ascp as well, along with a graphical user interface.

What, and why, does GNU gettext send over the public internet?

I was trying out gettext and ran msginit --locale=en --input=messages.po, and this is what I see:
[... blah ...]
Is the following your email address?
localUserName@localHostName
Please confirm by pressing Return, or enter your email address.
Retrieving http://translationproject.org/team/index.html... done.
A translation team for your language (en) does not exist yet.
If you want to create a new translation team for en or en_PG, please visit
http://www.iro.umontreal.ca/contrib/po/HTML/teams.html
http://www.iro.umontreal.ca/contrib/po/HTML/leaders.html
http://www.iro.umontreal.ca/contrib/po/HTML/index.html
Created en_PG.po.
What was or would have been disclosed? What is the purpose of this disclosure?
$ msginit --version
msginit (GNU gettext-tools) 0.19.8.1
You don't need Wireshark for tracing here. A text editor will do:
The tool msginit invokes a shell script <prefix>/share/gettext/projects/TP/team-address that tries to download (via <prefix>/lib/gettext/urlget) the table with translation teams from http://translationproject.org/team/index.html, and it falls back to a local copy installed under <prefix>/share/gettext/projects/TP/teams.html. The purpose of this is to fill the PO header Language-Team with an up-to-date email address.
I agree that the user should at least be prompted before an internet connection is opened.
I have opened an upstream issue for that:
https://savannah.gnu.org/bugs/index.php?57847
In versions before 0.20, the program (specifically, the above-mentioned team-address script) always reports that a "translation team for your language (xy) does not exist yet", no matter what locale you specify. This is fixed in gettext version 0.20.1.
Thanks for pointing this out!
As a workaround, you may edit the shell script team-address to not invoke urlget but use the local copy directly.
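A hedged sketch of what that edit could look like; the actual script contents and variable names vary between gettext versions, so treat the lines below as illustrative only:
# In <prefix>/share/gettext/projects/TP/team-address, replace the
# network fetch with the bundled copy, e.g. change something like
#   "$prefix/lib/gettext/urlget" http://translationproject.org/team/index.html > "$tmpfile"
# to
#   cat "$prefix/share/gettext/projects/TP/teams.html" > "$tmpfile"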

Truncated SAMLResponse with TCPdump

I have been trying to use tcpdump to capture the SAML request to the server.
I am interested in the SAMLResponse so I can decode it and get the XML, but tcpdump seems to truncate the output, so I miss a lot of data:
tcpdump -A -nnSs 0 'tcp port 8080 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
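For reference, the arithmetic in that filter selects only packets that carry a non-empty TCP payload:
# ip[2:2]              total length of the IP datagram
# (ip[0]&0xf)<<2       IP header length in bytes
# (tcp[12]&0xf0)>>2    TCP header length in bytes
# total - IP header - TCP header = payload length; '!= 0' keeps only data-bearing packets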
This should capture all HTTP request/response bodies, which it does, but the SAMLResponse is truncated:
SAMLResponse=PHNhbWxwOlJlc3BvbnNlIElEPSJfMDAyMDg3MmQtZTlmMi00ZGU5LTkxMGYtM2NiNDc1MjVkNTk2IiBWZXJzaW9uPSIyLjAiIElzc3VlSW5zdGFudD0iMjAxOS0xMS0xM1QyMTo0ODo0Mi42ODlaIiBEZXN0aW5hdG
If I decode that I get:
samlp:Response ID="_0020872d-e9f2-4de9-910f-3cb47525d596" Version="2.0" IssueInstant="2019-11-13T21:48:42.689Z" Destinat
An incomplete output. If I add -w /tmp/out.pcap I am able to see the entire SAMLResponse in Wireshark. What am I missing here?
I am on Linux and would like to work with this from the command line. What I don't understand is why sometimes I get more characters than others.
I am not sure if the rest is in another packet separate from this one; if it is, how do I join them in tcpdump?
Thanks
An alternative is to use tcpflow. tcpdump prints packet by packet, so a body that spans several TCP segments shows up chopped; tcpflow reassembles the stream first:
tcpflow -c 'port 8080'
Extract of man tcpflow
DESCRIPTION
tcpflow is a program that captures data transmitted as part of TCP
connections (flows), and stores the data in a way that is
convenient for protocol analysis or debugging. Rather than showing
packet-by-packet information, tcpflow reconstructs the actual data
streams and stores each flow in a separate file for later analysis.
Or you can use tshark, which can likewise reassemble and follow a TCP stream from a capture file, as below.
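A sketch using the capture written with -w (stream index 0 assumed; list the stream indexes with -z conv,tcp if needed):
tshark -r /tmp/out.pcap -q -z follow,tcp,ascii,0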

Use log4j to log messages in the Liberty console

Our log server consumes our log messages from the Kubernetes pods' stdout, formatted in JSON, and indexes the JSON fields.
We need to include some predefined fields in the messages so that we can track transactions across pods.
For one of our pods we use Liberty profile and have trouble configuring logging for these needs.
One idea was to use log4j to send customized JSON messages to the console. But all messages get mangled by the Liberty logging system, which intercepts and modifies everything written to the console. I failed to configure the Liberty logging parameters (copySystemStreams = false, console log level = NO) for my needs, and always have Liberty modifying my output and interleaving non-JSON messages.
To work around all that I used Liberty's consoleFormat="json" logging parameter, but this introduced unnecessary fields and also does not allow me to specify my custom fields.
Is it possible to control Liberty logging and the console?
What is the best way to implement my use case with Liberty (and if possible log4j)?
As you mentioned, Liberty has the ability to log to console in JSON format [1]. The two problems you mentioned with that, for your use case, are 1) unnecessary fields, and 2) no way to specify your custom fields.
Regarding unnecessary fields, Liberty has a fixed set of fields in its JSON schema, which you cannot customize. If you find you don't want some of the fields, I can think of a few options:
use Logstash.
Some log handling tools, like Logstash, allow you to remove [2] or mutate [3] fields. If you are sending your logs to Logstash you could adjust the JSON to your needs that way.
change the JSON format Liberty sends to stdout using jq.
The default CMD (from the websphere-liberty:kernel Dockerfile) is:
CMD ["/opt/ibm/wlp/bin/server", "run", "defaultServer"]
You can add your own CMD to your Dockerfile to override that as follows (adjust jq command as needed):
CMD /opt/ibm/wlp/bin/server run defaultServer | grep --line-buffered "}" | jq -c '{ibm_datetime, message}'
If your use case also requires sending log4j output to stdout, I would suggest changing the Dockerfile CMD to run a script you add to the image. In that script you would tail your log4j log file alongside the server, as follows (this can be combined with the jq advice above):
tail -F myLog.json &
/opt/ibm/wlp/bin/server run defaultServer
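Put together, a minimal sketch of such a wrapper script (the script name and log path here are hypothetical):
#!/bin/sh
# start-liberty.sh -- stream the log4j JSON log to stdout alongside the server
tail -F /logs/myLog.json &
exec /opt/ibm/wlp/bin/server run defaultServer
Then point the Dockerfile CMD at that script instead of the server binary.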
[1] https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
[2] https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
[3] https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
Just in case it helps, I ran into the same issue and the best solution I found was:
Convert the app to use java.util.logging (JUL).
In server.xml add <logging consoleSource="message,trace" consoleFormat="json" traceSpecification="{package}={level}"/> (swap package and level as required).
Add a bootstrap.properties that contains com.ibm.ws.logging.console.format=json.
This will give you consistent server and application logging in JSON. A couple of lines at server boot are not JSON, but that was just one empty line and a "Launching defaultServer..." line.
I too wanted the JSON structure to be consistent with other containers using Log4j2, so I followed the advice from dbourne above and added jq to the CMD in my Dockerfile to reformat the JSON:
CMD /opt/ol/wlp/bin/server run defaultServer | stdbuf -o0 -i0 -e0 jq -crR '. as $line | try (fromjson | {level: .loglevel, message: .message, loggerName: .module, thread: .ext_thread}) catch $line'
The stdbuf -o0 -i0 -e0 stops the pipe ("|") from buffering its output.
This strips out the Liberty-specific JSON attributes, which is either good or bad depending on your perspective. I don't need the extra values, so I don't have a good recommendation for keeping them.
Although the JUL API is not quite as nice as Log4j2 or SLF4J, it takes very little code to wrap the JUL API in something closer to Log4j2, e.g. to have varargs rather than an Object[], as sketched below.
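For example, a minimal sketch of such a wrapper (class and method names are made up for illustration):
import java.util.logging.Level;
import java.util.logging.Logger;

public final class Log {
    private final Logger logger;

    private Log(String name) {
        this.logger = Logger.getLogger(name);
    }

    public static Log get(Class<?> cls) {
        return new Log(cls.getName());
    }

    // JUL's Logger.log takes an Object[]; accept varargs instead, Log4j2-style
    public void info(String msg, Object... params) {
        logger.log(Level.INFO, msg, params);
    }
}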
OpenLiberty will also dynamically pick up logging changes if you edit server.xml, so it pretty much has all the necessary bits, IMHO.

ffmpeg2theora oggfwd not working with icecast2

I have a camera streaming MJPEG at http://192.168.x.x/image (where the x's are the rest of the IP). I start my icecast2 server (Ubuntu 10.10) and then I stream using:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o /dev/stdout - | oggfwd localhost 8000 password /test
The mountpoint is created but the video does not show in Firefox. I see the video box, but it just shows the "thinking" icon indefinitely and the video never appears.
If I download a proper ogg file and do
cat proper_ogg_file.ogg | oggfwd localhost 8000 password /test
I see the video on the icecast server's website.
In addition I did:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o test_video.ogg
Once I stop the process (CTRL+C) and go to my Desktop, where the video is saved, I can open it with VLC or any other media player, and it plays the portion of the stream that was recorded up to the point I pressed CTRL+C.
If I take that file and use the previous method:
cat test_video.ogg | oggfwd localhost 8000 password /test
I get the same issue as when I was piping the camera directly to stdout and then to oggfwd. I therefore assume this is a "conversion to ogg" issue? Can anybody help? Any idea why I can't do that?
I found a solution: use flumotion. It is a lot easier to use and works for what I needed. I can provide information on how to use it if anybody needs it.
Thank you