I'm wondering how, or whether, one converts a .dat file into something readable and modifiable. Any info would help a lot, for example a code snippet, a program, or just general information.
I generally use hexdump, but it's not available on all operating systems, and it probably wouldn't tell you much more than what you're looking at now.
As already pointed out in comments, DAT is a generic extension used to signify a (usually proprietary) binary format -- other than not being text, there's no commonality. You need to know what format it's in before you can have any hope of translating it into something legible to humans.
All we "know" from the extension is that the file is NOT line-based text -- and that could be wrong.
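If hexdump isn't handy on your system, a rough stand-in is easy to write. Here is a minimal Python sketch (the filename "file.dat" is just a placeholder) that prints the first bytes as hex and ASCII, so you can look for a magic number or other recognizable header:

# Print the first 64 bytes of the file as a hex/ASCII dump.
with open("file.dat", "rb") as f:
    chunk = f.read(64)

for offset in range(0, len(chunk), 16):
    row = chunk[offset:offset + 16]
    hex_part = " ".join(f"{b:02x}" for b in row)
    text_part = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
    print(f"{offset:08x}  {hex_part:<47}  {text_part}")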
I'm writing documentation, and part of it includes small programs written in languages other than Racket. I can of course include them inline (using #verbatim), but I'd like to be able to at least minimally test/run them, so it'd be much more convenient to store them in separate files and just include the source.
What's the easiest way to do that? I.e., I'd like to do something like:
@verbatim|{@include-file{path/to/file.ext}}| (though of course that doesn't quite work) and have the file's contents included literally. I thought that Ben Greenman's https://gitlab.com/bengreenman/scribble-include-text would do this, but it's behaving oddly, probably because there are character sequences in the file that don't play well with it.
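One approach that may work, sketched below and untested: read the external file as a plain string at documentation-build time and hand it to verbatim (this assumes the file is plain text and the path is relative to the Scribble source):

#lang scribble/manual
@(require racket/file)

@; read the external file and include its contents literally
@verbatim[(file->string "path/to/file.ext")]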
Converting DICOM to NIfTI works great with the help of this script from xiangruili:
https://github.com/xiangruili/dicm2nii/blob/master/dicm2nii.m
BUT I need to modify the output filenames and add a string to them. The save_json function of the script (dicm2nii.m) looked promising, but I am new to MATLAB and have the feeling that there is a simple solution to this problem.
Can somebody help me, please?
Thanks!
As @Wolfie correctly mentioned, this is not something easily addressed by people unfamiliar with the particular program. But I took a very quick look, since I currently use other tools for DICOM to NIfTI conversion and was curious about this one. Here are some general comments that will hopefully help.
The "json" file is just for metadata, and you probably care more about the .nii image file (or both). It looks like nii_tool('save', nii, filename, force_3D) handles the latter.
The nii_tool and save_json calls are just passed a variable containing the output filename, which you could create/modify using any of the standard MATLAB methods (e.g., sprintf or strcat). There are already some examples of this within the code in the calls to nii_tool('save', ...).
Since you say that you are new to MATLAB, it is probably easiest for you (and likely anyone) to write a script that renames the files after export, as in the sketch below. That way you don't have to worry about catching every case within the 3000 lines of code that someone else wrote; you just fix it with a simple program at the other end, which makes the problem much more tractable.
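A minimal sketch of such a rename script, assuming the converted .nii files sit in one output folder (the folder path and suffix below are placeholders; extend the pattern to *.json as well if you also want the sidecar files renamed):

% Append a suffix to every .nii file that the converter wrote into outDir
outDir = 'path/to/output';   % folder the converter exported into (placeholder)
suffix = '_mysuffix';        % string to add to each filename (placeholder)
files = dir(fullfile(outDir, '*.nii'));
for k = 1:numel(files)
    [~, name, ext] = fileparts(files(k).name);
    movefile(fullfile(outDir, files(k).name), ...
             fullfile(outDir, sprintf('%s%s%s', name, suffix, ext)));
end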
As an aside, I currently use dcm2niix (available from GitHub or NITRC) for this conversion outside of MATLAB.
According to this comment on the general question "Is it possible to create a quine in every Turing-complete language?", it seems the answer is yes.
However, I couldn't find any Ook! quine on the internet.
Do you think it's really possible?
And if so, will we be able to find one?
It wouldn't even be very difficult. You would want to code it in brainfuck and then translate, and the internal representation for each command should be a pair of numbers (probably from 0-2) to represent the punctuation of each half-command. You could borrow much of the structure from Erik Bosman's brainfuck quine.
Update: here it is: https://gist.github.com/danielcristofani/1fe53487df1f7afcb5b91c06d95184b2
This is ~40 commands taken directly from Erik Bosman's quine, another ~120 freshly written commands of rather clunky output code to handle Ook!'s verbosity, and then the data segment to represent all that.
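For what it's worth, the mechanical translation step mentioned above is the easy part. Here is a rough, untested Python sketch using the standard brainfuck-to-Ook! command mapping; the quine's data segment and output code still have to be written to target Ook!'s own representation, as described above:

BF_TO_OOK = {
    '>': 'Ook. Ook?', '<': 'Ook? Ook.',
    '+': 'Ook. Ook.', '-': 'Ook! Ook!',
    '.': 'Ook! Ook.', ',': 'Ook. Ook!',
    '[': 'Ook! Ook?', ']': 'Ook? Ook!',
}

def bf_to_ook(source):
    # Translate each brainfuck command; ignore any other characters.
    return ' '.join(BF_TO_OOK[c] for c in source if c in BF_TO_OOK)

print(bf_to_ook('++[->+<]'))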
I already know how to do this, but I figured I would go ahead and ask, since the Rebol documentation is offline today, so that in the future others will have an easier time finding out how.
In a nutshell, use read for reading a text file (a string will be returned) and write for writing a string to a file. For example:
write %hello.txt "Hello World!"
print read %hello.txt
Such text-mode I/O relies on UTF-8 for reading/writing; other encodings will be supported in the future.
Additionally, you can use the /binary refinement with both functions to switch to binary mode. You can also use the higher-level load and save counterparts, which will try to encode/decode the data using one of the available codecs (UTF-8 <=> Red values, JPG/PNG/GIF/BMP <=> image! value).
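A quick sketch of the binary-mode and load/save variants (the file names here are just placeholders):

write/binary %data.bin #{DEADBEEF}
probe read/binary %data.bin     ; => #{DEADBEEF}
save %values.red [1 2 3]
probe load %values.red          ; => [1 2 3]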
Use help followed by a function name for more info.
I am writing a small sniffer as part of a personal project. I am using Net::Pcap (really really great tool).
In the packet-processing loop I am using the excellent Net::Frame for unpacking all the headers and getting at the data. I am getting concerned that this might not be terribly efficient (Net::Frame is great but seems to be more than I need for this project).
Also, I dislike that on some Debian systems I had to manually compile libdumbnet (the package provided in the official apt repositories didn't seem to work; Net-Libdnet-0.92 didn't like it).
All I want is to get at the payload inside a TCP segment. Is there any alternative?
Thank you.
P.S. Would it be really, really bad (read: "thedailywtf.com worthy") if I just took the packet and searched it for some pattern?
I recently wrote a PCAP dump file unpacker in C and afterwards wished I'd just used the open source libraries instead (once I realised they existed and were so easy to use). I have to say that, as it's a binary file format, it's probably easier to do in C than Perl, but I'll no doubt get booed by all the Perl fanatics out there.
What I will say is that using existing code will be quicker all round than coding it yourself, but if you really really want to, the file format is freely available online and is really quite simple.
As for searching for a pattern, it almost certainly won't work. It's a binary file format and the packets can be fragmented and/or duplicated, so the only reliable way to know where a message starts and ends is by unpacking the headers, checking the packet flags, reading the content length field, etc. etc. Doing pattern searches may work 90% of the time, but at some point you'll find a packet capture log that means you need to change your code. And then a while later find another packet that means another change, and so on and so forth.
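To give an idea of what that header unpacking looks like in Perl without pulling in Net::Frame, here's a rough, untested sketch that sticks to Net::Pcap and unpack. It assumes an Ethernet link layer and plain IPv4/TCP, and does no reassembly of fragmented or out-of-order segments:

use strict;
use warnings;
use Net::Pcap;

my $err;
my $dev  = Net::Pcap::lookupdev(\$err) or die "lookupdev: $err";
my $pcap = Net::Pcap::open_live($dev, 65535, 1, 100, \$err)
    or die "open_live: $err";

Net::Pcap::loop($pcap, -1, \&handle_packet, '');

sub handle_packet {
    my ($user, $hdr, $pkt) = @_;

    return unless unpack('n', substr($pkt, 12, 2)) == 0x0800;   # IPv4 only

    my $ip  = substr($pkt, 14);                          # skip Ethernet header
    my $ihl = (unpack('C', substr($ip, 0, 1)) & 0x0f) * 4;
    return unless unpack('C', substr($ip, 9, 1)) == 6;   # TCP only

    my $total = unpack('n', substr($ip, 2, 2));          # IP total length
    my $tcp   = substr($ip, $ihl);
    my $thl   = ((unpack('C', substr($tcp, 12, 1)) & 0xf0) >> 4) * 4;
    my $payload = substr($tcp, $thl, $total - $ihl - $thl);

    print $payload, "\n" if length $payload;
}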