Modifying PortQry output for easy use in CSV/XLS files - PowerShell

I'm looking for some pointers on using PortQry, specifically on piping or reshaping its output into a CSV.
I have around 20,000 servers to check, each with 3-4 open ports to test, and automating PortQry seems the easiest way for me. I can't use third-party software, as it's an enterprise firewalled solution I'm testing.
So, I can easily use a FOR loop to make PortQry run across a txt file line by line, outputting into another txt file.
That is fairly useless at the numbers I'm testing, as I need to be able to filter and analyse the data easily.
I then moved on to using a couple of .bats to redirect my output based on PortQry's errorlevel, i.e. 0, 1, 2, 3. That kind of works, but is still a pain, as I have to hardcode the output for my CSV.
(Excuse the pseudo code)
@echo off
:: exit codes (check your PortQry version): 0 = listening, 1 = not listening, 2 = listening or filtered
for /f %%I in (foo.txt) do (
    portqry -n %%I -e 22 -q >nul
    if errorlevel 2 (echo %%I,22,FILTERED>>fooFILTERED.txt
    ) else if errorlevel 1 (echo %%I,22,NOT LISTENING>>fooNOTLISTENING.txt
    ) else (echo %%I,22,LISTENING>>fooLISTENING.txt)
)
This is still a bit of a pain, as I'd rather see all targets in one place and then filter by LISTENING, etc.
Another issue is that failing to resolve the host doesn't seem to map to its own errorlevel.
At this point I'm starting to look at PowerShell to accomplish this (assign variables at each line, then write them into a CSV), but it'd take a while to get that set up in my environment.
Any ideas while I have PortQry, though? Or indeed, if you know of PowerShell that would do the trick, that'd be great too.
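For what it's worth, here is a minimal PowerShell sketch of that idea. It assumes portqry.exe is on the PATH, uses its -q (quiet) switch, and maps exit codes to labels based on my reading of PortQry's documented codes; verify the mapping against your version before trusting it at 20,000-server scale:
# servers.txt holds one hostname per line (hypothetical input file)
$ports = 22, 443
Get-Content .\servers.txt | ForEach-Object {
    $server = $_
    foreach ($port in $ports) {
        & portqry.exe -n $server -e $port -q | Out-Null
        $result = switch ($LASTEXITCODE) {
            0       { 'LISTENING' }
            1       { 'NOT LISTENING' }
            2       { 'FILTERED' }
            default { "UNKNOWN ($LASTEXITCODE)" }    # e.g. host did not resolve
        }
        [pscustomobject]@{ Server = $server; Port = $port; Result = $result }
    }
} | Export-Csv .\portqry_results.csv -NoTypeInformation
Everything lands in one CSV, so you can filter by Result in Excel, and a host that fails to resolve shows up as UNKNOWN rather than vanishing into a separate file.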

WinDbg scripting - how to delete a file?

I'm working with an existing framework of WinDbg scripts that go through a series of test scripts (Test1.txt, Test2.txt, etc.), which are generated by C++ code and which output results.
For example, a chunk of one of the test scripts would be:
.if (($spat(@"${var}","18300.000000")==1))
{
.logappend C:\Tests\TestResults.txt
.printf "TestNumber=\t1\tExpected=\t18300.000000\tActual=\t%.6f\t******PASSED******\n",poi(poi(@$t2+@$t6)+0x10)
.logclose
}
I'm trying to add functionality that will create a file whose name displays the current # of the test being run, so that users can see their progress without needing to open a file.
My thought process was to set up the script generator so that at the start of Test #N it adds a line to the script that creates a file 'currentlyRunningTestN.txt', and at the end of Test #N it adds a line that deletes that file. However, I don't see any delete function in the WinDbg meta-command glossary (https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/meta-commands) or in the list of supported C functions like printf. Am I just missing something, or is deleting files not supported by WinDbg (or, equivalently, renaming files, which would also serve my purpose)? If deleting/renaming doesn't work, is there another way to achieve the functionality I'm looking for?
With the .shell command you can execute any DOS-like command. Although I have never tried deleting a file with it, it should be possible.
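For example, the generator could emit something like the following around each test (a sketch only; 'N' remains the placeholder your generator fills in, and -i- keeps .shell from waiting for input):
.shell -i- echo. > C:\Tests\currentlyRunningTestN.txt
$$ ... body of Test #N runs here ...
.shell -i- del C:\Tests\currentlyRunningTestN.txt
Note that .shell blocks until the spawned command exits, so trivial commands like these should not slow the run noticeably.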
As you may have noticed, WinDbg scripting does not always work on the first attempt, so please make sure your script cannot cause major data loss on a customer's PC while deleting files.

What happens when a PowerShell script encounters EOF while a quote is open?

Unicorn.py generates a string that looks like
powershell -flag1 -flag2 "something " obfuscation; powershell "more gibbrish
Interestingly, if this command is saved in a file filename.txt, Windows executes it before opening the file in Notepad (by which time the file is empty).
Why is the file executed despite the extension?
What does the script do when it encounters EOF after an odd number of quotation marks?
Edit:
Unicorn (https://github.com/trustedsec/unicorn) is a script that "enables privilege elevation and arbitrary code execution", if you know what that means. Of course I did NOT put the actual string here, just the key features.
Purely out of IT security interest.
I think that if you read the manual in unicorn.py, at no point does it say that the script should be left in the txt file.
The PowerShell script is written inside the txt file and called the "payload" (very hacker-like). What is left to you is how to execute this code on the victim's computer.
The manual proposes Word code injection, simply executing the PowerShell in cmd (I quote: "Next simply copy the powershell command to something you have the ability for remote command execution."), an Excel Auto_Open attack, and so on.
If reading the manual is too much, there is always a video. The only time the "hacker" uses anything Notepad-like is on his Linux system (how ironic)… I watched it because I love the Papa Roach song Last Resort...
For those concerned about IT security, I recommend the article on DOSfuscation. It is really instructive about how you have to be extra careful when receiving mail or outside documents, and about how humanity can waste so much time spying, deceiving, and inventing new twisted strategies... Aren't we great!
Windows, like any other system, has many flaws, but opening Notepad is not one of them. Unless your Notepad has been replaced by a hacker using unicorn…
There is an even number of brackets in the obfuscated script. Did you mix up '' with "?
An empty txt file means that you sent attack.txt over the network to a drive accessible by an up-to-date antivirus, and the antivirus quarantined/deleted the file contents. Since you didn't know about this interaction with the antivirus, your environment is NOT secure, which means you might have other malware from previous tests lurking on your "clean" network.
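On the open-quote question itself: when the parser hits end of input inside a string, PowerShell reports a parse error and executes nothing. A harmless way to see this from an existing PowerShell prompt (hypothetical string, not the payload):
powershell -Command 'Write-Output "unterminated'
# -> The string is missing the terminator: ".
Typed interactively, PowerShell instead shows its >> continuation prompt and waits for the closing quote; either way, nothing runs until the string is closed.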

SAS - Reading multiple compressed data files

I hope you are all well.
So my question is about the procedure for opening multiple raw data files that are compressed.
My files' names are ordered, so I have for example: o_equities_20080528.tas.zip, o_equities_20080529.tas.zip, o_equities_20080530.tas.zip, ...
Thank you all in advance.
How much work this will be depends on whether:
You have enough space to extract all the files simultaneously into one folder
You need to be able to keep track of which file each record has come from (i.e. you can't tell just from looking at a particular record).
If you have enough space to extract everything and you don't need to track which records came from which file, then the simplest option is to use a wildcard infile statement, allowing you to import the records from all of your files in one data step:
infile "c:\yourdir\o_equities_*.tas" <other infile options as per individual files>;
This syntax works regardless of OS - it's a SAS feature, not shell expansion.
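Fleshed out into a complete data step, it might look like this (the delimiter and INPUT layout here are hypothetical; match them to your .tas files):
data all_equities;
    infile "c:\yourdir\o_equities_*.tas" dlm=',' truncover;
    input symbol :$12. price volume;   /* hypothetical record layout */
run;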
If you have enough space to extract everything in advance but you need to keep track of which records came from each file, then please refer to this page for an example of how to do this using the filevar option on the infile statement:
http://www.ats.ucla.edu/stat/sas/faq/multi_file_read.htm
If you don't have enough space to extract everything in advance, but you have access to 7-zip or another archive utility, and you don't need to keep track of which records came from each file, you can use a pipe filename and extract to standard output. If you're on a Linux platform then this is very simple, as you can take advantage of shell expansion:
filename cmd pipe "nice -n 19 gunzip -c /yourdir/o_equities_*.tas.zip";
infile cmd <other infile options as per individual files>;
On Windows it's the same sort of idea, but as you can't use shell expansion, you have to construct a separate filename for each zip file, or use some of 7-zip's more arcane command-line options, e.g.:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y";
This will extract all files from all of the matching archives to standard output. You can narrow this down further via the 7-zip command if necessary. You will have multiple header lines mixed in with the data - you can use findstr to filter these out in the pipe before SAS sees them, or you can just choose to tolerate the odd error message here and there.
Here, the -an tells 7-zip not to read the zip file name from the command line, and the -ai tells it to expand the wildcard.
If you need to keep track of what came from where and you can't extract everything at once, your best bet (as far as I know) is to write a macro that processes one file at a time using the above techniques, adding this information as you import each dataset.
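Alternatively, the macro can sometimes be avoided with the FILEVAR= option, reading one archive per iteration through a pipe and tagging each record with its source. A hedged sketch (the 7z invocation and the record layout are assumptions):
data all_equities;
    length cmd zipname source $ 260;
    input zipname $;                             /* next archive name, from DATALINES */
    cmd = '7z.exe e -so -y "' || strip(zipname) || '"';
    infile dummy pipe filevar=cmd end=done truncover;
    do while (not done);
        input symbol :$12. price volume;         /* hypothetical record layout */
        source = zipname;                        /* remember where this record came from */
        output;
    end;
datalines;
C:\yourdir\o_equities_20080528.tas.zip
C:\yourdir\o_equities_20080529.tas.zip
;
run;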

Programmatically change text config files in Linux with minimal effort

I am looking for a tool that would ease the modification of text configuration files for tasks like:
Set ForwardAgent yes in /etc/ssh/ssh_config
Append HGUSER to AcceptEnv in /etc/ssh/sshd_config (that's more complex, as that option accepts several parameters; if yours is not already there, it should be added)
Most important:
running it several times should have no side effects.
if something looks weird, it should complain (for example, if it finds the same line several times in a file, or if the expected syntax does not match).
Is there any Linux tool that can easily be used to automate things like this?
The whole point is to be able to write these config patches down somewhere so you can deploy them on several machines, or on a new machine when needed.
I would certainly do this with bash scripting. Here is a great tutorial.
http://linuxconfig.org/Bash_scripting_Tutorial
To change a line in a file you could do something like this (a sketch follows below):
check that the file exists
grep for the value you want to change, and raise an error if it appears multiple times
use sed to change that line
To append something to a file:
check that the file exists
grep to ensure it hasn't already been appended
echo whatever >> file (the double greater-than appends to the file)
With each of these I would make a backup copy of the file first, just in case something goes wrong.
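Here is a rough sketch of that recipe for the ForwardAgent example; the path and patterns are assumptions, and it deliberately bails out when the file looks weird:
#!/usr/bin/env bash
# Idempotently set "ForwardAgent yes" in ssh_config; complain on anything odd.
set -euo pipefail

file=/etc/ssh/ssh_config
[ -f "$file" ] || { echo "missing: $file" >&2; exit 1; }

matches=$(grep -c '^[[:space:]]*ForwardAgent[[:space:]]' "$file" || true)
if [ "$matches" -gt 1 ]; then
    echo "refusing to edit: ForwardAgent appears $matches times" >&2
    exit 1
fi

cp -p "$file" "$file.bak"        # backup first, just in case
if [ "$matches" -eq 1 ]; then
    sed -i 's/^[[:space:]]*ForwardAgent[[:space:]].*/ForwardAgent yes/' "$file"
else
    echo 'ForwardAgent yes' >> "$file"
fi
Run twice, the second pass finds the line already present and rewrites it to the same text, so repeated runs have no side effects.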
You might want to have a look at the Unified Configuration Interface (UCI) used in embedded Linux systems. If you have the flexibility to adapt the UCI format for your config files, it is pretty close to what you are looking for.

Parsing syslogs with Perl using a named pipe?

I'm trying to write a script that will grab logs from across the network, parse them for relevant information, and perform some action (email if there's a critical issue, simply write to a log file if it's a warning). I am using an AIX machine with syslogd to process the logs. Right now it is behaving as usual, writing all logs to files ... a lot of files.
I was advised to use Perl and named pipes to implement the script. I've just spent some time reading up on named pipes and I find them quite fascinating. However, I'm stumped as to how the "flow" of information should work in this situation and how to make Perl handle it.
For example, should I create a FIFO outside of the script, tell syslogd to write to it by default, and have my script on the other end parsing it? Can Perl do that, and (for you sysadmins) is this a smart/possible option?
This is my first encounter with Perl and with named pipes.
You can certainly create a named pipe in Perl, although it seems to me that for what you are trying to do it is better to create the named pipe outside of Perl, as you are suggesting, then have syslogd write to it and read the pipe from Perl.
I don't know AIX very well, but this should do for creating the pipe:
mkfifo /var/adm/syslog.pipe
To have syslogd write to it, add this line to /etc/syslog.conf:
*.info |/var/adm/syslog.pipe
Then:
kill -HUP `cat /var/run/syslogd.pid`
You could also put all of this into your Perl script: if the pipe did not exist or syslogd were not using it, the script would set up everything required for you.
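The reading side in Perl can then be as simple as the sketch below (the pipe path and the severity patterns are assumptions; real syslog lines may need smarter matching):
#!/usr/bin/perl
# Read syslog lines from the named pipe and dispatch by severity.
use strict;
use warnings;

my $pipe = '/var/adm/syslog.pipe';
open(my $fh, '<', $pipe) or die "Cannot open $pipe: $!";
while (my $line = <$fh>) {
    chomp $line;
    if ($line =~ /\b(crit|alert|emerg)\b/i) {
        print "CRITICAL: $line\n";   # replace with an email, e.g. via Net::SMTP
    }
    elsif ($line =~ /\bwarn(ing)?\b/i) {
        open(my $log, '>>', '/var/adm/warnings.log') or die "Cannot append: $!";
        print {$log} "$line\n";
        close $log;
    }
}
close $fh;
Opening the FIFO blocks until syslogd writes to it, and the loop ends if the writer closes the pipe, so a production version would reopen it in an outer loop.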
Possibly you could provide some more details as to what you are trying to do, if you need more help.