Exchange mailbox traffic auditing? - powershell

I'm interested in getting a specific user's mailbox statistics in Exchange 2010. The purpose is to gather details about a support mailbox and the support team's performance.
In my case I'd like to try and get the:
1. number of received and sent mails in the last 24 hrs
2. number of mails added to a subfolder in the last 24 hrs
3. average time emails spend in the Inbox
4. average time emails spend unread
I know how to get the first part of #2 (counting the items per folder) and export it to a file named for the current date, but I have no idea how to limit the time frame to the last 24 hrs:
$date = (Get-Date).ToString('yyyy-MM-dd')
# Select the properties (rather than Format-Table) so Export-Csv writes real
# data, and quote the path so ".csv" becomes part of the file name
Get-MailboxFolderStatistics "username" |
    Sort-Object ItemsInFolder -Descending |
    Select-Object Folder, FolderPath, ItemsInFolder, FolderSize |
    Export-Csv -Path "$date.csv" -NoTypeInformation
Some statistics might not be supported, but I'd very much like some help with what is possible.

I don't believe you're going to get all the stats you want with Get-MailboxStatistics. You're going to have to go into that mailbox and start examining emails.
I'd start with Glen Scales' blog:
http://gsexdev.blogspot.com/
and research using the EWS Managed API with PowerShell. The "Modified" property on an email should reflect the last time it was moved within the mailbox. You can determine which emails have or haven't been read from the item properties, but I don't know of a property that records when they were read, so you may need to run the script periodically to monitor which ones have been read since the last check.
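As a rough starting point, here is a minimal sketch using the EWS Managed API from PowerShell, assuming the API 2.2 DLL is installed at its default path and the account running it has access to the support mailbox (the address below is a placeholder). It counts Inbox items received in the last 24 hours and how many of those are still unread:
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"

$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2)
$service.AutodiscoverUrl("support@example.com")   # placeholder mailbox address

# Restrict the search to items received in the last 24 hours
$since  = (Get-Date).AddHours(-24)
$filter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsGreaterThanOrEqualTo([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived, $since)
$view   = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1000)

$items  = $service.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox, $filter, $view)
$unread = @($items | Where-Object { $_ -is [Microsoft.Exchange.WebServices.Data.EmailMessage] -and -not $_.IsRead })
"Received last 24 hrs: $($items.TotalCount); still unread: $($unread.Count)"
The same FindItems call against a subfolder's FolderId would cover #2, and averaging the age of the unread set approximates #4.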

Related

Powershell Exchange Online email counting report

I'm trying to put together some monthly statistics for our company, where I could set up a scheduled task with PowerShell to gather this data monthly. First of all, I would like to extract the total number of emails (received and sent) over the last 30 days.
I've been looking at the ExchangeOnlineManagement module, and from what I can see I would need to use Start-HistoricalSearch to retrieve the emails. But I don't want to create a CSV of every email; I just want the total count as the result, and I can't see a way to get that directly.
Has anyone had the same need, or can someone help me understand how to reach this goal?
Thanks
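For what it's worth, a minimal sketch of one approach: Get-MessageTrace returns objects you can simply count, though it only reaches back 10 days, so covering a full 30 days would still mean running Start-HistoricalSearch and counting the rows of its CSV rather than keeping it. The address below is a placeholder, and results come back in pages of up to 5000:
# Assumes Connect-ExchangeOnline has already been run
$start = (Get-Date).AddDays(-10)   # maximum look-back for Get-MessageTrace
$end   = Get-Date

$received = 0; $sent = 0; $page = 1
do {
    $r = @(Get-MessageTrace -RecipientAddress "user@example.com" -StartDate $start -EndDate $end -PageSize 5000 -Page $page)
    $s = @(Get-MessageTrace -SenderAddress "user@example.com" -StartDate $start -EndDate $end -PageSize 5000 -Page $page)
    $received += $r.Count; $sent += $s.Count
    $page++
} while ($r.Count -eq 5000 -or $s.Count -eq 5000)

"Received: $received  Sent: $sent"
For organization-wide totals you would filter on your own domain instead of a single address, but the counting pattern is the same.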

Get-MessageTracking logs for multiple users

We've been given a task to get stats on whether 6000+ Contacts have been used in the last 30 days.
We're on Exchange 2013 CU9
So obviously
$ht = Get-TransportService *EXC*
$ht | Get-MessageTrackingLog -Recipients "user@domain.com" -EventId SEND
So I'm writing a function that lets us feed the above a single address, or optionally a list or a CSV. That's easy enough.
But with over 6000 addresses to search for, and 32 transport servers with 30 days of logs, this is going to take an incredibly long time.
Just wondering if anyone has seen this issue before and come up with a way of speeding things up?
Thanks in advance.
You would be better off just collecting the raw logs and parsing those with Log Parser Studio https://gallery.technet.microsoft.com/office/Log-Parser-Studio-cd458765 which will perform much better.
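If you do stay in PowerShell, here is a minimal sketch of one way to cut the runtime, under a couple of assumptions (the contacts file name is a placeholder, and each background job loads the Exchange snap-in because jobs start in a fresh session): query each transport server once in a parallel job, and pass many addresses to -Recipients in a single call since its values are OR'ed, rather than running 6000 separate searches per server. A list that size may still need to be fed in chunks:
$servers  = Get-TransportService *EXC*
$contacts = Get-Content .\contacts.txt   # placeholder: one SMTP address per line
$start    = (Get-Date).AddDays(-30)

$jobs = foreach ($srv in $servers) {
    Start-Job -ScriptBlock {
        param($name, $rcpts, $since)
        # Jobs run in a fresh session, so load the Exchange 2013 snap-in first
        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn
        Get-MessageTrackingLog -Server $name -EventId SEND -Start $since `
            -Recipients $rcpts -ResultSize Unlimited |
            Select-Object Timestamp, Sender, @{n='Recipients';e={$_.Recipients -join ';'}}
    } -ArgumentList $srv.Name, $contacts, $start
}
$results = $jobs | Wait-Job | Receive-Job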

PS Active Directory per user login count

I've been looking for a solution but cannot find anything easy enough or reliable, so please excuse me for bringing this up again.
In a typical AD environment, what I want is a monitoring report that says something along the lines of "user X has logged in YYYY times in the past ZZ period".
The lastLogon date from Get-ADUser in PS is the only thing I can find, as it changes with each login, but it cannot easily be scripted in a scheduled run to generate a report, for example.
Has anyone implemented this or use any tools that can track this?
There's only one reliable way to do what you want: collect and parse the audit logs from all Domain Controllers.
If you have a few users that you want to keep track of over time, an alternative could be monitoring the sum of the logonCount values for that user. Since the logonCount attribute is not replicated, you will have to collect it from each DC per user, then calculate the difference.
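A minimal sketch of that logonCount approach, assuming the ActiveDirectory RSAT module and reachable DCs (the account name is a placeholder); since the counter only ever increases, the difference between two scheduled runs approximates the number of logons in that period:
Import-Module ActiveDirectory

$user = 'jdoe'   # placeholder sAMAccountName
$dcs  = Get-ADDomainController -Filter *

# logonCount is not replicated, so sum the per-DC values
$total = ($dcs | ForEach-Object {
    (Get-ADUser -Identity $user -Server $_.HostName -Properties logonCount).logonCount
} | Measure-Object -Sum).Sum

"$user total logonCount across $($dcs.Count) DCs: $total"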
You can probably check the AD replication logs for replication of changes to the lastlogontimestamp property.
This script will do what you want for a particular user - closest I've managed to come so far.

Discrepancy in Date Created attribute between Powershell script output and Windows Explorer

I wrote a simple PowerShell script that recursively walks a file tree and returns the path of each node along with its creation time in tab-separated form, so that I can write it out to a text file and use it for statistical analysis:
echo "PATH CREATEDATE"
get-childitem -recurse | foreach-object {
$filepath = $_.FullName
$datecreated = $_.CreationTime
echo "$filepath $datecreated"
}
Once I had done this, however, I noticed that the creation times produced by the script are exactly one hour ahead of what Windows Explorer shows for the same attribute of the same files. Based on inspecting the rest of my dataset (which recorded surrounding events in a different format), it's clear that the results from Explorer are the only ones that fit the overall narrative, which leads me to believe that something in the PowerShell script makes it write out the incorrect time. Does anyone have a sense for why that might be?
Problem background:
I'm trying to correct for a problem in the design of some XML log files, which logged when the users started and stopped using an application when it was actually supposed to log how long it took the users to get through different stages of the workflow. I found a possible way to overcome this problem, by pulling date information from some backup files that the users sent along with the XML logs. The backups are generated by our end-user application at the exact moment when a user transitions between stages in the workflow, so I'm trying to bring information from those files' timestamps together with the contents of the original XML log to figure out what I wanted to know about the workflow steps.
Summary of points that have come out in comment discussion:
The files are located on the same machine as the script I'm running (not a network store)
Correcting for daylight saving time and time zones improved the data quality, but did not resolve the specific issue posed in the original question (see the DST check sketched below).
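For what it's worth, a classic cause of an exactly-one-hour skew is how the DST offset is chosen: .NET (and therefore PowerShell) converts each file's UTC timestamp using the DST rule in effect at that timestamp, while the Win32 conversion older Explorer versions relied on applies the machine's current offset to everything. A small check against a single file (the path is a placeholder) makes the two conversions visible:
$file = Get-Item 'C:\temp\example.txt'   # placeholder path
$utc  = $file.CreationTimeUtc

"UTC file time:          $utc"
"DST rule at file time:  $($utc.ToLocalTime())"
"Current offset applied: $($utc + [DateTimeOffset]::Now.Offset)"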
I never found the ultimate technical reason for the discrepancy between the timestamps from PowerShell vs. Explorer, but I was able to correct for it by just subtracting an hour from all the timestamps I got from the PowerShell script. After doing that, however, there was still a large amount of disagreement between the timestamps from my XML log files and the ones I pulled from the filesystem with the PowerShell script.
Reasoning that the end users probably stayed in the same time zone while generating the files, I wrote a little algorithm to estimate each user's time zone by evaluating the median amount of time between steps 1 and 2 of the workflow and between steps 2 and 3. If there was a problem with the user's time zone, one of those two timespans would be negative (since the time of the step 2 event was estimated, while the times of the step 1 and 3 events were known from the XML logs). I then rounded the positive value down to the nearest hour and applied that number of hours as an offset to that user's step 2 times.
Overall, this took the amount of bad data in my dataset from 20% down to 0.01%, so I'm happy with the results.
In case anyone needs it, here's the code I used to apply the hour offset to the timestamps (not PowerShell; this was in a C# script that handled another part of the data processing):
// Rebuild the time-of-day with the hour shifted back by one.
// (Assumes step2time.Hour >= 1; step2time.AddHours(-1) would also
// handle times just after midnight.)
DateTime step2time = DateTime.Parse(LastModifyDate);
TimeSpan shenanigansCorrection = new TimeSpan(step2time.Hour - 1, step2time.Minute, step2time.Second);
step2time = step2time.Date + shenanigansCorrection;
The reason for reassigning the step2time variable is that DateTime values are immutable in .NET.

Remotely running Get-EventLog wipes actual event log

I wrote a script to remotely fetch event logs in PowerShell, but I'm not a fan of the way the script makes its own event log entries.
Specifically this script grabs several types of event IDs including the logon/logoff events. Recently I was asked to run the script to fetch events for another user, and had to have this data fetched in a few hours. Normally I start the script and let it run for the majority of the day (because there is usually a LOT of data here), but this time to speed up the process, I spun up 4 instances of the script to fetch this data faster than usual. Each instance of the script was looking at a different time frame so that all 4 scripts combined were fetching in the time frame I had been asked for.
Over the course of 3 hours or so, I had over a million different login attempts for my user ID on this remote computer. I had so many logins that I ended up overwriting the event log data I was originally asked to fetch.
Lessons learned, I'm now researching how to make this faster, more efficient, and more reliable.
Here's the heart of my code; it's pretty plain and simple and works for the most part.
$output = Get-EventLog `
    -InstanceId 4672,4647,4634,4624,4625,4648,4675,4800,4801,4802,4803 `
    -LogName Security `
    -After (Get-Date $inStartDate) `
    -Before (Get-Date $inEndDate).AddDays(1) `
    -Message ("*" + $InUserID + "*") `
    -ComputerName $inPCID
I guess there are several questions that I haven't been able to figure out in my research thus far. Why would Get-EventLog need to make so many connections? Is it because the connection kept dropping or something?
What would be the fastest way to fetch this data: the native Get-EventLog command with -ComputerName, or something like Enter-PSSession or Invoke-Command?
Will Enter-PSSession and Invoke-Command have the same problem I'm having with Get-EventLog?
I'd like to avoid Enter-PSSession and Invoke-Command for the simple fact that I can't guarantee all machines in the company have remote execution enabled.
So, the problem was that Get-EventLog was ultimately wiping the remote event logs I was trying to fetch. Not sure why, but Get-EventLog made over a million "connections" that appeared as logon/logoff events, thus overwriting my logs.
Per @Richard's comment, I did a bit of research and decided to test Get-WinEvent, the updated and more powerful replacement for Get-EventLog. After testing this for a week, the worst case I ran into was my script creating 4 logon/logoff events. This is completely acceptable and much nicer than over a million events.
A side question I had related to this topic was performance. Because we're gathering many remote logs, we sometimes need this done as fast as possible. My question was whether Get-WinEvent would be fast enough to pull logs compared to an Enter-PSSession or an Invoke-Command.
For the scope of this question, Get-WinEvent satisfied both the speed requirements as well as the event requirements and relying on the cmdlet to perform my remoting worked great.
My code is pretty simple, but I'll post it for the record:
Get-WinEvent -ComputerName $inPCID `
    -FilterHashtable @{
        LogName   = 'Security'
        Id        = 4672,4647,4634,4624,4625,4648,4675,4800,4801,4802,4803
        StartTime = $inStartDate
        EndTime   = $inEndDate
    } |
    Where-Object { $_.Message -like ("*" + $InUserID + "*") } |
    Export-Csv $fullOutputFile -NoTypeInformation