I have this script. I'm trying to count how many files are in an FTP directory.
clear
$ftp_uri = "ftp://ftp.domain.net:"
$user = "username"
$pass = "password"
$subfolder = "/test/out/"
$ftp_urix = $ftp_uri + $subfolder
$uri=[system.URI] $ftp_urix
$ftp=[system.net.ftpwebrequest]::Create($uri)
$ftp.Credentials=New-Object System.Net.NetworkCredential($user,$pass)
#Get a list of files in the current directory.
$ftp.Method=[System.Net.WebRequestMethods+Ftp]::ListDirectoryDetails
$ftp.UseBinary = $true
$ftp.KeepAlive = $false
$ftp.EnableSsl = $true
$ftp.Timeout = 30000
$ftp.UsePassive=$true
try
{
$ftpresponse=$ftp.GetResponse()
$strm=$ftpresponse.GetResponseStream()
$ftpreader=New-Object System.IO.StreamReader($strm,[System.Text.Encoding]::UTF8)
$list=$ftpreader.ReadToEnd()
$lines=$list.Split("`n")
$lines
$lines.Count
$ftpReader.Close()
$ftpresponse.Close()
}
catch{
$_|fl * -Force
$ftpReader.Close()
$ftpresponse.Close()
}
In the directory I have three files, but $lines.Count returns 4. $lines has 4 rows: three files and an empty line. Can somebody explain this mystery to me?
The $list contains:
file1`nfile2`nfile3`n
If you split the string by "`n", you (correctly) get four parts, with the last one being empty.
You can use an overload of String.Split that takes StringSplitOptions and use RemoveEmptyEntries:
$list.Split("`n", [System.StringSplitOptions]::RemoveEmptyEntries)
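For example (a minimal sketch using the hypothetical listing above), counting only the non-empty entries gives the expected result:
$list = "file1`nfile2`nfile3`n"
$lines = $list.Split("`n", [System.StringSplitOptions]::RemoveEmptyEntries)
$lines.Count   # 3
If the server returns CRLF line endings, you may also want to trim a trailing "`r" from each entry.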
We want to generate an SR per row based on the criteria in a CSV file that looks like this:
SR template
The additional criterion:
If the SLO countdown is less than 7 days, then the due date is always 7 days out for the ticket. Otherwise, the countdown is the number in SLO_Countdown.
The support group is always servicedesk
Unless the host_name does not contain "RES", in which case the support group is EITS_HW_Notes and it will be assigned to "custodian".
No matter what, an SR is generated, even if the fields are null.
My difficulty is my lack of familiarity with SMLets. I am happy to consider generating tickets via email as well, but would like help on how best to do this via PowerShell. The code I came up with is below:
#Prod
#$GLOBAL:smdefaultcomputer = "prodserver"
#Test
$GLOBAL:smdefaultcomputer = "testserver"
Import-Module SMlets
$path = "C:\Temp\Test.csv"
$csv = Import-csv -path $path
#Variable / Class Setup
$srClass = Get-SCSMClass -name System.WorkItem.ServiceRequest
$srprior = Get-SCSMEnumeration -Name ServiceRequestPriorityEnum.Medium
$srurg = Get-SCSMEnumeration -Name ServiceRequestUrgencyEnum.Medium
#$ararea = get-SCSMEnumeration -Name ServiceRequestAreaEnum.Other
$ararea = get-SCSMEnumeration -Name Enum.add3768303064ec18890170ba33cffda
$title = "Title Goes Here"
$descrip = "Description info goes here"
#Service Request Arguements
$srargs = @{
Title = $title;
Urgency = $srurg;
Priority = $srprior;
ID = "SR{0}";
Area = $ararea;
SupportGroup = "ServiceDesk";
Description = $descrip
}
#Create Service Request
$newServiceRequest = New-SCSMObject -Class $srClass -PropertyHashtable $srargs -PassThru
#get SR ID of the new object
$SRId = $newServiceRequest.id
#Get Projection & Object for Created Service Request
$srTypeProjection = Get-SCSMTypeProjection -name System.WorkItem.ServiceRequestProjection$
$SRProj = Get-SCSMObjectProjection -ProjectionName $srTypeProjection.Name -Filter "Id -eq $SRId"
#Set Affected User
$userClass = Get-SCSMClass -Name Microsoft.AD.UserBase$
$cType = "Microsoft.EnterpriseManagement.Common.EnterpriseManagementObjectCriteria"
$cString = "UserName = 'itservicenotifications' and Domain = 'SHERMAN'"
$crit = new-object $cType $cString,$userClass
$user = Get-SCSMObject -criteria $crit
$AffectedUserRel = get-scsmrelationshipclass -name System.WorkItemAffectedUser$
New-SCSMRelationshipObject -RelationShip $AffectedUserRel -Source $newServiceRequest -Target $user -Bulk
I tried the above code but am running into issues recognizing the column names in the CSV file, and I am unfamiliar with SMLets and PowerShell if statements.
Columns are:
CSV Columns
CSV text with examples is: Columns with examples
Could you paste the CSV columns as text, please? Or, better, a sample CSV with one or two rows (redact any sensitive data).
I would expect a CSV to contain multiple rows - even if yours does not, it's good defensive programming to act as if it does. So the first modification I suggest is:
$path = "C:\Temp\Test.csv"
$csv = Import-csv -path $path
foreach ($Row in $csv)
{
# the rest of your code goes in here
}
I find it helpful while debugging to go step by step. If I understand your problem right, it's about building the right hashtable in $srargs to pass to New-SCSMObject. So the next modification is:
foreach ($Row in $csv)
{
$srClass = Get-SCSMClass -name System.WorkItem.ServiceRequest
# etc
$srargs = @{
Title = $title
Urgency = $srurg
Priority = $srprior
ID = "SR{0}"
Area = $ararea
SupportGroup = "ServiceDesk"
Description = $descrip
}
$srargs # write the hashtable so you can inspect it
# skip the rest of the code for now
}
I understand your question as "how to express the logic of":
support group is always servicedesk
Unless the host_name does not contain "RES"
then the support group is contents of EITS_HW_Notes cell in CSV
and it will be assigned to "custodian"
I can't help you with setting the assignee. But we can rejig the rest of the statement:
if host_name contains "RES"
SupportGroup = servicedesk
else
SupportGroup = contents of EITS_HW_Notes cell
You can code that like this:
foreach ($Row in $csv)
{
$srClass = Get-SCSMClass -name System.WorkItem.ServiceRequest
# etc
if ($Row.host_name -like "*RES*")
{
$SupportGroup = "ServiceDesk"
}
else
{
$SupportGroup = $Row.EITS_HW_Notes
}
$srargs = @{
Title = $title
# etc
SupportGroup = $SupportGroup
Description = $descrip
}
}
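You could express the due-date criterion the same way. This is only a sketch: I'm assuming the CSV column is literally named SLO_Countdown and that your template uses a date property such as ScheduledEndDate (check which property your SR class actually expects):
foreach ($Row in $csv)
{
    # assumption: SLO_Countdown holds a whole number of days
    $countdownDays = [int]$Row.SLO_Countdown
    if ($countdownDays -lt 7)
    {
        $dueDate = (Get-Date).AddDays(7)               # minimum of 7 days
    }
    else
    {
        $dueDate = (Get-Date).AddDays($countdownDays)  # otherwise use the SLO countdown
    }
    # $dueDate can then be added to the $srargs hashtable, e.g. ScheduledEndDate = $dueDate
}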
Does that get you any closer to your solution?
Final Update: Turns out I didn't need BinaryWriter. I could just copy memory streams from one archive to another.
I'm re-writing a PowerShell script which works with archives. I'm using two functions from here
Expand-Archive without Importing and Exporting files
and can successfully read and write files to the archive. I've posted the whole program just in case it makes things clearer for someone to help me.
However, there are three issues (besides the fact that I don't really know what I'm doing).
1.) Most files produce this error when trying to run
Add-ZipEntry -ZipFilePath ($OriginalArchivePath + $PartFileDirectoryName) -EntryPath $entry.FullName -Content $fileBytes
Cannot convert value "507" to type "System.Byte". Error: "Value was either too large or too small for an unsigned byte." (replace 507 with whatever number from the byte array is there)
2.) When it reads a file and adds it to the zip archive (*.imscc) it adds a character "a" to the beginning of the file contents.
3.) The only files it doesn't error on are text files, when I really want it to handle any file.
Thank you for any assistance!
Update: I've tried using System.IO.BinaryWriter, with the same errors.
Add-Type -AssemblyName 'System.Windows.Forms'
Add-Type -AssemblyName 'System.IO.Compression'
Add-Type -AssemblyName 'System.IO.Compression.FileSystem'
function Folder-SuffixGenerator($SplitFileCounter)
{
return ' ('+$usrSuffix+' '+$SplitFileCounter+')'
}
function Get-ZipEntryContent(#returns the bytes of the first matching entry
[string] $ZipFilePath, #optional - specify a ZipStream or path
[IO.Stream] $ZipStream = (New-Object IO.FileStream($ZipFilePath, [IO.FileMode]::Open)),
[string] $EntryPath){
$ZipArchive = New-Object IO.Compression.ZipArchive($ZipStream, [IO.Compression.ZipArchiveMode]::Read)
$buf = New-Object byte[] (0) #return an empty byte array if not found
$ZipArchive.GetEntry($EntryPath) | ?{$_} | %{ #GetEntry returns first matching entry or null if there is no match
$buf = New-Object byte[] ($_.Length)
Write-Verbose " reading: $($_.Name)"
$_.Open().Read($buf,0,$buf.Length)
}
$ZipArchive.Dispose()
$ZipStream.Close()
$ZipStream.Dispose()
return ,$buf
}
function Add-ZipEntry(#Adds an entry to the $ZipStream. Sample call: Add-ZipEntry -ZipFilePath "$PSScriptRoot\temp.zip" -EntryPath Test.xml -Content ([text.encoding]::UTF8.GetBytes("Testing"))
[string] $ZipFilePath, #optional - specify a ZipStream or path
[IO.Stream] $ZipStream = (New-Object IO.FileStream($ZipFilePath, [IO.FileMode]::OpenOrCreate)),
[string] $EntryPath,
[byte[]] $Content,
[switch] $OverWrite, #if specified, will not create a second copy of an existing entry
[switch] $PassThru ){#return a copy of $ZipStream
$ZipArchive = New-Object IO.Compression.ZipArchive($ZipStream, [IO.Compression.ZipArchiveMode]::Update, $true)
$ExistingEntry = $ZipArchive.GetEntry($EntryPath) | ?{$_}
If($OverWrite -and $ExistingEntry){
Write-Verbose " deleting existing $($ExistingEntry.FullName)"
$ExistingEntry.Delete()
}
$Entry = $ZipArchive.CreateEntry($EntryPath)
$WriteStream = New-Object System.IO.StreamWriter($Entry.Open())
$WriteStream.Write($Content,0,$Content.Length)
$WriteStream.Flush()
$WriteStream.Dispose()
$ZipArchive.Dispose()
If($PassThru){
$OutStream = New-Object System.IO.MemoryStream
$ZipStream.Seek(0, 'Begin') | Out-Null
$ZipStream.CopyTo($OutStream)
}
$ZipStream.Close()
$ZipStream.Dispose()
If($PassThru){$OutStream}
}
$NoDeleteFiles = @('files_meta.xml', 'course_settings.xml', 'assignment_groups.xml', 'canvas_export.txt', 'imsmanifest.xml')
Set-Variable usrSuffix -Option ReadOnly -Value 'part' -Force
$MaxImportFileSize = 1000
$compressionLevel = [System.IO.Compression.CompressionLevel]::Optimal
$SplitFileCounter = 1
$FileBrowser = New-Object System.Windows.Forms.OpenFileDialog
$FileBrowser.filter = "Canvas Export Files (*.imscc)| *.imscc"
[void]$FileBrowser.ShowDialog()
$FileBrowser.FileName
$FilePath = $FileBrowser.FileName
$OriginalArchivePath = $FilePath.Substring(0,$FilePath.Length-6)
$PartFileDirectoryName = $OriginalArchive + (Folder-SuffixGenerator($SplitFileCounter)) + '.imscc'
$CourseZip = [IO.Compression.ZipFile]::OpenRead($FilePath)
$CourseZipFiles = $CourseZip.Entries | Sort Length -Descending
$CourseZip.Dispose()
<#
$SortingTable = $CourseZip.Entries | Select FullName,
@{Name="Size";Expression={$_.Length}},
@{Name="CompressedSize";Expression={$_.CompressedLength}},
@{Name="PctZip";Expression={[math]::Round(($_.CompressedLength/$_.Length)*100,2)}} |
Sort Size -Descending | Format-Table -AutoSize
#>
# Add mandatory files
ForEach($entry in $CourseZipFiles)
{
if ($NoDeleteFiles.Contains($entry.Name)){
Write-Output ("Adding to Zip " + $entry.FullName)
# Add to Zip
$fileBytes = Get-ZipEntryContent -ZipFilePath $FilePath -EntryPath $entry.FullName
Add-ZipEntry -ZipFilePath ($OriginalArchivePath + $PartFileDirectoryName) -EntryPath $entry.FullName -Content $fileBytes
}
}
System.IO.StreamWriter is a text writer, and therefore not suitable for writing raw bytes. Cannot convert value "507" to type "System.Byte" indicates that an inappropriate attempt was made to convert text - a .NET string composed of [char] instances which are in effect [uint16] code points (range 0x0 - 0xffff) - to [byte] instances (0x0 - 0xff). Therefore, any Unicode character whose code point is greater than 255 (0xff) will cause this error.
The solution is to use a .NET API that allows writing raw bytes, namely System.IO.BinaryWriter:
$WriteStream = [System.IO.BinaryWriter]::new($Entry.Open())
$WriteStream.Write($Content)
$WriteStream.Flush()
$WriteStream.Dispose()
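In the Add-ZipEntry function that would mean swapping out the StreamWriter block, roughly like this (a sketch of the write section only; the rest of the function stays as it is):
$Entry = $ZipArchive.CreateEntry($EntryPath)
# BinaryWriter writes the byte array as raw bytes instead of encoding it as text
$WriteStream = [System.IO.BinaryWriter]::new($Entry.Open())
$WriteStream.Write($Content)
$WriteStream.Flush()
$WriteStream.Dispose()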
I am very new to PowerShell scripting. I am trying to get SSAS Tabular model connection string details for multiple servers. The code I have only works for a single server. How do I modify the code to pass multiple servers?
$servername = "servername1"
# Connect SSAS Server
$server = New-Object Microsoft.AnalysisServices.Server
$server.connect($servername)
$DSTable = @();
foreach ( $db in $server.databases)
{
$dbname = $db.Name
$Srver = $db.ParentServer
foreach ( $ds in $db.Model.DataSources)
{
$hash = @{
"Server" = $Srver;
"Model_Name" = $dbname ;
"Datasource_Name" = $ds.Name ;
"ConnectionString" = $ds.ConnectionString ;
"ImpersonationMode" = $ds.ImpersonationMode;
"Impersonation_Account" = $ds.Account;
}
$row = New-Object psobject -Property $hash
$DSTable += $row
}
}
As commented, you can surround the code you have in another foreach loop.
Using array concatenation with += is a bad idea, because on each addition, the entire array needs to be recreated in memory, so that is both time and memory consuming.
Best thing is to let PowerShell do the heavy lifting of collecting the data:
$allServers = 'server01','server02','server03' # etc. an array of servernames
# loop through the servers array and collect the output in variable $result
$result = foreach($servername in $allServers) {
# Connect SSAS Server
$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect($servername)
foreach ( $db in $server.databases) {
foreach ( $ds in $db.Model.DataSources) {
# output an object with the desired properties
[PsCustomObject]@{
Server = $db.ParentServer
Model_Name = $db.Name
Datasource_Name = $ds.Name
ConnectionString = $ds.ConnectionString
ImpersonationMode = $ds.ImpersonationMode
Impersonation_Account = $ds.Account
}
}
}
}
# output on screen
$result | Out-GridView -Title 'SSAS connection string details'
# output to a CSV file (change the path and filename here of course..)
$result | Export-Csv -Path 'D:\Test\MySSAS_Connections.csv' -UseCulture -NoTypeInformation
The above uses the -UseCulture parameter so that the delimiter used for the CSV file is the one your machine expects when you double-click the file to open it in Excel. Without it, the default comma is used.
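If you'd rather keep the server names out of the script, you could also read them from a plain text file with one name per line (the path here is just an example):
# hypothetical path; the file contains one server name per line
$allServers = Get-Content -Path 'C:\Temp\ssas_servers.txt'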
I have been researching this for weeks now and can't seem to make much ground on the subject. I have a large PDF (900+ pages) that is the result of a mail merge: 900+ copies of the same one-page document, with the only difference being someone's name at the bottom. What I am trying to do is have a PowerShell script read the document using itextsharp and save pages that contain a specific string (the person's name) into their respective folder.
This is what I have managed so far.
Add-Type -Path C:\scripts\itextsharp.dll
$reader = New-Object iTextSharp.text.pdf.PdfReader -ArgumentList "$pwd\downloads\TMs.pdf"
for($page = 1; $page -le $reader.NumberOfPages; $page++) {
$pageText = [iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($reader,$page).Split([char]0x000A)
if($PageText -match 'DAN KAGAN'){
Write-Host "DAN FOUND"
}
}
As you can see I am only using one name for now for testing. The script finds the name properly 10 times. What I cannot seem to find any information on, is how to extract pages that this string appears on.
I hope this was clear. If I can be of any help, please let me know.
Thanks!
I actually just finished writing a very similar script. With my script, I need to scan a PDF of report cards, find a student's name and ID number, and then extract that page and name it appropriately. However, each report card can span multiple pages.
It looks like you're using iTextSharp 5, which is good because so am I. iTextSharp 7's syntax is wildly different and I haven't learned it yet.
Here's the logic that does the page extraction, roughly:
$Document = [iTextSharp.text.Document]::new($PdfReader.GetPageSizeWithRotation($StartPage))
$TargetMemoryStream = [System.IO.MemoryStream]::new()
$PdfCopy = [iTextSharp.text.pdf.PdfSmartCopy]::new($Document, $TargetMemoryStream)
$Document.Open()
foreach ($Page in $StartPage..$EndPage) {
$PdfCopy.AddPage($PdfCopy.GetImportedPage($PdfReader, $Page));
}
$Document.Close()
$NewFileName = 'Elementary Student Record - {0}.pdf' -f $Current.Student_Id
$NewFileFullName = [System.IO.Path]::Combine($OutputFolder, $NewFileName)
[System.IO.File]::WriteAllBytes($NewFileFullName, $TargetMemoryStream.ToArray())
Here is the complete working script. I've removed as little as possible to provide you a near working example:
Import-Module -Name SqlServer -Cmdlet Invoke-Sqlcmd
Add-Type -Path 'C:\...\itextsharp.dll'
# Get table of valid student IDs
$ServerInstance = '...'
$Database = '...'
$Query = @'
select student_id, student_name from student
'@
$ValidStudents = @{}
Invoke-Sqlcmd -Query $Query -ServerInstance $ServerInstance -Database $Database -OutputAs DataRows | ForEach-Object {
[void]$ValidStudents.Add($_.student_id.trim(), $_.student_name)
}
$PdfFiles = Get-ChildItem "G:\....\*.pdf" -File |
Select-Object -ExpandProperty FullName
$OutputFolder = 'G:\...'
$StudentIDSearchPattern = '(?mn)^(?<Student_Id>\d{6,7}) - (?<Student_Name>.*)$'
foreach ($PdfFile in $PdfFiles) {
$PdfReader = [iTextSharp.text.pdf.PdfReader]::new($PdfFile)
$StudentStack = [System.Collections.Stack]::new()
# Map out the PDF file.
foreach ($Page in 1..($PdfReader.NumberOfPages)) {
[iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($PdfReader, $Page) |
Where-Object { $_ -match $StudentIDSearchPattern } |
ForEach-Object {
$StudentStack.Push([PSCustomObject]@{
Student_Id = $Matches['Student_Id']
Student_Name = $Matches['Student_Name']
StartPage = $Page
IsValid = $ValidStudents.ContainsKey($Matches['Student_Id'])
})
}
}
# Extract the pages and save the files
$LastPage = $PdfReader.NumberOfPages
while ($StudentStack.Count -gt 0) {
$Current = $StudentStack.Pop()
$StartPage = $Current.StartPage
$EndPage = $LastPage
$Document = [iTextSharp.text.Document]::new($PdfReader.GetPageSizeWithRotation($StartPage))
$TargetMemoryStream = [System.IO.MemoryStream]::new()
$PdfCopy = [iTextSharp.text.pdf.PdfSmartCopy]::new($Document, $TargetMemoryStream)
$Document.Open()
foreach ($Page in $StartPage..$EndPage) {
$PdfCopy.AddPage($PdfCopy.GetImportedPage($PdfReader, $Page));
}
$Document.Close()
$NewFileName = 'Elementary Student Record - {0}.pdf' -f $Current.Student_Id
$NewFileFullName = [System.IO.Path]::Combine($OutputFolder, $NewFileName)
[System.IO.File]::WriteAllBytes($NewFileFullName, $TargetMemoryStream.ToArray())
$LastPage = $Current.StartPage - 1
}
}
In my test environment this processes about 500 students across 5 source PDFs in about 15 seconds.
I tend to use constructors instead of New-Object, but there's no real difference between them. I just find them easier to read.
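If it helps, here's roughly how I'd adapt that to your situation (one page per person, matching on a name). This is only a sketch assuming iTextSharp 5; the output folder and the file naming are placeholders for your per-person folder logic:
Add-Type -Path C:\scripts\itextsharp.dll
$SourcePdf    = "$pwd\downloads\TMs.pdf"
$OutputFolder = "$pwd\downloads\split"      # hypothetical output location
$Name         = 'DAN KAGAN'
New-Item -ItemType Directory -Force -Path $OutputFolder | Out-Null
$reader = [iTextSharp.text.pdf.PdfReader]::new($SourcePdf)
for ($page = 1; $page -le $reader.NumberOfPages; $page++) {
    $pageText = [iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($reader, $page)
    if ($pageText -match $Name) {
        # copy just this page into its own single-page PDF
        $document = [iTextSharp.text.Document]::new($reader.GetPageSizeWithRotation($page))
        $memory   = [System.IO.MemoryStream]::new()
        $copy     = [iTextSharp.text.pdf.PdfSmartCopy]::new($document, $memory)
        $document.Open()
        $copy.AddPage($copy.GetImportedPage($reader, $page))
        $document.Close()
        $outFile = Join-Path $OutputFolder ('{0} - page {1}.pdf' -f $Name, $page)
        [System.IO.File]::WriteAllBytes($outFile, $memory.ToArray())
    }
}
$reader.Close()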
I'm trying to create a Powershell script that will setup a brand new workspace in a temporary location, do a GetLatest on selected solutions/projects, and download the source code so that I can then trigger further build/versioning operations.
I think I have the script more or less right, but the problem is every time I run this, it tells me there were 0 operations... i.e. I already have the latest versions. This results in nothing at all being downloaded.
Can anyone see what I'm doing wrong?
$subfolder = [System.Guid]::NewGuid().ToString()
$tfsServer = "http://tfsserver:8080/tfs"
$projectsAndWorkspaces = @(
@("$/Client1/Project1","D:\Builds\$subfolder\Client1\Project1"),
@("$/Client1/Project2","D:\Builds\$subfolder\Client1\Project2")
)
$tfsCollection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($tfsServer)
$tfsVersionCtrl = $tfsCollection.GetService([type] "Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer")
$tfsWorkspace = $tfsVersionCtrl.CreateWorkspace($subfolder, $tfsVersionCtrl.AuthorizedUser)
Write-Host "Operations:"
foreach ($projectAndWs in $projectsAndWorkspaces)
{
if (-not(Test-Path $projectAndWs[1]))
{
New-Item -ItemType Directory -Force -Path $projectAndWs[1] | Out-Null
}
$tfsWorkspace.Map($projectAndWs[0], $projectAndWs[1])
$recursion = [Microsoft.TeamFoundation.VersionControl.Client.RecursionType]::Full
$itemSpecFullTeamProj = New-Object Microsoft.TeamFoundation.VersionControl.Client.ItemSpec($projectAndWs[0], $recursion)
$fileRequest = New-Object Microsoft.TeamFoundation.VersionControl.Client.GetRequest($itemSpecFullTeamProj, [Microsoft.TeamFoundation.VersionControl.Client.VersionSpec]::Latest)
$getStatus = $tfsWorkspace.Get($fileRequest, [Microsoft.TeamFoundation.VersionControl.Client.GetOptions]::Overwrite)
Write-Host ("[{0}] {1}" -f $getStatus.NumOperations, ($projectAndWs[0].Substring($projectAndWs[0].LastIndexOf("/") + 1)))
}
Write-Host "Finished"
The value $tfsServer = "http://tfsserver:8080/tfs" should point at a collection, i.e. $tfsServer = "http://tfsserver:8080/tfs/nameOfACollection".
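For example, if your collection still has the default name (adjust to whatever yours is actually called):
$tfsServer = "http://tfsserver:8080/tfs/DefaultCollection"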
The "$/Client1/Project1" string smells. I would add a backtick before the dollar sign so it is not read as a variable or use single quotes.
Backtick
"`$/Client1/Project1"
Single quote
'$/Client1/Project1'