Note: Get-Host reports Version 2.0 for my PowerShell session. On a Windows Server 2008 machine, I executed the code below to create the Domainlist.xml file in the System32 folder:
Start-Process -FilePath "C:\Windows\System32\rendom.exe" -ArgumentList "/list"
I'm using the query below to read the XML, but it errors out:
$path = "C:\Users\074129\Desktop\Domainlist.xml"
$xml = [xml](Get-Content -Path $path)
$xml.Forest.Domain.NetBiosName[2]
Error message: Cannot index into a null array
No value is returned even for the $xml.Forest command.
Please help me read and replace the node value in the Domainlist.xml file.
Let me know if any additional details are required from my end.
If I understood your situation correctly, you cannot get the NetBiosName node from Domainlist.xml. There is a scenario in which all NetBiosName nodes are empty, which is why you should check them first (the reasons for this behavior vary):
$xml = 'C:\Users\074129\Desktop\Domainlist.xml'
if (($col = Select-Xml -Path $xml -XPath '//NetBiosName') -is [Array]) {
    $col[2].Node
    # or, if you need to find all non-empty nodes:
    # $col | Where-Object { $_.Node.'#text' -ne $null }
}
This is a known problem with old PowerShell versions. If I remember correctly, it should work if you load the XML manually instead:
$path = "C:\Users\074129\Desktop\Domainlist.xml"
$xml = New-Object -TypeName "System.Xml.XmlDocument"
$xml.Load($path)
$xml.Forest.Domain.NetBiosName[2]
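The question also asks about replacing a node value. Once the document is loaded this way, a node can be selected, edited, and written back. A minimal sketch, assuming the second Domain element is the one to change; the XPath index and the value 'NEWNAME' are placeholders, not taken from the question:

```powershell
$path = "C:\Users\074129\Desktop\Domainlist.xml"
$xml = New-Object -TypeName "System.Xml.XmlDocument"
$xml.Load($path)

# Select the target node via XPath (XPath indices are 1-based),
# assign new inner text, and save the document back to disk
$node = $xml.SelectSingleNode('//Domain[2]/NetBiosName')
$node.InnerText = 'NEWNAME'
$xml.Save($path)
```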
I want to print PDF files on different printers, depending on their content.
How can I check whether a specific single word is present in a file?
To work through a folder's content, I've built the following so far:
Unblock-File -Path S:\test\itextsharp.dll
Add-Type -Path S:\test\itextsharp.dll
$files = Get-ChildItem S:\test\*.pdf
$adobe='C:\Program Files (x86)\Adobe\Acrobat DC\Acrobat\Acrobat.exe'
foreach ($file in $files) {
    $reader = [iTextSharp.text.pdf.parser.PdfTextExtractor]
    $Extract = $reader::GetTextFromPage($File.FullName, 1)
    if ($Extract -Contains 'Lieferschein') {
        Write-Host -ForegroundColor Yellow "Lieferschein"
        $printername = 'XX1'
        $drivername = 'XX1'
        $portname = '192.168.X.41'
    } else {
        Write-Host -ForegroundColor Yellow "Etikett"
        $printername = 'XX2'
        $drivername = 'XX2'
        $portname = '192.168.X.42'
    }
    $arglist = '/S /T "' + $file.FullName + '" "' + $printername + '" "' + $drivername + '" "' + $portname + '"'
    Start-Process $adobe -ArgumentList $arglist -Wait
    Start-Sleep -Seconds 15
    Remove-Item $file.FullName
}
For now I have two problems with it:
1st: Add-Type -Path itextsharp.dll gives me an error:
Add-Type: One or more types in the assembly cannot be loaded. Get the LoaderExceptions property for more information. In line: 2 character: 1
I've read that this might be due to the file being blocked, but there is no information about that in the file's properties, and the Unblock-File command doesn't change or solve anything.
After using $error[0].exception.loaderexceptions[0] I get the information that BouncyCastle.Crypto, Version=1.8.6.0 is missing. Unfortunately, I can't find any source for that yet.
2nd: Will if ($Extract -Contains 'Lieferschein') work as I intend? Will it check for the phrase after the Add-Type gets loaded successfully?
Alternatively, the check could depend on the content's format: one type of file is DIN A4-sized, for example, while the other is smaller. If there's an easier way to check for that, you'd make me happy as well.
Thank you in advance!
Searching for a keyword in a PDF using PowerShell and iTextSharp.dll is a very common task. You then just use your conditional logic to send the job to whatever printer you choose.
So, something like this should do:
Add-Type -Path 'C:\path_to_dll\itextsharp.dll'
$pdfs = Get-ChildItem 'C:\path_to_pdfs' -Filter '*.pdf'
$export = 'D:\Temp\PdfExport.csv'
$results = @()
$keywords = @('Keyword1')
foreach ($pdf in $pdfs)
{
    "processing - $($pdf.FullName)"
    $reader = New-Object iTextSharp.text.pdf.pdfreader -ArgumentList $pdf.FullName
    for ($page = 1; $page -le $reader.NumberOfPages; $page++)
    {
        $pageText = [iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($reader, $page).Split([char]0x000A)
        foreach ($keyword in $keywords)
        {
            if ($pageText -match $keyword)
            {
                $response = @{
                    keyword = $keyword
                    file    = $pdf.FullName
                    page    = $page
                }
                $results += New-Object PSObject -Property $response
            }
        }
    }
    $reader.Close()
}
"`ndone"
$results |
Export-Csv $export -NoTypeInformation
Update
As per your comment regarding your error: again, iTextSharp is legacy, and you really need to move to iText 7.
Nonetheless, that is not a PowerShell code issue; it is a missing iTextSharp.dll dependency. Even with iText 7, you need to ensure all the dependencies are on your machine and properly loaded.
As noted in this SO Q&A:
How to use Itext7 in powershell V5, Exception when loading pdfWriter
1st:
After finding the correct version (1.8.6) on nuget.org, the Add-Type commands work perfectly. As expected, I didn't even need the unblock command, since the file was not marked as blocked in its properties. Now the script starts with:
Add-Type -Path 'c:\BouncyCastle.Crypto.dll'
Add-Type -Path 'c:\itextsharp.dll'
2nd:
Regarding the keyword check: I just had to replace -contains with -match in my if clause:
if ($Extract -match 'Lieferschein')
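For context on why that change matters: -contains tests whole-element equality against a collection, while -match performs a regex match against a string, so only the latter finds a word inside a longer text. A minimal illustration:

```powershell
# -contains checks whether a collection contains an element equal to the value;
# a single string counts as a one-element collection, so this is $false:
'Text mentioning Lieferschein somewhere' -contains 'Lieferschein'

# -match performs a regular-expression match against the string, so this is $true:
'Text mentioning Lieferschein somewhere' -match 'Lieferschein'
```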
The task is to update the appSettings in web.config and app.config using PowerShell scripting. After some searching I found a script to update a single file, but not one for multiple files. Can anyone help?
$Config = 'C:\inetpub\wwwroot\TestService\Web.config'
$doc = (Get-Content $Config) -as [Xml]
$obj = $doc.configuration.appSettings.add | where {$_.Key -eq 'SCVMMServerName'}
$obj.value = 'CPVMM02'
$doc.Save($Config)
I can give you a starting point. You can get the line you want to update using -match with Select-String, and select the remaining lines of the file using -notmatch.
Put them in variables, update the line, and store it back in its variable.
Then write both (the modified line and the unmodified remainder) back to the file using Set-Content.
I hope this gives you an idea of how to approach it.
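A minimal sketch of that line-based approach; the file path and key name are taken from the question, and the regex assumes the value sits in a value="..." attribute on the same line as the key:

```powershell
$file = 'C:\inetpub\wwwroot\TestService\Web.config'
$content = Get-Content $file

# Rewrite only the line(s) matching the key; pass every other line through unchanged
$updated = $content | ForEach-Object {
    if ($_ -match 'SCVMMServerName') {
        $_ -replace 'value="[^"]*"', 'value="CPVMM02"'
    } else {
        $_
    }
}

Set-Content -Path $file -Value $updated
```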
There are many ways to do this, for instance:
"C:\inetpub\wwwroot\TestService\Web.config",
"C:\inetpub\wwwroot\TestService\App.config" |
    ForEach-Object {
        $doc = (Get-Content $_) -as [Xml]
        $obj = $doc.configuration.appSettings.add |
            Where-Object { $_.Key -eq 'SCVMMServerName' }
        $obj.value = 'CPVMM02'
        $doc.Save($_)
    }
I'm trying to apply a hash function to all the files inside a folder as a kind of version control. The idea is to generate a text file that lists the name of each file and its checksum. Digging online I found some code that should do the trick (in theory):
$list = Get-ChildItem 'C:\users\public\documents\folder' -Filter *.cab
$sha1 = New-Object System.Security.Cryptography.SHA1CryptoServiceProvider
foreach ($file in $list) {
    $return = "" | Select Name, Hash
    $returnname = $file.Name
    $returnhash = [System.BitConverter]::ToString($sha1.ComputeHash([System.IO.File]::ReadAllBytes($file.Name)))
    $return = "$returnname,$returnhash"
    Out-File -FilePath .\mylist.txt -Encoding Default -InputObject ($return) -Append
}
When I run it however, I get an error because it tries to read the files from c:\users\me\, the folder where I'm running the script. And the file c:\users\me\aa.cab does not exist and hence can't be reached.
I've tried everything that I could think of, but no luck. I'm using Windows 7 with Powershell 2.0, if that helps in any way.
Try with .FullName instead of just .Name.
$returnhash = [System.BitConverter]::ToString($sha1.ComputeHash([System.IO.File]::ReadAllBytes($file.FullName)))
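Putting the fix into the loop from the question, the whole script might look like this. Paths are unchanged from the question; the unused `$return = "" | Select Name, Hash` line is dropped:

```powershell
$list = Get-ChildItem 'C:\users\public\documents\folder' -Filter *.cab
$sha1 = New-Object System.Security.Cryptography.SHA1CryptoServiceProvider
foreach ($file in $list) {
    # .FullName includes the directory, so this works regardless of the
    # location the script is run from
    $hash = [System.BitConverter]::ToString(
        $sha1.ComputeHash([System.IO.File]::ReadAllBytes($file.FullName)))
    Out-File -FilePath .\mylist.txt -Encoding Default -InputObject "$($file.Name),$hash" -Append
}
```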
I have an array of features in my code. Currently I have declared it as $feature = "System","Battery","Signal","Current";
But in the future there can be more features, so I thought of giving my code an option (implemented as a GUI) to add a new feature using the $feature.Add("$new_feature") command.
This works perfectly for that particular run of the script, but when I run the script again, the newly added feature is gone. How can I solve this so that whenever a new feature is added, it remains in the script permanently?
Is this possible?
The simplest approach would be to store the array data in a file:
# read array from file
$feature = @(Get-Content 'features.txt')
# write array back to file
$feature | Set-Content 'features.txt'
You can use $PSScriptRoot to get the location of the script file (so you can store the data file in the same folder). Prior to PowerShell v3 use the following command to determine the folder containing the script:
$PSScriptRoot = Split-Path $MyInvocation.MyCommand.Path -Parent
Another option is to store the data in the registry (easier to locate the data, but a little more complex to handle):
$key = 'HKCU:\some\key'
$name = 'features'
# read array from registry
$feature = @(Get-ItemProperty -Path $key -Name $name -EA SilentlyContinue | Select-Object -Expand $name)
# create registry value if it didn't exist before
if (-not $?) {
    New-ItemProperty -Path $key -Name $name -Type MultiString -Value @()
}
# write array back to registry
Set-ItemProperty -Path $key -Name $name -Value $feature
I'm trying to get a script to query files on an IIS website, then download those files automatically. So far, I have this:
$webclient = New-Object System.Net.webclient
$source = "http://testsite:8005/"
$destination = "C:\users\administrator\desktop\testfolder\"
#The following line returns the links in the webpage
$testcode1 = $webclient.downloadstring($source) -split "<a\s+" | %{ [void]($_ -match "^href=['`"]([^'`">\s]*)"); $matches[1] }
foreach ($line in $testcode1) {
    $webclient.downloadfile($source + $line, $destination + $line)
}
I'm not that good at PowerShell yet, and I get some errors, but I manage to download a couple of test files I threw into my wwwroot folder (the web.config file seems undownloadable, so I'd imagine that's one of my errors). When I tried to change my $source value to a subfolder of my site that had some test text files (e.g. http://testsite:8005/subfolder/), I got errors and no downloads at all. Running $testcode1 gives me the following links in my subfolder:
/subfolder/test2/txt
/
/subfolder/test1.txt
/subfolder/test2.txt
I don't know why it lists the test2 file twice. I figured my problem was that, since it returns the subfolder/file format, I was getting errors because I was trying to download $source + $line, which would essentially be http://testsite:8005/subfolder/subfolder/test1.txt. But when I tried to remedy that by adding a $root value for the root directory of my site and doing foreach ($line in $testcode1) { $webclient.downloadfile($root + $line, $destination + $line) }, I still got errors.
If some of you high-speed gurus can show me the error of my ways, I'd be grateful. I am looking to download all the files in each subfolder of my site, which I know would involve some recursive action, but again, I currently don't have the skill to do that myself. Thank you in advance for helping me out!
The best way to download files from a website is to use:
Invoke-WebRequest -Uri $url
Once you are able to get hold of the HTML, you can parse the content for the links:
$result = (((Invoke-WebRequest -Uri $url).Links | Where-Object {$_.href -like "http*"}) | select href).href
Give it a try. It's simpler than $webclient = New-Object System.Net.webclient.
This is to augment A_N's answer with two examples.
Download this Stackoverflow question to C:/temp/question.htm.
Invoke-RestMethod -Uri stackoverflow.com/q/19572091/1108891 -OutFile C:/temp/question.htm
Download a simple text document to C:/temp/rfc2616.txt.
Invoke-RestMethod -Uri tools.ietf.org/html/rfc2616 -OutFile C:/temp/rfc2616.txt
I made a simple PowerShell script to clone an OpenBSD package repo. It could probably be implemented in other ways or adapted for similar use cases.
GitHub link
# Quick and dirty script to clone a package repo. Only tested against OpenBSD.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$share = "\\172.16.10.99\wmfbshare\obsd_repo\"
$url = "https://ftp3.usa.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/"
cd $share
$packages = Invoke-WebRequest -Uri $url -UseBasicParsing
$dlfolder = "\\172.16.10.99\wmfbshare\obsd_repo\"
foreach ($package in $packages.links.href) {
    if (Get-Item $package -ErrorAction SilentlyContinue) {
        Write-Host "$package already downloaded"
    } else {
        Write-Host "Downloading $package"
        wget "$url/$package" -OutFile "$dlfolder\$package"
    }
}
I would try this:
$webclient = New-Object System.Net.webclient
$source = "http://testsite:8005/"
$destination = "C:\users\administrator\desktop\testfolder\"
# The following line returns the links in the webpage
$testcode1 = $webclient.downloadstring($source) -split "<a\s+" | %{ [void]($_ -match "^href=['`"]([^'`">\s]*)"); $matches[1] }
foreach ($line in $testcode1) {
    # Use a distinct variable name for the target path: PowerShell variable
    # names are case-insensitive, so $Destination and $destination are the same
    $target = Join-Path $destination $line
    # Create the target's folder if it doesn't exist yet
    $folder = Split-Path $target -Parent
    if (!(Test-Path $folder)) {
        New-Item $folder -Type Directory -Force
    }
    $webclient.downloadfile($source + $line, $target)
}
I think your only issue here is that you were grabbing a new file from a new directory, and putting it into a folder that didn't exist yet (I could be mistaken).
You can do some additional troubleshooting if that doesn't fix your problem:
Copy each line individually into your PowerShell window and run them up to the foreach loop. Then type out your variable holding all the gold:
$testcode1
When you enter that into the console, it should spit out exactly what's in there. Then you can do additional troubleshooting like this:
"Attempting to copy $Source$line to $Destination$line"
And see if it looks the way it should all the way on down. You might have to adjust my code a bit.
-Dale Harris