PowerShell - Efficient way to keep content and append to the same file?

I want to keep the first comment section lines of a file and overwrite everything else. Currently this section is 27 lines long.
Each line begins with a # (think of it as a giant comment section).
What I want to do is keep the initial comment section, delete everything following the comment section, then append a new string to this file just below this comment section.
I found a way to hardcode it, but I think this is pretty inefficient. I don't think it's best to hardcode 27 as a literal.
The way I've handled it is:
$fileProc = Get-Content $someFile
$keep = $fileProc[0..27]
$keep | Set-Content $someFile
Add-Content $someFile "`n`n# Insert new string here"
Add-Content $someFile "`n EMPTY_PROCESS.EXE"
Is there a more efficient way to handle this?

You can use a switch statement to efficiently extract the section of comment lines at the start.
Set-Content out.txt -Value $(
  @(
    switch -Wildcard -File $someFile {
      '#*'    { $_ }
      default { break } # End of comments section reached.
    }
  ) + "`n`n# Insert new string here", "`n EMPTY_PROCESS.EXE"
)
Note:
To be safe, the above writes to a new file, out.txt, but you can write directly back to $someFile, if desired.
Wildcard expression #* assumes that each line in the comment section starts with #, with no preceding whitespace; if you need to account for preceding whitespace, use the -Regex switch in lieu of -Wildcard, and use regex '^\s*#' in lieu of '#*'.
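For illustration, a minimal sketch of that -Regex variant (writing to out.txt as above, with the same replacement lines):
Set-Content out.txt -Value $(
  @(
    switch -Regex -File $someFile {
      '^\s*#' { $_ }      # keep comment lines, even if indented
      default { break }   # first non-comment line ends the section
    }
  ) + "`n`n# Insert new string here", "`n EMPTY_PROCESS.EXE"
)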

Not sure about limiting it to the first 27 or so lines, but this should work.
The first line below keeps only the lines of the file that start with '#'.
(Get-Content $somefile) | Where { $_ -match "^#" } | Set-Content $somefile
Add-Content $somefile "`n`nblah blah"
Add-Content $somefile "`nglug glug blug glug"
You can then use Add-Content for additional lines. Hope this helps :]

Efficient way [...] pretty inefficient [...] a more efficient way
Don't open the file many times, paying the cost of ACL security and AntiVirus checks and disk access delays.
Avoid PowerShell cmdlets and scriptblocks.
Avoid loops in PowerShell, push work to lower layers.
Avoid heavyweight searches like regex and wildcard.
Avoid making arrays of string for the lines.
Open the file once, do a single linear scan, truncate when the pattern is found, then write the new data. Assuming there are no other comment lines in the data, the pattern is: the last "`n#" is the start of the last comment line, and the newline after that is the cutoff. e.g.:
$f = [System.IO.FileStream]::new('d:\test.txt', 'open')
$content = [System.IO.StreamReader]::new($f).ReadToEnd()
$lastComment = $content.LastIndexOf("`n#")           # start of the last comment line
$nextLine = $content.IndexOf("`n", 1 + $lastComment) # end of that line = the cutoff point
$f.SetLength($nextLine) # truncate everything after the comment section
$w = [System.IO.StreamWriter]::new($f)
$w.WriteLine("new next Line")
$w.Close()
If there could be other comment lines, redesign the file so there is a sentinel value to find - that is easier than finding the absence of a thing.
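Purely as an illustration of that sentinel idea, assuming the file contained a hypothetical marker line such as '# END OF HEADER', the truncation could target it directly:
# Sketch: truncate at a hypothetical '# END OF HEADER' sentinel line
$f = [System.IO.FileStream]::new('d:\test.txt', 'open')
$content = [System.IO.StreamReader]::new($f).ReadToEnd()
$cutoff = $content.IndexOf("`n", $content.IndexOf('# END OF HEADER'))
$f.SetLength($cutoff) # keep everything up to that newline
$w = [System.IO.StreamWriter]::new($f)
$w.WriteLine("new next Line")
$w.Close()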
Compared to mklement0's answer, the FileStream approach above doesn't cost any PowerShell cmdlet startup time, uses no subshells, no wildcard pattern matching, and no arrays of strings, and doesn't open the file twice. On a file with 10,000 comment lines:
your original code takes ~0.4 seconds
mklement0's code takes ~0.04 seconds
this code takes ~0.02 seconds.
A more efficient way - QED.
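If you want to reproduce this kind of comparison yourself (timings vary by machine and file), Measure-Command is an easy way to time each variant; a minimal sketch, assuming $someFile points at a disposable test file:
# Sketch: rough timing of the original Get-Content/Set-Content approach
# (note this rewrites $someFile, so re-create the test file between runs)
$t = Measure-Command {
    $fileProc = Get-Content $someFile
    $keep = $fileProc[0..27]
    $keep | Set-Content $someFile
    Add-Content $someFile "`n`n# Insert new string here"
    Add-Content $someFile "`n EMPTY_PROCESS.EXE"
}
'Original approach: {0:n3} s' -f $t.TotalSeconds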

Related

Read value of variable in .ps1 and update the same variable in another .ps1

I'm trying to find an efficient way to read the value of a string variable in a PowerShell .ps1 file and then update the same variable/value in another .ps1 file. In my specific case, I would update a variable for the version # on script one and then I would want to run a script to update it on multiple other .ps1 files. For example:
1_script.ps1 - Script I want to read variable from
$global:scriptVersion = "v1.1"
2_script.ps1 - script I would want to update variable on (Should update to v1.1)
$global:scriptVersion = "v1.0"
I would want to update 2_script.ps1 to set the variable to "v1.1" as read from 1_script.ps1. My current method is using get-content with a regex to find a line starting with my variable, then doing a bunch of replaces to get the portion of the string I want. This does work, but it seems like there is probably a better way I am missing or didn't get working correctly in my tests.
My Modified Regex Solution Based on Answer by @mklement0:
I slightly modified @mklement0's solution because dot-sourcing the first script was causing it to run:
$file1 = ".\1_script.ps1"
$file2 = ".\2_script.ps1"
$fileversion = (Get-Content $file1 | Where-Object {$_ -match '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+'}).Split("=")[1].Trim().Replace('"','')
(Get-Content -Raw $file2) -replace '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+',$fileversion | Set-Content $file2 -NoNewLine
Generally, the most robust way to parse PowerShell code is to use the language parser. However, reconstructing source code, with modifications after parsing, may situationally be hampered by the parser not reporting the details of intra-line whitespace - see this answer for an example and a discussion.[1]
Pragmatically speaking, using a regex-based -replace solution is probably good enough in your simple case (note that the value to update is assumed to be enclosed in "..." - but matching could be made more flexible to support '...' quoting too):
# Dot-source the first script in order to obtain the new value.
# Note: This invariably executes *all* top-level code in the script.
. .\1_script.ps1
# Outputs to the display.
# Append
# | Set-Content -Encoding utf8 2_script.ps1
# to save back to the input file.
(Get-Content -Raw 2_script.ps1) -replace '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+', $global:scriptVersion
For an explanation of the regex and the ability to experiment with it, see this regex101.com page.
[1] Syntactic elements are reported in terms of line and column position, and columns are character-based, meaning that spaces and tabs are treated the same, so that a difference of, say, 3 character positions can represent 3 spaces, 3 tabs, or any mix of them - the parser won't tell you. However, if your approach allows keeping the source code as a whole while only removing and splicing in certain elements, that won't be a problem, as shown in iRon's helpful answer.
To complement the helpful answer from @mklement0: in case you do go for the PowerShell abstract syntax tree (AST) class, you might use the Extent.StartOffset/Extent.EndOffset properties to reconstruct your script:
Using NameSpace System.Management.Automation.Language
$global:scriptVersion = 'v1.1' # . .\Script1.ps1
$Script2 = { # = Get-Content -Raw .\Script2.ps1
[CmdletBinding()]param()
begin {
$global:scriptVersion = "v1.0"
}
process {
$_
}
end {}
}.ToString()
$Ast = [Parser]::ParseInput($Script2, [ref]$null, [ref]$null)
$Extent = $Ast.Find(
{
$args[0] -is [AssignmentStatementAst] -and
$args[0].Left.VariablePath.UserPath -eq 'global:scriptVersion' -and
$args[0].Operator -eq 'Equals'
}, $true
).Right.Extent
-Join (
$Script2.SubString(0, $Extent.StartOffset),
$global:scriptVersion,
$Script2.SubString($Extent.EndOffset)
) # |Set-Content .\Script2.ps1

How to split a text file into two in PowerShell?

I have one text file with Script that I want to split into two
Below is the dummy script
--serverone
this is first part of my script
--servertwo
this is second part of my script
I want to create two text files that would look like
file1
--serverone
this is first part of my script
file2
--servertwo
this is second part of my script
So far, I have added a special character within the script that I know doesn't exist ("}"):
$script = get-content -Path "C:\Users\shamvil\Desktop\test.txt"
$newscript = $script.Replace("--servertwo","}--servertwo")
$newscript.split("}")
but I don't know how to save the split into two separate places.
This might not be the best approach, so I am also open to different solutions as well.
Please help, thanks!
Use a regex-based -split operation:
$i = 0
(Get-Content -Raw test.txt) -split '(?m)^(?=--)' -ne '' |
ForEach-Object { $fileName = 'file' + (++$i); Set-Content $fileName $_ }
This assumes that each block of lines that starts with a line that starts with -- is to be saved to a separate file.
Get-Content -Raw reads the entire file into a single, multi-line string.
As for the separator regex passed to -split:
The (?m) inline regex option makes anchors ^ and $ match on each line.
^(?=--) therefore matches every line that starts with --, using a look-ahead assertion ((?=...)), which is by definition non-capturing, to ensure that the -- isn't removed from the resulting blocks (by default, what matches the separator regex is not included).
-ne '' filters out the extra empty element that results from the separator expression matching at the very start of the string.
Note that Set-Content knows nothing about the character encoding of the input file and uses its default encoding; use -Encoding as needed.
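For instance, if you knew the output files should be UTF-8, the writing step might be made explicit like this:
# Sketch: same split, but writing the output files as UTF-8 explicitly
$i = 0
(Get-Content -Raw test.txt) -split '(?m)^(?=--)' -ne '' |
ForEach-Object { $fileName = 'file' + (++$i); Set-Content $fileName $_ -Encoding utf8 }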
zett42 points out that the file-writing part can be streamlined with the help of a delay-bind script-block parameter:
$i = 0
(Get-Content -Raw test.txt) -split '(?m)^(?=--)' -ne '' |
Set-Content -LiteralPath { (Get-Variable i -Scope 1).Value++; "file$i" }
The Get-Variable call to access and increment the $i variable in the parent scope is necessary, because delay-bind script blocks (as well as script blocks for calculated properties) run in a child scope - perhaps surprisingly, as discussed in GitHub issue #7157.
A shorter - but even more obscure - option is to use ([ref] $i).Value++ instead; see this answer for details.
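For completeness, the ([ref] $i)-based variant of the same pipeline might look like this (same assumptions as above):
$i = 0
(Get-Content -Raw test.txt) -split '(?m)^(?=--)' -ne '' |
Set-Content -LiteralPath { ([ref] $i).Value++; "file$i" }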
zett42 also points to a proposed future enhancement that would obviate the need to maintain the sequence numbers manually, via the introduction of an automatic $PSIndex variable that reflects the sequence number of the current pipeline object: see GitHub issue #13772.

Replacing contents of a text file using PowerShell

I've looked all around this site and can't quite seem to find anything that fits my situation. Basically, I am trying to write an addition to the NETLOGON file that will replace text in a text file on all of our users' desktops. The current text is static across the board.
The text I want it changed to will be unique to each user. I want to change the current text (user1) to the users AD username (i.e. johnd, janed, etc.). I am using Windows Server 2008 R2 and all the workstations are Windows 7 Professional SP1 64 bit.
Here's what I have tried so far (with a few variables, which none have worked for one reason or the other):
gc c:\Users\%USERNAME%\desktop\VPN.txt' -replace "user1",$env:username | out-file c:\Users\%USERNAME%\desktop\VPN.txt
I didn't get an error, but it also did not go back to the normal "PS C:>" prompt, just ">>>" and the file did not change as anticipated.
If that is how you have the code exactly, then I suppose it is because you have an unmatched single quote. You are still going to have two other problems, though you already have part of the answer in your code. The >>> is the line-continuation prompt: the parser knows that the code is not complete and is giving you the option to continue it. If you were purposely writing a single statement across multiple lines, you would consider this a feature.
$path = "c:\Users\$($env:username)\desktop\VPN.txt"
(Get-Content $path) -replace "user1",$env:username | out-file $path
I enclosed the path in quotes and used a variable since you reference the path twice.
%name% is command-prompt syntax. Environment variables in PowerShell use the $env: provider, which you did use once in your snippet.
-replace is a regex-based replacement operator that can work against Get-Content output, but you need to capture that output in parentheses first so the file is fully read before it is written back.
Also, since -replace is for regex and your search string is not regex-based, you could just use .Replace() instead.
Set-Content is generally preferred over Out-File for performance reasons.
All that being said...
you could also try something like this.
$path = "c:\Users\$($env:username)\desktop\VPN.txt"
(Get-Content $path).Replace("user1",$env:username) | Set-Content $path
Do you want to only replace the first occurrence?
You could use a little regex here with a tweak in how you use Get-Content:
$path = "c:\Users\$($env:username)\desktop\VPN.txt"
(Get-Content $path | Out-String) -replace "(.*?)user1(.*)",('$1{0}$2' -f $env:username) | out-file $path
Regex will match the entire file. There are two groups which it captures.
(.*?) - Up until the first "user1"
(.*) - Everything after that
Then we use the format operator to sandwich the new username in between those capture groups.
Use:
(Get-Content $fileName) | % {
    if ($_.ReadCount -eq 1) {
        $_ -replace "$original", "$content"
    }
    else {
        $_
    }
} | Set-Content $fileName

Count characters in string then insert delimiter using PowerShell

I have a linux server that will be generating several files throughout the day that need to be inserted in to a database; using Putty I can sftp them off to a server running SQL 2008. Problem is is the structure of the file itself, it has a string of text that are to be placed in different columns, but bulk insert in sql tries to put it all in to one column instead of six. Powershell may not be the best method, but I have seen on several sites how it can find and replace or append to the end of the line, can it count and insert?
So the file looks like this: '18240087A +17135555555 3333333333', where 18, 24, 00, 87, A are different columns, then there is a blank space between the A and the +; characters 10-19 are another column, characters 20-30 are a column, characters 31-36 are a space which is a new column, and so on. So I want to insert a '|' or a ',' so that SQL understands where the columns end. Is it possible for PowerShell to count character positions like this and insert delimiters?
This may not be the right way to respond to all who answered; I apologize in advance. As this is my first PowerShell script, I appreciate the input from each of you. This is an Avaya SIP server that is generating CDR records, which I must pull from the server and insert into SQL for later reports. The file exported looks like this:
18:47 10/15
18470214A +14434444444 3013777777 CME-SBC HHHH-CM 4 M00 0
At first I just thought to delete the first line and run a script against the output, which I modified from Kieranties' post:
$test = Get-Content C:\Share\CDR\testCDR.txt
$pattern = "^(.{2})(.{2})(.{1})(.{2})(.{1})(.{1})\s*(.{15})(.{10})\s*(.{7})\s*(.{7})\s*(.{1})\s*(.{1})(.{1})(.{1})\s*(.*)$"
if ($test -match $pattern) {
    $result = $matches.Values | select -first ($matches.Count-1)
    [array]::Reverse($result, 0, $result.Length)
    $result = $result -join "|"
    $result | Out-File c:\Share\CDR\results1.txt
}
But then I realized I need that first line as it contains the date. I can try to work that out another way though.
I also now see that there are times when the file contains 2 or more lines of CDR info, such as:
18:24 10/15
18240087A +14434444444 3013777777 CME-SBC HRSA-CM 4 M00 0
18240096A +14434444445 3013777778 CME-SBC HRSA-CM 4 M00 0
The .ps1 file I made does not output the second string, so I tried adding in this:
foreach ($Data in $test)
{
$Data = $Data -split(',')
and it fails to run. How can I do multiple lines (and possibly that first line)? If you know of a tutorial that can help, that's greatly appreciated as well!
PowerShell is a great tool that I love and it can do many things. I see that you are using SQL Server 2008. Depending on the edition of SQL Server you have running on the server, it most likely has SQL Server Integration Services (SSIS), which is an Extract, Transform, and Load (ETL) tool designed to help migrate data in many scenarios such as yours. The file you describe sounds like a fixed-width file, which SSIS can easily handle and import, and SQL Server has great ways to automate the loads if this is a recurring need (which it sounds like it is), including automating the sftp task and even running PowerShell scripts as part of the ETL (I've done that several times).
If your file truly is fixed-width and you want to use PowerShell to transform it into a delimited file, the regex approach you have in your answer works well. There are also several approaches using the System.String methods, like .Insert(), which lets you insert a delimiter character at a given character index in your line (use Get-Content to read the file and create one String object per line, then loop through them with a foreach loop or ForEach-Object and the pipeline). A slightly more difficult approach would be to use the .Substring() method: you could build your new line by extracting each column with Substring and concatenating those values with a delimiter (a rough sketch of that idea follows below). That's probably a lot for someone new to PowerShell, but one of the best ways to learn and gain proficiency with it is to practice writing the same script multiple ways. You can learn new techniques that may solve other problems you might encounter in the future.
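Purely as an illustration of the .Substring() idea - the (start, length) pairs below are guesses based on the sample line, not the real CDR layout:
# Sketch: build a pipe-delimited line from fixed-width columns.
# Column offsets/widths are assumptions for the sample '18240087A +17135555555 3333333333'.
$line = '18240087A +17135555555 3333333333'
$columns = @(
    @(0, 2), @(2, 2), @(4, 2), @(6, 2), @(8, 1),   # 18 | 24 | 00 | 87 | A
    @(10, 12),                                     # '+17135555555'
    @(23, 10)                                      # '3333333333'
)
$fields = foreach ($c in $columns) {
    $line.Substring($c[0], $c[1]).Trim()
}
$fields -join '|'   # -> 18|24|00|87|A|+17135555555|3333333333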
This is one way (really ugly IMO, I think it can be done better):
$a = '18240087A +17135555555 3333333333'
$b = @( ($a[0..1] -join ''), ($a[2..3] -join ''), ($a[4..5] -join ''),
($a[6..7] -join ''), ($a[8] -join ''), ($a[10..19] -join ''),
($a[20..30] -join ''), ($a[31..36] -join ''))
$c = $b -join '|'
$c
18|24|00|87|A|+171355555|55 33333333|33
I don't know if this is the right splitting you need, but by changing the values in each [x..y] you can make it fit your needs. Remember that character arrays are 0-based, so the first char is index 0 and so on.
I don't quite follow the splitting rules. What kind of software writes the text file anyway? Maybe it can be instructed to change the structure?
That being said, inserting pipes is easy enough with .Insert()
$a= '18240087A +17135555555 3333333333'
$a.Substring(0, $a.IndexOf('+')).Insert(2, '|').insert(5,'|').insert(8, '|').insert(11, '|').insert(13, '|')
# Output: 18|24|00|87|A|
# Rest of the line:
$a.Substring($a.IndexOf('+')+1)
# Output: 17135555555 3333333333
From there you can proceed to splitting the rest of the row data.
I've improved my answer based on your response (note, it's probably best you update your actual question to include that information!)
The nice thing about Get-Content in Powershell is that it returns the content as an array split on the end of line characters. Couple that with allowing multiple assignment from an array and you end up with some neat code.
The following has a function to process each line based on your modified version of my original answer. It's then wrapped by a function which processes the file.
This reads the given file, setting the first line to $date and the rest of the content to $content. It then creates an output file adds the date to the output, then loops over the rest of the content performing the regex check and adding the parsed version of the content if the check is successful.
Function Parse-CDRFileLine {
    Param(
        [string]$line
    )
    $pattern = "^(.{2})(.{2})(.{1})(.{2})(.{1})(.{1})\s*(.{15})(.{10})\s*(.{7})\s*(.{7})\s*(.{1})\s*(.{1})(.{1})(.{1})\s*(.*)$"
    if ($line -match $pattern) {
        $result = $matches.Values | select -first ($matches.Count-1)
        [array]::Reverse($result, 0, $result.Length)
        $result = $result -join "|"
        $result
    }
}
Function Parse-CDRFile {
    Param(
        [string]$filepath
    )
    # Read content, setting first line to $date, the rest to $content
    $date,$content = Get-Content $filepath
    # Create the output file, overwrite if necessary
    $outputFile = New-Item "$filepath.out" -ItemType file -Force
    # Add the date line
    Set-Content $outputFile $date
    # Process the rest of the content
    $content |
        ? { -not([string]::IsNullOrEmpty($_)) } |
        % { Add-Content $outputFile (Parse-CDRFileLine $_) }
}
Parse-CDRFile "C:\input.txt"
I used your sample input and the result I get is:
18:24 10/15
18|24|0|08|7|A|+14434444444 30|13777777 C|ME-SBC |HRSA-CM|4|M|0|0|0
18|24|0|09|6|A|+14434444445 30|13777778 C|ME-SBC |HRSA-CM|4|M|0|0|0
There are an incredible number of resources out there, but one I particularly suggest is Douglas Finke's PowerShell for Developers. It's short, concise, and full of great info that will get you thinking in the right mindset with PowerShell.

Find and Replace in a Large File

I want to find a piece of text in a large xml file and want to replace with some other text. The size of the file is around (50GB). I want to do this in command line. I am looking at PowerShell and want to know if it can handle the large size.
Currently I am trying something like this but it does not like it
Get-Content C:\File1.xml | Foreach-Object {$_ -replace "xmlns:xsi=\"http:\/\/www\.w3\.org\/2001\/XMLSchema-instance\"", ""} | Set-Content C:\File1.xml
The text I want to replace is xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" with an empty string "".
Questions
Can PowerShell handle large files?
I don't want the replace to happen in memory and prefer streaming, assuming that will not bring the server to its knees.
Are there any other approaches I can take (different tools/strategy)?
Thanks
I had a similar need (and similar lack of powershell experience) but cobbled together a complete answer from the other answers on this page plus a bit more research.
I also wanted to avoid the regex processing, since I didn't need it either -- just a simple string replace -- but on a large file, so I didn't want it loaded into memory.
Here's the command I used (adding linebreaks for readability):
Get-Content sourcefile.txt
| Foreach-Object {$_.Replace('http://example.com', 'http://another.example.com')}
| Set-Content result.txt
Worked perfectly! Never sucked up much memory (it very obviously didn't load the whole file into memory), and just chugged along for a few minutes then finished.
Aside from worrying about reading the file in chunks to avoid loading it into memory, you need to dump to disk often enough that you aren't storing the entire contents of the resulting file in memory.
Get-Content sourcefile.txt -ReadCount 10000 |
Foreach-Object {
$line = $_.Replace('http://example.com', 'http://another.example.com')
Add-Content -Path result.txt -Value $line
}
The -ReadCount <number> parameter sets the number of lines to read at a time, and the ForEach-Object then writes each batch of lines as it is read. For a 30GB file filled with SQL inserts, I topped out around 200MB of memory and 8% CPU, while piping it all into Set-Content hit 3GB of memory before I killed it.
It does not like it because you can't read from a file and write back to it at the same time using Get-Content/Set-Content. I recommend using a temp file and then, at the end, renaming file1.xml to file1.xml.bak and renaming the temp file to file1.xml.
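A minimal sketch of that temp-file approach (the .tmp/.bak names are just placeholders):
# Sketch: stream the replacement into a temp file, then swap the files
Get-Content C:\File1.xml |
Foreach-Object { $_.Replace('xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"', '') } |
Set-Content C:\File1.xml.tmp
Rename-Item C:\File1.xml -NewName File1.xml.bak
Rename-Item C:\File1.xml.tmp -NewName File1.xml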
Yes as long as you don't try to load the whole file at once. Line-by-line will work but is going to be a bit slow. Use the -ReadCount parameter and set it to 1000 to improve performance.
Which command line? PowerShell? If so then you can invoke your script like so .\myscript.ps1 and if it takes parameters then c:\users\joe\myscript.ps1 c:\temp\file1.xml.
In general, for regexes I would use single quotes if you don't need to reference PowerShell variables; then you only need to worry about regex escaping and not PowerShell escaping as well. If you do need double quotes, the backtick character is the escape char in double-quoted strings, e.g. "`$p1 is set to $ps1". In your example, single quoting simplifies your regex to (note: forward slashes aren't metacharacters in regex):
'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
Absolutely you want to stream this since 50GB won't fit into memory. However, this poses an issue if you process line-by-line. What if the text you want to replace is split across multiple lines?
If you don't have the split line issue then I think PowerShell can handle this.
This is my take on it, building on some of the other answers here:
Function ReplaceTextIn-File {
    Param(
        $infile,
        $outfile,
        $find,
        $replace
    )
    if (-Not $outfile) {
        $outfile = $infile
    }
    $temp_out_file = "$outfile.temp"
    Get-Content $infile | Foreach-Object { $_.Replace($find, $replace) } | Set-Content $temp_out_file
    if (Test-Path $outfile) {
        Remove-Item $outfile
    }
    Move-Item $temp_out_file $outfile
}
And called like so:
ReplaceTextIn-File -infile "c:\input.txt" -find 'http://example.com' -replace 'http://another.example.com'
The escape character in PowerShell strings is the backtick ( ` ), not backslash ( \ ). I'd give an example, but the backtick is also used by the wiki markup. :(
The only thing you should have to escape is the quotes - the periods and such should be fine without.
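For illustration, a small example of the backtick escaping (an arbitrary string, not taken from the question):
# Backtick escapes the quote and the dollar sign inside a double-quoted string
$name = 'world'
"She said `"hello, $name`" and the literal text `$name"
# Output: She said "hello, world" and the literal text $name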