PowerShell: delete specific lines X to Y

I'm new to PowerShell and I absolutely don't get it ...
I just want to delete lines 7 to 2500 of a text file; the first 6 lines should be untouched.
With Linux bash everything is so easy, just:
sed -i '7,2500d' $file
I did not find any solution for mighty PowerShell :-(
Thank you.

Use Get-Content to read the contents of the file into a variable. The variable can be indexed like a regular PowerShell array. Take the parts of the array you need, then pipe the result into Set-Content to write back to the file.
$file = Get-Content test.log
$keep = $file[0..1] + $file[7..($file.Count - 1)]
$keep | Set-Content test.log
Using this as the contents of the file test.log:
One
Two
Three
Four
Five
Six
Seven
Eight
Nine
This script will output the following into test.log (overwriting the contents):
One
Two
Eight
Nine
In your case, you will want to use $file[0..5] + $file[2500..($file.Count - 1)].
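If the file might have fewer than 2501 lines, be aware that a range like 2500..100 counts downwards in PowerShell, so the second slice would not simply come back empty. A minimal guarded sketch (test.log stands in for your file):
$file = Get-Content test.log
if ($file.Count -gt 2500) {
    # keep lines 1-6 (indexes 0..5) and everything after line 2500
    $keep = $file[0..5] + $file[2500..($file.Count - 1)]
} else {
    # nothing exists after line 2500, so only the first 6 lines remain
    $keep = $file[0..5]
}
$keep | Set-Content test.log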

To remove a series of lines in a text file, you could do something like this:
$fileIn = 'D:\Test\File1.txt'
$fileOut = 'D:\Test\File2.txt'
$startRemove = 7
$endRemove = 2500
$currentLine = 1
# [System.IO.File]::ReadLines needs .NET 4
$newText = foreach ($line in [System.IO.File]::ReadLines($fileIn)) {
    if ($currentLine -lt $startRemove -or $currentLine -gt $endRemove) { $line }
    $currentLine++
}
$newText | Set-Content -Path $fileOut -Force
Or, if your version of .NET is below 4.0:
$currentLine = 1
$reader = [System.IO.File]::OpenText($fileIn)
$newText = while ($null -ne ($line = $reader.ReadLine())) {
    if ($currentLine -lt $startRemove -or $currentLine -gt $endRemove) { $line }
    $currentLine++
}
$reader.Dispose()
$newText | Set-Content -Path $fileOut -Force
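If even holding $newText in memory is a concern, the same .NET 4 ReadLines enumerator can be streamed straight to a StreamWriter; a sketch reusing the path and range variables defined above:
$currentLine = 1
$writer = New-Object System.IO.StreamWriter $fileOut   # note: .NET resolves relative paths against the process directory, so use an absolute path
try {
    foreach ($line in [System.IO.File]::ReadLines($fileIn)) {
        if ($currentLine -lt $startRemove -or $currentLine -gt $endRemove) {
            $writer.WriteLine($line)
        }
        $currentLine++
    }
}
finally {
    $writer.Close()
}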

Select-Object -Index takes an array, so:
1..10 > file
(get-content file) | select -index (0..5) | set-content file
get-content file
1
2
3
4
5
6
Or:
(cat file)[0..5] | set-content file
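Applied to the original question, both keep-ranges can be handed to -Index at once; a sketch, assuming the file has more than 2500 lines:
$lines = Get-Content file
# indexes 0..5 are lines 1-6; index 2500 is line 2501
$lines | Select-Object -Index ((0..5) + (2500..($lines.Count - 1))) | Set-Content file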

Related

Replace first duplicate without regex and increment

I have a text file with the same number appearing 3 times somewhere in it, and I need to increment each occurrence using PowerShell.
Below is my current code.
$duped = Get-Content $file | sort | Get-Unique
while ($duped -ne $null) {
    $duped = Get-Content $file | sort | Get-Unique | Select -Index $dupecount
    $dupefix = $duped + $dupecount
    echo $duped
    echo $dupefix
    (Get-Content $file) | ForEach-Object {
        $_ -replace "$duped", "$dupefix"
    } | Set-Content $file
    echo $dupecount
    $dupecount = [int]$dupecount + [int]"1"
}
Original:
12345678
12345678
12345678
Intended Result:
123456781
123456782
123456783
$filecontent = Get-Content C:\temp\pos\bart.txt
$output = $null
[int]$increment = 1
foreach ($line in $filecontent) {
    if ($line -match '12345679') {
        $line = [int]$line + $increment
        $line
        $output += "$line`n"
        $increment++
    } else {
        $output += "$line`n"
    }
}
$output | Set-Content -Path C:\temp\pos\bart.txt -Force
This works in my test file of 5 lines:
a word
12345679
a second word
12345679
a third word
The output would be:
a word
12345680
a second word
12345681
a third word
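Note that the question's intended result appends the counter as an extra digit (123456781, 123456782, ...) rather than adding it arithmetically. A sketch of that variant, reusing the hypothetical path from above:
$filecontent = Get-Content C:\temp\pos\bart.txt
$output = @()
[int]$counter = 1
foreach ($line in $filecontent) {
    if ($line -match '^12345678$') {
        # append the counter as text instead of adding it numerically
        $output += "$line$counter"
        $counter++
    } else {
        $output += $line
    }
}
$output | Set-Content -Path C:\temp\pos\bart.txt -Force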
Let's see if I understand the question correctly:
You have a file with X-amount of lines:
a word
12345678
a second word
12345678
a third word
You want to catch each instance of 12345678 and increment it by 1 so that it becomes:
a word
12345679
a second word
12345679
a third word
Is that what you are trying to do?

Powershell .csv merge with column remove

Using the code below I am able to merge several .csv files in 5 seconds.
$getFirstLine = $true
Get-ChildItem "C:\my\dir\*.csv" | ForEach-Object {
    $filePath = $_
    $lines = Get-Content $filePath
    $linesToWrite = switch ($getFirstLine) {
        $true  { $lines }
        $false { $lines | Select -Skip 1 }
    }
    $getFirstLine = $false
    Add-Content "C:\my\dir\output_code2.csv" $linesToWrite
}
I would like to take this one step further, preferably using piping, and remove several of the columns using a command like:
select DateAndTime,DG1_KW,DG2_KW,WT_KW,HTR1_KW,POSS_Load_KW,INV1_KW,INV2_SOC|Export-csv output_test.csv -Notypeinformation
those being the column names in the header of each file.
How would I modify this code to make this work? The idea here is that I am going to be working with hundreds up to thousands of files.
I have other code which can do this, but it is nowhere near as fast.
For instance, using 10 .csv files of 450 KB each, the code below takes 20 seconds to process and spit out a .csv file, removing 48 of the 56 columns and leaving the variables I need. If I remove the part of the code that trims the columns, it still takes 12+ seconds.
# Directory containing csv files, include *.*
$directory = "C:\my\dir\*.*";
# Get the csv files
$csvFiles = Get-ChildItem -Path $directory -Filter *.csv;
#$content = $null;
$content = @();
# Process each file
foreach ($csv in $csvFiles)
{
    $content += Import-Csv $csv;
}
# Write a datetime stamped csv file
$datetime = Get-Date -Format "yyyyMMddhhmmss";
$content | Export-Csv -Path "C:\my\dir\output_code2_$datetime.csv" -NoTypeInformation;
The code I would like to modify runs those same 10 files in 5 seconds but does not remove the 48 columns.
Any ideas, guys?
Ok, you want an example... Let's say your CSVs always look like this:
Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8,Col9,Col10
data1,data2,data3,data4,data5,data6,data7,data8,data9,data10
dataA,dataB,dataC,dataD,dataE,dataF,dataG,dataH,dataI,dataJ
Now let's say you only want Col1, Col2, Col6, Col9, and Col10. You could do a RegEx replace something like:
$Files = Get-ChildItem "C:\my\dir\*.csv" | Select -Expand FullName
ForEach ($File in $Files) {
    If ($SkipFirst) {
        Get-Content $File | Select -Skip 1 | ForEach { $_ -replace "^((?:.*?\,){2})(?:.*\,){3}(.*?\,)(?:(?:.*?\,){2})(.*?,.*?)$", '$1$2$3' } | Add-Content "C:\my\dir\output_code2.csv"
    } Else {
        Get-Content $File | ForEach { $_ -replace "^((?:.*?\,){2})(?:.*\,){3}(.*?\,)(?:(?:.*?\,){2})(.*?,.*?)$", '$1$2$3' } | Add-Content "C:\my\dir\output_code2.csv"
    }
}
That would extract just the columns that I noted above. See https://regex101.com/r/jY4oO6/1 for a detailed breakdown of the RegEx string. The effective output would be (skipping the first line if so dictated):
Col1,Col2,Col6,Col9,Col10
data1,data2,data6,data9,data10
dataA,dataB,dataF,dataI,dataJ
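For comparison, if the RegEx feels brittle (it would break on quoted fields that contain commas), the property-based approach from the question can be written as one pipeline, avoiding the += array accumulation that slows the Import-Csv version down; a sketch using the column names from the question:
# one pass, one header: each file's rows stream straight through Select-Object
Get-ChildItem "C:\my\dir\*.csv" |
    ForEach-Object { Import-Csv $_.FullName } |
    Select-Object DateAndTime, DG1_KW, DG2_KW, WT_KW, HTR1_KW, POSS_Load_KW, INV1_KW, INV2_SOC |
    Export-Csv "C:\my\dir\output_test.csv" -NoTypeInformation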

powershell result of 'if condition' write to a outfile

In PowerShell I am writing a script using an 'if' condition to check a folder for files received in the last 2 hours. The code works fine and the output is written to the screen; instead, I want it written to a file which can be emailed.
Request for kind help.
Regards
Abhijeet
EDIT: Code
$f = 'D:\usr\for_check'
$files = ls $f
foreach ($file in $files)
{
    $createtime = $file.CreationTime
    $nowtime = Get-Date
    if (($nowtime - $createtime).totalhours -le 2)
    {
        "$file"
    }
}
You can either use the redirection operator > or Out-File
Examples:
"abc" > c:\out.txt
"abc" | Out-File c:\out.txt
Your code is way too complicated. Something like this would be more PoSh:
$src = "D:\usr\for_check"
$out = "C:\output.txt"
$append = $false
Get-ChildItem $src | ? {
    $_.CreationTime -ge (Get-Date).AddHours(-2)
} | % { $_.Name } | Out-File $out -Append:$append
You will want to use the >> operator instead of > or Out-File, as those overwrite the file every time they are used, whereas the >> operator appends to the file on the next line.
Example:
$file >> c:\out.txt
Writing each line to the file inside the loop can cause a lot of disk I/O.
You can wrap the loop in a script block, and then output all the lines to the file in one write operation.
$f = 'D:\usr\for_check'
$files = ls $f
& {
    foreach ($file in $files)
    {
        $createtime = $file.CreationTime
        $nowtime = Get-Date
        if (($nowtime - $createtime).totalhours -le 2)
        {
            "$file"
        }
    }
} | Set-Content c:\outfile.txt

Replacing a text at specified line number of a file using powershell

Suppose there is a file, for example test.config, that contains the word "WARN" between lines 140 and 170. The word "WARN" also appears on other lines, but I want to replace "WARN" with "DEBUG" only between lines 140 and 170, keep the remaining text of the file the same, and save it, so that "WARN" is replaced by "DEBUG" only in that range and all other text is unaffected.
Look at $_.ReadCount, which will help. Just as an example, I replace only rows 10-15.
$content = Get-Content c:\test.txt
$content |
    ForEach-Object {
        if ($_.ReadCount -ge 10 -and $_.ReadCount -le 15) {
            $_ -replace '\w+','replaced'
        } else {
            $_
        }
    } |
    Set-Content c:\test.txt
After that, the file will contain:
1
2
3
4
5
6
7
8
9
replaced
replaced
replaced
replaced
replaced
replaced
16
17
18
19
20
2 Lines:
$FileContent = Get-Content "C:\Some\Path\textfile.txt"
$FileContent | % { If ($_.ReadCount -ge 140 -and $_.ReadCount -le 170) {$_ -Replace "WARN","DEBUG"} Else {$_} } | Set-Content -Path "C:\Some\Path\textfile.txt"
Description:
Write the content of the text file to the array $FileContent
Pipe the $FileContent array to the ForEach-Object cmdlet ("%")
For each item in the array, check the line number ($_.ReadCount)
If the line number is 140-170, replace WARN with DEBUG; otherwise write the line unmodified
NOTE: You MUST add the Else {$_}. Otherwise the text file will only contain the modified lines.
Use Set-Content to write the content to the text file
Using array slicing (remember that line N of the file is index N-1 of the array):
$content = Get-Content c:\test.txt
$out = @()
$out += $content[0..138]
$out += $content[139..169] -replace "WARN","DEBUG"
$out += $content[170..($content.count - 1)]
$out | Out-File out.txt
This is the test file
text
text
DEBUG
DEBUG
TEXT
--
PS> gc .\stuff1.txt | % { [system.text.regularexpressions.regex]::replace($_,"WARN","DEBUG") } > out.txt
Note that this replaces every WARN in the file, not just lines 140-170. Out.txt looks like this:
text
text
DEBUG
DEBUG
TEXT
Might be trivial but it does the job:
$content = gc "D:\posh\stack\test.txt"
$start = 139
$end = 169
$content | % { $i = 0; $lines = @() } {
    if ($i -ge $start -and $i -le $end) {
        $lines += $_ -replace 'WARN', 'DEBUG'
    }
    else
    {
        $lines += $_
    }
    $i += 1
} { Set-Content test_output.txt $lines }
My script is pretty similar, so I am going to post what I ended up doing.
I had a bunch of servers, all with the same script in the same location, and I needed to update a path in all of the scripts.
I just replaced the entire line (index 3, i.e. the fourth line, in this script) and rewrote the script back out.
My server names and the paths replacing the old path were stored in arrays (you could pull those from a DB if you wanted to automate it more):
$servers = @("Server1","Server2")
$Paths = @("\\NASSHARE\SERVER1\Databackups","\\NASSHARE\SERVER2\Databackups")
$a = 0
foreach ($x in $servers)
{
    $dest = "\\" + $x + "\e$\Powershell\Backup.ps1"
    $newline = '$backupNASPath = "' + $Paths[$a] + '"'
    $lines = @(Get-Content $dest)
    $lines[3] = $newline
    $lines > $dest
    $a++
}
It works, and it saved me a ton of time logging into each server and updating each path by hand.
Cheers

Remove Top Line of Text File with PowerShell

I am trying to just remove the first line of about 5000 text files before importing them.
I am still very new to PowerShell so not sure what to search for or how to approach this. My current concept using pseudo-code:
set-content file (get-content unless line contains amount)
However, I can't seem to figure out how to do something like contains.
While I really admire the answer from @hoge, both for a very concise technique and a wrapper function to generalize it, and I encourage upvotes for it, I am compelled to comment on the other two answers that use temp files (it gnaws at me like fingernails on a chalkboard!).
Assuming the file is not huge, you can force the pipeline to operate in discrete sections--thereby obviating the need for a temp file--with judicious use of parentheses:
(Get-Content $file | Select-Object -Skip 1) | Set-Content $file
... or in short form:
(gc $file | select -Skip 1) | sc $file
It is not the most efficient in the world, but this should work:
get-content $file |
select -Skip 1 |
set-content "$file-temp"
move "$file-temp" $file -Force
Using variable notation, you can do it without a temporary file:
${C:\file.txt} = ${C:\file.txt} | select -skip 1
function Remove-Topline ( [string[]]$path, [int]$skip = 1 ) {
    if ( -not (Test-Path $path -PathType Leaf) ) {
        throw "invalid filename"
    }
    ls $path |
        % { iex "`${$($_.fullname)} = `${$($_.fullname)} | select -skip $skip" }
}
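Usage would look something like this (the paths are hypothetical; -skip defaults to 1):
Remove-Topline 'C:\temp\file.txt'            # drop the first line
Remove-Topline 'C:\temp\*.log' -skip 3       # drop the first three lines of each matching log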
I just had to do the same task, and gc | select ... | sc took over 4 GB of RAM on my machine while reading a 1.6 GB file. It didn't finish for at least 20 minutes after reading the whole file in (as reported by Read Bytes in Process Explorer), at which point I had to kill it.
My solution was to use a more .NET approach: StreamReader + StreamWriter.
See this answer for a great answer discussing the perf: In Powershell, what's the most efficient way to split a large text file by record type?
Below is my solution. Yes, it uses a temporary file, but in my case, it didn't matter (it was a freaking huge SQL table creation and insert statements file):
PS> (Measure-Command {
    $i = 0
    $ins = New-Object System.IO.StreamReader "in/file/pa.th"
    $outs = New-Object System.IO.StreamWriter "out/file/pa.th"
    while ( !$ins.EndOfStream ) {
        $line = $ins.ReadLine();
        if ( $i -ne 0 ) {
            $outs.WriteLine($line);
        }
        $i = $i + 1;
    }
    $outs.Close();
    $ins.Close();
}).TotalSeconds
It returned:
188.1224443
Inspired by AASoft's answer, I went out to improve it a bit more:
Avoid the loop variable $i and the comparison with 0 in every loop
Wrap the execution into a try..finally block to always close the files in use
Make the solution work for an arbitrary number of lines to remove from the beginning of the file
Use a variable $p to reference the current directory
These changes lead to the following code:
$p = (Get-Location).Path
(Measure-Command {
    # Number of lines to skip
    $skip = 1
    $ins = New-Object System.IO.StreamReader ($p + "\test.log")
    $outs = New-Object System.IO.StreamWriter ($p + "\test-1.log")
    try {
        # Skip the first N lines, but allow for fewer than N, as well
        for ( $s = 1; $s -le $skip -and !$ins.EndOfStream; $s++ ) {
            $ins.ReadLine()
        }
        while ( !$ins.EndOfStream ) {
            $outs.WriteLine( $ins.ReadLine() )
        }
    }
    finally {
        $outs.Close()
        $ins.Close()
    }
}).TotalSeconds
The first change brought the processing time for my 60 MB file down from 5.3 s to 4 s. The rest of the changes are more cosmetic.
$x = get-content $file
$x[1..$x.count] | set-content $file
Just that much. A long, boring explanation follows. Get-Content returns an array. We can "index into" array variables, as demonstrated in this and other Scripting Guys posts.
For example, if we define an array variable like this,
$array = @("first item","second item","third item")
so $array returns
first item
second item
third item
then we can "index into" that array to retrieve only its 1st element
$array[0]
or only its 2nd
$array[1]
or a range of index values from the 2nd through the last:
$array[1..$array.count]
(The index $array.count is one past the end of the array, but PowerShell silently skips out-of-range indexes, so this returns the 2nd through the last items.)
I just learned from a website:
Get-ChildItem *.txt | ForEach-Object { (get-Content $_) | Where-Object {(1) -notcontains $_.ReadCount } | Set-Content -path $_ }
Or you can use the aliases to make it short, like:
gci *.txt | % { (gc $_) | ? { (1) -notcontains $_.ReadCount } | sc -path $_ }
Another approach to remove the first line from file, using multiple assignment technique. Refer Link
$firstLine, $restOfDocument = Get-Content -Path $filename
$modifiedContent = $restOfDocument
$modifiedContent | Out-String | Set-Content $filename
Select -Skip didn't work for me, so my workaround is:
$LinesCount = $(get-content $file).Count
get-content $file |
select -Last $($LinesCount-1) |
set-content "$file-temp"
move "$file-temp" $file -Force
Following on from Michael Soren's answer, if you want to edit all .txt files in the current directory and remove the first line from each:
Get-ChildItem (Get-Location).Path -Filter *.txt |
    ForEach-Object {
        (Get-Content $_.FullName | Select-Object -Skip 1) | Set-Content $_.FullName
    }
For smaller files you could use this:
& C:\windows\system32\more +1 oldfile.csv > newfile.csv | out-null
... but it's not very effective at processing my example file of 16MB. It doesn't seem to terminate and release the lock on newfile.csv.