Sum of even rows - PowerShell

I am new to PowerShell and I would like a program that sums the even rows from a text file, for example:
12 12
14 15
13 14
14+15=29
I have already tried get-content | measure -sum, but it did not work.

You can do the following assuming your text file is named sum.txt:
Get-Content sum.txt -ReadCount 2 |
    Where-Object Count -eq 2 |
    ForEach-Object {
        $numbers = $_[-1] -split ' '
        [int]$numbers[0] + [int]$numbers[1]
    }
Explanation:
The -ReadCount 2 parameter reads two lines at a time and passes them into the pipeline as a single object (an array of two lines).
The Where clause prevents a trailing, unpaired line from being output when the file has an odd number of lines.
$_ is the current pipeline object in the Foreach-Object block. Since two lines are passed in, $_[-1] will return the last line of the two.
-split ' ' separates the two numbers into an array because they are space-separated. Since -split returns strings, we need to cast each one to a numeric type ([int]) in order to perform an arithmetic addition rather than string concatenation.
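If the even rows may hold more than two numbers (an assumption beyond the question's two-column sample), the same -ReadCount pairing can hand the split tokens to Measure-Object instead of indexing them individually; a sketch:

```powershell
# Sample input from the question, written here so the snippet is self-contained.
Set-Content sum.txt '12 12', '14 15', '13 14'

Get-Content sum.txt -ReadCount 2 |
    Where-Object Count -eq 2 |
    ForEach-Object {
        # $_[-1] is the even-numbered line of each pair; casting the split
        # tokens lets Measure-Object sum any number of values per line.
        ([double[]]($_[-1] -split '\s+') | Measure-Object -Sum).Sum
    }   # -> 29
```

The [double[]] cast makes the numeric conversion explicit rather than relying on Measure-Object to coerce the strings.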

Related

In PowerShell, how can I replace a value with its matching value from another CSV?

I have two csv files. Acts.csv contains the values I want to change:
Activities,personId
132137,35030;20001
132138,17776
132139,13780
132140,37209
132141,30067;5124;35030;17776;13780;20001;15545;37209;17190
132142,30067;5124;35030;17776;
132187,17776
132188,5124
132189,30067;5124;35030;17776;13780;20001
I want to change the personId to the corresponding value in the rand column of the file rands.csv below:
rand,personId
24830,30067
4557,5124
30795,35030
19711,17776
15481,13780
42181,20001
17331,15545
32468,37209
39411,17190
So, the output (first four lines anyway) should look like this:
Activities,personId
132137,30795;42181
132138,19711
132139,15481
132140,32468
This answer looks like a good start, but do I need to put some kind of loop in the find string?
[regex]::Replace($appConfigFile, "{{(\w*)}}",{param($match) $dictionaryObject[$($match.Groups[1].Value)]})
Yes, you need a loop over the rows of the first file, and some kind of lookup to find the replacement for each id.
$acts = convertfrom-csv "Activities,personId
132137,35030;20001
132138,17776
132139,13780
132140,37209
132141,30067;5124;35030;17776;13780;20001;15545;37209;17190
132142,30067;5124;35030;17776;
132187,17776
132188,5124
132189,30067;5124;35030;17776;13780;20001" -Delimiter ','
$rand = convertfrom-csv "rand,personId
24830,30067
4557,5124
30795,35030
19711,17776
15481,13780
42181,20001
17331,15545
32468,37209
39411,17190" -Delimiter ','
$acts | select -PipelineVariable act | foreach {
    # split personids in each input object
    $act.personid -split ';' | foreach {
        $rand | where personid -eq $_ # find object in rand by personid
    } | Join-String -Property rand -Separator ';' # join replacements back to a string
} | select @{n='Activities';e={$act.Activities}}, @{n='NewPerson';e={$_}}, @{n='OldPerson';e={$act.PersonId}}
Please note that the order of values in the new string is not guaranteed, and no errors are raised if there is no matching value.
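An alternative sketch (the $map name and the trimmed two-row samples are mine): build a hashtable from the rand data once, which gives constant-time lookups and keeps the replacement order deterministic:

```powershell
# Build an id -> rand lookup once; hashtable access is O(1) per id.
$map = @{}
ConvertFrom-Csv "rand,personId`n30795,35030`n42181,20001" |
    ForEach-Object { $map[$_.personId] = $_.rand }

ConvertFrom-Csv "Activities,personId`n132137,35030;20001" |
    ForEach-Object {
        # Replace each id in place, preserving the original order of the ids.
        $_.personId = ($_.personId -split ';' | ForEach-Object { $map[$_] }) -join ';'
        $_   # Activities=132137, personId=30795;42181
    }
```

Ids missing from the map still produce empty entries silently, so the no-match caveat above applies here as well.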

PowerShell: Import-Csv, rename all headers

In our company there are many users and many applications with restricted access, and a database with evidence of those accesses. I don't have access to that database, but what I do have is an automatically generated (once a day) CSV file with all accesses of all my users. I want them to have a chance to check their access situation, so I am writing a simple PowerShell script for this purpose.
CSV:
user;database1_dat;database2_dat;database3_dat
john;0;0;1
peter;1;0;1
I can do:
import-csv foo.csv | where {$_.user -eq $user}
But this will show me the original ugly headers (with the "_dat" suffix). Can I delete the last four characters from every header that ends with "_dat", when I can't predict how many headers there will be tomorrow?
I am aware of calculated property like:
Select-Object @{ expression={$_.database1_dat}; label='database1' }
but i have to know all column names for that, as far as I know.
Am I condemned to over-engineer it with a separate function that builds the whole calculated-property expression from scratch dynamically, or is there a simple way I am missing?
Thanks :-)
Assuming that file foo.csv fits into memory as a whole, the following solution performs well:
If you need a memory-throttled - but invariably much slower - solution, see Santiago Squarzon's helpful answer or the alternative approach in the bottom section.
$headerRow, $dataRows = (Get-Content -Raw foo.csv) -split '\r?\n', 2
# You can pipe the result to `where {$_.user -eq $user}`
ConvertFrom-Csv ($headerRow -replace '_dat(?=;|$)'), $dataRows -Delimiter ';'
Get-Content -Raw reads the entire file into memory, which is much faster than reading it line by line (the default).
-split '\r?\n', 2 splits the resulting multi-line string into two: the header line and all remaining lines.
Regex \r?\n matches a newline (both a CRLF (\r\n) and a LF-only newline (\n))
, 2 limits the number of tokens to return to 2, meaning that splitting stops once the 1st token (the header row) has been found, and the remainder of the input string (comprising all data rows) is returned as-is as the last token.
$headerRow -replace '_dat(?=;|$)'
-replace '_dat(?=;|$)' uses a regex to remove any _dat column-name suffixes (followed by a ; or the end of the string); if substring _dat only ever occurs as a name suffix (not also inside names), you can simplify to -replace '_dat'
ConvertFrom-Csv directly accepts arrays of strings, so the cleaned-up header row and the string with all data rows can be passed as-is.
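To see the lookahead in isolation, here is a minimal check of what the -replace does to the sample header row:

```powershell
$headerRow = 'user;database1_dat;database2_dat;database3_dat'
# (?=;|$) is a lookahead: _dat is removed only when a ';' or the end of
# the string follows, so _dat occurring inside a name would be left alone.
$headerRow -replace '_dat(?=;|$)'   # -> user;database1;database2;database3
```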
Alternative solution: algorithmic renaming of an object's properties:
Note: This solution is slow, but may be an option if you only extract a few objects from the CSV file.
As you note in the question, use of Select-Object with calculated properties is not an option in your case, because you neither know the column names nor their number in advance.
However, you can use a ForEach-Object command in which you use .psobject.Properties, an intrinsic member, for reflection on the input objects:
Import-Csv -Delimiter ';' foo.csv | where { $_.user -eq $user } | ForEach-Object {
    # Initialize an aux. ordered hashtable to store the renamed
    # property name-value pairs.
    $renamedProperties = [ordered] @{}
    # Process all properties of the input object and
    # add them with cleaned-up names to the hashtable.
    foreach ($prop in $_.psobject.Properties) {
        $renamedProperties[($prop.Name -replace '_dat$')] = $prop.Value
    }
    # Convert the aux. hashtable to a custom object and output it.
    [pscustomobject] $renamedProperties
}
You can do something like this:
$textInfo = (Get-Culture).TextInfo
$headers = (Get-Content .\test.csv | Select-Object -First 1).Split(';') |
    ForEach-Object {
        $textInfo.ToTitleCase($_) -replace '_dat'
    }

$user = 'peter'
Get-Content .\test.csv | Select-Object -Skip 1 |
    ConvertFrom-Csv -Delimiter ';' -Header $headers |
    Where-Object User -EQ $user
User Database1 Database2 Database3
---- --------- --------- ---------
peter 1 0 1
Not super efficient, but it does the trick.

Using get-content to rename a folder?

I am attempting to rename a folder based on the first 10 characters inside a file using a powershell command.
I got as far as pulling the data I need for the rename, but I don't know how to pass it.
Get-Content 'C:\DATA\Company.dat' |
Select-Object -first 10 |
rename 'C:\DATA\FOLDER' 'C:\DATA\FOLDER (first 10)'
The part I'm stuck on is (first 10); I don't know what to pass to that section to complete my task.
Select-Object -first 10 will take the first 10 objects. In your case this will be the first 10 lines of the file, not 10 characters.
You can use something like this
Rename-Item -Path 'C:\DATA\FOLDER' -NewName "C:\DATA\$((Get-Content 'C:\DATA\Company.dat' | Select-Object -first 1).Substring(0,10))"
Using -first 1 to get the first line and .Substring(0,10) to get the first 10 characters.
Edit:
Or, as @AdminOfThings mentioned, without the Select-Object:
Rename-Item -Path 'C:\DATA\FOLDER' -NewName "C:\DATA\$((Get-Content 'C:\DATA\Company.dat' -raw).Substring(0,10))"
To complement Michael B.'s helpful answer with a 3rd approach:
If the characters of interest are known to be all on the 1st line (which is a safe assumption in your case), you can use Get-Content -First 1 ... (same as: Get-Content -TotalCount 1 ...) to retrieve that 1st line directly (and exclusively), which:
performs better than Get-Content ... | Select-Object -First 1
avoids having to read the entire file into memory with Get-Content -Raw ...
Rename-Item 'C:\DATA\FOLDER' `
"FOLDER $((Get-Content -First 1 C:\DATA\Company.dat).Substring(0, 10))"
Note:
It is sufficient to pass only the new name to Rename-Item's 2nd positional argument (the -NewName parameter); e.g., FOLDER 1234567890, not the whole path. While you can pass the whole path, it must refer to the same location as the input path.
The substring-extraction command is embedded inside an expandable string ("...") by way of $(...), the subexpression operator.
As for what you tried:
Select-Object -First 10 gets the first 10 input objects, which are the file's lines output by Get-Content; in other words: you'll send 10 lines rather than 10 characters through the pipeline, and even if they were characters, they'd be sent one by one.
While it is possible to solve this problem in the pipeline, it would be cumbersome and slow:
-join ( # -join, given an array of chars., returns a string
Get-Content -First 1 C:\DATA\Company.dat | # get 1st line
ForEach-Object ToCharArray | # convert to a char. array
Select-Object -First 10 # get first 10 chars.
) |
Rename-Item 'C:\DATA\FOLDER' { 'FOLDER ' + $_ }
That said, you could transform the above into something faster and more concise:
-join (Get-Content -First 1 C:\DATA\Company.dat)[0..9] |
Rename-Item 'C:\DATA\FOLDER' { 'FOLDER ' + $_ }
Note:
Get-Content -First 1 returns (at most) 1 line, in which case PowerShell returns that line as-is, not wrapped in an array.
Indexing into a string ([...]) with the range operator (..) - e.g., [0..9] - implicitly extracts the characters at the specified positions as an array; it is as if you had called .ToCharArray()[0..9].
Note how the new name is determined via a delay-bind script-block argument ({ ... }) in which $_ refers to the input object (the 10-character string, in this case); it is this technique that enables a renaming command to operate on multiple inputs, where each new name is derived from the specific input at hand.
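One caveat worth sketching (my addition, not part of either answer): the file's first 10 characters might include characters that are invalid in a folder name, so stripping them defensively before the rename can be worthwhile:

```powershell
# Characters such as : * ? " < > | \ / are invalid in Windows folder names;
# if the data file can contain them, strip them before building the new name.
$raw = 'AB:CD*EF12'   # stand-in for (Get-Content -First 1 Company.dat).Substring(0, 10)
$clean = $raw -replace '[\\/:*?"<>|]'
$clean   # -> ABCDEF12
# Rename-Item 'C:\DATA\FOLDER' "FOLDER $clean"
```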

Delete all lines which contain a duplicate word

I want to delete all the lines that contain one string and keep just the last line.
Eg:
a 1
a 2
a 3
b 1
b 2
I want to delete a 1, a 2, b 1 and keep only the last lines: a 3 and b 2.
I have tried something in PowerShell, but without success:
gc 1.txt | sort | get-unique
Assuming that you want to:
consider lines that share the same word at the start (a or b in your example) as a group,
and return the last line from each such group,
use the Group-Object cmdlet:
Get-Content 1.txt | Group-Object { (-split $_)[0] } | ForEach-Object { $_.Group[-1] }
{ (-split $_)[0] } uses a property expression, via a script block ({ ... }), rather than a property name, as the grouping criterion.
-split $_ splits each input line ($_) into an array of substrings by whitespace.
(...)[0] extracts the 1st token, i.e. the first whitespace-separated token on the line (a or b, in your sample data).
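For very large files, a non-pipeline sketch of the same grouping idea: an ordered hashtable in which the last line seen for each first word wins (the inline array stands in for Get-Content 1.txt):

```powershell
# The last line stored under each key overwrites earlier ones;
# [ordered] preserves the first-seen order of the keys.
$last = [ordered]@{}
foreach ($line in 'a 1', 'a 2', 'a 3', 'b 1', 'b 2') {
    $last[(-split $line)[0]] = $line
}
$last.Values   # -> a 3, b 2
```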
As for what you tried (showing your command with aliases expanded):
Get-Content 1.txt | Sort-Object | Get-Unique
Your Sort-Object and Get-Unique calls both operate on the full lines, which is not your intent: since all lines are unique when considered in full, they are all output.
Note that Sort-Object has a -Unique switch, so the following would come closer to what you want, but it wouldn't allow you to control which of the lines that share the same first word to return:
# !! INCORRECT, because you don't control which of the duplicates
# !! is returned, given that sorting is based on only the *first* word
# !! on each line.
PS> Get-Content 1.txt | Sort-Object { (-split $_)[0] } -Unique
a 1
b 1

Adding numbers into 2 totals and putting each into its own variable

Hope you can help me with this little puzzle.
I have ONE txt file looking like this:
firstnumbers
348.92
237
230
329.31
secondnumbers
18.21
48.92
37
30
29.31
So: a txt file with one column that has 2 header strings and some numbers, one per line.
I want to take the total of each group of numbers and put it into its own variable, say $a and $b.
Yes, it is 1 column; just to make sure there is no misunderstanding.
It's pretty easy if I use 2 files, each with one column of numbers and without the headers (strings):
$a = (Get-Content 'firstnumbers.txt' | Measure-Object -Sum).Sum
$b = (Get-Content 'secondnumbers.txt' | Measure-Object -Sum).Sum
But it would be a little cooler to have them in one txt file, like the one above, with a header over each group of numbers.
I've tried removing the headers with e.g. $a.Replace("first", $null).Replace("sec", $null) and then doing $b.Split(" ")[1,2,3,4,5], ending with | measure -sum.
That gives me the correct total of firstnumbers, but it won't work unless I keep that specific set of numbers each time. They'll change, and there will be more or fewer of them.
It should be pretty easy, I'm guessing. I just can't seem to wrap my head around it at the moment.
Any advice would be awesome!
cheers
Something like this should work:
$file = "C:\path\to\your.txt"
[IO.File]::ReadAllText($file) | % {
    $_ -replace "`n+([0-9])", ' $1' -split "`n"
} | ? { $_ -ne "" } | % {
    $a = $_ -split " ", 2
    $v = $a[1] -split " " | Measure-Object -Sum
    "{0}`t{1}" -f ($a[0], $v.Sum)
}
Output:
firstnumbers 1145,23
secondnumbers 163,44
Here's another approach: rather than parsing the text as one big blob, you can test each line to see whether it holds a number or text; if it's text, it triggers the creation of a new entry in a hashtable where the sums are stored:
PS C:\Temp> Get-Content .\numbers.txt | foreach {
    $val = 0
    if ([Decimal]::TryParse($_, [ref]$val)) {
        $sums[$key] += $val
    } else {
        $sums += @{ "$_" = 0 } # add new entry to hashtable
        $key = $_
    }
} -End { $sums }
Name Value
---- -----
secondnumbers 163.44
firstnumbers 1145.23
Edit: As noted in the comments, the $sums variable persists between runs, which causes problems if you run this command twice. You could call Remove-Variable sums after each run, or add it to the -End processing block like this:
PS C:\Temp> Get-Content .\numbers.txt | foreach {
    $val = 0
    if ([Decimal]::TryParse($_, [ref]$val)) {
        $sums[$key] += $val
    } else {
        $sums += @{ "$_" = 0 } # add new entry to hashtable
        $key = $_
    }
} -End { $sums; Remove-Variable sums }
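To land the two totals in $a and $b as the question asked, the same one-pass idea can be written as a plain foreach that re-initializes its state on every run (the inline array stands in for Get-Content .\numbers.txt; the invariant-culture TryParse overload is my addition, so '.' is always treated as the decimal separator):

```powershell
# Re-initialized on every run, so repeated invocations stay correct.
$sums = [ordered]@{}
$key = $null
$lines = 'firstnumbers', '348.92', '237', '230', '329.31',
         'secondnumbers', '18.21', '48.92', '37', '30', '29.31'
foreach ($line in $lines) {
    $val = [decimal]0
    if ([decimal]::TryParse($line, 'Number', [cultureinfo]::InvariantCulture, [ref]$val)) {
        $sums[$key] += $val
    } else {
        $key = $line            # a non-numeric line starts a new group
        $sums[$key] = [decimal]0
    }
}
$a = $sums['firstnumbers']    # -> 1145.23
$b = $sums['secondnumbers']   # -> 163.44
```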