I need to delete multiple empty rows (;;;;;;;) from a CSV file using PowerShell.
I wrote a PowerShell script that I use to create new users in Active Directory. The CSV file is created using Excel. Sometimes there are empty rows in the CSV file. The script runs automatically via Task Scheduler, so I can't check it every day; that's why I really need PowerShell to clean the CSV automatically.
I also want PowerShell to delete all the contents from the CSV file once the users have been created (at the end of the script).
Thx for the help
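A minimal sketch of how that cleanup could look, assuming a hypothetical file path and that ';' is the delimiter (as the ;;;;;;; rows suggest):

$csvPath = 'C:\Scripts\users.csv'   # hypothetical path

# Drop rows that contain nothing but delimiters and whitespace.
(Get-Content $csvPath) |
    Where-Object { $_ -match '[^;\s]' } |
    Set-Content $csvPath

# ... create the users from the cleaned file ...

# At the end of the script, empty the file again.
Clear-Content $csvPath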
I use the video transcoding tools made by Don Melton over on GitHub to compress self-filmed videos. Now I would like to automate this task with a PowerShell script that loops over the contents of a folder, passes each file as an input argument to the tool, and puts the output into a separate folder. My problem is that the tool has no option to specify an output location; it always places the output files in the directory it is called from. So when I cd into an "output" directory "next to" the one containing my input files, I can then call
other-transcode ../input/file.mp4
and the output file of the same name as the input file will be placed in the output directory.
Now, when I want to use the command in a script, how do I tell PowerShell to run it as if it had been typed manually into a shell whose current directory was the output directory?
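One common approach (a sketch only, with hypothetical folder paths) is to change the working directory with Push-Location before invoking the tool and restore it with Pop-Location afterwards:

$inputDir  = 'C:\videos\input'    # hypothetical paths
$outputDir = 'C:\videos\output'

Get-ChildItem -Path $inputDir -Filter *.mp4 | ForEach-Object {
    Push-Location $outputDir      # make the output folder the current directory
    try {
        other-transcode $_.FullName   # the tool writes its result into the current directory
    }
    finally {
        Pop-Location              # restore the previous directory, even if the tool fails
    }
}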
For context, this is my end goal, but I think it is easier to split the complicated question into multiple ones.
1. Extract data from SQL Server table to CSV <- WORKS
2. Zip the CSV (PowerShell) <- WORKS
3. Delete ONLY the CSV (PowerShell) <- FAILS for a row count of 500k, WORKS for a row count of 100k
Step 3 proceeds to delete the CSV file AND the ZIP file.
I can see this happening: the files appear in Explorer, then disappear.
I truly do not understand how this is possible.
Both step 2 and step 3 use the same parameter (with double backslashes) for the working directory. I guess this must work as the file is created and also deleted there.
The command to zip the file in step 2 is an expression:
" compress-archive " + #[param::outputfilename_as_csv] + " " + #[User::outputfilename_as_zip]
where 'outputfilename' is a timestamped filename built from [System::ContainerStartTime] so that the same timestamp is used throughout, and the 'as XXX' variants are the same name with .csv or .zip appended as required.
The command to delete ONLY the csv file is:
" remove-item " + #[param::outputfilename_as_csv]
Obviously the 'delete CSV' part is correct: the working directory, the filename, and the 'delete' command all work and successfully delete the CSV file.
But why is it also deleting the ZIP?
I don't tell 'remove-item' to process the folder, nor have I included any wildcards. Literally just the one, specifically named file.
Even stranger: if I select the TOP 100k rows from the view it works fine, but increase that to TOP 500k and suddenly the ZIP file gets deleted again!
Any ideas?
Other info:
The 'Execute Process' task is used with 'Executable' set to 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'.
I'm using SSDT 2015.
It was not deleting both files. The ZIP creation was crashing, despite the file being well under the PowerShell 2GB limit, and no ZIP file was ever left behind. The end result was that it only appeared to delete both files.
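For anyone hitting the same silent failure, here is a hedged sketch (the parameter names are hypothetical, not the package's actual expressions) of how the zip step could report a Compress-Archive crash back to SSIS via a non-zero exit code instead of failing quietly:

param(
    [string]$CsvPath,   # hypothetical stand-ins for the SSIS expressions
    [string]$ZipPath
)

try {
    # -ErrorAction Stop turns a failed compression into a terminating error we can catch.
    Compress-Archive -Path $CsvPath -DestinationPath $ZipPath -ErrorAction Stop
}
catch {
    Write-Error $_
    exit 1   # the Execute Process task can then fail the step on a non-zero exit code
}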
I have a PowerShell script that generates a report, and I have connected it to an IO.FileSystemWatcher. I am trying to improve its error handling. I already have the report generation function (which only takes a file path) inside a try/catch block that basically kills Word, Excel, and PowerPoint and tries again if it fails. This seems to work well, but I want to embed within that another try/catch block that will restart the computer and generate the report after the reboot if it fails a second consecutive time.
I decided to try and modify the registry after reading this article: https://cmatskas.com/configure-a-runonce-task-on-windows/
My plan is that, within the second try/catch block, I will create a text file called RecoveredPath.txt with the file path as its only contents, and then add something like:
Set-ItemProperty "HKLMU:\Software\Microsoft\Windows\CurrentVersion\RunOnce" -Name '!RecoverReport' -Value "C:\...EmergencyRecovery.bat"
Before rebooting. Within the batch file I have:
set /p RecoveredDir=<RecoveredPath.txt
powershell.exe -File C:\...Report.ps1 %RecoveredDir%
When I try to run the batch script, it doesn't yield any errors but doesn't seem to do anything. I tried adding an echo statement, and it shows the value from the text file is being stored in the variable, but it doesn't seem to be passed to PowerShell correctly. I also tried adding -Path %RecoveredDir%, but that yielded an error (the param in Report.ps1 is named $Path).
What am I doing incorrectly?
One potential problem is that not enclosing %RecoveredDir% in "..." breaks with paths containing spaces and other special characters.
However, the bigger problem is that using the mere file name RecoveredPath.txt means that the file is looked for in whatever the current directory happens to be.
In a comment you state that both the batch file and the input file RecoveredPath.txt are located in your desktop folder.
However, it is not the batch file's location that matters, it's the process' current directory - and that is most likely not your desktop when your batch file auto-runs on startup.
Given that the batch file and the input file are in the same folder and that you can refer to a batch file's full folder path with %~dp0 (which includes a trailing \), modify your batch file to look as follows:
set /p RecoveredDir=<"%~dp0RecoveredPath.txt"
powershell.exe -File C:\...Report.ps1 "%RecoveredDir%"
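For reference, a minimal sketch of the receiving side in Report.ps1 (the script itself isn't shown in the question); with -File, the first positional argument binds to $Path:

# Report.ps1 (sketch)
param(
    [string]$Path
)

Write-Host "Generating report for: $Path"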
Is there an easy way to read a .csv in a VSTS pipeline from a PowerShell script?
I have a script that can tag Azure Resources and it gets the key-value pairs from a .csv file. It works a charm when running it locally and running:
$csv = Import-Csv "d:\tagging\tags.csv"
But I'm struggling to find a way to reference the .csv in VSTS (DevOps Services). I've put the .csv in the same repo/folder as the script, and I've created an Azure PowerShell script task.
I need to know what the Import-Csv should look like if it's in VSTS. Do I need to add additional steps so that the agent downloads the .csv when running the script?
This is the current error:
The hosted agent can't find the file and reports "Could not find file 'D:\a_tasks\AzurePowerShell_72s1a1931b-effb-4d2e-8fd8-f8472a07cb62\3.1.6\tags.csv'."
Let's say you put the file in your repo in the location /AwesomeCSV/MyCSV.csv. Your CSV's location, from a build perspective, would be $(Build.SourcesDirectory)/AwesomeCSV/MyCSV.csv.
So basically, pass in $(Build.SourcesDirectory)/AwesomeCSV/MyCSV.csv to the script as an argument, or reference it as an environment variable in your script as $env:BUILD_SOURCESDIRECTORY.
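A small sketch of how the script could resolve that path at run time, reusing the /AwesomeCSV/MyCSV.csv example location from above:

# Resolve the CSV relative to the sources checked out on the build agent.
$csvPath = Join-Path $env:BUILD_SOURCESDIRECTORY 'AwesomeCSV/MyCSV.csv'
$csv = Import-Csv $csvPath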
Is there a way to use the "Copy" command in PowerShell to copy a source file to a destination folder without locking the file being copied?
The reason for asking is that the file being copied is also an input file for a separate process; if that process starts while the PowerShell script is running, it will fail if the script has locked the input file it uses.
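One possible direction (a sketch only, not a confirmed answer, with hypothetical paths) is to bypass Copy-Item and stream the bytes yourself, opening the source with FileShare ReadWrite so other processes can still open the file during the copy:

$sourcePath = 'C:\data\input.txt'     # hypothetical paths
$destPath   = 'D:\staging\input.txt'

# Open the source for reading while allowing other processes read/write access.
$source = [System.IO.File]::Open($sourcePath, 'Open', 'Read', 'ReadWrite')
try {
    $dest = [System.IO.File]::Open($destPath, 'Create', 'Write', 'None')
    try {
        $source.CopyTo($dest)
    }
    finally {
        $dest.Dispose()
    }
}
finally {
    $source.Dispose()
}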