When I try to load the result of a Historical Search (through the EAC) into my PowerShell script, I get whitespace in the result between every letter. For example, what looked like this in the original CSV:
Header1, Header2, Header3,
Content1, Content2, Content3
now looks like this:
" H e a d e r 1 ", " H e a d e r 2 ", " H e a d e r 3 ",
" C o n t en t 1", ...
I have already tried re-downloading the files, creating a DataTable, and so on, but nothing works because the data itself is wrong.
When I open the CSV in an editor, the whitespace isn't there.
Select statements don't work either.
Incidentally, the same thing works for the traditional message trace CSV:
$trace = Import-Csv W:\Path.csv
If anybody knows what might cause this, I would love to know the fix, since it's driving me crazy.
Update: I checked the CSV in Notepad++ and these are not whitespace characters but \0 (NUL) values.
Any ideas how they got there and why they are there?
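A \0 byte after every character is the classic signature of UTF-16-LE text being read with a single-byte decoder. A minimal Python sketch of what is happening, using simulated data rather than the actual export:

```python
# Simulate the downloaded file: UTF-16-LE puts a NUL byte after
# every ASCII character.
raw = "Header1,Header2".encode("utf-16-le")

# Misread with a single-byte decoder: every other character is '\x00',
# which many tools render as a blank, hence the apparent "whitespace".
misread = raw.decode("latin-1")
assert "\x00" in misread

# Decoded with the correct encoding, the NULs disappear.
fixed = raw.decode("utf-16-le")
assert fixed == "Header1,Header2"
```

In PowerShell terms, reading the file with the matching encoding (for example `Get-Content -Encoding Unicode W:\Path.csv | ConvertFrom-Csv`) should remove the NULs; `Unicode` is .NET's name for UTF-16 LE.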
I would like to customise the indentation and presentation of the built-in ScalaTest methods Given, When, Then, And, and info.
At the moment a test report looks like:
Blah should
do a test
+ Given a
+ And b
+ When c
+ Then d
+ And e
+ Given f
+ Then g
Similarly in the html output.
As a first step, I want to optionally indent these reports so they look like:
Blah should
  do a test
  + Given a
    + And b
  + When c
  + Then d
    + And e
  + Given f
  + Then g
I can easily put the spaces after the +, but this (a) looks terrible and (b) doesn't translate to the HTML reports.
Does anyone know a way? The only information I can find online says to write an entire reporter from scratch, for both console and HTML output.
I'm ultimately aiming to get a hashtable of the path and ISRC of all the MP3 files in my music library, for use in organising the library. Right now I am having trouble getting the ISRC information out of the files. I have checked that it is there using other software, but I particularly need to read it using PowerShell.
I've tried using a few Get-FileMetaData functions, but I think I was looking in the wrong place with that attempt.
Instead of reading it the 'proper' way, I attempted to just read the file as plain text with Get-Content and manipulate the string to isolate the ISRC, which I can find when viewing the file in Notepad. The difficulty I ran into is handling the way the text is encoded (if that is the right word). There are whitespace characters in between the characters when viewed in Notepad, which don't show up in PowerShell but still seem to count toward the string length.
I would try to provide some code, but all I've had are dead ends, and I think the issue is in my understanding of what I'm working with. If I've skipped over any important information, please let me know. Tagged with unicode on a vague hunch that the string manipulation involves unicode.
So, how can I properly read the ID3v2 tags using PowerShell (by properly I mean without bodgy string manipulation), or how can I interpret the raw file contents in PowerShell, i.e. deal with the special characters and whitespace?
Thanks very much.
Raw content example (the piece of interest is the text following 'TSRC'):
ID3 >1TCON ) ÿþS i n g e r & S o n g w r i t r TRCK 1 TPOS 1 TIT2 ÿþv a l e n t i n e TPE1
ÿþD a f n a TXXX ÿþA R T I S T S ÿþD a f n a TALB ÿþv a l e n t i n e TPE2
ÿþD a f n a TLEN 151000TPUB # ÿþM a r g a l i t R e c o r d s TSRC ÿþQ Z 8 L D 1 9 8 6 2 3 3 TXXX - ÿþB A R C O D E ÿþ1 9 3 6 6 4 6 1 1 6 0 3 TYER 2019TDAT 0702APIC ‰ image/jpeg cover ÿØÿà JFIF H H ÿÛ C
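The "ÿþ" visible before each string in the dump is the UTF-16-LE byte order mark (0xFF 0xFE): in ID3v2, each text frame body starts with an encoding byte, and 0x01 means UTF-16 with BOM, which is why every other byte is a NUL when viewed as plain text. A minimal Python sketch (not a full ID3 parser; the payload bytes are hand-built to mirror the TSRC frame in the dump):

```python
# ID3v2 text frame body: one encoding byte, then the string.
# 0x01 = UTF-16 with BOM; 0xFF 0xFE is the UTF-16-LE BOM ("ÿþ").
payload = (b"\x01\xff\xfe"
           b"Q\x00Z\x008\x00L\x00D\x001\x009\x008\x006\x002\x003\x003\x00")

encoding_byte, text_bytes = payload[0], payload[1:]
assert encoding_byte == 0x01          # UTF-16 with BOM
isrc = text_bytes.decode("utf-16")    # the BOM selects the byte order
assert isrc == "QZ8LD1986233"
```

This is why a single-byte read shows "Q Z 8 L D ..." with apparent spaces: those are the NUL halves of each UTF-16 code unit.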
Maybe this answer to "Access Music File Metadata in Powershell", which uses taglib.dll, can help you too.
Get-Content has an -Encoding parameter.
If you can work out the encoding of those files, just pass it to that parameter.
It's also worth checking your PowerShell version; I believe this behaviour changed between versions 5 and 6.
I am trying to save query results to a text file automatically, without looping through the reader object, in VB.NET using an ODBC Windows connection.
But I can't find out how!
This is what I have tried so far:
mCmd = New OdbcCommand( _
"SELECT my_id FROM " & myTable & " WHERE myflag='1' \o 'c:/a_result.txt'", mCon)
n = mCmd.ExecuteNonQuery
But that doesn't work at all.
Please advise, or show a code example of how to do it.
Secondly, it would be ideal if, along with saving the results to text, I also got the number of saved rows in the variable 'n'.
As it is, I only get 0 or 1, depending on whether the query was successful.
EDIT:
After some fighting, I found a way to do this with more or less success.
To txt file:
mCmd = New OdbcCommand( _
"COPY (SELECT my_id FROM " & myTable & " WHERE myFlag='1' " & _
"ORDER BY my_id) TO 'c:/a_result.txt' DELIMITER AS '|'", mCon)
To csv file:
mCmd = New OdbcCommand( _
"COPY (SELECT my_id FROM " & myTable & " WHERE myFlag='1' " & _
"ORDER BY my_id) TO 'c:/a_result.csv' WITH CSV", mCon)
That works, but I am not able to escape quotes and '\', so I get doubled characters in the output file.
If someone with experience knows how to achieve escaping and change the delimiter for CSV files, I would be glad to see it on the given example.
Variable 'n' after query contain a number of exported rows.
The \o sequence is a psql meta-command, i.e. a feature of psql. If you want this functionality, you will have to implement it in your client. It is very easy, though.
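Regarding the escaping and delimiter problem in the edit above: PostgreSQL's COPY accepts an options list in CSV format where DELIMITER, QUOTE, and ESCAPE can all be set explicitly. A sketch in Python that merely composes such a statement (the table and path are examples from the question; in real code, interpolating identifiers like this needs validation):

```python
# Build a PostgreSQL COPY ... TO statement with explicit CSV options.
# Setting ESCAPE to the quote character gives standard CSV doubling
# instead of backslash escapes.
def copy_to_csv(table: str, path: str, delimiter: str = "|") -> str:
    return (
        f"COPY (SELECT my_id FROM {table} WHERE myFlag='1' ORDER BY my_id) "
        f"TO '{path}' WITH (FORMAT csv, DELIMITER '{delimiter}', "
        "QUOTE '\"', ESCAPE '\"')"
    )

stmt = copy_to_csv("mytable", "c:/a_result.csv")
assert "FORMAT csv" in stmt
assert "DELIMITER '|'" in stmt
```

The composed string would then be passed to OdbcCommand exactly like the COPY statements above.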
I have a problem where I have a column of data (codes) in a .csv file (can change format to .xlsx or anything else if needed) that is not all correct. For example, a cell contains the following:
"E86 F03 R64 03 R 64 86 F U "
I would like to keep ONLY the entries in the format <1 character><2-3 digit integer> and remove the rest. Using the above example, I would like to update the cell to look like the following:
"E86 F03 R64"
My main issue is that I cannot figure out how to search the file for a generic pattern like <1 character><2-3 digit integer>. I would also be open to suggestions outside PowerShell, such as an Excel formula. Would anyone be able to help with this?
("E86 F03 R64 03 R 64 86 F U ".split() -match '^[a-z]\d{2,3}$') -join ' '
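For comparison, the same filter expressed in Python (the pattern and the sample cell are taken from the question):

```python
import re

# Split on whitespace, keep only tokens that are exactly one letter
# followed by 2-3 digits, then re-join with single spaces.
cell = "E86 F03 R64 03 R 64 86 F U "
tokens = cell.split()
kept = [t for t in tokens if re.fullmatch(r"[A-Za-z]\d{2,3}", t)]
assert " ".join(kept) == "E86 F03 R64"
```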
I'm reading a file in scala using
def fileToString(that:String):String= {
var x:String=""
for(line <- Source.fromFile(that).getLines){
x += line + "\n"
}
x
}
This works fine for a Scala file, but on a .txt file it adds spaces between every character. For example, I read in a .txt file and get this:
C a l l E v e n t L o g ( E r r o r $ , E r r N u m , E r r O b j )
' E n d E r r o r h a n d l i n g b l o c k .
E n d S u b
and when I read in the Scala source file of the program, it comes out normally.
EDIT: It seems to be something to do with encoding. When I change it to UTF-16, it reads the .txt file but not the Scala file. Is there a way to make it work universally?
No, it can't work for all files. To read/interpret a file you need to know its format/encoding, unless you're treating it as a binary blob.
Either save all files in the usual unicode format (UTF-8) or specify the encoding when reading the file.
fromFile takes an implicit codec, which you can pass explicitly:
io.Source.fromFile("123.txt")(io.Codec("UTF-16"))
In general, if you read from a file you need to know its encoding in order to correctly read the characters. I am not sure what default encoding Scala assumes, probably UTF-8, but you can either pass a Codec to fromFile or specify the encoding as a string:
io.Source.fromFile("file.txt", "utf-8")
It's hard to be sure, but it sounds like the two files were written with different encodings. On any Unix system (including macOS) you can use the od command to look at the actual bytes in the file.
UTF-8 is the standard for ordinary text files on most systems, but if you have a mix of UTF-8 and UTF-16, you'll have to know which encoding to use for which files and correctly specify the encoding.
Or be more careful when you create the files, to ensure that they are all in the same format.
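One middle ground is to sniff the byte order mark before falling back to a default. A small heuristic, sketched in Python for illustration; it is not foolproof (BOM-less UTF-16 slips through as UTF-8), but it distinguishes the common cases:

```python
# Guess an encoding from the first bytes of a file by checking for a
# byte order mark (BOM). Falls back to UTF-8 when no BOM is present.
def sniff_encoding(first_bytes: bytes) -> str:
    if first_bytes.startswith(b"\xff\xfe"):
        return "utf-16-le"
    if first_bytes.startswith(b"\xfe\xff"):
        return "utf-16-be"
    if first_bytes.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"   # UTF-8 with BOM
    return "utf-8"           # default assumption, not a guarantee

assert sniff_encoding(b"\xff\xfeH\x00i\x00") == "utf-16-le"
assert sniff_encoding(b"plain text") == "utf-8"
```

The detected name could then be passed to Source.fromFile as the encoding string, as shown in the answers above.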