I would like to customise the indentation and presentation of the in-built scalatest methods Given, When, Then, And, and info.
At the moment a test report looks like:
Blah should
  do a test
  + Given a
  + And b
  + When c
  + Then d
  + And e
  + Given f
  + Then g
The HTML output looks similar.
As a first step, I want to optionally indent these reports so they look like:
Blah should
  do a test
  + Given a
    + And b
  + When c
  + Then d
    + And e
  + Given f
  + Then g
I can easily put the spaces after the +, but this (a) looks terrible and (b) doesn't carry over to the HTML reports.
Does anyone know a way? The only information I can find online is how to write an entire reporter from scratch, for both console and HTML output.
I'm ultimately aiming to build a hashtable of the path and ISRC of all the MP3 files in my music library, for use in organising my library. Right now I am having trouble getting the ISRC information out of the files. I have checked that it is there using other software, but I particularly need to read it using PowerShell.
I've tried using a few Get-FileMetaData functions, but I think I was looking in the wrong place with that attempt.
In place of reading it the 'proper' way, I attempted to just read the file as plain text with Get-Content and manipulate the string to isolate the ISRC, which I can find when viewing the file in Notepad. The difficulty I ran into is the way the text is encoded (if that is the right word): there are whitespace characters in between the characters when viewed in Notepad, which don't show up in PowerShell but still seem to count toward the string length.
I would try to provide some code, but all I've had are dead ends, and I think the issue is in my understanding of what I'm working with. If I've skipped over any important information, please let me know. Tagged with unicode on a vague hunch that the string manipulation involves Unicode.
So, how can I properly read the ID3v2 tags using PowerShell (by properly I mean without bodgy string manipulation)? Or, failing that, how can I interpret the raw file contents using PowerShell, i.e. deal with the special characters and whitespace?
Thanks very much.
Raw content example (the piece of interest is the text following 'TSRC'):
ID3 >1TCON ) ÿþS i n g e r & S o n g w r i t r TRCK 1 TPOS 1 TIT2 ÿþv a l e n t i n e TPE1
ÿþD a f n a TXXX ÿþA R T I S T S ÿþD a f n a TALB ÿþv a l e n t i n e TPE2
ÿþD a f n a TLEN 151000TPUB # ÿþM a r g a l i t R e c o r d s TSRC ÿþQ Z 8 L D 1 9 8 6 2 3 3 TXXX - ÿþB A R C O D E ÿþ1 9 3 6 6 4 6 1 1 6 0 3 TYER 2019TDAT 0702APIC ‰ image/jpeg cover ÿØÿà JFIF H H ÿÛ C
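A clue in the dump above: the ÿþ prefix before each text value is the UTF-16 little-endian byte-order mark (bytes FF FE), which explains the "whitespace" between letters: those are the NUL bytes of a two-byte-per-character encoding. A minimal Python sketch of the idea (the frame bytes below are hand-built for illustration, not taken from your file):

```python
# ID3v2 text frames begin with an encoding byte; 0x01 means UTF-16 with BOM.
# After that come the BOM (FF FE) and two bytes per character.
raw = b"\x01\xff\xfeQ\x00Z\x008\x00L\x00D\x001\x009\x008\x006\x002\x003\x003\x00"

assert raw[0] == 0x01            # 0x01 marks UTF-16 text in ID3v2
isrc = raw[1:].decode("utf-16")  # decode() reads the BOM to pick the byte order
print(isrc)                      # -> QZ8LD1986233
```

Decoding with the right codec removes the interleaved NULs, so no string surgery on "Q Z 8 L D ..." is needed.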
Maybe the "Access Music File Metadata in Powershell" answer, which uses taglib.dll, can help you too.
Get-Content has an -Encoding parameter.
If you can work out the encoding of those files, just pass it to that parameter.
It's also worth checking your PowerShell version; I believe this behaviour changed between versions 5 and 6.
I'm trying to understand this code snippet from
https://code.kx.com/q/kb/loading-from-large-files/
so that I can customise it myself (e.g. partition by hours, minutes, number of ticks, ...):
$ cat fs.q
\d .Q
/ extension of .Q.dpft to separate table name & data
/ and allow append or overwrite
/ pass table data in t, table name in n, : or , in g
k)dpfgnt:{[d;p;f;g;n;t]if[~&/qm'r:+en[d]t;'`unmappable];
{[d;g;t;i;x]#[d;x;g;t[x]i]}[d:par[d;p;n];g;r;<r f]'!r;
#[;f;`p#]#[d;`.d;:;f,r#&~f=r:!r];n}
/ generalization of .Q.dpfnt to auto-partition and save a multi-partition table
/ pass table data in t, table name in n, name of column to partition on in c
k)dcfgnt:{[d;c;f;g;n;t]*p dpfgnt[d;;f;g;n]'?[t;;0b;()]',:'(=;c;)'p:?[;();();c]?[t;();1b;(,c)!,c]}
\d .
r:flip`date`open`high`low`close`volume`sym!("DFFFFIS";",")0:
w:.Q.dcfgnt[`:db;`date;`sym;,;`stats]
.Q.fs[w r#]`:file.csv
But I couldn't find any resources that explain it in detail. For example:
if[~&/qm'r:+en[d]t;'`unmappable];
What does it do with the parameter d?
(Promoting this to an answer as I believe it helps answer the question.)
Following on from the comment chain: in order to translate the k code into q code (or simply to understand the k code) you have a few options, none of which is particularly well documented, as that would defeat the purpose of the q language - to be the wrapper which obscures the k language.
Option 1 is to inspect the built-in functions in the .q namespace
q).q
| ::
neg | -:
not | ~:
null | ^:
string | $:
reciprocal| %:
floor | _:
...
Option 2 is to inspect the q.k script which creates the above namespace (be careful not to edit/change this):
vi $QHOME/q.k
Option 3 is to lookup some of the nuggets of documentation on the code.kx website, for example https://code.kx.com/q/wp/parse-trees/#k4-q-and-qk and https://code.kx.com/q/basics/exposed-infrastructure/#unary-forms
Option 4 is to search for reference material for other/similar versions of k, for example k2/k3. They tend to be similar-ish.
A final point to note is that in most of these examples you'll see a colon (:) after the primitives. This colon is required in q/kdb+ to use the monadic form of the primitive (most are heavily overloaded), while in k it is not required to explicitly force the monadic form. This is why where will show as &: in the q reference but will usually be just & in actual k code.
When I try to load the result of a Historical Search (through EAC) into my PowerShell script, I get whitespace in the result between every letter. So, for example, what looked in the original CSV like
Header1, Header2, Header3,
Content1, Content2, Content3
now looks like
" H e a d e r 1 ", " H e a d e r 2 ", " H e a d e r 3 ",
" C o n t e n t 1 ", ...
I already tried re-downloading the files, creating a datatable, etc., but nothing works because the data itself is just wrong.
When I open the CSV in an editor, the whitespace isn't there.
Select statements also don't work.
BTW, the same thing works fine for the traditional message trace CSV:
$trace = Import-Csv W:\Path.csv
If anybody knows what might cause this, I would love to know the fix, since it's driving me crazy.
Update: I checked the CSV in Notepad++ and these are not whitespaces but \0 values.
Any ideas how they got there and why they are there?
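The \0 values are exactly what you see when a UTF-16 file is read as if it were single-byte text: in UTF-16-LE every ASCII character is followed by a zero byte, which some tools render as a space. A small Python sketch of the effect (Python used here just to illustrate the byte layout; in PowerShell the analogous fix would be passing an encoding such as Unicode, i.e. UTF-16-LE, to Get-Content / Import-Csv):

```python
text = "Header1"
utf16 = text.encode("utf-16-le")   # two bytes per character
print(utf16)                        # b'H\x00e\x00a\x00d\x00e\x00r\x001\x00'

# Read back with a wrong single-byte encoding: every other character is NUL,
# so the string "works" but is twice as long and full of \0.
wrong = utf16.decode("latin-1")
assert "\x00" in wrong and len(wrong) == 2 * len(text)

# Read back with the right encoding and the text is intact.
assert utf16.decode("utf-16-le") == "Header1"
```

That also explains why Notepad shows it correctly (it sniffs the encoding) while your script and Select statements see garbage.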
Can anyone help me add multiple DB field values into one field?
Say i have 3 DB fields:
Name
Address
Age
I want to display all 3 fields in the same field:
John Peter 28.
I tried placing the 3 fields next to each other and it did work, but when the text wraps it looks really bad:
Name
Jo.pe.28
hn te
r
My requirement is to show the data in one text field, for example: John.Peter.26
If you want to put them on one line (which I guess is the case), it's straightforward.
Put this as the expression of a text field: $F{Name} + "." + $F{Address} + "." + $F{Age}.toString()
Or you can use string concatenation (I don't personally like the syntax; it takes more effort to understand): $F{Name}.concat(".").concat($F{Address}).concat(".").concat($F{Age}.toString())
The SQL method
Why not concatenate all 3 fields in the query itself, like (assuming you are on Postgres):
select (name || address || age::text) as data from my_table
In iReport
As suggested,
$F{Name} + "." + $F{Address} + "." + $F{Age}.toString()
works too if you need to do it from the report.
Make sure everything you concatenate ends up as the same data type (String).
I'm reading a file in Scala using
import scala.io.Source

def fileToString(that: String): String = {
  var x: String = ""
  for (line <- Source.fromFile(that).getLines) {
    x += line + "\n"
  }
  x
}
This works fine for a Scala source file, but on a .txt file it adds spaces between every character. For example, I read in a .txt file and get this:
C a l l E v e n t L o g ( E r r o r $ , E r r N u m , E r r O b j )
' E n d E r r o r h a n d l i n g b l o c k .
E n d S u b
and when I read in the Scala file for the program, it comes out normally.
EDIT: It seems to be something to do with encoding. When I change it to UTF-16, it reads the .txt file correctly, but not the Scala file. Is there a way to make it universally work?
No, it can't work for all files. To read/interpret a file you need to know its format/encoding, unless you're treating it as a binary blob.
Either save all the files in the same Unicode format (UTF-8 is the usual choice) or specify the encoding when reading each file.
fromFile takes an implicit codec; you can pass one explicitly:
io.Source.fromFile("123.txt")(io.Codec("UTF-16"))
In general, if you read from a file you need to know its encoding in order to correctly read the characters. I am not sure what default encoding Scala assumes, probably UTF-8, but you can either pass a Codec to fromFile or specify the encoding as a string:
io.Source.fromFile("file.txt", "utf-8")
It's hard to be sure, but it sounds like the two files were written with different encodings. On any Unix system (including Mac) you can use the command od to look at the actual bytes in the file.
UTF-8 is the standard for ordinary text files on most systems, but if you have a mix of UTF-8 and UTF-16, you'll have to know which encoding to use for which files and correctly specify the encoding.
Or be more careful when you create the files, to ensure that they are all in the same format.
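One pragmatic way to get closer to "universally work" is to sniff the byte-order mark before decoding, falling back to UTF-8 when there is none. A rough Python sketch of the idea (in Scala you would pass the chosen name to io.Codec; this only covers BOM-marked files, so BOM-less UTF-16 still needs to be specified by hand):

```python
import codecs

def sniff_encoding(data: bytes, default: str = "utf-8") -> str:
    """Guess an encoding from a leading BOM; fall back to a default."""
    if data.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"   # decodes the BOM away instead of keeping it
    if data.startswith(codecs.BOM_UTF16_LE) or data.startswith(codecs.BOM_UTF16_BE):
        return "utf-16"      # decode() uses the BOM to pick the byte order
    return default

# A UTF-16 file written with a BOM is detected; plain ASCII falls through.
assert sniff_encoding("hi".encode("utf-16")) == "utf-16"
assert sniff_encoding(b"plain ascii") == "utf-8"
```

Read the first few bytes of the file, call something like this, then reopen with the detected encoding.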