Display Superscript in SSRS reports - ssrs-2008

I'm working on SSRS 2008.
I want to display a date as 1st January 2011, but the "st" should be in superscript, not plain like "1st".
Is there any way to display "st", "nd", "rd" and "th" in superscript without installing any custom font?

Just copy and paste from the following list:
ABC⁰ ¹ ² ³ ⁴ ⁵ ⁶ ⁷ ⁸ ⁹ ⁺ ⁻ ⁼ ⁽ ⁾
ABC₀ ₁ ₂ ₃ ₄ ₅ ₆ ₇ ₈ ₉ ₊ ₋ ₌ ₍ ₎
ABCᵃ ᵇ ᶜ ᵈ ᵉ ᶠ ᵍ ʰ ⁱ ʲ ᵏ ˡ ᵐ ⁿ ᵒ ᵖ ʳ ˢ ᵗ ᵘ ᵛ ʷ ˣ ʸ ᶻ
ABCᴬ ᴮ ᴰ ᴱ ᴳ ᴴ ᴵ ᴶ ᴷ ᴸ ᴹ ᴺ ᴼ ᴾ ᴿ ᵀ ᵁ ᵂ
ABCₐ ₑ ᵢ ₒ ᵣ ᵤ ᵥ ₓ
ABC½ ¼ ¾ ⅓ ⅔ ⅕ ⅖ ⅗ ⅘ ⅙ ⅚ ⅛ ⅜ ⅝ ⅞ № ℠ ™ © ®
ABC^ ± ¶

Maybe...
You are limited to what can be done with String.Format, and font size and spacing apply to the whole text box, so there is no "native" superscript.
However, superscript characters exist in Unicode, so you may be able to do it with an expression that concatenates characters. I'd suggest custom code.
I haven't tried this, but these articles mention it:
http://beyondrelational.com/blogs/jason/archive/2010/12/06/subscripts-and-superscripts-in-ssrs-reports.aspx
http://www.codeproject.com/KB/reporting-services/SSRSSuperscript.aspx
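Untested sketch: without any custom code, a plain expression can concatenate the Unicode superscript characters from the list above (the field name MyDate is only a placeholder):
=CStr(Day(Fields!MyDate.Value))
 & Switch(
     Day(Fields!MyDate.Value) Mod 100 >= 10 And Day(Fields!MyDate.Value) Mod 100 <= 20, "ᵗʰ",
     Day(Fields!MyDate.Value) Mod 10 = 1, "ˢᵗ",
     Day(Fields!MyDate.Value) Mod 10 = 2, "ⁿᵈ",
     Day(Fields!MyDate.Value) Mod 10 = 3, "ʳᵈ",
     True, "ᵗʰ")
 & " " & Format(Fields!MyDate.Value, "MMMM yyyy")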

I am not looking for credit here, as the above solution has answered it for you, but for beginners' sake: I use a code function within my report.
So in my SQL, say I have a Number field; I then add a new field, OrdinalNumber:
SELECT ..., Number,
CASE WHEN (Number % 100) BETWEEN 10 AND 20 THEN 4
WHEN (Number % 10) = 1 THEN 1
WHEN (Number % 10) = 2 THEN 2
WHEN (Number % 10) = 3 THEN 3
ELSE 4 END AS OrdinalNumber,
...
Then my code function:
Function OrdinalText(ByVal OrdinalNumber As Integer) As String
    Dim result As String
    Select Case OrdinalNumber
        Case 1
            result = "ˢᵗ"
        Case 2
            result = "ⁿᵈ"
        Case 3
            result = "ʳᵈ"
        Case Else
            result = "ᵗʰ"
    End Select
    Return result
End Function
Then in the report textbox I use the expression:
=CStr(Fields!Number.Value) & Code.OrdinalText(Fields!OrdinalNumber.Value)

How to count the number of elements in parts of a text file using a loop in Perl?

I'm looking for a way to create a Perl script that counts the elements in my text file, doing it in parts. For example, my text file has this form:
ID Position Potential Jury agreement NGlyc result
(PART 1)
NP_073551.1_HCoV229Egp2 23 NTSY 0.5990 (8/9) +
NP_073551.1_HCoV229Egp2 62 NTSS 0.7076 (9/9) ++
NP_073551.1_HCoV229Egp2 171 NTTI 0.5743 (5/9) +
...
(PART 2)
QJY77946.1_NA 20 NGTN 0.7514 (9/9) +++
QJY77946.1_NA 23 NTSH 0.5368 (5/9) +
QJY77946.1_NA 51 NFSF 0.7120 (9/9) ++
QJY77946.1_NA 62 NTSS 0.6947 (9/9) ++
...
(PART 3)
QJY77954.1_NA 20 NGTN 0.7694 (9/9) +++
QJY77954.1_NA 23 NTSH 0.5398 (5/9) +
QJY77954.1_NA 51 NFSF 0.7121 (9/9) ++
...
(PART N°...)
As you can see, the ID is the same within each part (one for PART 1, another for PART 2, and so on). Only the columns Position / Potential / Jury agreement / NGlyc result change. My main goal is to count the lines with Potential >= 0.7.
With this in mind, I'm looking for output like this:
Part 1:
1 (one value >= 0.7)
Part 2:
2 (two values >= 0.7)
Part 3:
2 (two values >= 0.7)
Part N°:
X values >= 0.7
This output tells me the number of positive values (>= 0.7) for each ID.
The pseudocode I believe would be something like this:
foreach ID in LIST
foreach LINE in FILE
if (ID is in LINE)
... count the line ...
end foreach LINE
end foreach ID
I'm looking for any suggestions (a package or a script idea) or comments to help create a better script.
Thanks! Best!
To count the number of lines, for each part, that match some condition on a certain column, you can just loop over the lines, skip the header, parse the part number, and use an array to count the number of lines matching for each part.
After this you can just loop over the counts recorded in the array and print them out in your specific format.
#!/usr/bin/perl
use strict;
use warnings;

my $part = 0;
my @cnt_part;

while (my $line = <STDIN>) {
    if ($. == 1) {
        next;                                  # skip the header line
    } elsif ($line =~ m{^\(PART (\d+)\)}) {
        $part = $1;                            # remember the current part number
    } else {
        my @cols = split(m{\s+}, $line);
        if (@cols == 6) {                      # only complete data lines
            my $potential = $cols[3];
            if ($potential >= 0.7) {
                $cnt_part[$part]++;
            }
        }
    }
}

for my $i (1 .. $#cnt_part) {
    print "Part $i:\n";
    print "$cnt_part[$i] (values >= 0.7)\n";
}
To run it, just pipe the entire file through the Perl script:
cat in.txt | perl count.pl
and you get an output like this:
Part 1:
1 (values >= 0.7)
Part 2:
2 (values >= 0.7)
Part 3:
2 (values >= 0.7)
If you also want to display the counts as words, you can use Lingua::EN::Numbers (see this program) and get output very similar to the one in your post:
Part 1:
1 (one value >= 0.7)
Part 2:
2 (two values >= 0.7)
Part 3:
2 (two values >= 0.7)
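A minimal sketch of that word-count variant (assuming Lingua::EN::Numbers is installed from CPAN; the counts are simply the ones computed above):
#!/usr/bin/perl
use strict;
use warnings;
use Lingua::EN::Numbers qw(num2en);

my @cnt_part = (undef, 1, 2, 2);   # counts per part, index 0 unused
for my $i (1 .. $#cnt_part) {
    my $n = $cnt_part[$i];
    printf "Part %d:\n%d (%s value%s >= 0.7)\n", $i, $n, num2en($n), $n == 1 ? '' : 's';
}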
All the code in this post is also available here.

Analyze weather data stored in csv

I have some weather data stored in a csv file in the form of: „id, date, temperature, rainfall“, with id being the weather station and, obviously, date being the date of measurement. The file contains the data of 3 different stations over a period of 10 years.
What I'd like to do is analyze the data of each station and each year. For example: I'd like to calculate day-to-day differences in temperature [abs((n+1)-n)] for each station and each year.
I thought while-loops could be a possibility, with the loop calculating something as long as the id value is equal to the one in the next row.
But I’ve no idea how to do it.
Best regards
If you still need assistance, I would consider importing the .csv file data using readtable. As long as only the first row is text, MATLAB will create a table variable (this shouldn't be an issue for a .csv file). The individual columns can be accessed via tablename.header and can be re-established as double data type (e.g. variable_1 = tablename.header). You can then concatenate your dataset as you like. As for sorting by date and station id, I would advocate using sortrows. For example, if the station id is the first column, sortrows(data,1) will sort data by the station id, and sortrows(data,[1 2]) will sort data by the first column, then by the second column. From there, you can write an if statement to compare the station ids and perform the required calculations (see the sketch right after the code below). I hope my brief answer is somewhat helpful.
A basic code structure would be:
path=['copy and paste file path here']; % show matlab where to look
data=readtable([path '\filename.csv'], 'ReadVariableNames',1); % read the file from csv format to table
variable1=data.header1 % general example of making double type variable from table
variable2=data.header2
variable3=data.header3
double_data=[variable1 variable2 variable3]; % concatenates the three columns together
sorted_data=sortrows(double_data, [1 2]); % sorts double_data by column 1 then column 2
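A rough, untested sketch of that last step, using the sorted_data matrix from above (the column order is an assumption: 1 = station id, 2 = date, 3 = temperature; rows within a station are assumed to be consecutive days):
temp_diff = nan(size(sorted_data,1), 1); % preallocate; NaN where no previous day exists
for k = 2:size(sorted_data,1)
    if sorted_data(k,1) == sorted_data(k-1,1) % same station as the previous row
        temp_diff(k) = abs(sorted_data(k,3) - sorted_data(k-1,3)); % day-to-day temperature difference
    end
end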
It always helps to have actual data to work on and specifics as to what kind of output format is expected. Basically, ins and outs :) With the little info provided, I figured I would generate random data for you in the first section, and then calculate some stats in the second. I include the loop as an example since that's what you asked, but I highly recommend using vectorized calculations whenever available, such as the one done in summary stats.
%% example for weather stations
% generation of random data to correspond to what your csv file looks like
rng(1); % keeps the random seed for testing purposes
nbDates = 1000; % number of days of data
nbStations = 3; % number of weather stations
measureDates = repmat((now()-(nbDates-1):now())',nbStations,1); % nbDates days of data ending today
stationIds = kron((1:nbStations)',ones(nbDates,1)); % assuming 3 weather stations with IDs [1,2,3]
temp = rand(nbStations*nbDates,1)*70+30; % temperatures are in F and vary between 30 and 100 degrees
rain = max(rand(nbStations*nbDates,1)*40-20,0); % rain fall is 0 approximately half the time, and between 0mm and 20mm the rest of the time
csv = table(measureDates, stationIds, temp, rain);
clear measureDates stationIds temp rain;
% augment the original dataset as needed
years = year(csv.measureDates);
data = [csv,array2table(years)];
sorted = sortrows( data, {'stationIds', 'measureDates'}, {'ascend', 'ascend'} );
% example looping through your data
for i = 1 : size( sorted, 1 )
    fprintf( 'Id=%d, year=%d, temp=%g, rain=%g', sorted.stationIds( i ), sorted.years( i ), sorted.temp( i ), sorted.rain( i ) );
    if( i > 1 && sorted.stationIds( i )==sorted.stationIds( i-1 ) && sorted.years( i )==sorted.years( i-1 ) )
        fprintf( ' => absolute difference with day before: %g', abs( sorted.temp( i ) - sorted.temp( i-1 ) ) );
    end
    fprintf( '\n' ); % new line
end
% depending on the statistics you wish to do, other more efficient ways of
% accessing summary stats might be accessible, for example:
grpstats( data ...
, {'stationIds','years'} ... % group by categories
, {'mean','min','max','meanci'} ... % statistics we want
, 'dataVars', {'temp','rain'} ... % variables on which to calculate stats
) % doesn't require data to be sorted or any looping
This produces one line printed for each row of data (and only calculates the difference in temperature when there is no year or station change). It also produces some summary stats at the end; here's what I get:
stationIds years GroupCount mean_temp min_temp max_temp meanci_temp mean_rain min_rain max_rain meanci_rain
__________ _____ __________ _________ ________ ________ ________________ _________ ________ ________ ________________
1_2016 1 2016 82 63.13 30.008 99.22 58.543 67.717 6.1181 0 19.729 4.6284 7.6078
1_2017 1 2017 365 65.914 30.028 99.813 63.783 68.045 5.0075 0 19.933 4.3441 5.6708
1_2018 1 2018 365 65.322 30.218 99.773 63.275 67.369 4.7039 0 19.884 4.0615 5.3462
1_2019 1 2019 188 63.642 31.16 99.654 60.835 66.449 5.9186 0 19.864 4.9834 6.8538
2_2016 2 2016 82 65.821 31.078 98.144 61.179 70.463 4.7633 0 19.688 3.4369 6.0898
2_2017 2 2017 365 66.002 30.054 99.896 63.902 68.102 4.5902 0 19.902 3.9267 5.2537
2_2018 2 2018 365 66.524 30.072 99.852 64.359 68.69 4.9649 0 19.812 4.2967 5.6331
2_2019 2 2019 188 66.481 30.249 99.889 63.647 69.315 5.2711 0 19.811 4.3234 6.2189
3_2016 3 2016 82 61.996 32.067 98.802 57.831 66.161 4.5445 0 19.898 3.1523 5.9366
3_2017 3 2017 365 63.914 30.176 99.902 61.932 65.896 4.8879 0 19.934 4.246 5.5298
3_2018 3 2018 365 63.653 30.137 99.991 61.595 65.712 5.3728 0 19.909 4.6943 6.0514
3_2019 3 2019 188 64.201 30.078 99.8 61.319 67.082 5.3926 0 19.874 4.4541 6.3312

Fitting custom functions to data

I have a series of data, for example:
0.767838478
0.702426493
0.733858228
0.703275979
0.651456058
0.62427187
0.742353261
0.646359026
0.695630431
0.659101665
0.598786652
0.592840135
0.59199059
which I know fits best to an equation of the form:
y = a*e^(b*x) + c
How can I fit this custom function to the data?
A similar question has already been asked on the LibreOffice forum without a proper answer. I would appreciate your help with this. Preferably, answers should apply to any custom function rather than being workarounds for this specific case.
There are multiple possible solutions for this. But one approach would be the following:
For determining a and b in the trend-line function y = a*e^(b*x), there are solutions using native Calc functions (LINEST, EXP, LN).
So we can rewrite y = a*e^(b*x) + c as y - c = a*e^(b*x); if we know c, the solution for y = a*e^(b*x) can be obtained as well. How do we find c? One approach is described in Exponential Curve Fitting, where approximations of b, a and then c are made.
I have translated the main part of the Delphi code from Exponential Curve Fitting: source listing into StarBasic for Calc. The fine-tuning of c is not translated yet; that remains a to-do for you as professional and enthusiast programmers.
Example:
Data:
x y
0 0.767838478
1 0.702426493
2 0.733858228
3 0.703275979
4 0.651456058
5 0.62427187
6 0.742353261
7 0.646359026
8 0.695630431
9 0.659101665
10 0.598786652
11 0.592840135
12 0.59199059
Formulas:
B17: =EXP(INDEX(LINEST(LN($B$2:$B$14),$A$2:$A$14),1,2))
C17: =INDEX(LINEST(LN($B$2:$B$14),$A$2:$A$14),1,1)
y = a*e^(b*x) is also the function used for the chart's trend line calculation.
B19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,1)
C19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,2)
D19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,3)
Code:
function trendExpPlusC(rangey as variant, rangex as variant) as variant
    'get values from ranges
    redim x(ubound(rangex)-1) as double
    redim y(ubound(rangex)-1) as double
    for i = lbound(x) to ubound(x)
        x(i) = rangex(i+1,1)
        y(i) = rangey(i+1,1)
    next

    'make helper arrays
    redim dx(ubound(x)-1) as double
    redim dy(ubound(x)-1) as double
    redim dxyx(ubound(x)-1) as double
    redim dxyy(ubound(x)-1) as double
    for i = lbound(x) to ubound(x)-1
        dx(i) = x(i+1) - x(i)
        dy(i) = y(i+1) - y(i)
        dxyx(i) = (x(i+1) + x(i))/2
        dxyy(i) = dy(i) / dx(i)
    next

    'approximate b
    s = 0
    errcnt = 0
    for i = lbound(dxyx) to ubound(dxyx)-1
        on error goto errorhandler
        s = s + log(abs(dxyy(i+1) / dxyy(i))) / (dxyx(i+1) - dxyx(i))
        on error goto 0
    next
    b = s / (ubound(dxyx) - errcnt)

    'approximate a
    s = 0
    errcnt = 0
    for i = lbound(dx) to ubound(dx)
        on error goto errorhandler
        s = s + dy(i) / (exp(b * x(i+1)) - exp(b * x(i)))
        on error goto 0
    next
    a = s / (ubound(dx) + 1 - errcnt)

    'approximate c
    s = 0
    errcnt = 0
    for i = lbound(x) to ubound(x)
        on error goto errorhandler
        s = s + y(i) - a * exp(b * x(i))
        on error goto 0
    next
    c = s / (ubound(x) + 1 - errcnt)

    'make y for (y - c) = a*e^(b*x)
    for i = lbound(x) to ubound(x)
        y(i) = log(abs(y(i) - c))
    next

    'get a and b from LINEST for (y - c) = a*e^(b*x)
    oFunctionAccess = createUnoService( "com.sun.star.sheet.FunctionAccess" )
    args = array(array(y), array(x))
    ab = oFunctionAccess.CallFunction("LINEST", args)
    if a < 0 then a = -exp(ab(0)(1)) else a = exp(ab(0)(1))
    b = ab(0)(0)

    trendExpPlusC = array(a, b, c)
    exit function

errorhandler:
    errcnt = errcnt + 1
    resume next
end function
The formula y = b*e^(a*x) is the exponential regression equation for LibreOffice chart trend lines.
Exporting all LibreOffice settings
All of LibreOffice's settings are stored in the LibreOffice folder:
C:\Users\<user name entered when installing the operating system>\AppData\Roaming\LibreOffice
(AppData is hidden; turn on "Hidden items" in File Manager to display it.)
Back up the LibreOffice folder; when reinstalling, put the LibreOffice folder back in its original place.
Note:
1. If the installation is a preview edition, its name is LibreOfficeDev, so a LibreOfficeDev folder will be displayed instead.
2. The release edition can be installed alongside the preview edition; if both are installed, both a LibreOffice folder and a LibreOfficeDev folder will be displayed.
3. To clear all settings, just delete the LibreOffice folder; the next time the program is opened, a new LibreOffice folder will be created.
Exporting a single toolbar I made
Common path:
C:\Users\<user name entered when installing the operating system>\AppData\Roaming\LibreOffice\4\user\config\soffice.cfg\modules\
(AppData is hidden; turn on "Hidden items" in File Manager to display it. Append the branch path of the individual application below.)
Branch paths:
\modules\StartModule\toolbar\ (the "Start" toolbar I made is placed here)
\modules\swriter\toolbar\ (the "Writer" toolbar I made is placed here)
\modules\scalc\toolbar\ (the "Calc" toolbar I made is placed here)
\modules\simpress\toolbar\ (the "Impress" toolbar I made is placed here)
\modules\sdraw\toolbar\ (the "Draw" toolbar I made is placed here)
\modules\smath\toolbar\ (the "Math" toolbar I made is placed here)
\modules\dbapp\toolbar\ (the "Base" toolbar I made is placed here)
Back up the file; when reinstalling, put the file back in its original place.
Note:
Toolbars I made myself get automatically numbered default file names, so open the file to find out which toolbar it contains.
The file-name prefix "custom_toolbar_" cannot be changed (changing it will cause an error), but the rest of the file name can be changed. For example: custom_toolbar_c01611ed.xml → custom_toolbar_AAA.xml.
A finished toolbar can be copied elsewhere for reuse. For example, a toolbar made in Writer can be copied over for use in Calc.
A self-made symbol toolbar in LibreOffice
Step 1: Enable macro recording. Tools\Options\Advanced\Enable macro recording (tick); the "Record Macro" option will then appear under Tools\Macros.
Step 2: Record the macro. Tools\Macros\Record Macro → perform the action (click "Ω" to insert a symbol → select the symbol → Insert) → Stop Recording → the macro stored in "Module1" is named Main → change the name of Main → Save.
Step 3: Add a new toolbar. Tools\Customize\Toolbar → Add → enter a name (for example: Symbols) → OK; the new toolbar appears at the top left.
Step 4: Add the macro to the new toolbar. Tools\Customize\Toolbar\Category\Macros\My Macros\Standard\Module1\Main → click "Main" → Add item → Modify → Rename (it can be named after the symbol) → OK → OK.

Xtext 2.8+ formatter, formatting HiddenRegion with comment

I am using the Xtext 2.9 formatter and I am trying to format a hidden region which contains a comment. Here is the part of my document region I am trying to format:
Columns: 1:offset 2:length 3:kind 4: text 5:grammarElement
Kind: H=IHiddenRegion S=ISemanticRegion B/E=IEObjectRegion
35 0 H
35 15 S ""xxx::a::b"" Refblock:namespace=Namespace
50 0 H
50 1 S "}" Refblock:RCBRACKET
E Refblock PackageHead:Block=Refblock path:PackageHead/Block=Package'xxx_constants'/head=Model/packages[0]
51 0 H
51 1 S ":" PackageHead:COLON
E PackageHead Package:head=PackageHead path:Package'xxx_constants'/head=Model/packages[0]
52 >>>H "\n " Whitespace:TerminalRule'WS'
"# asd" Comment:TerminalRule'SL_COMMENT'
15 "\n " Whitespace:TerminalRule'WS'<<<
B Error'ASSD' Package:expressions+=Expression path:Package'xxx_constants'/expressions[0]=Model/packages[0]
67 5 S "error" Error:'error'
72 1 H " " Whitespace:TerminalRule'WS'
and the corresponding part of the grammar:
Model:
    {Model}
    (packages+=Package)*;
Expression:
    Error | Warning | Enum | Text;
Package:
    {Package}
    'package' name=Name head=PackageHead
    (BEGIN
        (imports+=Import)*
        (expressions+=Expression)*
    END)?;
Error:
    {Error}
    ('error') name=ENAME parameter=Parameter COLON
    (BEGIN
        (expressions+=Estatement)+
    END)?;
PackageHead:
    Block=Refblock COLON;
The problem is that when I try to prepend some characters before the error keyword, for example
error.regionFor.keyword('error').prepend[setSpace("\n ")]
the indentation is prepended before the comment and not after it. This results in improper formatting when a single-line comment precedes the 'error' keyword.
To provide more clarity, here is example code from my grammar and description of desired behavior:
package xxx_constants {namespace="xxx::a::b"}:
# asd
error ASSD {0}:
Hello {0,world}
This is expected result: (one space to the left)
package xxx_constants {namespace="xxx::a::b"}:
# asd
error ASSD {0}:
Hello {0,world}
and this is the actual result with prepend method
package xxx_constants {namespace="xxx::a::b"}:
# asd
error ASSD {0}:
Hello {0,world}
As the document structure shows, the hidden region in this case covers:
# asd
error
How can I prepend my characters directly before the keyword 'error' and not before the comment? Thanks.
I assume you're creating an indentation-sensitive language, because you're explicitly calling BEGIN and END.
For indentation-sensitive language my answer is: You'll want to overwrite
org.eclipse.xtext.formatting2.internal.HiddenRegionReplacer.applyHiddenRegionFormatting(List<ITextReplacer>)
The append[] and prepend[] methods you're using are agnostic to comments; at a later time, applyHiddenRegionFormatting() is called to decide how that formatting is woven in between the comments.
To make Xtext use your own subclass of HiddenRegionReplacer, overwrite
org.eclipse.xtext.formatting2.AbstractFormatter2.createHiddenRegionReplacer(IHiddenRegion, IHiddenRegionFormatting)
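An untested outline of that wiring (apart from the two method names quoted above, the class names, constructor and visibility modifiers are assumptions and may need adjusting):
// in your language's formatter class, which extends org.eclipse.xtext.formatting2.AbstractFormatter2
@Override
public ITextReplacer createHiddenRegionReplacer(IHiddenRegion hiddenRegion, IHiddenRegionFormatting formatting) {
    return new MyHiddenRegionReplacer(hiddenRegion, formatting);
}

// your own replacer, extending org.eclipse.xtext.formatting2.internal.HiddenRegionReplacer
public class MyHiddenRegionReplacer extends HiddenRegionReplacer {
    public MyHiddenRegionReplacer(IHiddenRegion region, IHiddenRegionFormatting formatting) {
        super(region, formatting);
    }

    @Override
    protected void applyHiddenRegionFormatting(List<ITextReplacer> replacers) {
        // decide here how the requested space/indentation is distributed around the
        // comment replacers, e.g. attach it after a trailing single-line comment
        super.applyHiddenRegionFormatting(replacers);
    }
}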
For languages that do not do whitespace-sensitive lexing/parsing (that's the default), the answer is not to call setSpace() to create indentation or line-wraps.
Instead, do
pkg.interior[indent]
pkg.regionFor.keyword(":").append[newLine]
pkg.append[newLine]

Aligning and italicising table column headings using Rmarkdown and pander

I am writing an R Markdown document, knitted to PDF, with tables taken from portions of the lists returned by ezANOVA (ez package). The tables are made using the pander package. A toy R Markdown file with a toy dataset is below.
---
title: "Table Doc"
output: pdf_document
---
```{r global_options, include=FALSE}
#set global knit options parameters.
knitr::opts_chunk$set(fig.width=12, fig.height=8, fig.path='Figs/',
echo=FALSE, warning=FALSE, message=FALSE, dev = 'pdf')
```
```{r, echo=FALSE}
# toy data
id <- rep(c(1,2,3,4), 5)
group1 <- factor(rep(c("A", "B"), 10))
group2 <- factor(rep(c("A", "B"), each = 10))
dv <- runif(20, min = 0, max = 10)
df <- data.frame(id, group1, group2, dv)
```
``` {r anova, echo = FALSE}
library(ez)
library(plyr)
library(pander)
# create anova object
anOb <- ezANOVA(df,
                dv = dv,
                wid = id,
                between = c(group1, group2),
                type = 3,
                detailed = TRUE)
# extract the output table from the anova object, reduce it down to only desired columns
anOb <- data.frame(anOb[[1]][, c("Effect", "F", "p", "p<.05")])
# format entries in columns
anOb[,2] <- format( round (anOb[,2], digits = 1), nsmall = 1)
anOb[,3] <- format( round (anOb[,3], digits = 4), nsmall = 1)
pander(anOb, justify = c("left", "center", "center", "right"))
```
Now I have a few problems
a) For the last three columns I would like to have the column heading in the table aligned in the center, but the actual column entries underneath those headings aligned to the right.
b) I would like to have the column headings 'F' and 'p' in italics and the 'p' in the 'p<.05' column in italics also but the rest in normal font. So they read F, p and p<.05
I tried renaming the column headings using plyr::rename like so
anOb <- rename(anOb, c("F" = "italic(F)", "p" = "italic(p)", "p<.05" = ""))
But it didn't work
In markdown, you have to use the markdown syntax for italics, which means wrapping the text in stars or underscores:
> names(anOb) <- c('Effect', '*F*', '*p*', '*p<.05*')
> pander(anOb)
-----------------------------------------
Effect *F* *p* *p<.05*
--------------- ------ -------- ---------
(Intercept) 52.3 0.0019 *
group1 1.3 0.3180
group2 2.0 0.2261
group1:group2 3.7 0.1273
-----------------------------------------
If you want to do that in a programmatic way, you can also use the pandoc.emphasis helper function to add the stars to a string.
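For example (a small sketch: pandoc.emphasis prints the emphasized string, while pandoc.emphasis.return gives it back as a value):
names(anOb) <- c('Effect', pandoc.emphasis.return('F'), pandoc.emphasis.return('p'), pandoc.emphasis.return('p<.05'))
pander(anOb, justify = c("left", "center", "center", "right"))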
But your other problem is due to a bug in the package, for which I've just proposed a fix on GH. Please feel free to give that branch a try and report back on GH -- I will try to find some time later this week to clean up the related unit tests and merge the branch if everything seems to be OK.