How do I determine if a double is not infinity, -infinity or NaN in SSRS? - double

I've got a dataset returned from SSAS where some records may be infinity or -infinity (calculated in SSAS, not in the report).
I want to calculate the average of this column but ignore those records that are positive or negative infinity.
My thought is to create a calculated field that would logically do this:
= IIF(IsInfinity(Fields!ASP.Value) or IsNegativeInfinity(Fields!ASP.Value), 0, Fields!ASP.Value)
What I can't figure out is how to do the IsInfinity or IsNegativeInfinity.
Or, conversely, is there a way to calculate the average for a column while ignoring those records?

Just stumbled across this problem and found a simple solution for determining whether a numeric field is infinity.
=IIF((Fields!Amount.Value + 1).Equals(Fields!Amount.Value), False, True)
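A quick sketch in Python of why that expression works, along with its one caveat (this is illustrative, not SSRS code):

```python
def looks_infinite(x):
    # The report trick above: an infinite double is unchanged by adding 1,
    # so (x + 1) == x for both +inf and -inf.
    # Caveat: very large finite doubles (around 2**53 and up) also satisfy
    # this, because adding 1 is lost to rounding. NaN is NOT caught, since
    # NaN compares unequal to everything, including itself.
    return (x + 1) == x

print(looks_infinite(float("inf")))   # True
print(looks_infinite(float("-inf")))  # True
print(looks_infinite(42.0))           # False
```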

I'm assuming you are using Business Intelligence Development Studio rather than the Report Builder tool.
Maybe you are trying a formula because you can't change the SSAS MDX query, but if you could, the infinity would most likely be caused by a divide-by-zero. The NaN is likely caused by doing maths with NULL values.
Ideally, change the cube itself so that the measure is safe from divide-by-zero (e.g. IIF([measure] = 0, return "" instead of dividing, otherwise do the division). The second option would be to create a calculated measure in the MDX query that does something similar.
As for a formula, there are no IsInfinity functions, so you would have to look at the value of the field and see if it's 1.#IND, 1.#INF or NaN.
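For the original question of averaging while skipping the bad records, the underlying logic (sketched here in Python rather than as an SSRS expression) is to filter out non-finite values before averaging:

```python
import math

values = [3.5, float("inf"), 2.5, float("-inf"), float("nan"), 6.0]

# keep only ordinary numbers; math.isfinite rejects inf, -inf and NaN
finite = [v for v in values if math.isfinite(v)]
average = sum(finite) / len(finite)
print(average)  # 4.0
```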

Related

Leave null cells without colors

I have a Tableau table with a few measures.
I want my palette to apply to every cell with a value and leave cells with null (blank) values uncolored.
Currently, null values are colored the same as 0 values. How can I distinguish the nulls from the 0 values?
I tried to create a calculated field, but Tableau doesn't allow creating calculated fields on aggregations.
Create a calculated field [COLORSCALE YOUR FIELD] where the null values are replaced by something off-scale. For example, if your number range is between 0.0 and 100.0, you could use:
IIF(ISNULL([YOUR FIELD]),-100,[YOUR FIELD]*X)
Where X is a scaling factor. Adjust the scaling factor and the "Start"/"End" options in the "Color" menu to suit your purpose.
Another option, based on mzunhammer's suggestion, would be to use the min() function instead of assigning a hard-coded value to replace the nulls, which may let you avoid the scaling factor.
iif(isnull([Your Field]), min([Your Field])-y, [Your Field])
That way the null is always going to be y less than the lowest value, even as the lowest value changes when the dataset updates.
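The effect of that expression, sketched in Python with None standing in for Tableau's null (the values and y are invented for illustration):

```python
values = [10.0, None, 3.0, 7.5, None]
y = 1.0

# lowest real value in the data; nulls are excluded from min()
low = min(v for v in values if v is not None)

# nulls become y below the minimum, so they land off the color scale
colored = [low - y if v is None else v for v in values]
print(colored)  # [10.0, 2.0, 3.0, 7.5, 2.0]
```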

Table Across Average based on First String Value

Is there a way to calculate the average across a table based on only the value in the first column (School Name) when the first column is a string value? The current values in School Average are not correct due to the additional column values (Grade and Teacher) needed prior to the measures.
This is an ideal use case for LOD calculations. An LOD calculation allows you to specify the dimensions for a calculation as part of the calculation definition -- instead of (solely) by the Tableau shelves and cards.
To start, you can define a calculated field called, say, School_Avg as
{ fixed [School Name] : avg([Grade]) }
Assuming you have a field called Grade.
There is much more to learn about LOD calculations; see the online help to learn more.
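If it helps to see the semantics outside Tableau, a fixed LOD is analogous to a grouped mean broadcast back to every row. A sketch with pandas (the school and grade data here are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "School Name": ["North", "North", "South", "South"],
    "Grade":       [80.0,    90.0,    70.0,    75.0],
})

# { fixed [School Name] : avg([Grade]) }: one average per school,
# repeated on every row belonging to that school
df["School_Avg"] = df.groupby("School Name")["Grade"].transform("mean")
print(df["School_Avg"].tolist())  # [85.0, 85.0, 72.5, 72.5]
```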

PgSQL - Error while executing a select

I am trying to write a simple select query in PgSQL but I keep getting an error. I am not sure what I am missing. Any help would be appreciated.
select residuals, residuals/stddev_pop(residuals)
from mySchema.results;
This gives an error
ERROR: column "results.residuals" must appear in the GROUP BY clause or be used in an aggregate function
Residuals is a numeric value (continuous variable)
What am I missing?
stddev_pop is an aggregate function. That means that it takes a set of rows as its input. Your query mentions two values in the SELECT clause:
residuals, this is a value from a single row.
stddev_pop(residuals), this is an aggregate value and represents multiple rows.
You're not telling PostgreSQL how it should choose the singular residuals value to go with the aggregate standard deviation and so PostgreSQL says that residuals
must appear in the GROUP BY clause or be used in an aggregate function
I'm not sure what you're trying to accomplish so I can't tell you how to fix your query. A naive suggestion would be:
select residuals, residuals/stddev_pop(residuals)
from mySchema.results
group by residuals
but that would leave you computing the standard deviation of groups of identical values and that doesn't seem terribly productive (especially when you're going to use the standard deviation as a divisor).
Perhaps you need to revisit the formula you're trying to compute as well as fixing your SQL.
If you want to compute the standard deviation separately and then divide each residuals by that then you'd want something like this:
select residuals,
residuals/(select stddev_pop(residuals) from mySchema.results)
from mySchema.results
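What that last query computes, sketched in Python (note that stddev_pop is the population standard deviation, dividing by n rather than n - 1; the residuals here are invented):

```python
import math

residuals = [2.0, -1.0, 3.0, -4.0]
n = len(residuals)
mean = sum(residuals) / n

# population standard deviation, matching SQL's stddev_pop
stddev_pop = math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)

# every residual is divided by the single stddev of the whole column
scaled = [r / stddev_pop for r in residuals]
```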

Ignoring vectors containing NaN entries in Matlab calculations

This code prices bonds using the fitSvensson function. How do I get Matlab to ignore NaN values in the CleanPrice vector when a date is selected for which some bonds have a NaN entry for a missing price? How can I get it to ignore that bond altogether when deriving the zero curve? Many solutions to NaNs resort to interpolation or setting them to zero, but that would lead to an erroneous curve.
Maturity=gcm3.data.CouponandMaturity(1:36,2);
[r,c]=find(gcm3.data.CleanPrice==datenum('11-May-2012'));
row=r
SettleDate=gcm3.data.CouponandMaturity(row,3);
Settle = repmat(SettleDate,[length(Maturity) 1]);
CleanPrices =transpose(gcm3.data.CleanPrice(row,2:end));
CouponRate = gcm3.data.CouponandMaturity(1:36,1);
Instruments = [Settle Maturity CleanPrices CouponRate];
PlottingPoints = gcm3.data.CouponandMaturity(1,2):gcm3.data.CouponandMaturity(36,2);
Yield = bndyield(CleanPrices,CouponRate,Settle,Maturity);
SvenssonModel = IRFunctionCurve.fitSvensson('Zero',SettleDate,Instruments)
ParYield=SvenssonModel.getParYields(Maturity);
The data looks like this: each column is a bond, column 1 contains the dates, and the elements are the clean prices. As you can see, the first part of the data contains lots of NaNs for bonds yet to have prices. After a point all bonds have prices, but unfortunately there are instances where one or two days' prices are missing.
Ideally, if a NaN is present I would like it to ignore that bond on that date if possible as the more curves generated (irrespective of number of bonds used) the better. If this is not possible then ignoring that date is an option but will result in many curves not generating.
This is a general solution to your problem. I don't have that toolbox on my work computer, so I can't test whether it works with the IRFunctionCurve.fitSvensson command:
[row,~]=find(gcm3.data.CleanPrice(:,1)==datenum('11-May-2012'));
col_set=find(~isnan(gcm3.data.CleanPrice(row,2:end)));
CleanPrices=transpose(gcm3.data.CleanPrice(row,col_set+1)); % +1 because col_set indexes into columns 2:end
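The key step is the ~isnan mask; the same idea in Python with NumPy (toy prices for illustration):

```python
import numpy as np

# one day's row of clean prices; NaN marks bonds with no quote that day
prices = np.array([101.2, np.nan, 99.8, np.nan, 102.4])

valid = ~np.isnan(prices)   # boolean mask, the analogue of ~isnan(...)
clean = prices[valid]       # drop the unpriced bonds entirely
print(clean)                # [101.2  99.8 102.4]
```

The same mask would also be applied to the maturity, settle and coupon vectors so all the inputs to the curve fit stay aligned.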

Weighted Average Fields

I'm totally new to doing calculations in T-SQL. I guess I'm wondering what is a weighted average and how do you do it in T-SQL for a field?
First off, as far as I know, a weighted average simply means multiplying two columns and then dividing by something to average it.
Here's an example of a calculated field I have in my view, after calling one of our UDFs. Now this field in my view needs to also be a weighted average... no idea where to start to turn this into a weighted average.
So ultimately this UDF returns the AverageCostG. I call the UDF from my view and so here's the guts of the UDF:
SET @AverageCostG = ((@AvgFullYear_Rent * @Months) +
                     (@PrevYearRent * @PrevYearMonths))
                    / @Term
so in my view I'm calling the UDF above to get back that @AverageCostG:
CREATE View MyTestView
AS
select v.*, --TODO: get rid of *, that's just for testing, select actual field names
CalculateAvgRentG(d.GrossNet, d.BaseMonthlyRent, d.ILI, d.Increase, d.Term) as AverageRent_G,
....
from SomeOtherView v
Now I need to make this AverageRent_G calc field in my view also a weighted average somehow...
Do I need to know WHAT they want weighted, or is it assumed to be obvious? I don't know what I need from them in order to do the weighted average: what specs, if any, beyond this calculation I've created based off the UDF call. Do I have to do some crazy select join or something in addition to multiplying two fields and dividing by something to average it? How do I know which fields are to be used in the weighted average, and from where? I will openly admit I'm totally new to BI T-SQL development, as I'm an ASP.NET MVC C#/architect dev, and lost with this calculation stuff in T-SQL.
I have tried to research this but just need some basic hand-holding the first time through; my head hurts right now because I don't know what info I need to obtain from them and then what to do exactly to make that calc field weighted.
They'll have to tell you what the weighting factor is. This is my guess:
SUM([weight] * CalculateAvgRentG(...)) / SUM([weight])
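Once they tell you the weight, the arithmetic is just that one formula. A small sketch in Python, with invented rents and month counts standing in for the weighting factor the business would supply:

```python
rents  = [1200.0, 1500.0, 900.0]
months = [12, 6, 3]   # hypothetical weights: months spent at each rent

# SUM(weight * value) / SUM(weight)
weighted_avg = sum(m * r for m, r in zip(months, rents)) / sum(months)

# plain average for comparison; the long 12-month stretch at 1200
# pulls the weighted average below the unweighted one
plain_avg = sum(rents) / len(rents)
```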