SSRS total in Row Grouping for Column Values, not the Field Value - ssrs-2008

We are using SSRS, in which we have the following query result:
LOC  PD    SM      PG   Product  BUDGET  Amount   Month  Date
DL   PD1   Anil    RR   SC       125000  1000.30  April  2015-04-03
DL   PD1   Anil    RR   SC       125000  2500.30  April  2015-04-03
DL   PD1   Anil    RD   SC       130000  1580.01  April  2015-04-03
DL   PD2   Anil    PCH  SC       150000  3611.00  April  2015-04-03
DL   PD2   Sanjay  AG   AH       225000  1566.67  May    2015-05-04
DL   PD2   Sanjay  AG   IW       225000  3380.48  May    2015-05-04
DL   PD2   Sanjay  MG   IW        75000  2237.62  May    2015-05-04
DL   Dist  Sunil   UP   AH       300000   523.33  May    2015-05-04
DL   Dist  Sunil   UP   AH       300000  1258.17  April  2015-04-06
When we implement this in SSRS with the following hierarchy, we get this result:
                         Apr'15 - Mar'16                                 September 2015
Loc  PD    SM      PG    Budget         Amount   MthBdt      SC  AH  IW  %        07-09-15  08-09-15
DL   PD1   Anil    RR    1,25,000        3,501   10416.67     0   0   0  0        0         0
                   RD    1,30,000        1,580   10833.33     0   0   0  0        0         0
           Anil Total    3,80,000        5,081   31,667       0   0   0  0.00 %   0         0
     PD1 Total           3,80,000        5,081   31,667       0   0      0.00 %   0         0
     PD2   Sanjay  AG    2,25,000        4,947   18,750
                   MG      75,000        2,238    6,250
           Sanjay Total  5,25,000        7,185   43,750       0   0      0.00 %   0         0
           Anil    PCH   1,50,000        3,611   12,500       0   0   0  0.00 %   0         0
           Anil Total    1,50,000        3,611   12,500       0   0   0  0.00 %   0         0
     PD2 Total           6,75,000       10,796   56,250       0   0   0  0.00 %   0         0
     Dist  Sunil         3,00,000        1802    25000        0   0   0  0.00 %   0         0
           Sunil Total   6,00,000        1,782   50000        0          0.00 %
     Dist Total          6,00,000        1,782   50000        0          0.00 %
DL TOTAL             16,55,000.00       17,135   1,37,917     0   0   0  0.00 %   0         0
The raw total of Amount is correct, but the Budget total is not, because Budget in the query is linked to PG, so it should not be summed transaction-wise; it should be calculated column-wise. For example, the Anil Total should display 2,55,000 instead of 3,80,000. We have tried Sum(Fields!Budget.Value), which gives the result above, and Fields!Budget.Value alone gives only 1,25,000.
Please guide us: is there any way to calculate the correct total for this value?

I didn't find a solution within SSRS itself, but the following change to the query worked for me. What I added is a count of the PG:
PGCount = COUNT(*) OVER (PARTITION BY PG)
and created a new BUDGET by dividing the budget by this count. Using that field value in SSRS resolved the totaling issue without any error.
Thanks.
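The arithmetic behind this fix can be sketched outside SQL as well. Below is a minimal Python sketch (with hypothetical rows based on the question's data for SM = Anil) showing why dividing each row's BUDGET by its PG row count makes a plain sum produce the expected 2,55,000:

```python
from collections import Counter

# Hypothetical rows for SM = Anil, mirroring the question's query result.
rows = [
    {"PG": "RR", "budget": 125000, "amount": 1000.30},
    {"PG": "RR", "budget": 125000, "amount": 2500.30},
    {"PG": "RD", "budget": 130000, "amount": 1580.01},
]

# Equivalent of PGCount = COUNT(*) OVER (PARTITION BY PG).
pg_count = Counter(r["PG"] for r in rows)

# New budget: each row carries budget / PGCount, so summing over all
# transaction rows counts each product group's budget exactly once.
for r in rows:
    r["adj_budget"] = r["budget"] / pg_count[r["PG"]]

anil_total = sum(r["adj_budget"] for r in rows)
print(anil_total)  # 62500 + 62500 + 130000 = 255000.0
```

The SSRS Sum() then aggregates the adjusted field the same way it aggregates Amount, which is why no custom total expression is needed.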

Matlab fmincon error: "Supplied objective function must return a scalar value"

EDIT: To help clarify my question, I'm looking to get a fit to the following data:
I can get a fit using the cftool function, but using a least squares approach doesn't make sense with my binary data. Just to illustrate...
So, my goal is to fit this data using the fmincon function.
ORIGINAL POST:
I have data from a movement control experiment in which participants were timed while they performed a task, and given a score (failure or success) based on their performance. As you might expect, we assume participants will make fewer errors when they have more time to perform the task.
I'm trying to fit a function to this data using fmincon, but get the error "Error using fmincon (line 609)
Supplied objective function must return a scalar value."
I don't understand a) what this means, or b) how I can fix it.
I provide some sample data and code below. Any help greatly appreciated.
%Example Data:
time = [12.16 11.81 12.32 11.87 12.37 12.51 12.63 12.09 11.25 ...
        7.73 8.18 9.49 10.29 8.88 9.46 10.12 9.76 9.99 10.08 ...
        7.48 7.88 7.81 6.7 7.68 8.05 8.23 7.84 8.52 7.7 ...
        6.26 6.12 6.19 6.49 6.25 6.51 6 6.79 5.89 5.93 3.97 4.91 4.78 4.43 ...
        3.82 4.72 4.72 4.31 4.81 4.32 3.62 3.71 4.29 3.46 3.9 3.73 4.15 ...
        3.92 3.8 3.4 3.7 2.91 2.84 2.7 2.83 2.46 3.19 3.44 2.67 3.49 2.71 ...
        3.17 2.97 2.76 2.71 2.88 2.52 2.86 2.83 2.64 2.02 2.37 2.38 ...
        2.53 3.03 2.61 2.59 2.59 2.44 2.73];
error = [0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 0 1 0 1 0 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
%Code:
% initial parameters - a corresponds to params(1), b corresponds to params(2)
a = 3.0;
b = -0.01;
LL = @(params) 1/1+params(1)*(log(time).^params(2));
LL([a b]);
pOpt = fmincon(LL,[a b],[],[]);
The mistake comes from the function LL, which returns a vector with as many values as the length of time.
To use fmincon properly, your objective function must return a single scalar value.
I believe logistic regression would fit your data and purposes nicely. In that case, why not simply use Matlab's built-in function for multinomial logistic regression?
B = mnrfit(time,error)
Regarding your function LL, are you sure you have entered it correctly and are not missing parentheses?
LL = @(params) 1/(1+params(1)*(log(time).^params(2)));
Without the parentheses, your function is equivalent to 1 + a*log(x)^b.
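Since the outcomes are binary, a scalar objective can be built by summing the per-trial log-likelihoods and negating the sum. Here is a minimal pure-Python sketch of such an objective (using a small hypothetical subset of the data and the corrected parenthesization of the model):

```python
import math

# Hypothetical subset of the question's (time, error) pairs.
time = [12.16, 9.49, 7.48, 4.31, 2.52]
error = [0, 0, 0, 1, 1]

def p_error(t, a, b):
    # Model with the parentheses fixed: 1 / (1 + a * log(t)^b)
    return 1.0 / (1.0 + a * math.log(t) ** b)

def neg_log_likelihood(params):
    a, b = params
    total = 0.0
    for t, e in zip(time, error):
        p = min(max(p_error(t, a, b), 1e-12), 1.0 - 1e-12)  # avoid log(0)
        total += e * math.log(p) + (1 - e) * math.log(1.0 - p)
    return -total  # one scalar value, as fmincon requires

print(neg_log_likelihood([3.0, -0.01]))
```

Minimizing this scalar (with fmincon, or scipy.optimize in Python) is equivalent to maximum-likelihood estimation of a and b, which is the appropriate criterion for binary success/failure data.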

ANOSIM: Error in sort.int(x, na.last = na.last, decreasing = decreasing, ...)

I am trying to perform anosim {vegan} on my ecological data and I keep getting the same error message. I don't think this duplicates a question already posted, and I would like to show fully what's happening.
I have a numeric dataframe ("sps") consisting of 17 rows (sites) and 313 columns (species), and a second dataframe ("env.data") containing a factor column with 17 levels. I want to test whether there are any significant differences between my 17 groups.
Here is a sample of my data:
> sps[,2:5]
A. faranauti A. tecta A. lyra A. arbuscula
Sargasso Sea 0 0 2 0
Equatorial Brazil 0 0 0 0
Canarias Sea 0 0 0 0
Corner Seamounts 0 0 0 2
Gulf of Mexico 0 0 0 0
Labrador Sea 0 0 0 0
Equatorial Africa 0 0 0 0
Tropic Seamount 0 0 0 107
NewEngland Seamount Chain 0 0 0 0
Norwegian Basin 0 0 0 0
Eastern North Atlantic 0 0 3 0
Logachev and BritishIsles 0 0 0 4
Reykjanes Ridge 0 0 0 0
MAR North 0 0 0 14
Flemish Cap 0 0 0 217
MAR South 1 1 0 0
Azores Seamount Chain 0 0 0 12
> class(sps)
[1] "data.frame"
> head(env.data)
idcell geo_area
1 1 Sargasso Sea
2 2 Equatorial Brazil
3 3 Canarias Sea
4 4 Corner Seamounts
5 5 Gulf of Mexico
6 6 Labrador Sea
> str(env.data)
'data.frame': 17 obs. of 2 variables:
$ idcell : Factor w/ 17 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
$ geo_area: Factor w/ 17 levels "Canarias Sea",..: 15 5 1 2 7 8 4 17 12 13..
Following {vegan}, I first calculated a dissimilarity matrix with Sorensen as the distance method, and then used this dissimilarity matrix as my input for anosim:
dist.sorensen <- vegdist(sps, method = "bray", binary = TRUE, na.rm = TRUE, diag = TRUE)
sorensen.anosim <- anosim(dat = dist.sorensen, env.data$geo_area, permutations = 999)
> summary(sorensen.anosim )
Call:
anosim(dat = dist.sorensen, grouping = env.data$geo_area, permutations = 999)
Dissimilarity: binary bray
ANOSIM statistic R:
Significance: 0.001
Permutation: free
Number of permutations: 999
Error in sort.int(x, na.last = na.last, decreasing = decreasing, ...) :
'x' must be atomic
I have also tried anosim on the raw species data and I get the same error:
raw.anosim <- anosim(sps, env.data$geo_area, permutations = 999, distance = "bray")
Any ideas? My "sps" dataframe (x) is numeric. My "env.data" dataset (groupings) has a factor column with 17 levels. I can't see where the error comes from, unless it's intrinsic to my data. Many of the 313 species in my original dataframe have been recorded only once across the 17 sites (very probably due to sampling bias). However, I do get clusters after performing vegdist (Sorensen index) and hclust.

Create table in KDB with columns from results

I'm trying to create a table in KDB where the columns are the results of a query. For example, I have loaded in stock data and I search a given time window for the prices at which a stock traded. I created a function
getTrades[Symbol; Date; StartTime; StopTime]
This searches my database and returns the prices that traded between the start and stop times. So my results for Apple for a 30-second window might be:
527.10, 527.45, 527.60, 526.90, etc.
What I want to do now is create a table using xbar, with a row for every second and a column for each price that traded between StartTime and StopTime. I will then place an X in a column if that price traded during that 1-second bucket. I think I can handle most of this, but the main thing I'm struggling with is converting the results above into the column names of the table. I'm also struggling with how to make it flexible: the table should have 5 columns in one scenario (5 prices traded) but 10 in another, so it varies depending on how many price levels traded in the window I'm searching.
Thanks.
The best and cleanest way to do programmatic selects is with the functional form of select.
From Q for Mortals:
?[t;c;b;a]
where t is a table, a is a dictionary of aggregates, b is a dictionary of groupbys and c is a list of constraints.
In other words, select a by b from t where c.
This will allow you to dynamically create a, which can be of arbitrary size.
You can find more information here:
http://code.kx.com/q4m3/9_Queries_q-sql/#912-functional-forms
Pivot Table
I think a pivot table will be suitable in this case. Using jgleeson's example:
time price
------------------
11:27:01.600 106
11:27:02.600 102
11:27:02.600 102
11:27:03.100 100
11:27:03.100 102
11:27:03.100 102
11:27:03.100 104
11:27:03.600 104
11:27:03.600 102
11:27:04.100 106
11:27:05.100 105
11:27:06.600 106
11:27:07.100 101
11:27:07.100 104
11:27:07.600 105
11:27:07.600 105
11:27:07.600 101
not null exec (exec `$string asc distinct price from s)#(`$string price)!price by time:1 xbar time.second from s:select from t where time within 11:27:00 11:27:30
which returns:
time | 100 101 102 103 104 105 106
--------| ---------------------------
11:27:01| 0 0 0 0 0 0 1
11:27:02| 0 0 1 0 0 0 0
11:27:03| 1 0 1 0 1 0 0
11:27:04| 0 0 0 0 0 0 1
11:27:05| 0 0 0 0 0 1 0
11:27:06| 0 0 0 0 0 0 1
11:27:07| 0 1 0 0 1 1 0
It can support any numbers of unique prices.
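The mechanics of that pivot (bucket by second, one column per distinct price) can be sketched in plain Python as well; this is only an illustration of the idea, using a hypothetical subset of the sample trades:

```python
# Hypothetical subset of the sample trades: (time string, price).
trades = [
    ("11:27:01.600", 106),
    ("11:27:02.600", 102),
    ("11:27:03.100", 100),
    ("11:27:03.100", 104),
]

prices = sorted({p for _, p in trades})      # one pivot column per distinct price
grid = {}                                    # second -> {price: traded?}
for ts, p in trades:
    second = ts.split(".")[0]                # 1-second bucket, like 1 xbar time.second
    grid.setdefault(second, {q: False for q in prices})[p] = True

for second in sorted(grid):
    print(second, ["X" if grid[second][q] else " " for q in prices])
```

Because the column set is derived from the distinct prices actually seen, the table automatically has 5 columns when 5 prices traded and 10 when 10 did, which is the flexibility the question asks for.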
This looks a bit convoluted... but I think this might be what you're after.
Sample table t with time and price columns:
t:`time xasc([]time:100?(.z.T+500*til 100);price:100?(100 101 102 103 104 105 106))
This table should replicate what you get from the first step of your function call - "select time,price from trade where date=x, symbol=y, starttime=t1, endtime=t2".
To return the table in the format specified:
q) flip (`time,`$string[c])!flip {x,'y}[key a;]value a:{x in y}[c:asc distinct tt`price] each group (!) . reverse value flip tt:update time:time.second from t
time 100 101 102 103 104 105 106
------------------------------------
20:34:29 0 1 0 0 0 1 0
20:34:30 0 0 0 0 0 0 1
20:34:31 0 0 1 0 0 0 0
20:34:32 0 0 1 0 1 0 0
...
This has bools instead of X as bools are probably easier to work with.
Also please excuse the one-liner... If I get a chance I'll break it up and try to make it more readable.
A simpler version is:
q)t:`time xasc([] s:100#`s ; time:100?(.z.T+500*til 100);price:100?(100 101 102 103 104 105 106))
q)t1:update `$string price,isPrice:1b from t
q)p:(distinct asc t1`price)
q)exec p#(10b!"X ")#(price!isPrice) by time:1 xbar time.second from t1
time | 100 101 102 103 104 105 106
--------| ---------------------------
20:39:00| X X X
20:39:01| X X X X
20:39:02| X
20:39:04| X
20:39:05| X X X X

Matrix from a matrix in Matlab

I am trying to get the function to output an array T in which each value inside the fixed outer rows and columns is averaged with itself and the 4 numbers surrounding it. I made X to receive all 9 of the values from my larger array, S to select only the ones I wanted, and A to use when averaging, yet it will not work. I believe the problem lies in X(ii,jj) = T((ii-1):(ii+1), (jj-1):(jj+1)). Any help much appreciated.
function T = tempsim(rows, cols, topNsideTemp, bottomTemp, tol)
T = zeros(rows,cols);
T(1,:) = topNsideTemp;
T(:,1) = topNsideTemp;
T(:,rows) = topNsideTemp;
T(rows,:) = bottomTemp;
S = [0 1 0; 1 1 1; 0 1 0];
X = zeros(3,3);
A = zeros(3,3);
for ii = 2:(cols-1);
jj = 2:(rows-1);
X(ii,jj) = T((ii-1):(ii+1), (jj-1):(jj+1))
A = X.*S;
T = (sum(sum(A)))/5
What you are doing looks like a convolution as Jouni points out. So using that knowledge, I came up with following code:
function T = tempsim(rows, cols, topNsideTemp, bottomTemp, tol)
sz = [rows,cols];
topEdge = sub2ind(sz, ones(1,cols) , 1:cols);
bottomEdge = sub2ind(sz, ones(1,cols)*rows, 1:cols);
leftEdge = sub2ind(sz, 1:rows , ones(1,rows));
rightEdge = sub2ind(sz, 1:rows , ones(1,rows)*cols);
otherEdges = [topEdge leftEdge rightEdge];
edges = [bottomEdge otherEdges];
%% set initial grid
T0 = zeros(sz);
T0(otherEdges) = topNsideTemp;
T0(bottomEdge) = bottomTemp;
%% average filter
F = [0 1 0
1 1 1
0 1 0];
F = F/sum(F(:));
%% simulation
T = T0; % initial condition
T = conv2(T, F, 'same');
T(edges) = T0(edges); % this keeps the edges set to the initial values
If you run this, you will get the following results:
T = tempsim(10,10,100,-100)
T0 =
100 100 100 100 100 100 100 100 100 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 -100 -100 -100 -100 -100 -100 -100 -100 100
T =
100 100 100 100 100 100 100 100 100 100
100 40 20 20 20 20 20 20 40 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 0 -20 -20 -20 -20 -20 -20 0 100
100 -100 -100 -100 -100 -100 -100 -100 -100 100
I also showed T0 for clarity; you can see that T(2,2) == 40, which is equal to (100 + 100 + 0 + 0 + 0)/5 using the values around the same position in T0.
From the context, I guess you'll be studying the convergence of this problem. If that's the case, you will have to repeat the last 2 lines until it converges.
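For reference, the relaxation step that the conv2 call performs can be written out without any toolboxes. The following is a pure-Python sketch for the same 10x10 boundary conditions; one step reproduces T(2,2) == 40:

```python
rows = cols = 10
top_side, bottom = 100.0, -100.0

# Initial grid: 100 on the top row and both side columns, -100 on the
# bottom-row interior (matching the T0 shown above).
T0 = [[0.0] * cols for _ in range(rows)]
for i in range(rows):
    T0[i][0] = T0[i][cols - 1] = top_side
for j in range(cols):
    T0[0][j] = top_side
for j in range(1, cols - 1):
    T0[rows - 1][j] = bottom

def step(T):
    """One relaxation step: average each interior cell with its 4
    neighbours (and itself); edge cells stay fixed."""
    new = [row[:] for row in T]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = (T[i][j] + T[i - 1][j] + T[i + 1][j]
                         + T[i][j - 1] + T[i][j + 1]) / 5.0
    return new

T = step(T0)
print(T[1][1])  # (100 + 100 + 0 + 0 + 0) / 5 = 40.0
```

Iterating step() until successive grids differ by less than the tolerance is exactly the convergence loop discussed next.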
Depending on your actual problem, you can also improve the initial conditions to speed up convergence by initializing the grid to a temperature different from 0. In the current code the boundary conditions have to heat up the complete grid, which takes some time. If you provide a proper guess for the bulk temperature (in lieu of 0), convergence speeds up considerably. In my example about 40 steps are needed to converge to a certain tolerance; with a proper guess (50 in my case) this is reduced to about 20 steps for the same tolerance level. For larger grids, I expect even larger efficiency gains.
This converges to the following values (and the mirror image for the other values):
100 100 100 100 100
100 96.502 93.464 91.254 90.097
100 92.989 86.925 82.533 80.245
100 89.229 79.995 73.386 69.974
100 84.579 71.615 62.556 57.963
100 77.78 59.86 47.904 42.037
100 66.515 41.786 26.614 19.565
100 45.939 13.075 -4.3143 -11.72
100 3.4985 -32.392 -46.997 -52.455
100 -100 -100 -100 -100
You can verify that this solution is an approximate fixed point by checking that each element in the bulk equals the calculated average of its neighbours within a certain tolerance.

pspice -> ngspice conversion -- Absolute Value Function

I am trying to port an existing PSPICE model (the HP memristor model) to ngspice. Is there an absolute-value function in ngspice?
Original PSPICE Model:
.SUBCKT modelmemristor plus minus PARAMS:
+ phio=0.95 Lm=0.0998 w1=0.1261 foff=3.5e-6
+ ioff=115e-6 aoff=1.2 fon=40e-6 ion=8.9e-6
+ aon=1.8 b=500e-6 wc=107e-3
G1 plus internal value={sgn(V(x))*(1/V(dw))^2*0.0617* (V(phiI)*exp(-V(B)*V(sr))-(V(phiI)+abs(V(x)))* exp(-V(B)*V(sr2)))}
Esr sr 0 value={sqrt(V(phiI))}
Esr2 sr2 0 value={sqrt(V(phiI)+abs(V(x)))}
Rs internal minus 215
Eg x 0 value={V(plus)-V(internal)}
Elamda Lmda 0 value={Lm/V(w)}
Ew2 w2 0 value={w1+V(w)- (0.9183/(2.85+4*V(Lmda)-2*abs(V(x))))}
EDw dw 0 value={V(w2)-w1}
EB B 0 value={10.246*V(dw)}
ER R 0 value={(V(w2)/w1)*(V(w)-w1)/(V(w)-V(w2))}
EphiI phiI 0 value={phio-abs(V(x))*((w1+V(w2))/(2*V(w)))-1.15*V(Lmda)*V(w)*log(V(R))/V(dw)}
C1 w 0 1e-9 IC=1.2
R w 0 1e8MEG
Ec c 0 value={abs(V(internal)-V(minus))/215}
Emon1 mon1 0 value={((V(w)-aoff)/wc)-(V(c)/b)}
Emon2 mon2 0 value={(aon-V(w))/wc-(V(c)/b)}
Goff 0 w value={foff*sinh(stp(V(x))*V(c)/ioff)* exp(-exp(V(mon1))-V(w)/wc)}
Gon w 0 value={fon*sinh(stp(-V(x))*V(c)/ion)* exp(-exp(V(mon2))-V(w)/wc)}
.ENDS modelmemristor
Yes: ngspice's behavioral expressions support abs(), so abs() can be kept unchanged. The only function that needed replacing was stp(), which becomes u() (the unit step) in ngspice; in addition, value={...} becomes vol= for E-sources and cur= for G-sources. Here is the converted model with a small test instantiation:
* test memristor
xmemr 1 0 modelmemristor phio=0.95 Lm=0.0998 w1=0.1261 foff=3.5e-6 ioff=115e-6 aoff=1.2 fon=40e-6 ion=8.9e-6 aon=1.8 b=500e-6 wc=107e-3
.SUBCKT modelmemristor plus minus phio=0.95 Lm=0.0998 w1=0.1261 foff=3.5e-6 ioff=115e-6 aoff=1.2 fon=40e-6 ion=8.9e-6 aon=1.8 b=500e-6 wc=107e-3
G1 plus internal cur={sgn(V(x))*(1/V(dw))^2*0.0617* (V(phiI)*exp(-V(B)*V(sr))-(V(phiI)+abs(V(x)))* exp(-V(B)*V(sr2)))}
Esr sr 0 vol={sqrt(V(phiI))}
Esr2 sr2 0 vol={sqrt(V(phiI)+abs(V(x)))}
Rs internal minus 215
Eg x 0 vol={V(plus)-V(internal)}
Elamda Lmda 0 vol={Lm/V(w)}
Ew2 w2 0 vol={w1+V(w)- (0.9183/(2.85+4*V(Lmda)-2*abs(V(x))))}
EDw dw 0 vol={V(w2)-w1}
EB B 0 vol={10.246*V(dw)}
ER R 0 vol={(V(w2)/w1)*(V(w)-w1)/(V(w)-V(w2))}
EphiI phiI 0 vol= {phio-abs(V(x))*((w1+V(w2))/(2*V(w)))- 1.15*V(Lmda)*V(w)*log(V(R))/V(dw)}
C1 w 0 1e-9 IC=1.2
R w 0 1e8MEG
Ec c 0 vol={abs(V(internal)-V(minus))/215}
Emon1 mon1 0 vol={((V(w)-aoff)/wc)-(V(c)/b)}
Emon2 mon2 0 vol={(aon-V(w))/wc-(V(c)/b)}
Goff 0 w cur={foff*sinh(u(V(x))*V(c)/ioff)* exp(-exp(V(mon1))-V(w)/wc)} $ stp -> u
Gon w 0 cur={fon*sinh(u(-V(x))*V(c)/ion)* exp(-exp(V(mon2))-V(w)/wc)} $ stp -> u
.ENDS modelmemristor
Regards
Holger